Just last Monday I was commending Mosaic for their “StoryWriter” language model and its huge 65,000-token context window, next to which OpenAI’s forthcoming 32,000-token large-context version of GPT-4 no longer looked so impressive.

Well…

Anthropic:

We’ve expanded Claude’s context window from 9K to 100K tokens… The average person can read 100,000 tokens of text in ~5+ hours, and then they might need substantially longer to digest, remember, and analyze that information. Claude can now do this in less than a minute.

[…]

Beyond just reading long texts, Claude can help retrieve information from the documents that help your business run. You can drop multiple documents or even a book into the prompt and then ask Claude questions that require synthesis of knowledge across many parts of the text.
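To get a feel for what 100K tokens buys you: a common rough heuristic for English text is about 4 characters per token. Here is a back-of-the-envelope sketch using that heuristic (not Claude’s actual tokenizer, so the numbers are estimates only):

```python
# Rough back-of-the-envelope: does a document fit in a 100K-token window?
# Assumes the common ~4-characters-per-token heuristic for English text,
# NOT Claude's actual tokenizer, so treat the result as an estimate only.

def estimate_tokens(text: str) -> int:
    """Estimate token count using the ~4 chars/token rule of thumb."""
    return len(text) // 4

def fits_in_window(text: str, window: int = 100_000) -> bool:
    """Check whether the estimated token count fits a given context window."""
    return estimate_tokens(text) <= window

# A short novel runs on the order of 270,000 characters, i.e. very roughly
# 67,500 tokens -- comfortably inside a 100K window, and far beyond the
# old 9K limit.
novel = "x" * 270_000  # stand-in for novel-length text
print(fits_in_window(novel))          # → True: fits in the 100K window
print(fits_in_window(novel, 9_000))   # → False: would not fit in the old 9K window
```

By this estimate, a whole book really does fit in a single prompt, which is what makes the cross-document synthesis described above possible.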

With Google’s announcements last week and Anthropic’s steady stream of improvements, I am beginning to wonder what OpenAI has up its sleeve.