An often overlooked limitation for chatbots is memory. While it’s true that the AI language models that power these systems are trained on terabytes of text, the amount these systems can process when in use — that is, the combination of input text and output, also known as their “context window” — is limited. For ChatGPT it’s around 3,000 words. There are ways to work around this, but it’s still not a huge amount of information to play with.
Now, AI startup Anthropic (founded by former OpenAI engineers) has hugely expanded the context window of its own chatbot Claude, pushing it to around 75,000 words. As the company points out in a blog post, that’s enough to process the entirety of The Great Gatsby in one go. In fact, the company tested the system by doing just this — editing a single sentence in the novel and asking Claude to spot the change. It did so in 22 seconds.
You may have noticed my imprecision in describing the length of these context windows. That’s because AI language models measure information not by characters or words but in tokens: semantic units that don’t map precisely onto those familiar quantities. It makes sense when you think about it. After all, words can be long or short, and their length does not necessarily correspond to the complexity of their meaning. (The longest definitions in the dictionary are often for the shortest words.) The use of “tokens” reflects this truth, and so, to be more precise: Claude’s context window can now process 100,000 tokens, up from 9,000 before. By comparison, OpenAI’s GPT-4 processes around 8,000 tokens (that’s not the standard model available in ChatGPT; you have to pay for access), while a limited-release full-fat model of GPT-4 can handle up to 32,000 tokens.
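To make the word/token distinction concrete, here’s a small sketch using OpenAI’s open-source tiktoken tokenizer. (Anthropic uses its own tokenizer, so the exact counts for Claude will differ; this is purely illustrative.)

```python
# A rough illustration of characters vs. words vs. tokens, using OpenAI's
# open-source tiktoken library. Claude's tokenizer is different, so its
# counts would vary; the point is only that tokens != words != characters.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # the encoding used by GPT-4

# The closing line of The Great Gatsby, as a test string:
text = "So we beat on, boats against the current, borne back ceaselessly into the past."
tokens = enc.encode(text)

print(f"characters: {len(text)}")
print(f"words:      {len(text.split())}")
print(f"tokens:     {len(tokens)}")

# Short common words tend to be one token each, while a rarer word like
# "ceaselessly" gets split into several sub-word pieces:
print([enc.decode([t]) for t in tokens])
```

As a rule of thumb, English prose averages roughly three-quarters of a word per token, which is how 100,000 tokens works out to the roughly 75,000 words cited above.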
Right now, Claude’s new capacity is available only to Anthropic’s business partners, who tap into the chatbot via the company’s API. Pricing is also unknown, but is certain to be significantly higher: processing more text means spending more on compute.
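For the curious, the Gatsby test described above is the kind of thing a partner could reproduce through that API. The sketch below assumes the completion-style Python SDK Anthropic documented around this release; the model name, file path, and placeholder key are assumptions, not something the article shows.

```python
# A minimal sketch of the kind of API call described above, assuming the
# completion-style interface of Anthropic's Python SDK at the time of this
# release. The model name "claude-v1-100k", the local file path, and the
# placeholder key are illustrative assumptions.
import anthropic

client = anthropic.Client("YOUR_API_KEY")  # placeholder credential

with open("gatsby.txt") as f:  # a local plain-text copy of the novel
    novel = f.read()

response = client.completion(
    model="claude-v1-100k",        # the 100,000-token context variant
    max_tokens_to_sample=300,      # cap on the length of the reply
    prompt=(
        f"{anthropic.HUMAN_PROMPT} Here is the full text of a novel:\n\n"
        f"{novel}\n\nOne sentence has been changed from the published "
        f"text. Which one?{anthropic.AI_PROMPT}"
    ),
)
print(response["completion"])
```

The whole novel rides along in a single prompt, with no chunking or retrieval step, which is exactly what a 100,000-token window makes possible.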
"Everyone loves reading. In principle, anyway. Nobody is against it, right? Surely, in the midst of our many quarrels, we can agree that people should learn to read, should learn to enjoy it and should do a lot of it. But bubbling underneath this bland, upbeat consensus is a simmer of individual anxiety and collective panic. We are in the throes of a reading crisis. (...)
Just what is reading, anyway? What is it for? Why is it something to argue and worry about? Reading isn’t synonymous with literacy, which is one of the necessary skills of contemporary existence. Nor is it identical with literature, which designates a body of written work endowed with a special if sometimes elusive prestige.
Reading is something else: an activity whose value, while broadly proclaimed, is hard to specify. Is any other common human undertaking so riddled with contradiction? Reading is supposed to teach us who we are and help us forget ourselves, to enchant and disenchant, to make us more worldly, more introspective, more empathetic and more intelligent. It’s a private, even intimate act, swathed in silence and solitude, and at the same time a social undertaking. It’s democratic and elitist, soothing and challenging, something we do for its own sake and as a means to various cultural, material and moral ends. (...)
But nothing is ever so simple. Reading is, fundamentally, both a tool and a toy. It’s essential to social progress, democratic citizenship, good government and general enlightenment. It’s also the most fantastically, sublimely, prodigiously useless pastime ever invented. Teachers, politicians, literary critics and other vested authorities labor mightily to separate the edifying wheat from the distracting chaff, to control, police, correct and corral the transgressive energies that propel the turning of pages. The crisis is what happens either when those efforts succeed or when they fail. Everyone likes reading, and everyone is afraid of it."
Now, AI startup Anthropic (founded by former OpenAI engineers) has hugely expanded the context window of its own chatbot Claude, pushing it to around 75,000 words. As the company points out in a blog post, that’s enough to process the entirety of The Great Gatsby in one go. In fact, the company tested the system by doing just this — editing a single sentence in the novel and asking Claude to spot the change. It did so in 22 seconds.
You may have noticed my imprecision in describing the length of these context windows. That’s because AI language models measure information not by number of characters or words, but in tokens; a semantic unit that doesn’t map precisely onto these familiar quantities. It makes sense when you think about it. After all, words can be long or short, and their length does not necessarily correspond to their complexity of meaning. (The longest definitions in the dictionary are often for the shortest words.) The use of “tokens” reflects this truth, and so, to be more precise: Claude’s context window can now process 100,000 tokens, up from 9,000 before. By comparison, OpenAI’s GPT-4 processes around 8,000 tokens (that’s not the standard model available in ChatGPT — you have to pay for access) while a limited-release full-fat model of GPT-4 can handle up to 32,000 tokens.
Right now, Claude’s new capacity is only available to Anthropic’s business partners, who are tapping into the chatbot via the company’s API. The pricing is also unknown, but is certain to be a significant bump. Processing more text means spending more on compute.
by James Vincent, The Verge | Read more:
Image: Anthropic
[ed. See also: Everyone Likes Reading. Why Are We So Afraid of It? (NYT):]

"Everyone loves reading. In principle, anyway. Nobody is against it, right? Surely, in the midst of our many quarrels, we can agree that people should learn to read, should learn to enjoy it and should do a lot of it. But bubbling underneath this bland, upbeat consensus is a simmer of individual anxiety and collective panic. We are in the throes of a reading crisis. (...)
Just what is reading, anyway? What is it for? Why is it something to argue and worry about? Reading isn’t synonymous with literacy, which is one of the necessary skills of contemporary existence. Nor is it identical with literature, which designates a body of written work endowed with a special if sometimes elusive prestige.
Reading is something else: an activity whose value, while broadly proclaimed, is hard to specify. Is any other common human undertaking so riddled with contradiction? Reading is supposed to teach us who we are and help us forget ourselves, to enchant and disenchant, to make us more worldly, more introspective, more empathetic and more intelligent. It’s a private, even intimate act, swathed in silence and solitude, and at the same time a social undertaking. It’s democratic and elitist, soothing and challenging, something we do for its own sake and as a means to various cultural, material and moral ends. (...)
But nothing is ever so simple. Reading is, fundamentally, both a tool and a toy. It’s essential to social progress, democratic citizenship, good government and general enlightenment. It’s also the most fantastically, sublimely, prodigiously useless pastime ever invented. Teachers, politicians, literary critics and other vested authorities labor mightily to separate the edifying wheat from the distracting chaff, to control, police, correct and corral the transgressive energies that propel the turning of pages. The crisis is what happens either when those efforts succeed or when they fail. Everyone likes reading, and everyone is afraid of it."