Anthropic claims its latest model is best-in-class
OpenAI rival Anthropic is releasing a powerful new generative AI model called Claude 3.5 Sonnet. But it's more an incremental step than a monumental leap forward.
Claude 3.5 Sonnet can analyze both text and images as well as generate text, and it's Anthropic's best-performing model yet, at least on paper. Across several AI benchmarks for reading, coding, math and vision, Claude 3.5 Sonnet outperforms the model it's replacing, Claude 3 Sonnet, and beats Anthropic's previous flagship model, Claude 3 Opus.
Benchmarks aren't necessarily the most useful measure of AI progress, in part because many of them test for esoteric edge cases that aren't applicable to the average person, like answering health exam questions. But for what it's worth, Claude 3.5 Sonnet just barely bests rival leading models, including OpenAI's recently launched GPT-4o, on the benchmarks Anthropic tested it against.
Alongside the new model, Anthropic is releasing what it's calling Artifacts, a workspace where users can edit and add to content (e.g. code and documents) generated by Anthropic's models. Currently in preview, Artifacts will gain new features, like ways to collaborate with larger teams and store knowledge bases, in the near future, Anthropic says.
Focus on efficiency
Claude 3.5 Sonnet is a bit more performant than Claude 3 Opus, and Anthropic says that the model better understands nuanced and complex instructions, in addition to concepts like humor. (AI is notoriously unfunny, though.) But perhaps more importantly for devs building apps with Claude that require prompt responses (e.g. customer service chatbots), 3.5 Sonnet is faster. It's around twice the speed of 3 Opus, Anthropic claims.
Vision (analyzing photos) is one area where Claude 3.5 Sonnet greatly improves over 3 Opus, according to Anthropic. 3.5 Sonnet can interpret charts and graphs more accurately and transcribe text from "imperfect" images, such as pics with distortions and visual artifacts.
Michael Gerstenhaber, product lead at Anthropic, says that the improvements are the result of architectural tweaks and new training data, including AI-generated data. Which data specifically? Gerstenhaber wouldn't disclose, but he implied that Claude 3.5 Sonnet draws much of its strength from these training sets.
"What matters to [businesses] is whether or not AI is helping them meet their business needs, not whether or not AI is competitive on a benchmark," Gerstenhaber told TechCrunch. "And from that perspective, I believe Claude 3.5 Sonnet is going to be a step function ahead of anything else that we have available, and also ahead of anything else in the industry."
The secrecy around training data could be for competitive reasons. But it could also be to shield Anthropic from legal challenges, in particular challenges pertaining to fair use. The courts have yet to decide whether vendors like Anthropic and its competitors, like OpenAI, Google, Amazon and so on, have a right to train on public data, including copyrighted data, without compensating or crediting the creators of that data.
So, all we know is that Claude 3.5 Sonnet was trained on lots of text and images, like Anthropic's previous models, plus feedback from human testers to try to "align" the model with users' intentions, hopefully preventing it from spouting toxic or otherwise problematic text.
What else do we know? Well, Claude 3.5 Sonnet's context window (the amount of text that the model can analyze before generating new text) is 200,000 tokens, the same as 3 Sonnet. Tokens are subdivided bits of raw data, like the syllables "fan," "tas" and "tic" in the word "fantastic"; 200,000 tokens is equivalent to about 150,000 words.
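That 150,000-word figure follows from the common rule of thumb that one token covers roughly three-quarters of an English word; the exact ratio varies by tokenizer and text, so treat this as a back-of-the-envelope sketch:

```python
# Rough tokens-to-words conversion, assuming the common heuristic of
# ~0.75 English words per token (the ratio varies by tokenizer and text).
WORDS_PER_TOKEN = 0.75

def approx_words(tokens: int) -> int:
    """Estimate how many words a given token budget covers."""
    return round(tokens * WORDS_PER_TOKEN)

# Claude 3.5 Sonnet's 200,000-token context window:
print(approx_words(200_000))  # 150000
```

The same heuristic works in reverse for checking whether a document will fit in the window before sending it.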
And we know that Claude 3.5 Sonnet is available today. Free users of Anthropic's web client and the Claude iOS app can access it at no charge; subscribers to Anthropic's paid plans, Claude Pro and Claude Team, get 5x higher rate limits. 3.5 Sonnet is also live on Anthropic's API and managed platforms like Amazon Bedrock and Google Cloud's Vertex AI.
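For developers already on Anthropic's API, adopting the new model is largely a matter of swapping the model identifier in a Messages API request. A minimal sketch of a request body, assuming the `claude-3-5-sonnet-20240620` id used at launch (this only builds the JSON; no request is sent):

```python
import json

# Sketch of a request body for Anthropic's Messages API.
# The model id is the launch-time identifier; nothing is sent over the network.
payload = {
    "model": "claude-3-5-sonnet-20240620",
    "max_tokens": 1024,
    "messages": [
        {"role": "user", "content": "Transcribe the text in this chart."}
    ],
}

body = json.dumps(payload)
print(body)
```

In practice the same body would be POSTed to the API (or passed through Bedrock or Vertex AI's wrappers), with authentication headers supplied by the platform.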
"Claude 3.5 Sonnet is really a step change in intelligence without sacrificing speed, and it sets us up for future releases along the entire Claude model family," Gerstenhaber said.
Claude 3.5 Sonnet also drives Artifacts, which pops up a dedicated window in the Claude web client when a user asks the model to generate content like code snippets, text documents or website designs. Gerstenhaber explains: "Artifacts are the model output that puts generated content to the side and allows you, as a user, to iterate on that content. Let's say you want to generate code: the artifact will be put in the UI, and then you can talk with Claude and iterate on the document to improve it so you can run the code."
The bigger picture
So what's the significance of Claude 3.5 Sonnet in the broader context of Anthropic (and the AI ecosystem, for that matter)?
Claude 3.5 Sonnet shows that incremental progress is the extent of what we can expect right now on the model front, barring a major research breakthrough. The past few months have seen flagship releases from Google (Gemini 1.5 Pro) and OpenAI (GPT-4o) that move the needle only marginally in terms of benchmark and qualitative performance. But there hasn't been a leap matching the one from GPT-3 to GPT-4 in quite some time, owing to the rigidity of today's model architectures and the immense compute they require to train.
As generative AI vendors turn their attention to data curation and licensing in lieu of promising new scalable architectures, there are signs investors are becoming wary of the longer-than-anticipated path to ROI for generative AI. Anthropic is somewhat inoculated from this pressure, being in the enviable position of serving as Amazon's (and to a lesser extent Google's) insurance against OpenAI. But the company's revenue, forecast to reach just under $1 billion by year-end 2024, is a fraction of OpenAI's, and I'm sure Anthropic's backers don't let it forget that fact.
Despite a growing customer base that includes household brands such as Bridgewater, Brave, Slack and DuckDuckGo, Anthropic still lacks a certain enterprise cachet. Tellingly, it was OpenAI, not Anthropic, with which PwC recently partnered to resell generative AI offerings to the enterprise.
So Anthropic is taking a strategic, and well-trodden, approach to making inroads, investing development time into products like Claude 3.5 Sonnet to deliver slightly better performance at commodity prices. 3.5 Sonnet is priced the same as 3 Sonnet: $3 per million tokens fed into the model and $15 per million tokens generated by the model.
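At those rates, estimating a bill is straightforward arithmetic. A minimal sketch using the per-million-token prices quoted above (input and output tokens are billed separately):

```python
# Cost estimate at Claude 3.5 Sonnet's published per-million-token rates.
INPUT_PRICE_USD = 3.00    # dollars per million tokens fed into the model
OUTPUT_PRICE_USD = 15.00  # dollars per million tokens generated by the model

def cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a workload at 3.5 Sonnet's pricing."""
    return (input_tokens * INPUT_PRICE_USD
            + output_tokens * OUTPUT_PRICE_USD) / 1_000_000

# e.g. a workload with 2M input tokens and 500K output tokens:
print(cost_usd(2_000_000, 500_000))  # 13.5
```

Because output tokens cost 5x more than input tokens, generation-heavy workloads (long answers from short prompts) dominate the bill, which is one reason chatbot builders care about the input/output split.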
Gerstenhaber spoke to this in our conversation. "When you're building an application, the end user shouldn't have to know which model is being used or how an engineer optimized for their experience," he said, "but the engineer could have the tools available to optimize for that experience along the vectors that need to be optimized, and cost is certainly one of them."
Claude 3.5 Sonnet doesn't solve the hallucinations problem. It almost certainly makes mistakes. But it might just be attractive enough to get developers and enterprises to switch to Anthropic's platform. And at the end of the day, that's what matters to Anthropic.
Toward that same end, Anthropic has doubled down on tooling like its experimental steering AI, which lets developers "steer" its models' internal features; integrations to let its models take actions within apps; and tools built on top of its models, such as the aforementioned Artifacts experience. It's also hired an Instagram co-founder as head of product. And it's expanded the availability of its products, most recently bringing Claude to Europe and establishing offices in London and Dublin.
Anthropic, all told, seems to have come around to the idea that building an ecosystem around models, not simply models in isolation, is the key to retaining customers as the capabilities gap between models narrows.
Still, Gerstenhaber insisted that bigger and better models, like Claude 3.5 Opus, are on the near horizon, with features such as web search and the ability to remember preferences in tow.
"I haven't seen deep learning hit a wall yet, and I'll leave it to researchers to speculate about the wall, but I think it's a little bit early to be coming to conclusions on that, especially if you look at the pace of innovation," he said. "There's very rapid development and very rapid innovation, and I have no reason to believe that it's going to slow down."
Weâll see.