Gemini’s data-analyzing abilities aren’t as good as Google claims
One of the selling points of Google's flagship generative AI models, Gemini 1.5 Pro and 1.5 Flash, is the amount of data they can supposedly process and analyze. In press briefings and demos, Google has repeatedly claimed that the models can accomplish previously impossible tasks thanks to their "long context," like summarizing multiple hundred-page documents or searching across scenes in film footage.
But new research suggests that the models aren't, in fact, very good at those things.
Two separate studies investigated how well Google's Gemini models and others make sense out of an enormous amount of data – think "War and Peace"-length works. Both find that Gemini 1.5 Pro and 1.5 Flash struggle to answer questions about large datasets correctly; in one series of document-based tests, the models gave the right answer only 40% to 50% of the time.
"While models like Gemini 1.5 Pro can technically process long contexts, we have seen many cases indicating that the models don't actually 'understand' the content," Marzena Karpinska, a postdoc at UMass Amherst and a co-author on one of the studies, told TechCrunch.
Gemini's context window is lacking
A model's context, or context window, refers to input data (e.g., text) that the model considers before generating output (e.g., additional text). A simple question – "Who won the 2020 U.S. presidential election?" – can serve as context, as can a movie script, show or audio clip. And as context windows grow, so does the size of the documents being fit into them.
The newest versions of Gemini can take in upward of 2 million tokens as context. ("Tokens" are subdivided bits of raw data, like the syllables "fan," "tas" and "tic" in the word "fantastic.") That's equivalent to roughly 1.4 million words, two hours of video or 22 hours of audio – the largest context of any commercially available model.
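As a rough sanity check on those figures, the token-to-word conversion is simple arithmetic. The 0.7 words-per-token ratio below is a common heuristic rather than an official number (real tokenizers vary by language and text), chosen here because it reproduces the ~1.4-million-word figure:

```python
# Back-of-envelope conversion for Gemini's advertised context window.
# WORDS_PER_TOKEN is an approximation, not a published spec.
CONTEXT_TOKENS = 2_000_000
WORDS_PER_TOKEN = 0.7

approx_words = int(CONTEXT_TOKENS * WORDS_PER_TOKEN)
print(f"{CONTEXT_TOKENS:,} tokens is roughly {approx_words:,} words")
# prints: 2,000,000 tokens is roughly 1,400,000 words
```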
In a briefing earlier this year, Google showed several pre-recorded demos meant to illustrate the potential of Gemini's long-context capabilities. One had Gemini 1.5 Pro search the transcript of the Apollo 11 moon landing telecast – around 402 pages – for quotes containing jokes, and then find a scene in the telecast that looked similar to a pencil sketch.
VP of research at Google DeepMind Oriol Vinyals, who led the briefing, described the model as "magical."
"[1.5 Pro] performs these sorts of reasoning tasks across every single page, every single word," he said.
That might have been an exaggeration.
In one of the aforementioned studies benchmarking these capabilities, Karpinska, along with researchers from the Allen Institute for AI and Princeton, asked the models to evaluate true/false statements about fiction books written in English. The researchers chose recent works so that the models couldn't "cheat" by relying on foreknowledge, and they peppered the statements with references to specific details and plot points that'd be impossible to comprehend without reading the books in their entirety.
Given a statement like "By using her skills as an Apoth, Nusis is able to reverse engineer the type of portal opened by the reagents key found in Rona's wooden chest," Gemini 1.5 Pro and 1.5 Flash – having ingested the relevant book – had to say whether the statement was true or false and explain their reasoning.
Tested on one book around 260,000 words (~520 pages) in length, the researchers found that 1.5 Pro answered the true/false statements correctly 46.7% of the time, while Flash answered correctly only 20% of the time. In other words, a coin flip would answer questions about the book more accurately than Google's latest machine learning model. Averaging all the benchmark results, neither model achieved better-than-random question-answering accuracy.
"We've noticed that the models have more difficulty verifying claims that require considering larger portions of the book, or even the entire book, compared to claims that can be solved by retrieving sentence-level evidence," Karpinska said. "Qualitatively, we also observed that the models struggle with verifying claims about implicit information that is clear to a human reader but not explicitly stated in the text."
The second of the two studies, co-authored by researchers at UC Santa Barbara, tested the ability of Gemini 1.5 Flash (but not 1.5 Pro) to "reason over" videos – that is, search through and answer questions about the content in them.
The co-authors created a dataset of images (e.g., a photo of a birthday cake) paired with questions for the model to answer about the objects depicted in the images (e.g., "What cartoon character is on this cake?"). To evaluate the models, they picked one of the images at random and inserted "distractor" images before and after it to create slideshow-like footage.
Flash didn't perform all that well. In a test that had the model transcribe six handwritten digits from a "slideshow" of 25 images, Flash got around 50% of the transcriptions right. The accuracy dropped to around 30% with eight digits.
"On real question-answering tasks over images, it appears to be particularly hard for all the models we tested," Michael Saxon, a PhD student at UC Santa Barbara and one of the study's co-authors, told TechCrunch. "That small amount of reasoning – recognizing that a number is in a frame and reading it – might be what is breaking the model."
Google is overpromising with Gemini
Neither study has been peer-reviewed, nor do they probe the releases of Gemini 1.5 Pro and 1.5 Flash with 2-million-token contexts. (Both tested the 1-million-token context releases.) And Flash isn't meant to be as capable as Pro in terms of performance; Google advertises it as a low-cost alternative.
Nevertheless, both add fuel to the fire that Google's been overpromising – and under-delivering – with Gemini from the beginning. None of the models the researchers tested, including OpenAI's GPT-4o and Anthropic's Claude 3.5 Sonnet, performed well. But Google's the only model provider that's given the context window top billing in its advertisements.
"There's nothing wrong with the simple claim, 'Our model can take X number of tokens,' based on the objective technical details," Saxon said. "But the question is, what useful thing can you do with it?"
Generative AI, broadly speaking, is coming under increased scrutiny as businesses (and investors) grow frustrated with the technology's limitations.
In a pair of recent surveys from Boston Consulting Group, about half of the respondents – all C-suite executives – said that they don't expect generative AI to bring about substantial productivity gains and that they're worried about the potential for mistakes and data compromises arising from generative AI-powered tools. PitchBook recently reported that, for two consecutive quarters, generative AI dealmaking at the earliest stages has declined, plummeting 76% from its Q3 2023 peak.
Faced with meeting-summarizing chatbots that conjure up fictional details about people and AI search platforms that basically amount to plagiarism generators, customers are on the hunt for promising differentiators. Google – which has raced, at times clumsily, to catch up to its generative AI rivals – was desperate to make Gemini's context one of those differentiators.
But the bet was premature, it seems.
"We haven't settled on a way to really show that 'reasoning' or 'understanding' over long documents is taking place, and basically every group releasing these models is cobbling together their own ad hoc evals to make these claims," Karpinska said. "Without the knowledge of how long context processing is implemented – and companies do not share these details – it is hard to say how realistic these claims are."
Google didn't respond to a request for comment.
Both Saxon and Karpinska believe the antidotes to hyped-up claims around generative AI are better benchmarks and, in the same vein, greater emphasis on third-party critique. Saxon notes that one of the more common tests for long context, "needle in the haystack" (liberally cited by Google in its marketing materials), only measures a model's ability to retrieve particular info, like names and numbers, from datasets – not to answer complex questions about that info.
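To see why that test is so narrow, here is a minimal sketch of the needle-in-the-haystack setup (the needle and filler sentences are invented for illustration): one retrievable fact gets buried in filler text, and the eval only checks whether the model can find it – it never asks the model to reason across the whole document.

```python
import random

def build_haystack(needle: str, filler: str, n_fillers: int, seed: int = 0) -> str:
    """Bury a single 'needle' sentence at a random position among
    filler sentences. A real eval would send the result to the model
    and check whether it can retrieve the needle's fact."""
    rng = random.Random(seed)
    sentences = [filler] * n_fillers
    sentences.insert(rng.randrange(n_fillers + 1), needle)
    return " ".join(sentences)

haystack = build_haystack(
    needle="The secret passphrase is 'marble-42'.",
    filler="The sky was a clear and cloudless blue that day.",
    n_fillers=1_000,
)
print("marble-42" in haystack)  # prints: True
```

Passing this kind of test says nothing about whether a model can verify a plot point that depends on an entire book, which is exactly the gap the two studies probe.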
"All scientists and most engineers using these models are essentially in agreement that our existing benchmark culture is broken," Saxon said, "so it's important that the public understands to take these giant reports containing numbers like 'general intelligence across benchmarks' with a massive grain of salt."