The top AI announcements from Google I/O
Google's going all-in on AI, and it wants you to know it. During the company's keynote at its I/O developer conference on Tuesday, Google mentioned "AI" more than 120 times. That's a lot!
But not all of Google's AI announcements were significant per se. Some were incremental. Others were rehashed. So to help sort the wheat from the chaff, we rounded up the top new AI products and features unveiled at Google I/O 2024.
Generative AI in Search
Google plans to use generative AI to organize entire Google Search results pages.
What will AI-organized pages look like? Well, it depends on the search query. But they might show AI-generated summaries of reviews, discussions from social media sites like Reddit and AI-generated lists of suggestions, Google said.
For now, Google plans to show AI-enhanced results pages when it detects a user is looking for inspiration (for example, when they're trip planning). Soon, it'll also show these results when users search for dining options and recipes, with results for movies, books, hotels, ecommerce and more to come.
Project Astra and Gemini Live
Google is improving its AI-powered chatbot Gemini so that it can better understand the world around it.
The company previewed a new experience in Gemini called Gemini Live, which lets users have "in-depth" voice chats with Gemini on their smartphones. Users can interrupt Gemini while the chatbot's speaking to ask clarifying questions, and it'll adapt to their speech patterns in real time. And Gemini can see and respond to users' surroundings, either via photos or video captured by their smartphones' cameras.
Gemini Live, which won't launch until later this year, can answer questions about things within view (or recently within view) of a smartphone's camera, like which neighborhood a user might be in or the name of a part on a broken bicycle. The technical innovations driving Live stem in part from Project Astra, a new initiative within DeepMind to create AI-powered apps and "agents" for real-time, multimodal understanding.
Google Veo
Google's gunning for OpenAI's Sora with Veo, an AI model that can create 1080p video clips around a minute long given a text prompt.
Veo can capture different visual and cinematic styles, including shots of landscapes and time lapses, and make edits and adjustments to already generated footage. The model understands camera movements and VFX reasonably well from prompts (think descriptors like "pan," "zoom" and "explosion"). And Veo has something of a grasp of physics, things like fluid dynamics and gravity, which contributes to the realism of the videos it generates.
Veo also supports masked editing for changes to specific areas of a video and can generate videos from a still image, a la generative models like Stability AI's Stable Video. Perhaps most intriguing, given a sequence of prompts that together tell a story, Veo can generate longer videos, beyond a minute in length.
Ask Photos
Google Photos is getting an AI infusion with the launch of an experimental feature, Ask Photos, powered by Google's Gemini family of generative AI models.
Ask Photos, which will roll out later this summer, will allow users to search across their Google Photos collection using natural language queries that leverage Gemini's understanding of their photos' content and other metadata.
For instance, instead of searching for a specific thing in a photo, such as "One World Trade," users will be able to perform much broader and more complex searches, like finding the "best photo from each of the National Parks I visited." In that example, Gemini would use signals including lighting, blurriness and lack of background distortion to determine what makes a photo the "best" in a given set, and combine that with an understanding of geolocation info and dates to return the relevant images.
Gemini in Gmail
Gmail users will soon be able to search, summarize and draft emails, courtesy of Gemini, as well as take action on emails for more complex tasks, like helping process returns.
In one demo at I/O, Google showed how a parent who wanted to catch up on what was going on at their child's school could ask Gemini to summarize all the recent emails from the school. In addition to the body of the emails themselves, Gemini will also analyze attachments, such as PDFs, and spit out a summary with key points and action items.
From a sidebar in Gmail, users can ask Gemini to help them organize receipts from their emails and even put them in a Google Drive folder, or extract information from the receipts and paste it into a spreadsheet. If that's something you do often, for example as a business traveler tracking expenses, Gemini can also offer to automate the workflow for use in the future.
Detecting scams during calls
Google previewed an AI-powered feature to alert users to potential scams during a call.
The capability, which will be built into a future version of Android, uses Gemini Nano, the smallest version of Google's generative AI offering, which can run entirely on-device, to listen for "conversation patterns commonly associated with scams" in real time.
No specific release date has been set for the feature. As with many of these announcements, Google is previewing capabilities Gemini Nano will gain at some point down the road. We do know, however, that the feature will be opt-in, which is a good thing. While the use of Nano means the system won't automatically upload audio to the cloud, it is still effectively listening to users' conversations, a potential privacy risk.
AI for accessibility
Google is enhancing its TalkBack accessibility feature for Android with a bit of generative AI magic.
Soon, TalkBack will tap Gemini Nano to create aural descriptions of objects for low-vision and blind users. For example, TalkBack might describe an article of clothing as, "A close-up of a black and white gingham dress. The dress is short, with a collar and long sleeves. It is tied at the waist with a big bow."
According to Google, TalkBack users encounter around 90 unlabeled images per day. Using Nano, the system will be able to offer insight into that content, potentially removing the need for someone to input the information manually.