
Google’s AI just got ears

The Google Gemini AI logo. (Image: Google)

AI chatbots are already capable of "seeing" the world through images and video. Now, Google has announced audio understanding as part of its latest update to Gemini Pro: in Gemini 1.5 Pro, the chatbot can "hear" audio files uploaded into its system and extract text information from them.

The company has made this version of the LLM available as a public preview on its Vertex AI development platform, allowing more enterprise-focused users to experiment with the feature. When the model was first announced in February, it was offered only to a limited group of developers and enterprise customers.


1. Breaking down + understanding a long video

I uploaded the entire NBA dunk contest from last night and asked which dunk had the highest score.

Gemini 1.5 was incredibly able to find the specific perfect 50 dunk and details from just its long context video understanding! pic.twitter.com/01iUfqfiAO

— Rowan Cheung (@rowancheung) February 18, 2024


Google shared the details of the update at its Cloud Next conference, currently taking place in Las Vegas. Having previously called the Gemini Ultra LLM that powers its Gemini Advanced chatbot the most powerful model in the Gemini family, Google now describes Gemini 1.5 Pro as its most capable generative model. The company added that this version is better at learning new tasks without additional fine-tuning of the model.

Gemini 1.5 Pro is multimodal in that it can transcribe different types of audio into text, including TV shows, movies, radio broadcasts, and conference call recordings. It's also multilingual, able to process audio in several different languages. The LLM can generate transcripts from videos as well, though their quality may be unreliable, as TechCrunch notes.

When it was first announced, Google explained that Gemini 1.5 Pro uses a token system to process raw data. A million tokens equate to approximately 700,000 words or 30,000 lines of code. In media terms, that's about an hour of video or around 11 hours of audio.
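Those equivalences make it easy to estimate how much of the one-million-token context window a given upload consumes. A minimal back-of-the-envelope sketch, using only the approximate ratios quoted above (the per-unit rates are derived from those figures, not from any official Google pricing or tokenizer):

```python
# Rough token-budget arithmetic for a 1M-token context window,
# based on the approximate equivalences quoted above.
CONTEXT_TOKENS = 1_000_000

WORDS_PER_CONTEXT = 700_000        # ~700,000 words per million tokens
CODE_LINES_PER_CONTEXT = 30_000    # ~30,000 lines of code per million tokens
AUDIO_HOURS_PER_CONTEXT = 11       # ~11 hours of audio per million tokens

def words_to_tokens(words: int) -> int:
    """Estimate the token cost of a given word count."""
    return round(words * CONTEXT_TOKENS / WORDS_PER_CONTEXT)

def audio_minutes_to_tokens(minutes: float) -> int:
    """Estimate the token cost of an audio clip of the given length."""
    tokens_per_hour = CONTEXT_TOKENS / AUDIO_HOURS_PER_CONTEXT
    return round(minutes / 60 * tokens_per_hour)

# A 90-minute conference call uses only a fraction of the window:
print(audio_minutes_to_tokens(90))   # 136364 (~14% of the context)
```

By these rough rates, even a feature-length recording fits comfortably inside the model's context, which is what makes long-video and long-audio queries like the dunk-contest demo possible.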

Some private preview demos of Gemini 1.5 Pro have demonstrated how the LLM can find specific moments in a long video. For example, AI enthusiast Rowan Cheung, who got early access, detailed how the model found an exact action shot in a sports contest and summarized the event, as seen in the tweet embedded above.

However, Google noted that other early adopters, including United Wholesale Mortgage, TBS, and Replit, are opting for more enterprise-focused use cases, such as mortgage underwriting, automating metadata tagging, and generating, explaining, and updating code.

Fionna Agomuoh
Fionna Agomuoh is a Computing Writer at Digital Trends. She covers a range of topics in the computing space, including…