Google gives its Gemini AI chatbot a longer memory and new skills

Google showed off some new skills in its Gemini AI chatbot at Tuesday’s I/O developer event and previewed a new version of its Gemini assistant that converses with you much as a human would.

The AI advancements are powered by Google DeepMind’s Gemini 1.5 Pro model, which strikes a balance between performance and speed.

In the short term, subscribers to the Gemini Advanced service tier will get a chatbot that can digest and remember far more information (more, Google claims, than any other consumer chatbot). Google says Gemini offers a one-million-token context window, meaning it can take in up to 1,500 pages of documents or summarize 100 emails. A user might upload a lengthy rental agreement, Google says, then ask Gemini questions about rules covering pets or rent disputes. Users will now be able to upload files from Google Drive directly into the chatbot.
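The chatbot itself requires no code, but the same long-context capability is exposed to developers through the Gemini API. Below is a minimal sketch of document Q&A using the google-generativeai Python SDK; the API key placeholder, file name, and question are illustrative, and the exact model identifier may vary by SDK version.

```python
import google.generativeai as genai

# Configure the SDK with an API key (placeholder shown here).
genai.configure(api_key="YOUR_API_KEY")

# Upload a long document via the File API; the file name is hypothetical.
agreement = genai.upload_file(path="rental_agreement.pdf")

# Gemini 1.5 Pro supports a one-million-token context window,
# so even a lengthy contract fits in a single prompt.
model = genai.GenerativeModel("gemini-1.5-pro")

response = model.generate_content(
    [agreement, "What do the terms say about keeping pets in the unit?"]
)
print(response.text)
```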

In a live demonstration, Google showed how the chatbot can now more effectively help you plan activities for an upcoming trip. It starts by extracting the trip details (flight times, hotel locations, and so on) from the confirmation emails your airline and hotel sent to your Gmail. After gathering some information about what you and your family enjoy, it might suggest attractions near your hotel (based on Google Maps data) that are likely to fit the bill. Gemini uses its reasoning and planning skills to consider logistical questions, such as whether a proposed agenda leaves enough time to travel between activities.

Google says Gemini users will also see improvements in how well the chatbot understands images. For instance, you can upload a picture of a math problem and get step-by-step instructions on how to solve it. 
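On the developer side, the analogous image-understanding workflow is to pass an image alongside a text prompt in a single request. Here is a brief sketch, again assuming the google-generativeai Python SDK; the image path and prompt are hypothetical.

```python
import google.generativeai as genai
import PIL.Image

genai.configure(api_key="YOUR_API_KEY")

# Load a photo of a math problem (hypothetical file).
problem = PIL.Image.open("math_problem.jpg")

model = genai.GenerativeModel("gemini-1.5-pro")

# Mix image and text parts in one request to get a worked solution.
response = model.generate_content(
    [problem, "Explain, step by step, how to solve this problem."]
)
print(response.text)
```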

Gemini Live 

The search giant’s most interesting announcement was something called Gemini Live, a more advanced version of the Gemini assistant that speaks to you in a relatively natural style.

Google says “Live” users (i.e., Gemini Advanced service tier subscribers) can choose from a number of voices. Because Gemini 1.5 Pro is fast, you can interrupt the assistant, and it will stop and wait for more information before continuing, just as a human assistant would. You might ask Live to help you prepare for a job interview. The assistant might provide feedback on your résumé and on strengths relevant to the job, or even role-play an interview and critique your responses.

Google says it will add a visual element later this year, allowing Live to talk with a user about images it “sees” through a phone camera.

Gemini Live isn’t available yet (it will arrive “in the coming months,” Google says), but it represents the current state of the art in AI assistants.

“Reasoning is a new capability set that we’ve been working on,” said Sissie Hsiao, vice president at Google and general manager for Gemini, during an interview with Fast Company on Monday. “So applying it not just to text generation or image generation but actually solving problems using other tools, and composing those tools is the next epic.”

Gemini Live shares several features with the new, GPT-4o-powered version of ChatGPT that OpenAI announced Monday. Both assistants can process images, hold human-sounding conversations with the user, remember large amounts of information, and reason and plan.

