Google on Wednesday gave the public and developers a taste of the second generation of its Gemini frontier models, and a preview of some of the agents it will power.
The new Gemini 2.0 family of models is designed to power AI agents that understand more than just text and can reason and complete tasks with more autonomy. Google described how the new models will improve an experimental agent called Project Astra, which lets AI process information seen through a camera. It previewed another experimental agent, now called Project Mariner, which is designed to perform web tasks on the user’s behalf.
“[O]ur next era of models [is] built for this new agentic era,” said Google CEO Sundar Pichai in a blog post Wednesday. “With new advances in multimodality–like native image and audio output–and native tool use, it will enable us to build new AI agents that bring us closer to our vision of a universal assistant.” The term “universal assistant” implies an AI agent with artificial general intelligence (AGI), or the ability to do most tasks as well as or better than humans. Experts say the industry is anywhere from two to 10 years away from realizing that aspiration.
Google isn’t yet unveiling the largest and most capable of its Gemini 2.0 models. That may come in another announcement in January. For now it’s releasing to developers an experimental version of a smaller and faster variant called Gemini 2.0 Flash. “It’s our workhorse model with low latency and enhanced performance at the cutting edge of our technology, at scale,” Google DeepMind CEO Demis Hassabis says in a blog post.
Gemini 2.0 Flash, Hassabis says, is twice as fast as its predecessor, 1.5 Flash, and significantly smarter. He says the new model is multimodal, meaning it can process and output text, images, and audio, though the “experimental” version being released now supports multimodal input but only text output. Gemini 2.0 Flash is also capable of calling external tools like Google Search, or tools made by other companies, as well as executing computer code.
Consumers can get in on the fun, too. Gemini chatbot users can now choose to have the chatbot powered by the Gemini 2.0 Flash (experimental) model. Google says it’ll put Gemini 2.0 models under the hood of more of its apps and services next year.
Gemini’s second generation is focused on powering AI agents capable of taking steps on their own and calling on resources they need. The models can take a very large set of instructions and (multimodal) file inputs from the user, then use planning, reasoning, and function-calling (such as conducting a web search) to produce an answer.
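The loop described above (instructions in, planning and function-calling in the middle, an answer out) can be sketched in a few lines. This is a generic illustration of the agent pattern, not the actual Gemini API; the model stub, tool names, and message format are all hypothetical stand-ins.

```python
# Minimal sketch of an agent function-calling loop.
# All names here (web_search, fake_model, message format) are
# hypothetical illustrations, not Google's real API.

def web_search(query: str) -> str:
    # Hypothetical tool: a real agent would hit a search backend here.
    return f"Top result for {query!r}"

TOOLS = {"web_search": web_search}

def fake_model(messages):
    # Stand-in for a model call: if no tool result exists yet,
    # "plan" a search; otherwise produce a final answer.
    tool_results = [m for m in messages if m["role"] == "tool"]
    if not tool_results:
        return {"type": "tool_call", "name": "web_search",
                "args": {"query": messages[0]["content"]}}
    return {"type": "final",
            "content": "Answer based on: " + tool_results[-1]["content"]}

def run_agent(user_prompt: str) -> str:
    messages = [{"role": "user", "content": user_prompt}]
    while True:
        step = fake_model(messages)
        if step["type"] == "tool_call":
            # Execute the requested tool and feed the result back in.
            result = TOOLS[step["name"]](**step["args"])
            messages.append({"role": "tool", "content": result})
        else:
            return step["content"]
```

A real agent substitutes an actual model call for `fake_model` and real tools (search, code execution) for the registry, but the control flow is the same: loop until the model stops requesting tools and emits a final answer.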
The wider skill set is showcased in a couple of experimental agents, one for a mobile device and one for a web browser.
At the company’s developer event earlier this year, Google demonstrated a multimodal agent called Project Astra that can react to and reason about real-time video seen through a phone camera, as well as audio (including speech) heard through the device’s microphones. Gemini 2.0, Google says, will give the agent better conversational skills and the ability to call on Google Search and Maps. Astra is nowhere near being released to the public, however.
Gemini 2.0 will enable another experiment called Project Mariner, an agent that understands the images, text, code, and other elements within a browser window, then performs tasks based on that input via a Chrome browser extension. Google says the agent, which is available only to a group of “trusted testers,” is often slow and inaccurate today, but will improve rapidly.
“If Gemini 1.0 was about organizing and understanding information,” Pichai said in his blog post, “Gemini 2.0 is about making it much more useful.”