Google Introduces Gemini 2.0: The New AI Model for the Agentic Era
- Key Highlights from Sundar Pichai’s Announcement on Gemini 2.0
- The Evolution of AI
- Focus on Information: Google’s core mission has always been to organize and make information accessible and useful.
- Gemini 1.0 Impact: Introduced in December 2023, it brought native multimodality and long-context understanding, spanning text, images, video, audio, and code.
- Introducing Gemini 2.0
- Advanced Multimodality: Includes native image and audio output alongside native tool use.
- Agentic Models: Built to understand the world better, think multiple steps ahead, and act under user supervision.
- Gemini 2.0 Features
- Flash Model Availability: Starting today for all Gemini users.
- Deep Research: A feature in Gemini Advanced to assist with exploring complex topics and compiling reports using advanced reasoning.
- Transforming Search with AI
- AI Overviews: Already reaching 1 billion users and helping them ask more complex questions.
- Gemini 2.0 Enhancements: Adds advanced reasoning, multimodal queries, coding, and support for complex math.
- Technical Foundation
- Built on Trillium TPUs: Sixth-generation custom hardware powered 100% of Gemini 2.0’s training and inference, and is now available to developers.
- Vision for the Future
- Universal Assistant: Gemini 2.0 aims to make information not just organized but actionable and transformative.
- Looking Ahead
- Broader Rollouts: Advanced capabilities in Search and AI Overviews to reach more languages and regions in 2025.
- Continued AI Innovation: A decade of investment across the full AI stack is driving this new era of intelligent tools and agents.
- “Gemini 2.0 represents a leap in AI capabilities, focusing on utility and action. It’s not just about understanding but enabling users to achieve more,” says Sundar Pichai.