Google has officially launched its latest AI models, including Gemini 2.0 Pro Experimental and Gemini 2.0 Flash Thinking. These new models aim to enhance AI reasoning, coding abilities, and overall understanding.
The move comes as competition in AI intensifies, with Chinese AI company DeepSeek gaining attention for its affordable and high-performing reasoning models. Google’s release positions Gemini 2.0 Flash Thinking as a direct competitor, making it available through the Gemini app for wider adoption.
Gemini 2.0 Pro Experimental is the next-generation flagship, following last year's Gemini 1.5 Pro. According to Google, it surpasses its predecessors in reasoning and problem-solving, with particular strength in coding and complex prompts, and it can execute code and use Google Search to produce more accurate responses.
Gemini 2.0 Pro has a two-million-token context window, allowing it to process vast amounts of information in a single query; for comparison, it could take in all seven Harry Potter books at once and still have room to spare. The model is now available in Vertex AI, Google AI Studio, and, for Gemini Advanced subscribers, the Gemini app.
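For developers, access through Google AI Studio goes via the Gemini API. The snippet below is a minimal sketch of such a call using the google-generativeai Python SDK; the model ID shown is an assumption for illustration and is not confirmed by this article.

```python
# Minimal sketch: calling a Gemini model through the Google AI Studio API
# with the google-generativeai Python SDK.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # API key obtained from Google AI Studio

# Hypothetical model ID for Gemini 2.0 Pro Experimental; check AI Studio
# for the current experimental model name.
model = genai.GenerativeModel("gemini-2.0-pro-exp")

# A long-context request could pass far more text than this; the two-million-token
# window is the model's stated limit, not a parameter set in the SDK call.
response = model.generate_content("Summarize the plot of a long novel in three sentences.")
print(response.text)
```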
Google has also rolled out Gemini 2.0 Flash Thinking, first announced in December, to all users of the Gemini app. And to counter DeepSeek's cost-efficient models, it has introduced Gemini 2.0 Flash-Lite, a successor to Gemini 1.5 Flash that offers improved performance at the same cost and speed.
With these latest advancements, Google is reinforcing its position in the AI space, offering a mix of high-performance and cost-effective models to cater to different needs.