Google makes Gemini 2.0 available to all, announces 2.0 Pro Experimental

Hey Questers ❤️ 



The new update and latest models have showcased some exceptional performance across benchmarks.

Figure 1


Google has made its updated Gemini 2.0 Flash generally available through the Gemini API in Google AI Studio and Vertex AI, meaning developers can now build production applications with 2.0 Flash. Google kicked off its agentic era in December 2024 by launching an experimental version of Gemini 2.0 Flash, which it described as a highly efficient workhorse model for developers, offering low latency and enhanced performance.
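With 2.0 Flash now generally available through the Gemini API, a production call is a few lines with the google-genai Python SDK. A minimal sketch, assuming the model ID string "gemini-2.0-flash" and a `GEMINI_API_KEY` environment variable; the request is assembled in a network-free helper so the shape of the call is visible even without a key:

```python
import os

# GA model ID as named in Google's docs at the time of writing;
# treat the exact string as an assumption that may change.
MODEL_ID = "gemini-2.0-flash"

def build_request(prompt: str) -> dict:
    """Assemble the keyword arguments for a generate_content call.

    Kept free of network calls so it runs without credentials; the
    actual API call is sketched below, gated on an API key.
    """
    return {"model": MODEL_ID, "contents": prompt}

if __name__ == "__main__":
    req = build_request("Summarize the Gemini 2.0 lineup in one sentence.")
    print(req["model"])

    api_key = os.environ.get("GEMINI_API_KEY")
    if api_key:
        # pip install google-genai -- only reached when a key is configured
        from google import genai
        client = genai.Client(api_key=api_key)
        response = client.models.generate_content(**req)
        print(response.text)
```

The same request shape works against Vertex AI by constructing the client with Vertex credentials instead of an API key.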


This year, Google also updated 2.0 Flash Thinking Experimental in Google AI Studio, combining the model’s speed with the ability to apply reasoning to much more complex problems. The tech giant has also shipped updates to 2.0 Flash, including improved performance on key benchmarks, with image generation and text-to-speech capabilities coming soon.


Google has also released an experimental version of Gemini 2.0 Pro, which it claims is its best model yet for coding performance and complex prompts. The model is currently available through Google AI Studio, Vertex AI, and the Gemini app for Gemini Advanced users.


Google has also launched a new model, Gemini 2.0 Flash-Lite, touted as its most cost-efficient model, in public preview in Google AI Studio and Vertex AI. In addition, 2.0 Flash Thinking Experimental will be available to Gemini app users via the model dropdown on desktop and mobile.

Figure 2


Google has said that all these models will feature multimodal input with text output at release, with more modalities ready for general availability in the coming months. Pricing and specification details are on the Google for Developers blog.


According to Google, Gemini 2.0 Pro Experimental has its strongest coding performance and ability to handle complex prompts, with better understanding and reasoning over world knowledge than any model the company has released so far. It also features Google’s largest context window yet, 2 million tokens, which enables it to analyse and understand vast amounts of information, and it supports tool use such as calling Google Search and executing code.


On the other hand, 2.0 Flash-Lite surpasses 1.5 Flash: it offers improved quality while maintaining 1.5 Flash’s speed and cost, and it outperforms 1.5 Flash on several benchmarks. The model also comes with a 1-million-token context window and multimodal input, and is available in public preview on Google AI Studio and Vertex AI.
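The lineup above splits along two axes: reasoning strength (Pro Experimental) versus cost (Flash-Lite), with 2.0 Flash as the general-purpose default. A minimal sketch of choosing among them, using illustrative model ID strings that are assumptions based on Google's published naming, so check the Google for Developers blog for the current identifiers:

```python
def pick_model(needs_top_reasoning: bool, cost_sensitive: bool) -> str:
    """Map the Gemini 2.0 lineup described above to an illustrative model ID.

    The ID strings are assumptions; verify against Google's current docs.
    """
    if needs_top_reasoning:
        # Best coding performance and complex prompts; 2M-token context window
        return "gemini-2.0-pro-exp"
    if cost_sensitive:
        # 1.5 Flash's speed and cost with improved quality; 1M-token context
        return "gemini-2.0-flash-lite"
    # General-purpose low-latency workhorse, now generally available
    return "gemini-2.0-flash"
```

For example, `pick_model(False, True)` would select the Flash-Lite preview for a high-volume, cost-sensitive workload.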





Follow for more such informative threads ➡️ @RZ Nitin

Figure 3
Tech