Google held the keynote session of its annual developer-focused Google I/O event on Tuesday. During the session, the tech giant focused heavily on new developments on the artificial intelligence (AI) front and introduced various new AI models as well as new features for its existing infrastructure. A major highlight was the introduction of a two million token context window for Gemini 1.5 Pro, which is currently available to developers. A faster variant of Gemini, as well as Gemma 2, the next generation of Google's small language models (SLMs), were also introduced.
The event was kickstarted by CEO Sundar Pichai, who made one of the biggest announcements of the night — the availability of a two million token context window for Gemini 1.5 Pro. The company introduced a one million token context window earlier this year, but until now it was only available to developers. Google has now made it generally available in public preview, and it can be accessed through Google AI Studio and Vertex AI. The two million token context window, in contrast, is exclusively available via waitlist to developers using the API and to Google Cloud customers.
With a context window of two million tokens, Google claims, the AI model can process two hours of video, 22 hours of audio, more than 60,000 lines of code, or more than 1.4 million words in one go. Besides improving contextual understanding, the tech giant has also improved Gemini 1.5 Pro's code generation, logical reasoning, planning, multi-turn conversation, and understanding of images and audio. The tech giant is also integrating the AI model into Gemini Advanced and Workspace apps.
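For developers with waitlist access, a long-context request might look like the sketch below. It assumes Google's publicly documented `google-generativeai` Python SDK; the `fits_in_context` helper, the rough 4-characters-per-token estimate, and the file name are our own illustrative assumptions, not anything stated in the announcement.

```python
import os

# Rough heuristic: roughly 4 characters per token for English text.
# This is an assumption for illustration, not an official tokenizer.
CHARS_PER_TOKEN = 4
CONTEXT_WINDOW = 2_000_000  # Gemini 1.5 Pro's expanded context window


def fits_in_context(text: str, window: int = CONTEXT_WINDOW) -> bool:
    """Crudely estimate whether `text` fits in the model's context window."""
    return len(text) / CHARS_PER_TOKEN <= window


# The SDK call below only runs when an API key is configured; it also
# requires waitlist access to the two million token window.
if os.getenv("GOOGLE_API_KEY"):
    import google.generativeai as genai

    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
    transcript = open("long_meeting_transcript.txt").read()  # hypothetical input
    if fits_in_context(transcript):
        model = genai.GenerativeModel("gemini-1.5-pro")
        response = model.generate_content(
            ["Summarise the key decisions in this transcript:", transcript]
        )
        print(response.text)
```

The local size check is purely a sanity guard; the API itself would reject over-long requests with its own error.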
Google has also introduced a new addition to the family of Gemini AI models. The new AI model, dubbed Gemini 1.5 Flash, is a lightweight model designed to be faster, more responsive, and cost-efficient. The tech giant said it has worked on lowering the model's latency. While solving complex tasks would not be its strength, it can handle tasks such as summarisation, chat applications, image and video captioning, data extraction from long documents and tables, and more.
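In practice, the split between the two models could be expressed as a simple routing choice: Flash for latency-sensitive, high-volume tasks, Pro for heavier reasoning. The sketch below assumes the `google-generativeai` SDK and model identifiers along the lines of Google's public naming; the `pick_model` helper and its task categories are hypothetical.

```python
import os


def pick_model(task: str) -> str:
    """Route a task type to a model tier (illustrative mapping, not Google's)."""
    # Tasks Google lists as Flash strengths: summarisation, chat,
    # captioning, and data extraction from long documents and tables.
    fast_tasks = {"summarisation", "chat", "captioning", "data-extraction"}
    return "gemini-1.5-flash" if task in fast_tasks else "gemini-1.5-pro"


# Hedged SDK usage; only runs when an API key is configured.
if os.getenv("GOOGLE_API_KEY"):
    import google.generativeai as genai

    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
    model = genai.GenerativeModel(pick_model("summarisation"))
    print(model.generate_content("Summarise this article: ...").text)
```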
Finally, the tech giant announced the next generation of its smaller AI models, Gemma 2. The model comes with 27 billion parameters but can run efficiently on GPUs or a single TPU host. Google claims that Gemma 2 outperforms models twice its size, though the company is yet to release its benchmark scores.