Google just announced updates to its Gemini model: what was 1.0 and then 1.5 is now 2.0, and they're rolling it out today as a 'Flash' version.
So far, I can see how to switch to the Flash Experimental version on the web, but I'm not seeing that in my app ... yet.
It's getting to be quite insane how much AI we use in our daily lives. I tend to generate social media posts for organizations I work with, or use Pixel Studio for an image and then throw it into Canva with their AI.
Not to mention that I threw a bunch of information at NotebookLM to summarize for me, and it then generated a 20-minute podcast. Wild!
Here's their quick rundown of 2.0:
Today, Google is making AI more useful and accessible with new AI-powered updates. Leading this effort is Gemini 2.0, our latest, most capable AI model yet, designed for the agentic era. With new advances in multimodality, it will enable us to build new AI agents that can think several steps ahead, remember, and take actions with users’ guidance.
Building on the success of Gemini 1.0, launched a year ago, this new model pushes the envelope even further, bringing enhanced performance and new capabilities like native image and multilingual audio generation, plus native intelligent tool use – directly accessing Google products like Search or even executing code. These capabilities make it possible to build agents that can think, remember, plan, and even take action on your behalf.
Gemini 2.0 will power new AI experiences across Google products, making them more helpful and intuitive for everyone.
Below are the updates they presented:
- Gemini 2.0 Flash: The first experimental version of the 2.0 models; it can create or edit images and generate text in different tones. It will be available to Gemini Advanced users and to developers in AI Studio and Vertex AI. More info here.
- Project Astra: We shared updates on our research prototype of what a universal AI assistant can be. Powered by Gemini 2.0, Astra combines images, video, and voice into a timeline of events for more natural conversations and efficient information retrieval. More info here.
- Project Mariner: Introducing a new early research prototype built with Gemini 2.0, Mariner reimagines how people interact with the web. It combines Gemini's multimodal understanding capabilities with web interaction to automate tasks, taking action on your behalf. More info here.
- Project Jules: We’re also exploring AI agents that can more directly support developers. Jules is an experimental AI-powered code agent that you can offload tasks to, like resolving bugs and coding challenges, available to a group of trusted testers. More info here.
- Deep Research: A new capability for Gemini Advanced users that uses AI to explore complex topics on your behalf and deliver its findings in a comprehensive report. More info here.
- New AI-powered features for Android that enhance accessibility, creativity, and productivity. These innovations include more detailed image descriptions, intelligent note-taking, seamless file transfers via QR codes, and improved document scanning capabilities. More info here.
- Pixel Drop: Brings AI-powered personalization. Gemini offers tailored responses, while new features like smart call screening, auto-organized screenshots, and enhanced audio create a seamless user experience. Also, Gemini Live is now available in more languages. More info here.
To learn more about the new features, visit The Keyword blog.