
Google I/O 2025 has wrapped, and it left us with plenty of tech news to digest. This year’s conference underscored Google’s commitment to innovation and gave a clear picture of where its technology is headed.
The event was packed, with 100 announcements spanning Google’s products and platforms. In this article, we’ll dive into those announcements so you can see what’s available now and what’s coming next.
Overview of Google I/O 2025
This year’s event was a huge success, focusing on artificial intelligence, Android updates, and improvements across Google Cloud.
The Google I/O 2025 theme visual featured an iridescent “IO” logo against a starry backdrop, reflecting the event’s AI focus. At Google I/O 2025, Google showcased how it’s weaving the latest AI advancements into just about all of its products. Major upgrades are coming to the Gemini AI app, new generative AI tools, and everything in between — including some truly incredible progress in Google’s AI models (plus new ways for you to access them).
Significance of the Google I/O Conference
The conference brought together developers, tech enthusiasts, and industry experts, and it offered a clear view of where technology is headed. The main topics were innovations in AI, enhancements to Android, and updates to Google Cloud services.
Those themes ran through most of the announcements and underscored Google’s commitment to exploring new technical possibilities.
The announcements from Google I/O 2025 matter well beyond the keynote stage. The AI developments in Search and the new assistant features, for instance, will change how people interact with Google’s products day to day.
Looking ahead, Google I/O 2025 reads like a roadmap for the tech industry, with an emphasis on innovation, collaboration, and new technical frontiers. With its focus on software and product updates, the conference has set the stage for an exciting year.
Innovations in Artificial Intelligence
Artificial intelligence was the central focus at Google I/O 2025. The event showed how AI is changing the way we use technology, covering areas like Search, Google Assistant, and accessibility.
AI Developments in Search
Google made big strides in search AI, with smarter algorithms that deliver better results. These developments help users find what they need faster and more easily.
- Improved search result accuracy
- Enhanced contextual understanding
- Better handling of complex queries
These updates are part of Google’s broader push to use AI to improve the user experience.
New Features in Google Assistant
Google Assistant received a major AI-driven update that makes it more conversational and easier to use. The new features include:
- Enhanced natural language processing capabilities
- More personalized responses based on user behavior
- Integration with more third-party services
These Google Assistant updates aim to make conversations with it feel smoother, and they show how AI is improving everyday digital interactions.
AI for Accessibility Initiatives
Google is using AI to make its platforms more accessible. They’re working on:
- Enhanced screen reader capabilities
- Improved image recognition for visually impaired users
- Better language translation services
These efforts reflect Google’s commitment to using AI for good and making its technology available to everyone.
In summary, Google I/O 2025 showcased Google’s ongoing AI work across Search, Google Assistant, and accessibility. These changes will have a significant impact across the tech world.
Below is a list of I/O 2025’s biggest highlights — and the best part is, many of these new features are available to try out today!
Ask Anything with AI in Search
Google Search is getting a major infusion of generative AI. A new “AI Mode” will let you chat with Search to get in-depth answers, use your camera for real-time search interactions, help with tasks like booking tickets, and even analyze data or shop more easily. Here are the highlights coming to Google Search:
AI Mode rolls out in Search (U.S.): Try it now! AI Mode is starting to roll out to everyone in the U.S. right within Search. If you want access immediately, you can opt in via Labs to enable it now.
“Deep Search” for thorough answers: For those questions where you want an even more thorough response, Google is adding Deep Search capabilities to AI Mode in Labs. This will enable deeper research-style answers in AI Mode.
“Search Live” with your camera: Live capabilities from Project Astra are coming to AI Mode in Labs. With the new Search Live (launching this summer), you’ll be able to have a back-and-forth conversation with Search about what you see in real time using your phone’s camera.
Agent-like help for tasks: Google is also bringing agentic capabilities from Project Mariner into AI Mode (via Labs). This starts with letting Search handle tasks like booking event tickets, making restaurant reservations, and scheduling local appointments on your behalf.
Data analysis and visualization: Coming soon: Need help crunching numbers or visualizing info? AI Mode in Labs will be able to analyze complex datasets and create graphics tailored to your query. This is initially focused on sports and finance queries.
AI-powered shopping experience: Google is introducing a new AI Mode shopping experience that combines generative AI with Google’s Shopping Graph. It can help you browse for inspiration, weigh considerations, and find the right product more easily.
Virtual try-on for apparel: Try it now! You can now virtually try on billions of apparel listings by uploading a photo of yourself. This “try on” experiment is rolling out to Search Labs users in the U.S. starting today; opt in to give it a go.
“Agentic” checkout in Search: Google showed off a new agentic checkout feature to help you buy products at a price that fits your budget. Just tap “Track price” on a product listing, set your target price, and you’ll get an alert if the price drops to that amount.
AI Overviews usage skyrockets: Google shared some updates on AI Overviews in Search. Since last I/O, the AI Overview feature (the AI snapshot at the top of results) has grown to 1.5 billion monthly users across 200 countries and territories. By Google’s count, that makes Search’s generative AI one of the most widely used AI products in the world.
More engagement from AI Overviews: In major markets like the U.S. and India, AI Overviews are driving a more than 10% increase in Google Search usage for the types of queries where these AI snapshots appear.
Gemini 2.5 coming to Search: Starting this week, Gemini 2.5 is coming to Search. Both AI Mode and AI Overviews in the U.S. will be powered by the newer Gemini 2.5 model, bringing even better AI responses.
Try New, Helpful Features for Gemini
Google’s Gemini AI app is getting a bunch of new tricks to make it more helpful and personal. You can have it quiz you on topics, connect it with your favorite Google apps to take actions, share your camera or screen in chats (now on iOS too), and more. Google is even testing an “Agent Mode” so Gemini can complete tasks for you. They also announced the Gemini app has over 400 million monthly users. Here’s what’s new for the Gemini app:
Interactive quiz mode: Try it now! The Gemini app is now an even better study partner thanks to a new interactive quiz feature. You can literally ask Gemini to “create a practice quiz on <topic>” and it will generate quiz questions to help you study.
Personalized “Live” connections: In the coming weeks, Gemini Live will become more personal by connecting to some of your Google apps. This means mid-conversation you can take actions like adding an event to your Calendar or getting details from Maps. Google is starting with integrations for Google Maps, Calendar, Tasks, and Keep, with more app connections to come later.
Camera & screen sharing on iOS: Try it now! Beginning today, Gemini Live’s camera and screen sharing capabilities are rolling out beyond Android: iPhone users can now share their camera view or screen in the Gemini app on iOS.
New “Create” menu in Canvas: Try it now! Google is introducing a Create menu within Canvas (starting today) that helps you explore everything the Gemini app’s Canvas feature can do. It lets you transform text into all kinds of outputs: interactive infographics, web pages, immersive quizzes, and even podcast-style Audio Overviews (in 45 languages!).
Upload files to Deep Research: Try it now! As of today, you can upload PDFs and images directly into Deep Research in the Gemini app. This means your AI research queries can draw from a mix of public information and specific documents you provide.
Deep Research with Drive/Gmail docs: Coming soon, you’ll be able to link your own content like documents from Drive or from Gmail as sources for Deep Research. You’ll even be able to customize what sources it pulls from (for example, focusing on academic papers).
Agent Mode for goal-oriented tasks: Google announced Agent Mode, an experimental feature where you can simply describe your end goal and let Gemini handle the steps to achieve it. An early version of Agent Mode in the Gemini app will be coming soon for Google AI Ultra subscribers.
Gemini in Chrome extension: Try it now! Gemini in Chrome is beginning to roll out on desktop. This lets you use the Gemini AI assistant directly in the Chrome browser. It’s launching for Google AI Pro and Ultra subscribers in the U.S. (with Chrome set to English) on Windows and macOS.
Gemini app hits 400M users: The Gemini app now boasts over 400 million monthly active users – a huge milestone for Google’s AI chatbot.
Learn About Advancements in Gemini Models
Google’s core Gemini AI models are getting big upgrades. They announced that Gemini 2.5 Pro is topping key AI benchmarks, introduced a preview of Gemini 2.5 Flash (a faster model optimized for coding and reasoning), and unveiled an experimental “Deep Think” mode for complex tasks. They’re also improving security (to guard against prompt injection) and even debuted a new text diffusion model. Developers get new goodies too, like thought summaries, “thinking budgets,” and support for an agent-to-agent protocol. Here’s a rundown of advances in the Gemini AI models:
Gemini 2.5 Pro leads benchmarks: With the latest update, Gemini 2.5 Pro is now the world-leading model on the WebDev Arena and LMArena leaderboards for AI performance.
Integrating LearnLM into Gemini: Google is infusing LearnLM directly into Gemini 2.5, making it the world’s top model for learning tasks. In Google’s latest report, Gemini 2.5 Pro outperformed all competitors across every category of learning science principles.
Gemini 2.5 Flash preview: Google introduced a new preview version of its flagship model called Gemini 2.5 Flash. This variant has stronger performance on coding and complex reasoning tasks, and it’s optimized for speed and efficiency.
2.5 Flash availability: Gemini 2.5 Flash is now available to everyone in the Gemini app. Google will also make this model generally available in Google AI Studio for developers and in Vertex AI for enterprises in early June, with 2.5 Pro following soon after.
“Deep Think” mode: Gemini 2.5 Pro will get even smarter with Deep Think, an experimental mode that enhances reasoning for highly complex math and coding problems.
Security upgrades: New capabilities are coming to both 2.5 Pro and 2.5 Flash, including advanced security safeguards. Google’s new security approach significantly increased Gemini’s protection rate against indirect prompt injection attacks during tool use, making the Gemini 2.5 series the most secure models to date.
Project Mariner in API: Google is bringing Project Mariner’s computer-use abilities into the Gemini API and Vertex AI. Companies like Automation Anywhere, UiPath, Browserbase, Autotab, The Interaction Company, and Cartwheel are already exploring this, and broader availability for developers to experiment is planned for this summer.
Thought summaries for transparency: Both Gemini 2.5 Pro and Flash will now include thought summaries in the Gemini API and in Vertex AI. These summaries take the model’s raw “thought” process and organize it into a clear format with headers, key details, and info about model actions (like when it uses tools).
“Thinking budgets” to control cost: Google launched Gemini 2.5 Flash with thinking budgets, and now it’s extending this feature to 2.5 Pro. Thinking budgets let developers balance latency vs. quality by controlling how many tokens the model can use to “think” before responding (you can even turn the extended reasoning off). Gemini 2.5 Pro with budgets will be generally available in the coming weeks for stable production use, alongside the general availability release of the model.
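To make this concrete, here is a minimal sketch of how a developer might set a thinking budget and request thought summaries using the google-genai Python SDK. The model id, budget value, and prompt are illustrative assumptions rather than confirmed values from the announcement.

```python
# Minimal sketch: thinking budgets + thought summaries with the google-genai SDK.
# Assumptions: the "gemini-2.5-flash" model id and the 1024-token budget are
# illustrative; check the current API docs for exact identifiers and limits.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="gemini-2.5-flash",  # assumed model id
    contents="Outline a plan to debug an intermittent test failure in CI.",
    config=types.GenerateContentConfig(
        thinking_config=types.ThinkingConfig(
            thinking_budget=1024,   # cap the tokens used for reasoning; 0 turns it off
            include_thoughts=True,  # ask for thought summaries alongside the answer
        )
    ),
)

# Thought summaries come back as parts flagged as "thought"; print them separately.
for part in response.candidates[0].content.parts:
    label = "[thought summary] " if getattr(part, "thought", False) else "[answer] "
    print(label + (part.text or ""))
```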
MCP support for easier integration: Google added native SDK support for Model Context Protocol (MCP) definitions in the Gemini API, making it easier to integrate Gemini with open-source tooling. They’re also exploring hosting MCP servers and other tools to simplify building agentic applications.
New Gemini Diffusion model: Google introduced a new research model called Gemini Diffusion. This text diffusion model learns to generate text or code by iteratively refining random noise into coherent output – similar to how their current image and video generation models work. Google will continue exploring different approaches to reduce latency across all Gemini models, including a faster 2.5 Flash Lite coming soon.
Access Our AI Tools with New Options
Google is rolling out new subscription plans to access its AI tools, offering higher usage limits and more capabilities. There’s a premium “Google AI Ultra” plan for power users (with tons of usage, 30 TB of storage, and even YouTube Premium included), and a more affordable “Google AI Pro” plan at $19.99/month with advanced features across Gemini and other AI products. They’re also giving college students a special free upgrade for a year. Here are the details:
Google AI Ultra plan: Google introduced Google AI Ultra, a new AI subscription plan with the highest usage limits and access to Google’s most powerful models plus premium features. The Ultra plan also includes 30 TB of cloud storage and comes bundled with YouTube Premium access.
Ultra availability & pricing: Google AI Ultra is available now in the U.S. (more countries coming soon). It’s priced at $249.99 per month, with a special introductory offer of 50% off for your first three months if you’re a first-time user.
Free upgrade for students: College students in the U.S., Brazil, Indonesia, Japan, and the U.K. are eligible for a free upgrade to the premium Gemini AI experience for an entire school year — with more countries to be added soon.
Google AI Pro plan: There’s also Google AI Pro, which for $19.99/month gives you a suite of AI tools with higher limits. The Pro plan upgrades your Gemini app experience and includes products like Flow, NotebookLM, and more, all with special features and increased rate limits compared to the free tier.
Explore Your Creativity with New Generative AI
Google unveiled a host of new generative AI tools for creativity. There’s a new version of their video generator (Veo 3) that can create videos with sound, big improvements to their image model (Imagen 4) for ultra-detailed images, and even a tool called “Flow” to help anyone make AI-generated films. In music, they expanded access to their music models (Lyria 2 and RealTime). Google is also teaming up with Hollywood creatives (like director Darren Aronofsky) on AI storytelling, and rolling out tools like SynthID to watermark and detect AI-generated content. Here are the creative AI highlights:
Veo 3 generative video: Try it now! Google announced Veo 3, the latest version of their generative video model which now produces video with audio. Veo 3 is available starting today in the Gemini app for Google AI Ultra subscribers in the U.S., and it’s also accessible via Vertex AI for developers.
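For developers, here is a hedged sketch of what requesting a clip programmatically might look like with the google-genai Python SDK. The model id, prompt, and polling interval are assumptions for illustration; video generation is a long-running operation, so the result is polled rather than returned immediately.

```python
# Hedged sketch: generating a short video with Veo via the google-genai SDK.
# The "veo-3.0-generate-preview" model id is an assumption; consult the docs
# for the identifier actually exposed to your account.
import time
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")

operation = client.models.generate_videos(
    model="veo-3.0-generate-preview",  # assumed model id
    prompt="A timelapse of fog rolling over a coastal city at dawn, with ambient sound",
)

# Video generation runs asynchronously; poll the operation until it finishes.
while not operation.done:
    time.sleep(10)
    operation = client.operations.get(operation)

for generated in operation.response.generated_videos:
    client.files.download(file=generated.video)   # fetch the bytes from the service
    generated.video.save("veo_clip.mp4")          # write them to disk
```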
Upgrades to Veo 2: Google added new capabilities to the existing Veo 2 model, including enhanced camera controls (for better scene framing and angles), outpainting (extending imagery beyond the original frame), and the ability to add or remove objects in generated videos.
AI-made short films (Flow TV): Google showcased four new short films created using Veo (along with other AI tools and techniques). You can watch these films from Google’s partners and find other inspiring generative content on Flow TV.
Imagen 4 for image generation: Try it now! Imagen 4 is Google’s latest text-to-image model, and it delivers remarkable fidelity in fine details like skin texture, fur, and intricate patterns. It excels at both photorealistic and abstract styles. Imagen 4 is available today in the Gemini app.
Imagen 4 in more products: Imagen 4 is also integrated into Whisk (Google’s experimental image creation tool), and it’s available to enterprise customers via Vertex AI.
“Fast” Imagen 4 coming: Soon, Imagen 4 will be offered in a Fast version that generates images up to 10× faster than Imagen 3.
Higher resolution images: Imagen 4 can produce images in various aspect ratios and up to 2K resolution, so you can get even higher-quality visuals suitable for printing or large presentations.
Better text in images: The model is significantly better at handling text and typography within images, making it easier to create things like greeting cards, posters, or even comics with legible text.
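Here is a minimal, hedged sketch of what generating an image like that could look like through the google-genai Python SDK; the Imagen 4 model id and config values are assumptions, since the exact identifiers may differ from what is shown here.

```python
# Hedged sketch: text-to-image generation with the google-genai SDK.
# The "imagen-4.0-generate-preview" model id and config fields are assumptions.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

result = client.models.generate_images(
    model="imagen-4.0-generate-preview",  # assumed model id
    prompt="A hand-lettered greeting card that reads 'Happy Birthday, Ada!'",
    config=types.GenerateImagesConfig(
        number_of_images=1,
        aspect_ratio="3:4",   # portrait layout, suitable for a card
    ),
)

# Write the returned image bytes straight to disk.
with open("card.png", "wb") as f:
    f.write(result.generated_images[0].image.image_bytes)
```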
Introducing Flow (AI filmmaking): Try it now! Flow is a new AI-powered filmmaking tool. Using Google DeepMind’s cutting-edge models, Flow lets you craft cinematic films by controlling characters, scenes, and styles. It effectively enables more people to create visually striking movies with AI assistance.
Flow availability: Flow is available starting today for U.S. users on the Google AI Pro and Ultra plans.
Music AI Sandbox (Lyria 2) expansion: Back in April, Google expanded access to its Music AI Sandbox, which is powered by Lyria 2. Lyria 2 is a powerful music composition model that enables endless musical exploration. It’s now available to creators through YouTube Shorts and to enterprise customers via Vertex AI.
AI-generated vocals: Lyria 2 can arrange rich vocals that sound like anything from a solo singer to a full choir.
Lyria RealTime model: Lyria RealTime is an interactive music generation model that lets anyone create, control, and perform music in real time. This model is now accessible via the Gemini API in Google AI Studio and in Vertex AI.
Hollywood partnership (Primordial Soup): Google announced a partnership between Google DeepMind and Primordial Soup, a new venture founded by filmmaker Darren Aronofsky to explore storytelling innovation. Primordial Soup is producing three short films using Google DeepMind’s generative AI models, tools, and capabilities (including Veo).
First AI film “ANCESTRA”: The first film in this collaboration, titled “ANCESTRA,” is directed by award-winning filmmaker Eliza McNitt and will premiere at the Tribeca Festival on June 13, 2025.
SynthID Detector for AI content: To make it easier to identify AI-generated content, Google announced SynthID Detector, a verification portal that helps quickly determine if an image was watermarked with Google’s SynthID technology.
SynthID usage stats: Since its launch, SynthID has already watermarked over 10 billion pieces of content.
SynthID Detector rollout: Google is starting to roll out the SynthID Detector portal to a small group of early testers. Journalists, media professionals, and researchers can join the waitlist to request access to the SynthID Detector as it expands.
Take a Look at the Future of AI Assistance
Google offered a glimpse into the future of AI assistants and wearable tech. They’re pushing Gemini 2.5 Pro toward becoming a “world model” that can plan and imagine like a human. Project Astra — Google’s prototype universal AI assistant — got major updates (more natural voice output, better memory, and improved computer control), and they demoed how it could assist the blind community and even act as a conversational tutor for students. Google also showed an early look at Android-powered augmented reality: Samsung’s upcoming XR headset and prototype smart glasses that use Gemini to translate language in real time and more. Here are the highlights in AI assistants and AR:
Toward a “world model” AI: Google is working to extend its most advanced multimodal model, Gemini 2.5 Pro, into what they call a “world model”. In essence, this means an AI that can form plans and imagine new experiences by understanding and simulating aspects of the world, much like a human brain does.
Project Astra updates: Updates to Project Astra – Google’s research prototype of a universal AI assistant – were showcased. Astra now has more natural voice output (with native-quality audio), improved long-term memory, and better computer control. Over time, Google plans to bring these capabilities into Gemini Live, new AI experiences in Search, a Live API for developers, and even new form factors like Android-powered AR glasses.
Assisting blind/low-vision users: As part of Project Astra research, Google partnered with the visual interpreting service Aira to prototype an AI system that assists members of the blind and low-vision community with everyday tasks, complementing the tools those users already rely on.
AI tutoring prototype: With Project Astra, Google is also prototyping a conversational tutor that can help students with homework. This AI tutor can follow along with what you’re working on, walk you through problems step-by-step, point out mistakes, and even generate diagrams to explain concepts if you get stuck.
AI tutor coming to products: This experimental tutoring experience will be coming to Google products later this year. (Android users can sign up for the waitlist via the Trusted Tester program for a preview.)
Samsung’s XR headset (Project Moohan): Google gave a peek at the first Android XR device coming later this year: Samsung’s Project Moohan headset. This mixed-reality headset will offer immersive experiences on an “infinite screen.”
Gemini on AR glasses: Google also showed a sneak peek of how Gemini will work on glasses with Android XR. In real-world demos, a wearer could use voice commands with smart glasses for things like messaging friends, making calendar appointments, getting turn-by-turn directions, taking photos, and more.
Live language translation in glasses: In one demo, Google showed two people conversing in different languages via the AR glasses, with live translation happening in near real time. It demonstrated the potential for these glasses to break down language barriers face-to-face.
Prototype AR glasses testing: Google’s prototype Android XR smart glasses are now in the hands of trusted testers. These testers are helping ensure the product is truly useful and that it respects privacy for the wearer and those around them.
Partnerships with eyewear brands: Google is partnering with trendy eyewear brands – starting with Gentle Monster and Warby Parker – to design Android XR glasses that people will actually want to wear all day (i.e. making sure the tech looks good!).
Building an XR glasses ecosystem: Google is expanding its Samsung partnership to go beyond headsets and develop Android XR glasses. Together they’re creating a software platform and reference hardware to enable other companies to build great AR glasses. Developers will be able to start building apps for this new platform later this year.
Communicate Better, in Near Real Time
Google is reinventing how we communicate across distances and languages. Remember Project Starline, that 3D telepresence booth? It’s evolving into a platform called Google Beam, and Google is teaming up with the likes of Zoom and HP to bring the first Beam devices to market for businesses. On the software side, Google Meet got an AI-powered live translation feature that preserves your voice. Here’s what they announced for real-time communication:
Project Starline → Google Beam: A few years ago Google introduced Project Starline, a 3D video chat booth that made it feel like two people were in the same room. Now, that concept is becoming a platform called Google Beam.
Beam devices with Zoom & HP: Google is working with Zoom and HP to bring the first Google Beam devices to market for select customers later this year. They’re also partnering with other industry leaders (like Diversified and AVI-SPL) to roll out Google Beam to businesses and organizations worldwide.
Beam at InfoComm: You’ll even be able to see an early Beam device soon – HP will showcase one of the first Google Beam products at the InfoComm conference in a few weeks.
Real-time speech translation in Meet: Google announced speech translation for Google Meet, which is available now. This feature translates speech in near real time and, impressively, retains the speaker’s voice quality, tone, and expressiveness. The result is a free-flowing conversation where everyone can understand each other with no language barrier.
Build Better with Developer Launches
Developers got a ton of love at I/O 2025 – Google launched a slew of tools and updates to help build with AI. Highlights include new model previews (like multi-speaker text-to-speech and audio-visual input), a new open multimodal model (Gemma 3n) that runs on everyday devices, an overhauled Google AI Studio, and even an “agentic” mode coming to Colab. They also introduced specialized models for sign language (SignGemma) and healthcare (MedGemma), and announced improvements in Android Studio, Firebase, Flutter, and more. Here’s a rundown of the developer-focused launches:
Gemini developer momentum: Over 7 million developers are now building with Gemini – that’s 5× more than this time last year.
Gemini usage on Cloud up 40×: Gemini’s usage on Vertex AI (Google Cloud’s ML platform) is up 40 times compared to a year ago.
Multi-speaker text-to-speech: Google is releasing new preview models for text-to-speech in Gemini 2.5 Pro and Flash. Uniquely, these support multiple speakers – enabling AI-generated speech with two distinct voices. The output is very expressive (capturing subtle nuances like whispers) and works in 24+ languages, even switching between languages seamlessly.
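As a rough sketch of how two-voice output could be requested through the google-genai Python SDK: the TTS model id, speaker labels, and prebuilt voice names below are assumptions used for illustration.

```python
# Hedged sketch: two-speaker text-to-speech with the google-genai SDK.
# The model id ("gemini-2.5-flash-preview-tts") and voice names are assumptions.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

script = "Host: Welcome back to the show!\nGuest: Thanks, it's great to be here."

response = client.models.generate_content(
    model="gemini-2.5-flash-preview-tts",  # assumed TTS-capable model id
    contents=script,
    config=types.GenerateContentConfig(
        response_modalities=["AUDIO"],
        speech_config=types.SpeechConfig(
            multi_speaker_voice_config=types.MultiSpeakerVoiceConfig(
                speaker_voice_configs=[
                    types.SpeakerVoiceConfig(
                        speaker="Host",
                        voice_config=types.VoiceConfig(
                            prebuilt_voice_config=types.PrebuiltVoiceConfig(voice_name="Kore")
                        ),
                    ),
                    types.SpeakerVoiceConfig(
                        speaker="Guest",
                        voice_config=types.VoiceConfig(
                            prebuilt_voice_config=types.PrebuiltVoiceConfig(voice_name="Puck")
                        ),
                    ),
                ]
            )
        ),
    ),
)

# The response carries raw audio bytes; wrap or convert them as needed (e.g. to WAV).
audio_bytes = response.candidates[0].content.parts[0].inline_data.data
with open("dialogue.pcm", "wb") as f:
    f.write(audio_bytes)
```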
Live API gets vision & voice: The Live API is introducing a preview of audio-visual input and native audio output for dialogue, so developers can directly build richer conversational experiences that see and speak (think voice assistants that can also analyze images/video).
Jules coding agent (beta): Try it now! Jules is a parallel, asynchronous AI agent for your GitHub repos to help improve and understand your codebase. It’s now open to all developers in beta. With Jules, you can delegate multiple coding tasks from your backlog at once, and even get an audio summary of recent changes in your code.
Gemma 3n multimodal model: Gemma 3n is a new fast, efficient open multimodal model designed to run smoothly on everyday devices like phones, laptops, and tablets. It can handle audio, text, image, and video inputs. Gemma 3n is rolling out initially on Google AI Studio and Google Cloud, with plans to expand it to open-source tools soon.
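Since Gemma 3n is exposed through Google AI Studio, calling it should look much like calling any other model with the google-genai SDK; the variant id below is an assumption about the preview naming.

```python
# Hedged sketch: prompting a Gemma 3n preview model via Google AI Studio's API.
# The "gemma-3n-e4b-it" variant id is an assumption.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="gemma-3n-e4b-it",  # assumed Gemma 3n variant id
    contents="Summarize this voicemail in one sentence: 'Hi, it's Sam, running 15 minutes late.'",
)
print(response.text)
```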
Google AI Studio updates: Try it now! The Google AI Studio interface has been redesigned with a cleaner UI, integrated documentation, usage dashboards, new sample apps, and a fresh “Generate Media” tab to experiment with Google’s latest generative models (including Imagen, Veo, and native image generation).
Colab goes agentic: Colab (Colaboratory) will soon offer a fully agentic experience. You’ll be able to simply tell Colab what you want to do, and watch as it takes actions in your notebook – fixing errors and transforming code to help you solve tough problems faster, automatically.
SignGemma for sign language: SignGemma is an upcoming open model that translates sign language to spoken-language text (currently best at ASL→English). This will enable developers to build apps and integrations for Deaf and hard-of-hearing users.
MedGemma for medical AI: MedGemma is Google’s most capable open model for multimodal medical text and image comprehension. It’s designed for developers building health applications (for example, analyzing medical images). MedGemma is available now as part of the Health AI Developer Foundations suite.
Stitch for UI design to code: Stitch is a new AI-powered tool that generates high-quality UI designs and the corresponding front-end code (for web or mobile) from either natural language descriptions or image mockups.
Natural language testing (Journeys): Try it now! Google announced Journeys in Android Studio, which lets developers test critical user journeys by simply describing test steps in natural language. Using Gemini under the hood, it will execute those steps in your app to ensure things work as expected.
Auto dependency upgrades: Version Upgrade Agent in Android Studio is coming soon. This agent will automatically update your project’s dependencies to the latest compatible versions, reading through release notes, building the project, and even fixing any build errors that occur – all on its own.
Google Pay API updates: Google introduced updates across the Google Pay API to help developers enable smoother and safer checkout experiences. Notably, this includes support for Google Pay in Android WebViews, among other improvements for web and mobile payments.
Flutter 3.32: Flutter 3.32 was released with new features to speed up development and enhance app performance/quality. (For example, improved productivity tools and engine optimizations.)
Multi-agent systems (ADK & A2A): Google shared updates on its Agent Development Kit (ADK) and the Vertex AI Agent Engine, along with a new Agent2Agent (A2A) protocol. A2A enables interactions between multiple AI agents, allowing for more complex multi-agent workflows.
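As a rough idea of the developer experience, here is a minimal sketch of defining a single agent with the open-source ADK Python package; the package layout, tool function, and model id are assumptions based on ADK’s published quickstart style.

```python
# Hedged sketch: a single-tool agent defined with the Agent Development Kit (ADK).
# Assumptions: `pip install google-adk`, the Agent constructor arguments shown
# here, and the "gemini-2.0-flash" model id; check the ADK docs for specifics.
from google.adk.agents import Agent

def get_order_status(order_id: str) -> dict:
    """Toy tool: look up an order's status (hypothetical helper for this sketch)."""
    return {"order_id": order_id, "status": "shipped"}

root_agent = Agent(
    name="support_agent",
    model="gemini-2.0-flash",          # assumed model id
    description="Answers order-status questions for a storefront.",
    instruction="Use the get_order_status tool when the user asks about an order.",
    tools=[get_order_status],
)
# In a multi-agent setup, agents like this one could coordinate over the A2A protocol.
```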
Wear OS 6 Developer Preview: Try it now! The Developer Preview for Wear OS 6 is now available. It introduces the new Material 3 (Expressive) design for watch apps, updated dev tools for building watch faces, richer media playback controls, and a Credential Manager for easier authentication on wearables.
Gemini Code Assist goes GA: Try it now! Google announced that Gemini Code Assist for individuals (and the version for GitHub teams) are now generally available — developers can get started in under a minute. Both the free and paid tiers of Gemini Code Assist are now powered by Gemini 2.5, offering advanced coding help. It assists with tasks like generating visually rich web app components, transforming code, and speeding up edits.
New Code Assist features: One new feature you can now try in Gemini Code Assist is chat history and threads. This lets you easily resume where you left off in a coding conversation and branch into new threads, making the coding assistant more context-aware and interactive.
Firebase AI updates: Firebase announced a host of new features to help developers build AI-powered apps more easily. These include updates to the recently launched Firebase Studio and Firebase AI Logic – tools that let developers integrate AI capabilities into their apps faster.
Google Cloud & NVIDIA community: Google introduced a new Google Cloud and NVIDIA developer community. It’s a dedicated forum where developers can connect with experts from Google and NVIDIA, collaborate, and get guidance on AI development (especially around GPU acceleration and cloud deployment).
Google AI Edge Portal: Google unveiled Google AI Edge Portal in private preview. It’s a new Google Cloud solution for testing and benchmarking on-device machine learning (ML) at scale – essentially helping developers manage and evaluate AI models running on edge devices.
Work Smarter with AI Enhancements
Google announced some nifty AI enhancements to help users work and learn more efficiently. Gmail is getting smarter reply suggestions that adapt to your context and writing style. There’s a new mobile app for NotebookLM so you can generate audio summaries on the go, plus improvements like adjustable lengths for those summaries and even upcoming video summaries. Google also previewed a fun experiment called Sparkify that turns your questions into animated videos, and they’re refining the Learn About conversational learning tool based on feedback. Here are the highlights:
Smarter Gmail replies: Gmail is getting new, AI-powered smart replies that use your own context and tone. These replies will pull from your past emails and Drive files to draft a response that sounds like you, matching your typical writing style. (You’ll be able to try this later in the year.)
Google Vids for Pro/Ultra: Try it now! Google Vids is now available to Google AI Pro and Ultra users. (Google Vids is Google’s AI-assisted video creation app, now unlocked for these paid subscribers.)
NotebookLM mobile app: Try it now! Starting today, the NotebookLM app is available on the Google Play Store and Apple App Store. This lets users take Audio Overviews (AI-generated narrated summaries of content) on the go, right from their mobile devices.
Flexible Audio Overviews: Also for NotebookLM, Google is adding more flexibility to Audio Overviews. You’ll be able to choose the ideal length for your AI-generated audio summary – whether you want just a quick overview or a deeper exploration, you’re in control of how detailed it is.
Video Overviews coming: Video Overviews are coming soon to NotebookLM. This feature will turn dense information (like PDFs, docs, images, diagrams, and key quotes) into more digestible narrated video summaries.
NotebookLM preview content: Google even shared one of its own NotebookLM notebooks with the public – which included a couple of preview clips of the upcoming Video Overviews feature in action.
Sparkify experiment: Sparkify, a new Labs experiment, helps turn your questions into short animated videos, thanks to Google’s latest Gemini and Veo models. These video-generation capabilities will be coming to Google products later this year. In the meantime, you can sign up for the waitlist for a chance to try Sparkify out early.
Improvements to Learn About: Google is also rolling out improvements (based on user feedback) to Learn About, a Labs experiment where conversational AI meets your curiosity. The updates make the experience of learning through dialogue even better and more personalized.
Finally… here are a few numbers:
To wrap up, Google left us with a couple of jaw-dropping stats about AI’s growth:
Massive AI usage growth: As Sundar Pichai shared in the opening keynote, people are adopting AI more than ever before. For example, this time last year Google was processing 9.7 trillion tokens per month across its products and APIs. Now, they’re processing over 480 trillion tokens per month — that’s 50 times more than a year ago.
“AI” mentioned 92 times: And in case you were counting, the word “AI” was said 92 times during the keynote. But funnily enough, the term “Gemini” was apparently mentioned even more, making it the true star of the show!
Community Engagement and Developer Support
Google I/O 2025 also put a strong emphasis on community engagement and developer support, marking a new chapter in how Google works with its developer community. Those efforts were clear throughout the conference.
New Initiatives for Developers
Google introduced several new initiatives to help developers, including better tools and resources for building apps. These moves are part of Google’s plan to boost engagement and collaboration across the developer community.
- Improved documentation and developer guides
- Enhanced support for open-source projects
- New APIs and SDKs for app development
Collaboration with Open Source Communities
Google also reaffirmed its commitment to collaborating with open-source communities. By working with these groups, Google aims to drive innovation and create solutions that reach a broad set of users.
The conference showcased many examples of successful partnerships, projects that have made significant contributions to the open-source world.
Conclusion and Expectations for the Future
Google I/O 2025 has ended, leaving us with lots of exciting news. Google keeps pushing technology forward, making big strides in artificial intelligence, Android, and more.
We’re eager to see how these changes will shape the tech world. They promise better experiences for users and more room for developers to grow, and Google’s focus on innovation and its customers bodes well for the future.
The tech world is set to change significantly thanks to Google’s work in AI, cloud computing, and beyond. We’re looking forward to seeing what’s next for Google and the rest of the industry. Stay tuned for updates on the latest news.
Before we wrap up, just a quick note — we’ll be diving deeper into many of these new announcements as individual use cases in upcoming articles, so stay tuned!
In the meantime, here’s the full recording of the event — enjoy the show and get ready to explore what’s next in AI and tech with Google!