We’re expanding our risk domains and refining our risk assessment process. AI breakthroughs are transforming our everyday lives, from advancing mathematics, biology and astronomy to realizing the potential of personalized education. As we build increasingly powerful AI models, we’re committed to responsibly developing our technologies and taking an evidence-based approach to staying ahead of emerging…
Can a single AI stack plan like a researcher, reason over scenes, and transfer motions across different robots—without retraining from scratch? Google DeepMind’s Gemini Robotics 1.5 says yes, by splitting embodied intelligence into two models: Gemini Robotics-ER 1.5 for high-level embodied reasoning (spatial understanding, planning, progress/success estimation, tool-use) and Gemini Robotics 1.5 for low-level visuomotor…
The Google Gemini 2.5 Flash Image model, affectionately known as Nano Banana, represents a significant leap in AI-powered image manipulation, moving beyond the scope of traditional editors. Nano Banana excels at complex tasks such as multi-image composition, conversational refinement, and semantic understanding, allowing it…
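For a taste of the conversational-editing workflow, here is a minimal sketch using the google-genai Python SDK. The model id `gemini-2.5-flash-image`, the input file, and the response handling are assumptions based on the SDK's general patterns, not details from this excerpt:

```python
from google import genai
from PIL import Image

# Hedged sketch of an image-editing call with the google-genai SDK.
# The model id and file names here are assumptions; check current docs.
client = genai.Client()  # expects GOOGLE_API_KEY in the environment

photo = Image.open("room.jpg")  # hypothetical input image
response = client.models.generate_content(
    model="gemini-2.5-flash-image",
    contents=[photo, "Replace the sofa with a mid-century leather couch"],
)

# Generated images come back as inline data parts alongside any text parts
for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        with open("edited_room.png", "wb") as f:
            f.write(part.inline_data.data)
```

Because edits are expressed in plain language, conversational refinement is just another `generate_content` call that includes the previous output image.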
In this tutorial, we explore advanced computer vision techniques using TorchVision’s v2 transforms, modern augmentation strategies, and powerful training enhancements. We walk through the process of building an augmentation pipeline, applying MixUp and CutMix, designing a modern CNN with attention, and implementing a robust training loop. By running everything seamlessly in Google Colab, we position…
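To give a flavor of the pieces involved, here is a minimal sketch of a v2 transform chain plus batch-level MixUp/CutMix. The transform choices, image size, and class count are placeholders, not the tutorial's exact recipe:

```python
import torch
from torchvision.transforms import v2

# A generic v2 preprocessing/augmentation pipeline (placeholder settings)
train_tf = v2.Compose([
    v2.ToImage(),                                # PIL/ndarray -> tv_tensors.Image
    v2.RandomResizedCrop(224, antialias=True),
    v2.RandomHorizontalFlip(p=0.5),
    v2.ToDtype(torch.float32, scale=True),       # uint8 [0, 255] -> float [0, 1]
    v2.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# MixUp and CutMix operate on whole batches of (images, integer labels)
images = torch.rand(8, 3, 224, 224)              # stand-in batch
labels = torch.randint(0, 10, (8,))
cutmix_or_mixup = v2.RandomChoice([v2.CutMix(num_classes=10),
                                   v2.MixUp(num_classes=10)])
mixed, soft_labels = cutmix_or_mixup(images, labels)
print(mixed.shape, soft_labels.shape)            # soft labels: (8, 10)
```

Note that MixUp/CutMix return soft (mixed) labels, so the training loop should use a loss that accepts probability targets, such as cross-entropy with soft labels.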
Acknowledgements
This work was developed by the Gemini Robotics team: Abbas Abdolmaleki, Saminda Abeyruwan, Joshua Ainslie, Jean-Baptiste Alayrac, Montserrat Gonzalez Arenas, Ashwin Balakrishna, Nathan Batchelor, Alex Bewley, Jeff Bingham, Michael Bloesch, Konstantinos Bousmalis, Philemon Brakel, Anthony Brohan, Thomas Buschmann, Arunkumar Byravan, Serkan Cabi, Ken Caluwaerts, Federico Casarini, Christine Chan, Oscar Chang, London Chappellet-Volpini, Jose Enrique…
What Do We Mean by “Physical AI”?
Artificial intelligence in robotics is not just a matter of clever algorithms. Robots operate in the physical world, and their intelligence emerges from the co-design of body and brain. Physical AI describes this integration, where materials, actuation, sensing, and computation shape how learning policies function. The term was…
When you’re new to data analysis in Python, pandas is usually the first library you learn and use. But Polars has surged in popularity, and it is often faster and more memory-efficient.
Built in Rust, Polars handles data processing tasks that would slow down other tools. It is designed for…
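As a minimal taste of the API, the sketch below runs a lazy filter-and-aggregate query; the file name "sales.csv" and its column names are hypothetical:

```python
import polars as pl

# Lazy query: Polars builds a plan, optimizes it, and only then executes
summary = (
    pl.scan_csv("sales.csv")                 # lazy scan: nothing is read yet
      .filter(pl.col("amount") > 0)
      .group_by("region")
      .agg(pl.col("amount").sum().alias("total_amount"))
      .collect()                             # optimize and run the plan
)
print(summary)
```

The lazy API is a key part of where the speedups come from: because the whole query is known up front, Polars can push filters down to the scan and parallelize the aggregation.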
Computer vision moved fast in 2025: new multimodal backbones, larger open datasets, and tighter model–systems integration. Practitioners need sources that publish rigorously, link code and benchmarks, and track deployment patterns—not marketing posts. This list prioritizes primary research hubs, lab blogs, and production-oriented engineering outlets with consistent update cadence. Use it to monitor SOTA shifts, grab…
Our new method could help mathematicians leverage AI techniques to tackle long-standing challenges in mathematics, physics and engineering. For centuries, mathematicians have developed complex equations to describe the fundamental physics involved in fluid dynamics. These laws govern everything from the swirling vortex of a hurricane to airflow lifting an airplane’s wing. Experts can carefully craft…
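For context, one of the classical systems in this area is the incompressible Euler equations; the excerpt does not name the exact equations studied, so take these as representative rather than definitive:

$$
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\,\mathbf{u} = -\nabla p,
\qquad \nabla \cdot \mathbf{u} = 0,
$$

where $\mathbf{u}$ is the fluid velocity field and $p$ is the pressure. A long-standing open question is whether smooth solutions to such equations can develop singularities in finite time.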
In this tutorial, we walk step by step through using Hugging Face’s LeRobot library to train and evaluate a behavior-cloning policy on the PushT dataset. We begin by setting up the environment in Google Colab, installing the required dependencies, and loading the dataset through LeRobot’s unified API. We then design a compact visuomotor policy that…
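As a rough sketch of that first step, loading PushT through LeRobot's dataset API might look like the following; the import path and the "lerobot/pusht" repo id follow LeRobot's documented usage at the time of writing and may differ across library versions:

```python
from torch.utils.data import DataLoader
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

# Hedged sketch: pull the PushT demonstrations from the Hugging Face Hub
dataset = LeRobotDataset("lerobot/pusht")
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# Each batch is a dict of tensors: camera frames, robot state, and actions
batch = next(iter(loader))
print({k: v.shape for k, v in batch.items() if hasattr(v, "shape")})
```

From here, a behavior-cloning policy is trained to regress the recorded actions from the observations, which is the supervised core of the tutorial.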