The Cabin as Canvas: Rethinking UX with Gemini in AI-Powered Vehicles
Jony Ive and Sam Altman recently introduced the idea of “io” — their vision of a third essential device after the computer and smartphone — through a beautifully articulated manifesto.
As we live through these potentially historic moments in design and technology, we are also witnessing another pivotal announcement: Google’s Gemini AI is making its way into vehicles powered by Android Auto.
While the “io” device from Jony and Sam is being shaped within OpenAI’s ecosystem, Gemini marks Google’s own parallel push to embed intelligence into our everyday tools — beginning with the automobile. These two efforts may stem from different houses, but they share a common origin: a belief that artificial intelligence is no longer just a feature, but the foundation of the next interface revolution.
We’re no longer just adding smart devices into our lives; we’re engaging with them in increasingly meaningful ways. These interactions are becoming deeper, more contextual, and more embedded — not just in our homes and in our hands, but now behind the wheel as well.
Consider this: Americans spend about two years of their lives inside a car. [source] So what does it mean when AI begins to join us during that massive slice of our daily lives?
Gemini’s integration into vehicles isn’t just a new feature — it marks the emergence of a new interaction channel for both users and product teams.
For years, the in-car experience has revolved around static interfaces: dashboards full of buttons, knobs, and more recently, touchscreen menus. But Gemini hints at a future where that interaction becomes fluid, natural, and human-centered.
This shift matters beyond automotive. As AI moves deeper into our environments — homes, wearables, vehicles — every physical space becomes a digital interface, and every product becomes a touchpoint in a broader ecosystem. For UX and PX (Product Experience) teams, the cabin is no longer just an enclosed space; it’s a dynamic, adaptive context for behavior-driven design.
From Static Screens to Fluid Conversations
Traditional car UX relied on a one-size-fits-all control logic: drive, tap, scroll. Gemini challenges this by introducing dynamic, context-aware interaction. The promise is simple but bold: a car that doesn’t just respond but understands.
This means UX designers can no longer focus solely on layout or hierarchy — they now must think about dialogue flow, personality, tone, and multimodal responsiveness. AI shifts the role of the car from passive interface to active co-pilot.
The Cabin Is the New Interface
As cars evolve into mobile living spaces, their interface expectations change dramatically. Drivers don’t just need directions anymore; they want music that matches their mood, summaries of their schedule, suggestions for a scenic detour, and even empathetic responses during long solo drives.
Gemini takes a big step toward making that kind of ambient, helpful, emotionally intelligent experience possible.
And while the idea of a voice assistant in a car isn’t new, what makes Gemini different is its capacity for conversation. It’s not about giving commands. It’s about having a partner that learns, adapts, and predicts what you might need next.
My Perspective: Gemini in a Real Driver’s Hands
This shift begins with Volvo. The Swedish automaker has long been one of Google’s closest partners in the automotive space. As one of the earliest adopters of Android Automotive OS, Volvo has already positioned itself as a testbed for deeper Google integrations. That relationship now continues with Gemini.
At launch, Gemini will be available in select Volvo models such as the EX90 and the new EX30 — vehicles already known for their clean design, minimalist interfaces, and human-centered interior UX.
But this isn’t a Volvo-exclusive story. Google plans to bring Gemini to other automakers using Android Auto and Android Automotive OS, including Honda, Acura, Lincoln, and Renault.
So while the first truly AI-augmented in-car experiences will start in a Volvo cockpit, they are part of a broader shift. One where voice, vision, and vehicle converge.
As an experience design enthusiast and a Volvo EX30 driver myself, I’ve begun to see what this future might look like. In fact, I explored this in more detail in a recent piece about why I chose Volvo over Tesla, where I reflected on the design choices and technology embedded in the EX30. Even before Gemini’s arrival, the Google-backed systems in my car already feel more alive, more aware, and more relevant than any other in-car system I’ve used. It’s clear that what’s coming next isn’t just about smarter software — it’s about a redefinition of the experience itself.
Designing the Relationship, Not Just the Interface
For product teams, this means designing not only features, but relationships. How should a car speak? How often? In what tone? How do we design for trust, not just usability?
UX designers will need to develop new toolkits — less about wireframes, more about behavior flows and voice identity. Product strategists must reframe AI from “a feature” to “a collaborator.” And everyone will have to consider the emotional stakes of placing a digital agent between the driver and the road.
Risks, Ethics & Open Questions
As we hand over more autonomy to systems like Gemini, several questions surface that designers and strategists must actively confront:
How do we prevent distraction when voice-based AI becomes more conversational?
What boundaries should exist between personalization and privacy in vehicles?
How do we ensure that AI recommendations remain transparent, unbiased, and safe?
When the AI makes a mistake, who is accountable — and how is that communicated to the driver?
These are not just technical issues, but ethical and emotional ones. If a car becomes a co-pilot, it must also become trustworthy. And that trust has to be designed.
Finally…
As AI moves inside the car, we’re no longer designing for users who sit still. We’re designing for motion, emotion, and context. We’re designing for moments that unfold while driving at 100 km/h, in silence, in rain, or under stars.
Will OpenAI one day build its own ambient AI product for vehicles — perhaps an “io for the road”? That remains an open question. But what’s clear is this: the era of passive interfaces is closing, and a new chapter of contextual, mobile AI design is opening wide.
And perhaps the most exciting part? We’re just getting started.
A Note on Design’s Evolution
As we reflect on these changes, it’s important to recognize that what we commonly think of as “design,” “UX,” or “UI” is itself evolving. No longer limited to screens, buttons, or layouts — experience design is becoming holistic, spatial, ambient. The car cabin is just one frontier where the object of design shifts from the visible to the invisible, from the static to the behavioral.
This isn’t just a technology shift — it’s a paradigm shift in how we think about interaction, intention, and immersion. And that makes this moment a historic one for everyone working at the intersection of design, AI, and everyday life.
The Cabin as Canvas: Rethinking UX with Gemini in AI-Powered Vehicles was originally published in product.blog on Medium, where people are continuing the conversation by highlighting and responding to this story.