Smart Displays
Led early conceptual design and established the system framework for the first Google Assistant smart displays.
Voice-only interfaces had severe limitations for conveying complex information, so we brought Assistant to screens to create a new category that combined natural conversation with rich, glanceable visual responses.
Context & Limitations
Voice interfaces are linear and transient; they struggle with lists, spatial data (like maps), and visual brand identity. You cannot easily compare three different weather forecasts or scroll through a recipe using only audio. Smart Displays bridged this gap, introducing a multimodal paradigm where voice commands yielded structured, visual answers.
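As an illustrative sketch only (the type names and fields below are hypothetical, not Google's actual Assistant APIs), a multimodal response can be modeled as a short spoken answer paired with a structured visual payload the screen can render side by side:

```typescript
// Hypothetical sketch: a multimodal response pairs a transient spoken
// summary with structured visual cards a display can render and compare.
type VisualCard =
  | { kind: "weather"; location: string; highC: number; lowC: number; icon: string }
  | { kind: "recipeStep"; stepNumber: number; instruction: string }
  | { kind: "map"; query: string; centerLat: number; centerLng: number };

interface MultimodalResponse {
  spokenText: string;   // what the device says aloud
  cards: VisualCard[];  // what the display shows; several items visible at once
}

// Example: one query yields a one-line spoken answer plus three comparable
// forecast cards, something audio alone cannot present side by side.
const response: MultimodalResponse = {
  spokenText: "Here's the weekend forecast for Portland, Seattle, and Boise.",
  cards: [
    { kind: "weather", location: "Portland", highC: 21, lowC: 12, icon: "partly_cloudy" },
    { kind: "weather", location: "Seattle", highC: 19, lowC: 11, icon: "rain" },
    { kind: "weather", location: "Boise", highC: 27, lowC: 14, icon: "sunny" },
  ],
};
```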
Ambient Computing Framework
Designing the experience end to end required a completely new interaction model, spanning system architecture, conversation design, interaction patterns, visual language, and motion design.
Unlike a mobile phone (a 1-foot, personal device) or a TV (a 10-foot, passive device), a Smart Display is a 3-to-5-foot ambient device that lives in shared spaces like kitchens and living rooms. The UI had to be legible from across the room, yet dense enough to be useful when touched. We created a cohesive framework based on "glanceability"—cards that surfaced exactly the right density of information based on the user's proximity and the context of their request.
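A minimal sketch of that idea, with hypothetical names and thresholds rather than the production framework: the same card content is rendered at a density chosen from the viewer's estimated distance.

```typescript
// Hypothetical sketch: pick how much detail a card shows based on the
// viewer's estimated distance. Thresholds are illustrative only.
type Density = "glance" | "ambient" | "touch";

function densityForDistance(distanceMeters: number): Density {
  if (distanceMeters > 3) return "glance";   // across the room: huge type, one fact
  if (distanceMeters > 1) return "ambient";  // conversational range: headline plus a few details
  return "touch";                            // within arm's reach: full, interactive detail
}

interface TimerCard { label: string; remainingSeconds: number; totalSeconds: number }

function renderTimer(card: TimerCard, density: Density): string {
  const mins = Math.floor(card.remainingSeconds / 60);
  const secs = String(card.remainingSeconds % 60).padStart(2, "0");
  switch (density) {
    case "glance":  return `${mins}:${secs}`;                     // just the countdown
    case "ambient": return `${card.label}: ${mins}:${secs}`;      // add the label
    default:        return `${card.label}: ${mins}:${secs} of ${Math.floor(card.totalSeconds / 60)} min (tap to adjust)`;
  }
}

// Example: the same timer reads differently at 4 m, 2 m, and 0.5 m.
const timer: TimerCard = { label: "Pasta", remainingSeconds: 272, totalSeconds: 600 };
console.log([4, 2, 0.5].map((d) => renderTimer(timer, densityForDistance(d))));
```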
Impact
Contributed to multiple design patents and the successful launch of a new product category (Home Hub, eventually rebranded as Nest Hub). This effort fundamentally extended the Assistant framework from a disembodied voice into a robust, multimodal OS across mobile, smart displays, and third-party hardware.
Defining a New Modality
Establishing a completely new product category meant there were no existing UI kits or established patterns to fall back on. My leadership process was deeply exploratory: I drove rapid, low-fidelity hardware prototyping and ran extensive spatial user research. I translated those physical insights into the foundational "glanceability" framework that ultimately scaled beyond this single product to become the interaction standard for the entire Assistant hardware ecosystem.
