MS and BS from Rensselaer Polytechnic Institute in Human-Computer Interaction and Information Technology. Started at Google Zurich in 2012, designing conversational interfaces and smart displays. Led Duplex on the web, using ML to automate web browsing for everyday tasks. Before that, shaped Google Maps personal context features and the original Google Trips.
Now at Google Cambridge, focusing on Android System UI and intelligence. The work spans screen context, input methods, and text and image understanding. Less about individual apps, more about making the entire system aware and adaptive. Prototyping with AI coding tools to rapidly validate the technical feasibility of design concepts.
The same architectural patterns appear across defense, wellness, and consumer computing: signal fusion over discrete events, probabilistic reasoning over binary states, adaptive density over static interfaces, human-on-the-loop over human-in-the-loop.
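A minimal sketch of the first two patterns, with hypothetical signal names and weights (an illustration, not any product's actual logic): several noisy context signals are fused into a continuous confidence score that downstream logic can reason over gradually, instead of reacting to a single binary event.

```kotlin
// Hypothetical illustration: fuse noisy context signals into one
// probability instead of flipping a binary "busy" flag on a single event.
data class Signal(val name: String, val value: Double, val weight: Double)

// Weighted fusion, clamped to [0, 1]; weights here are assumed, not tuned.
fun fuse(signals: List<Signal>): Double {
    val totalWeight = signals.sumOf { it.weight }
    if (totalWeight == 0.0) return 0.0
    return (signals.sumOf { it.value * it.weight } / totalWeight).coerceIn(0.0, 1.0)
}

fun main() {
    val busy = fuse(
        listOf(
            Signal("calendar_event_active", 1.0, 0.5),
            Signal("typing_cadence_high", 0.7, 0.3),
            Signal("screen_locked", 0.0, 0.2),
        )
    )
    // Downstream logic acts on the probability, not a boolean:
    // e.g. defer a notification at 0.71 confidence rather than block outright.
    println("P(busy) = %.2f".format(busy))
}
```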
From Kill Chain to Kill Fabric: Redesigning military command and control (C2) for the age of autonomous systems. OPAL-I framework for human-machine decision superiority.
Behavioral operating system that builds user autonomy, not app dependence. Multi-scale signal fusion and adaptive intervention density for health coaching.
Framework for interfaces that breathe with human attention. Information rises and settles based on context, cognitive load, and task complexity.
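As a hedged illustration of the density idea (names and thresholds are assumptions, not the framework's actual API): displayed detail falls as estimated cognitive load and task complexity rise, so the interface settles under pressure and rises when attention is free.

```kotlin
// Hypothetical sketch: map estimated cognitive load and task complexity
// (both in [0, 1]) to how much detail the interface surfaces.
enum class Density { FULL, SUMMARY, GLANCE }

fun densityFor(cognitiveLoad: Double, taskComplexity: Double): Density {
    // Assumed heuristic: overall pressure is the max of the two signals.
    val pressure = maxOf(cognitiveLoad, taskComplexity)
    return when {
        pressure < 0.3 -> Density.FULL     // attention is free: show detail
        pressure < 0.7 -> Density.SUMMARY  // moderate load: condense
        else -> Density.GLANCE             // high load: essentials only
    }
}
```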
Memory as lens, not database. How architectural constraints—decaying traces, limited introspection, genuine stakes—create pressure toward emergent self-modeling in autonomous systems.
Leading design and integration of intelligence across Android OS. Screen context, input methods, text and image understanding.
Assistant-mediated fulfillment through automated web browsing.
On-device AI summarization of messaging threads.
First Google Assistant smart displays.