Pixel, Android System UI & Intelligence

Leading the design and integration of intelligence across the Android system: screen context, input, and text and image understanding.

Role: Staff Designer, Lead
Timeline: 2019–Present
Location: Google Cambridge
Team: Android Platform

The work spans screen context understanding, input methods, and text and image intelligence, making the OS itself aware and adaptive rather than relying on individual apps.

Intelligence should feel inherent to the system, not bolted on. Features process data locally within Private Compute Core, maintaining privacy while enabling deep personalization. The design challenge is making ML-powered predictions feel reliable, controllable, and genuinely useful.

Less about individual apps, more about making the entire system aware and responsive to user context and intent.

System-Level Intelligence

Moving beyond app-specific features to OS-wide capabilities. Screen context informs what actions are available. Input methods adapt to what you're typing. Text selection understands entities and offers relevant actions. The system learns patterns without sending data to servers.

Screen: Context Understanding. The system understands what's on screen to surface relevant actions and information.
Input: Smart Methods. Keyboard predictions, voice input, and gesture recognition that adapt to context.
Text: Entity Recognition. Addresses, phone numbers, and events are extracted and made actionable across all apps (see the sketch below).
Image: Visual Intelligence. On-device understanding of photos, screenshots, and visual content.
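
To make the entity recognition concrete: Android exposes this capability to apps through the TextClassifier system service (API 28+), the same surface that backs smart text selection. A minimal Kotlin sketch, where the helper function and logging are illustrative rather than part of this project:

    import android.content.Context
    import android.os.LocaleList
    import android.view.textclassifier.TextClassificationManager
    import android.view.textclassifier.TextLinks

    // Lists entities (addresses, phone numbers, dates, ...) found in a string by
    // the system's on-device TextClassifier. Run on a worker thread, not the UI thread.
    fun listEntities(context: Context, text: String) {
        val classifier = context
            .getSystemService(TextClassificationManager::class.java)
            .textClassifier

        val request = TextLinks.Request.Builder(text)
            .setDefaultLocales(LocaleList.getDefault())
            .build()

        for (link in classifier.generateLinks(request).links) {
            val span = text.substring(link.start, link.end)
            val entityType = link.getEntity(0)  // entity types are ranked by confidence
            val score = link.getConfidenceScore(entityType)
            println("$span -> $entityType (confidence ${"%.2f".format(score)})")
        }
    }

Because the classifier is supplied by the system rather than by each app, the same on-device models are available everywhere, which is what makes entities actionable across all apps.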

All intelligence features run within Android's Private Compute Core—an isolated environment where sensitive data is processed on-device. Models run locally, predictions happen without network calls, and personal patterns never leave the phone.
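
Private Compute Core's internals aren't public, but the pattern it enforces, local inference that never crosses a network boundary, can be sketched. Below is a hypothetical Kotlin example using TensorFlow Lite as a stand-in runtime; the model path, output shape, and suggestion use case are assumptions for illustration only:

    import org.tensorflow.lite.Interpreter
    import java.io.File

    // Hypothetical sketch of local-only inference: the model file lives on the
    // device and raw features never leave the process. The path, the eight-way
    // output, and the suggestion use case are illustrative, not a real
    // Private Compute Core interface.
    fun scoreSuggestionsLocally(features: FloatArray): FloatArray {
        val modelFile = File("/data/local/models/suggestions.tflite")  // hypothetical path
        Interpreter(modelFile).use { interpreter ->
            val output = Array(1) { FloatArray(8) }     // shape [1, 8]: eight action scores
            interpreter.run(arrayOf(features), output)  // inference happens entirely on-device
            return output[0]
        }
    }

The design point is the boundary itself: everything a prediction needs, the model, the features, and the output, stays in one on-device process.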