Mr. Stokes
Product Designer

Pixel & Android Intelligence

Leading the design and integration of intelligence efforts across the Android system—screen context, input, text and image intelligence.

Intelligence should feel inherent to the system, not bolted on. Features process data locally within Android's Private Compute Core, maintaining strict privacy boundaries while enabling deep, OS-level personalization. The primary design challenge is making ML-powered predictions feel reliable, controllable, and genuinely useful, shifting the paradigm from manual operation to human-on-the-loop supervision.

Year: 2019–Present
Role: Staff Designer, Lead
Platform: System Intelligence
Domain: Context Understanding, App Functions, Visual Intelligence

System-Level Intelligence & Private Compute Core

We moved beyond app-specific features to OS-wide capabilities. By leveraging the Private Compute Core (PCC)—a secure, isolated environment within Android—we designed a system where screen context informs available actions and input methods adapt dynamically. Features like Live Caption, Smart Reply, and Screen Attention process ambient data directly on the device. The system learns patterns locally, ensuring sensitive context never leaves the hardware without explicit user consent. This architectural separation builds the foundational trust required for an intelligent OS.
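The consent boundary described above can be sketched in a few lines. This is an illustrative model only, not the Android PCC API: `PrivateComputeStore`, `Consent`, and the method names are hypothetical, standing in for the idea that patterns are learned locally and cross the device boundary only with explicit user consent.

```kotlin
// Hypothetical sketch of the PCC consent boundary; not an Android API.

enum class Consent { GRANTED, DENIED }

class PrivateComputeStore {
    private val localPatterns = mutableMapOf<String, Int>()

    // Learning happens locally on the device; no network is involved.
    fun observe(signal: String) {
        localPatterns.merge(signal, 1, Int::plus)
    }

    // Data crosses the hardware boundary only with explicit consent.
    fun export(consent: Consent): Map<String, Int>? =
        if (consent == Consent.GRANTED) localPatterns.toMap() else null
}
```

The design point is that denial is the default path: without an affirmative grant, `export` returns nothing and the learned patterns never leave the store.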

App Functions & Gemini

App Functions enables Gemini Nano to interact with apps directly on the user's behalf, operating fully within the PCC's privacy framework. Rather than just answering questions, the assistant can take real, complex actions—sending messages, creating calendar events, or adjusting deep system settings across different apps.

The UX challenge here is preserving transparency. We designed the interaction model so users can observe the AI's intended actions, query its reasoning, and intervene or confirm execution. It is about maintaining control while offloading tedious multi-step tasks to the model.
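The observe, query, and intervene-or-confirm model above can be expressed as a small state machine. This is a conceptual sketch, not the App Functions API: `SupervisedAction` and its states are hypothetical names for the human-on-the-loop pattern, where nothing executes until the user confirms and a cancelled action stays cancelled.

```kotlin
// Hypothetical sketch of the human-on-the-loop interaction model;
// names and states are illustrative, not the actual App Functions API.

sealed interface ActionState
data class Proposed(val description: String, val reasoning: String) : ActionState
data class Executed(val description: String) : ActionState
object Cancelled : ActionState

class SupervisedAction(description: String, reasoning: String) {
    var state: ActionState = Proposed(description, reasoning)
        private set

    // The user can inspect the intended action's reasoning before anything runs.
    fun explain(): String? = (state as? Proposed)?.reasoning

    // Execution happens only after explicit confirmation of a proposed action.
    fun confirm() {
        val proposed = state as? Proposed ?: return
        state = Executed(proposed.description)
    }

    // Intervening on a proposal cancels it permanently.
    fun intervene() {
        if (state is Proposed) state = Cancelled
    }
}
```

Modeling execution as a one-way transition out of `Proposed` is what keeps the user on the loop: confirmation is a gate, not a notification.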

Gemini Teamwork Animation
Fig. 01 App Functions enables real interaction through observable execution flows.

Context Understanding

Through the Content Capture API, the system analyzes on-screen content to surface relevant actions without requiring explicit user queries. However, user trust remains paramount. We designed the architecture to automatically recognize highly sensitive contexts—like banking apps, password managers, and private browser tabs—ensuring they are strictly excluded from AI analysis without requiring user intervention.
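The exclusion-by-default behavior can be sketched as a simple allowlist check. The `SurfaceKind` categories and `eligibleForAnalysis` function here are assumptions for illustration, not the real system's heuristics; the point is that sensitive surfaces are filtered out before any analysis, with no user intervention required.

```kotlin
// Hypothetical sketch of default sensitive-context exclusion;
// categories and logic are illustrative assumptions.

enum class SurfaceKind { BANKING, PASSWORD_MANAGER, PRIVATE_TAB, MESSAGING, BROWSER }

// Sensitive surfaces are excluded by default; the user never has to opt out.
private val excludedByDefault = setOf(
    SurfaceKind.BANKING,
    SurfaceKind.PASSWORD_MANAGER,
    SurfaceKind.PRIVATE_TAB,
)

fun eligibleForAnalysis(kind: SurfaceKind): Boolean = kind !in excludedByDefault
```

In the shipping platform, app developers can also exclude individual views from content capture (for example via `View.setImportantForContentCapture`), layering app-level control on top of the system defaults.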

Defining Interaction Primitives

As a Staff Designer, my focus shifts from designing discrete screens to defining the core interaction primitives that other feature teams build upon. For Android Intelligence, this meant establishing cohesive design patterns for how ML surfaces contextually across the OS. By leading the design architecture—rather than just the visuals—I ensured that our privacy-first models and human-on-the-loop paradigms felt consistent, reliable, and native whether you were typing on a keyboard, sharing an image, or swiping through the system UI.

Android Context Recognition Demo
Fig. 02 Screen context informing available actions while respecting strict privacy boundaries.