Leading the design and integration of intelligence efforts across the Android system. The work spans screen context understanding, input methods, and text and image intelligence, making the OS itself aware and adaptive rather than relying on individual apps.
Intelligence should feel inherent to the system, not bolted on. Features process data locally within Private Compute Core, maintaining privacy while enabling deep personalization. The design challenge is making ML-powered predictions feel reliable, controllable, and genuinely useful.
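The "reliable and controllable" point can be made concrete with a minimal sketch. The Kotlin below is illustrative only: Prediction, UserPrefs, and decide are hypothetical names, not Android APIs. The idea is that a prediction is surfaced as a dismissible suggestion only when it clears a confidence threshold and the user has opted in; otherwise it stays invisible rather than acting on its own.

```kotlin
// Hypothetical types sketching how a prediction might be gated by confidence
// and user preference before being surfaced as a dismissible suggestion.
data class Prediction(val label: String, val confidence: Float)

data class UserPrefs(val suggestionsEnabled: Boolean, val minConfidence: Float = 0.8f)

sealed interface SurfaceDecision {
    data class Suggest(val prediction: Prediction) : SurfaceDecision  // shown, dismissible
    object Suppress : SurfaceDecision                                 // never surfaced
}

fun decide(prediction: Prediction, prefs: UserPrefs): SurfaceDecision =
    if (prefs.suggestionsEnabled && prediction.confidence >= prefs.minConfidence) {
        SurfaceDecision.Suggest(prediction)
    } else {
        SurfaceDecision.Suppress
    }

fun main() {
    val prefs = UserPrefs(suggestionsEnabled = true)
    println(decide(Prediction("smart_reply", 0.92f), prefs))  // above threshold: suggested
    println(decide(Prediction("smart_reply", 0.41f), prefs))  // below threshold: suppressed
}
```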
Moving beyond app-specific features to OS-wide capabilities. Screen context informs what actions are available. Input methods adapt to what you're typing. Text selection understands entities and offers relevant actions. The system learns patterns without sending data to servers.
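One concrete surface of this is entity-aware text selection, which the platform exposes to apps through the TextClassifier API (API 28+). The sketch below is a minimal example, not the system's internal implementation: the function name, selection indices, and println logging are illustrative, but TextClassificationManager, TextClassification.Request, and classifyText are the real platform entry points.

```kotlin
import android.content.Context
import android.view.textclassifier.TextClassification
import android.view.textclassifier.TextClassificationManager

// Asks the on-device classifier what a selected span of text is and which
// actions could be offered for it. classifyText() may block while the local
// model runs, so call this off the main thread.
fun describeSelection(context: Context, text: CharSequence, start: Int, end: Int) {
    val manager = context.getSystemService(TextClassificationManager::class.java) ?: return
    val classifier = manager.textClassifier

    val request = TextClassification.Request.Builder(text, start, end).build()
    val result: TextClassification = classifier.classifyText(request)

    // Entity types (e.g. TextClassifier.TYPE_ADDRESS, TYPE_PHONE) with confidence scores.
    for (i in 0 until result.entityCount) {
        val entity = result.getEntity(i)
        println("$entity -> ${result.getConfidenceScore(entity)}")
    }

    // Ready-made actions (open map, dial, etc.) that a selection toolbar could surface.
    result.actions.forEach { action -> println("action: ${action.title}") }
}
```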
All intelligence features run within Android's Private Compute Core, an isolated environment where sensitive data is processed on-device. Models run locally, predictions happen without network calls, and personal patterns never leave the phone.
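As a conceptual model of that boundary (not the actual Private Compute Core APIs, which are not app-facing), the hypothetical types below make the isolation property explicit: inference only has a local model in scope, so a prediction cannot reach the network by construction, and personalization only mutates local state.

```kotlin
// Conceptual sketch only. PrivateComputeBoundary, LocalModel, and Pattern are
// hypothetical names illustrating the isolation described above.
interface LocalModel { fun run(input: String): String }

data class Pattern(val signal: String)

class PrivateComputeBoundary(private val model: LocalModel) {
    // Learned patterns stay in this process; they are never serialized out.
    private val learnedPatterns = mutableListOf<Pattern>()

    // Inference runs entirely against the local model; no network client exists
    // in this scope, so a prediction cannot trigger a network call.
    fun predict(input: String): String = model.run(input)

    // Personalization updates local state only.
    fun learn(pattern: Pattern) {
        learnedPatterns += pattern
    }
}
```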