Confidentiality Note

This case study describes a design exploration conducted within an enterprise AI incubation studio. Certain details have been generalised or omitted to respect confidentiality.


Role & Scope

Role: Design Lead & PM

Scope: Led research, product strategy, and supervision model design for an agentic workspace operating across enterprise systems.


Project Overview

This case study describes a confidential design exploration for supervising AI agents that operate across fragmented enterprise systems. The core premise: as agents become capable of taking action across tools and data, the differentiator shifts from “what the model can do” to “what the user can reliably understand, control, and recover from.”

Within an enterprise AI incubation studio, I led research and product design for an early prototype that combined an inspectable agent Tasks surface with a modular workspace canvas. The system aimed to make agent activity legible and safe enough for operational use—through explicit state, provenance, and intervention points—without adding setup burden for time-constrained users.
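To make the idea of "explicit state, provenance, and intervention points" more concrete, the sketch below shows one way an inspectable task record could be structured. It is a minimal, hypothetical illustration: the type names and fields are assumptions for this write-up, not the actual prototype's data model.

```typescript
// Hypothetical sketch of an inspectable agent task record.
// All names and fields are illustrative, not taken from the real system.

type TaskState =
  | "queued"
  | "running"
  | "awaiting_approval"
  | "paused"
  | "completed"
  | "failed";

interface ProvenanceEntry {
  source: string;       // e.g. "crm", "ticketing", "shared-drive"
  retrievedAt: string;  // ISO timestamp of when the agent read this data
  excerpt?: string;     // the snippet the agent actually used, for verification
}

interface InterventionPoint {
  action: string;            // the step the agent intends to take
  requiresApproval: boolean; // surfaced to the user before execution
  reversible: boolean;       // whether the system can undo it afterwards
}

interface AgentTask {
  id: string;
  goal: string;                  // the user-visible intent
  state: TaskState;              // explicit, inspectable status
  provenance: ProvenanceEntry[]; // where the inputs came from
  plan: InterventionPoint[];     // upcoming steps the user can steer or stop
  log: string[];                 // human-readable trail of completed actions
}
```

The design intent this sketch tries to capture is that every task carries its own evidence and its own pause points, so a user can inspect or interrupt work without reconstructing what the agent did from scratch.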

Context

The AI product landscape is moving from conversational assistants toward systems that can plan, execute, and coordinate multi-step work. In enterprise environments, that work rarely happens in one place: context is distributed across tools and data sources—often with inconsistent data quality and constantly changing state.

This creates a new interaction design challenge: autonomy can reduce manual effort, but it can also increase cognitive load when users must monitor invisible background work, reconcile conflicting sources, or debug unexpected actions. In practice, agentic systems fail not only on capability but also on supervision: users need ways to verify, steer, and safely interrupt behavior without becoming full-time managers of the automation.

The Problem

The exploration focused on one central question: