Agentic UX
What role should AI play in the creative process? It's the question every creative tools company is grappling with. At Splice, I led the development of a principled framework that answered it, not with features, but with a design philosophy for building AI creative partners.
The Story
Every creative tools company is grappling with the same question right now: what role should AI play in the creative process? The answers range from full automation (“generate a song from a prompt”) to invisible augmentation (“make the tools smarter without the user noticing”). Most companies are defaulting to one end of this spectrum or the other, driven by what’s technically possible rather than what’s creatively right.
At Splice, I believed we needed a different answer, one rooted in how music actually gets made between real people. Agentic UX was designated a top company priority, and I organized a dedicated offsite to tackle it head-on. I brought together most of my design team along with a product manager and an AI researcher for an intensive week of collaboration. During that offsite, we co-authored the Agentic UX framework: a principled design point of view on how AI should serve creators, framed not as a feature set but as a design philosophy.
The insight that shaped everything came from observing human musical collaboration. When two musicians work together (a vocalist and a producer, a songwriter and a session player) the process is fundamentally conversational. It’s iterative. It’s imprecise by nature. A vocalist hums a melody and says, “Something like this, but more uplifting.” A producer plays back a variation and asks, “Closer?” Communicating what you want to hear goes beyond words: it involves humming, referencing other songs, playing while talking, and filling in each other’s gaps.
This observation became the foundation of our framework. Rather than building AI features that replace this human dynamic (a text prompt that generates a finished track) we would build an AI creative partner that mirrors it. A system that listens, adapts, remembers, and grows with creators. Not artificial creativity, but artificial collaboration.
The Framework
We developed a four-part framework for creative orchestration that became the conceptual backbone of Splice’s AI strategy:
- Context. What can we capture and understand about a user’s creative process? This goes beyond explicit inputs to include audio analysis, DAW project composition, recent interactions, and long-horizon creative patterns.
- Intent. How do we help creators express what they’re trying to do without breaking creative flow? Intent can arrive as text, spoken instructions, humming, referencing a song, or feedback on outputs. The key principle: respect multiple input modalities.
- Actions. What capabilities can we orchestrate to deliver on creative intent? An agentic system combines and sequences capabilities like Search with Sound, sample generation, MIDI transcription, chord detection, and source separation to deliver on complex goals.
- Outcomes. What do we deliver back to the creator, and how does their response shape future interactions? Outcomes aren’t just results; they’re creative ingredients and transparent reasoning about why the system made the choices it did.
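The four-part loop above can be sketched as a simple orchestration structure. This is a minimal illustration of the framework's shape, not Splice's implementation; every name here (`CreativeContext`, `orchestrate`, the capability registry) is a hypothetical stand-in.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the Context -> Intent -> Actions -> Outcomes loop.
# Names and capabilities are illustrative, not an actual Splice API.

@dataclass
class CreativeContext:
    """What the system knows about the creator's process."""
    audio_features: dict = field(default_factory=dict)     # e.g. key, tempo
    recent_interactions: list = field(default_factory=list)

@dataclass
class Intent:
    """Creator intent, arriving through any of several modalities."""
    modality: str       # "text", "hum", "reference", "feedback"
    description: str

@dataclass
class Outcome:
    """Creative ingredients plus transparent reasoning the creator can respond to."""
    ingredients: list
    reasoning: str

# A registry of single-purpose capabilities the agent can sequence.
# Stubbed with fixed returns to keep the sketch self-contained.
ACTIONS = {
    "chord_detection": lambda ctx, intent: ["Am", "F", "C", "G"],
    "search_with_sound": lambda ctx, intent: ["sample_a", "sample_b"],
}

def orchestrate(ctx: CreativeContext, intent: Intent) -> Outcome:
    """Combine context and capabilities to serve a high-level intent."""
    # A real system would plan this sequence; here it is hard-coded.
    plan = ["chord_detection", "search_with_sound"]
    ingredients = []
    for step in plan:
        ingredients.extend(ACTIONS[step](ctx, intent))
    outcome = Outcome(
        ingredients=ingredients,
        reasoning=f"Chose {plan} to serve '{intent.description}'.",
    )
    # Outcomes feed back into context, so responses shape future interactions.
    ctx.recent_interactions.append((intent, outcome))
    return outcome
```

The point of the sketch is structural: capabilities stay siloed and single-purpose, while the orchestrator is what reasons about combining them, and the feedback append is what lets the system adapt over time.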
The Design Principles
The framework was grounded in six design principles authored during the offsite. These weren’t just technical guidelines; they were a values statement about the kind of AI creative tools company Splice should be:
- Adapt to nonlinear creative workflows. Music creation is diverse and nonlinear. An agentic system should adapt to a user as it learns about their individual preferences, workflows, and unique creative voice.
- Be educational and adaptive. Engaging with an agentic system should feel educational and informative, meeting users where they are while encouraging them to learn.
- Embrace imprecise input. Design systems that help clarify and refine creative intent rather than punishing ambiguity.
- Respect multiple input modalities. Allow users to choose the input modality they deem best: text, GUIs, singing, instruments, speech, and reference material.
- Preserve creative joy and autonomy. Automation should alleviate tedium and frustration, never erase a user’s personal preferences or sense of autonomy.
- Serve high-level intent. Understand what a creator wants to accomplish and reason about how to combine context and tools, rather than requiring step-by-step instructions for every task.
The Provocations
To make the framework tangible, I led the team in developing a series of UX provocations: high-fidelity concept explorations that showed how the principles would manifest in real product experiences. These weren’t wireframes or feature specs; they were deliberately provocative designs intended to spark organizational conversation about the future of Splice’s product.
The provocations explored scenarios across the creative workflow: refining a vague idea through conversational iteration, humming to express a melodic concept, referencing a song as creative direction, crafting a synthesizer preset through natural language, and capturing a musical idea through multimodal input. Each was designed to demonstrate a specific principle in action and raise research questions about user behavior, trust, and creative ownership.
I grounded the work in external research: the Center for Humane Technology’s design principles, Anthropic’s constitutional AI approach, Google’s People + AI guidebook, and academic research on how AI affects creative diversity. This was a rigorously informed design position that drew on the best thinking in responsible AI, creative technology, and human-computer interaction.
The Impact
The Agentic UX vision became the design organization’s defining strategic contribution during a period when every part of Splice was asking “what does AI mean for us?” The framework gave the company a principled answer, not just “add AI features” but a coherent philosophy about what kind of AI creative tools company Splice should be.
The distinction between traditional AI features (siloed, single-outcome capabilities) and an agentic system (one that understands intent, reasons about combining context and tools, and adapts over time) provided the product organization with a vocabulary for evaluating AI opportunities. It moved the conversation from “should we add a chatbot?” to “how do we build a creative partner that scales the benefits of human collaboration to millions of creators?”
The design principles influenced product decisions beyond AI features. “Respect multiple input modalities” and “embrace imprecise input” shaped how we thought about search, discovery, and content creation workflows more broadly. The framework proved durable because it was grounded in a deep understanding of creative practice rather than in a specific technology or interaction pattern.
The Bigger Picture
This case study demonstrates the most important and least visible thing a Head of Design does: setting the design point of view that shapes an organization’s strategic direction. The Agentic UX vision wasn’t a feature design; it was a principled position on how AI should serve human creativity.
AI is reshaping every product category, and the companies that navigate this transition well will be the ones whose design leaders can articulate a clear, principled vision for how AI serves their users, not just technically, but philosophically. I’ve already done this work. I’ve already navigated the tension between AI capability and creative autonomy, between automation and human expression, between moving fast on AI and moving thoughtfully.
The Agentic UX framework also demonstrates that I think in systems, not features. An agentic architecture (context, intent, actions, outcomes) is a design system for intelligence. It provides a structure that individual feature teams can build within while maintaining coherence at the platform level. That’s the same kind of thinking a Head of Design brings to any design organization: not just better individual designs, but better frameworks for how design decisions get made.