Amazon Alexa+ AI Agentic Developer Experience Foundations

The Situation

Amazon Alexa+ is an AI-powered personal assistant that gets things done. The product represented a significant pivot from deterministic to probabilistic customer experiences, which required a complete 0-to-1 rebuild of the developer experience with the following objectives:

  • Self-service SDK integration experience
  • Customer experience simulation and debugging
  • Autonomous and directed testing
  • Monitoring and anomaly detection
  • End-to-end agentic inference and assistance
  • Bi-directional updates of information and context
  • Personalization & contextualization based on tasks, job roles, and permissions

My Outcomes

As the Lead AI UX Designer, I established the platform overview and design tenets for our developer experience. Together, these defined the human-AI behavioral dynamics, agent instantiation, and primary objectives across the journey.

Platform overview

This overview provided a framework for scoping product, design, and engineering initiatives. It focused on three distinct phases of the Alexa skill lifecycle: building, deploying, and managing, each with its own developer jobs-to-be-done and expected tools. I defined the balance of responsibilities between humans and agents, and where and how they best collaborate. I also focused on behind-the-scenes agent orchestration, exploring the role of judge agents that evaluate, validate, and challenge sub-agent outputs.
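The judge-agent pattern described above can be sketched in a few lines. This is a hypothetical illustration only, not Alexa+ code: the names (`AgentOutput`, `Verdict`, `judge`, `orchestrate`) and the toy validation rule are assumptions made for clarity.

```python
# Hypothetical sketch of judge-agent orchestration: a judge evaluates each
# sub-agent output and either accepts it or sends back a challenge that
# feeds the next round. All names and rules are illustrative.
from dataclasses import dataclass
from typing import Callable

@dataclass
class AgentOutput:
    task: str
    result: str
    rationale: str

@dataclass
class Verdict:
    accepted: bool
    challenge: str = ""

def judge(output: AgentOutput) -> Verdict:
    """Toy validation rule: reject any output that lacks a rationale."""
    if not output.rationale.strip():
        return Verdict(False, "Explain how this result was produced.")
    return Verdict(True)

def orchestrate(task: str,
                sub_agent: Callable[[str, str], AgentOutput],
                max_rounds: int = 3) -> AgentOutput:
    """Run a sub-agent, letting the judge challenge its output up to max_rounds."""
    feedback = ""
    output = sub_agent(task, feedback)
    for _ in range(max_rounds):
        verdict = judge(output)
        if verdict.accepted:
            break
        feedback = verdict.challenge  # the judge's challenge shapes the retry
        output = sub_agent(task, feedback)
    return output
```

The key design choice the sketch captures is that the judge never silently discards work; rejection always carries a challenge, so each retry is directed rather than blind.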

Product design tenets

I authored these design tenets to govern product design decisions for the Alexa+ developer experience. This was an important tool to help build consensus around human-AI experiences.

  1. Minimize the gap between intent and outcome
    Developers describe what they want and the system handles the translation. Every interaction starts from meaning, not syntax.
  2. Treat AI as a collaborator, not a feature
    The system maintains context across rapidly shifting technical tasks, reducing cognitive strain, and never surfaces a problem without an explanation and a path forward.
  3. Every output is a starting point, not a conclusion
    AI-generated results are surfaces for review and refinement, not final answers. The developer’s confirmation is what makes an inference authoritative.
  4. Make intelligence inspectable
    Every inferred value is accompanied by a signal note explaining its source and logic, so developers can evaluate the reasoning behind a result, not just the result itself.
  5. Preserve momentum above all else
    The system scaffolds and anticipates rather than blocks. Developers can work at the speed of innovation, not at the speed of remediation.
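Tenets 3 and 4 imply a concrete shape for AI-generated values: each one carries a note explaining its source and logic, and stays provisional until a developer confirms it. The sketch below is a minimal illustration under those assumptions; `SignalNote`, `InferredValue`, and the example field names are hypothetical, not the platform's actual data model.

```python
# Hypothetical sketch of tenets 3 and 4: an inferred value carries a
# "signal note" (source + logic) so its reasoning is inspectable, and it
# remains provisional until the developer confirms it.
from dataclasses import dataclass

@dataclass
class SignalNote:
    source: str  # where the inference came from
    logic: str   # why the system believes it

@dataclass
class InferredValue:
    name: str
    value: str
    note: SignalNote
    confirmed: bool = False  # the developer's confirmation makes it authoritative

    def confirm(self) -> None:
        self.confirmed = True

# Example: a value the system inferred, awaiting developer review.
invocation = InferredValue(
    name="invocation_name",
    value="kitchen timer",
    note=SignalNote(source="skill manifest", logic="matched the display name"),
)
```

In this shape, the UI can always answer "why does the system think this?" by rendering the signal note, and nothing downstream treats the value as final until `confirm()` has been called.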

My Approach

  • Grounded the work in both primary and secondary research, building a shared understanding of how developers build for Alexa
  • Used design as an early research tool by sketching flows and vibe-coding/designing the UX
  • Benchmarked against leading developer platforms to identify table stakes and areas of opportunity
  • Translated research findings into Jobs-to-be-Done and Developer Experience Outcomes, giving the team a shared language for what success looks like at each stage of the journey
  • Audited the legacy toolset to establish a baseline, identifying which workflows were candidates for agentic automation and which required rethinking from scratch
  • Prioritized the product design roadmap against both urgent internal needs and the longer-term requirements of third-party developers

Key Decisions

  • Prioritization of first-party (1P) testing tools and the third-party (3P) SDK integration experience to decrease operational load
  • Focus on the agent orchestrator and web-based tools to decrease technical friction for 3P developers