IBM watsonx Orchestrate

User Research Insights & Recommendations

Comprehensive analysis of the Day 0 onboarding experience, based on 5 user interviews and stakeholder feedback

Research Period: April 6-10, 2026
Participants: 5 complete interviews
Total Duration: 240 minutes
  • 5 User Interviews
  • 13 Key Findings
  • 6 Stakeholder Inputs
  • 5 User Personas

Executive Summary

Five comprehensive user research interviews were conducted with Client Engineering, Account Technical Leaders, Customer Success, Sales Leadership, and Ecosystem Architects to evaluate the watsonx Orchestrate landing page and day-zero onboarding experience.

The research revealed critical insights about user needs, pain points, and opportunities for improvement across different user personas. Key findings indicate that integration challenges and time to value are the most significant barriers, with users requiring 10-30 minutes to understand product value (target: under 5-10 minutes).

Critical New Insights

  • Experienced users need issue-first dashboards that prioritize critical problems over general analytics
  • Personalization is essential - templates must be tuned to user's specific interests and use cases
  • Day-zero should showcase key features rather than empty analytics for first-time users
  • Chat interface clarity needs improvement - users don't immediately recognize conversational capabilities
  • "Time to First Agent" should be the primary focus, not product education (users have already seen the product)

👥 Research Participants

| Interview | Participant(s) | Role | Duration | Date |
|---|---|---|---|---|
| 1 | Cameron Seitz, Joseph Kozhaya, Ankitha T C | Client Engineering | 59 minutes | April 6 |
| 2 | Richard Shannon | Account Technical Leader | 44 minutes | April 6 |
| 3 | Dana Abu Ali | Customer Success Engineer | 42 minutes | April 7 |
| 4 | Sai Bezawada | Technology Sales Leader | 51 minutes | April 10 |
| 5 | Ahmed Azraq | Chief Architect, Ecosystem Build | 44 minutes | April 10 |

🚨 Critical Findings

Critical: Integration Challenges

The #1 Pain Point: Connecting to external systems (ServiceNow, Salesforce, SAP, Workday, Milvus) is the most significant barrier users face.

  • Unclear ownership: Who creates connections (admin vs AI engineer)?
  • No RBAC separation causes role confusion
  • AI engineers often lack credentials to create connections themselves
  • When blocked, engineers can't build agents that integrate with backend systems
"The AI engineer typically does not have the credentials to connect to the back end systems... they're stuck. They can't do anything right."
— Joseph Kozhaya, Client Engineering

Critical: Time to Value

Current State: Users estimate 10-30 minutes to understand Orchestrate's value

Target State: Under 5-10 minutes to see value

  • Too long for effective onboarding
  • Users want to preview agents quickly
  • Need immediate understanding of capabilities
  • Streamline path to first agent preview

High: Issue-First Dashboard Needed

Experienced users (Sales Leaders, Account Managers) need a fundamentally different landing experience focused on critical issues rather than general analytics.

  • Current dashboard shows general metrics (model usage, policies)
  • Need failure messages and critical issues first
  • High failure rates and client impact should be immediately visible
  • "Jump back in" should show critical issues requiring attention
"I want to know what my high failure rates are, right? Where my client's impact is. So that's what I would have want to see first when I popped it. What are my critical issues?"
— Sai Bezawada, Technology Sales Leader

High: Agent Configuration Complexity

Users don't understand the differences between configuration elements:

  • Instructions vs Guardrails vs Behavior
  • Various configuration options and their purposes
  • When to use each configuration element
  • Documentation exists but users don't read it
  • Need contextual, step-by-step guidance with examples

Medium: Personalization Essential

Templates and examples must be tuned to user's specific interests and use cases.

  • Generic examples don't resonate with users
  • Need industry-specific templates
  • Use case-specific recommendations
  • Role-based content and workflows
"What would be helpful is if you can tune the pre-prompts to what Ava's interests are... if Ava is looking to extract fields from a passport, then it's something else, right?"
— Sai Bezawada, Technology Sales Leader

Medium: Day-Zero Analytics Problem

Empty analytics provide no value when no agents exist yet.

  • Analytics relevant for admins and experienced users, not first-time users
  • Should not be prominent on day-zero when no data exists
  • More appropriate for Day-30+ users who have built agents
  • Solution: Show key features instead of empty analytics
"When I'm just on board with watsonx Orchestrate, there will be no agents at all, so there will be no analytics at all, so this will not really provide me any value."
— Ahmed Azraq, Chief Architect

โฑ๏ธ Time to Value Analysis

Current State: 10-30 minutes to understand value

Target State: under 5-10 minutes to see value

Recommendations to Reduce Time to Value

  • Streamline path to first agent preview
  • Provide pre-configured examples that work immediately
  • Show value before requiring configuration
  • Use progressive disclosure for advanced features
  • Focus on "Time to First Agent" as primary metric

👤 Persona-Specific Needs & Goals

| Persona | Primary Tasks | Time to "Aha" | Success Metric |
|---|---|---|---|
| Bob/Betty (Builder) | Building agents, testing, integrations, configuration | ≤ 5 minutes | First agent test run succeeds with ≥1 tool/action executed |
| Pooja (Business SME) | Creating solutions for business needs without code | ≤ 10 minutes | 1 template-based agent/workflow created and run successfully |
| Ava (Systems Admin) | Connections, models, analytics, credentials, governance | ≤ 1 hour | Tenant provisioned + RBAC configured + first user logs in |
| Cassie (End User) | Executing tasks with existing agents | ≤ 5 minutes | First real work task completed end-to-end |
| Sales/Account Leader | Monitoring fleet health, addressing critical issues | Immediate | Issue-first dashboard, failure alerts, client impact metrics |

🎯 Three Proposed Design Directions

Direction 1: AI-First Path

Leverage AI chat for tenant setup, Q&A, agent building via natural language, chart generation

Goal: High flexibility and "magic" UX

Direction 2: Curated Funnel

Initial questionnaire → structured checklist by role with clear path to value

Goal: Clarity, structure, guided experience

Direction 3: Educational Sandbox

Product-led growth (PLG) via interactive tours, sample agents, and template previews for learning by doing

Goal: Low friction, hands-on exploration

📊 ICE Prioritization (Impact × Confidence ÷ Effort)

| Direction | Impact (1-10) | Confidence (1-10) | Effort (1-10) | ICE Score |
|---|---|---|---|---|
| Direction 2: Curated Funnel | 10 | 9 | 6 | 15.0 🏆 |
| Direction 1: AI-First | 9 | 7 | 8 | 7.9 |
| Direction 3: Educational | 4 | 8 | 7 | 4.6 |

Research Validation

  • Direction 2: ✅ 5/5 interviews: "Personalization essential" · ✅ Steve: Clear "Aha moments" by persona · ✅ Ronak: "Persona IS important" · ✅ Intercom: Checklist-based progress proven
  • Direction 1: ✅ Ronak: "Use chat for agent creation" · ✅ Frances: "Chat to curate experience" · ⚠️ Users: "Chat not immediately obvious" · ⚠️ High implementation complexity
  • Direction 3: ❌ Ronak: "Users already know product" · ❌ Ahmed: "Show features, not tutorials" · ❌ All interviews: Focus on action, not education · ❌ Contradicts "Activation Over Education"

Formula: ICE Score = (Impact × Confidence) ÷ Effort | Higher score = Higher priority
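The scoring above can be reproduced with a short calculation. The weights come from the table; rounding to one decimal place is an assumption about how the report's figures were produced.

```python
# ICE prioritization: score = (impact * confidence) / effort.
# Weights are the ones from the table above; one-decimal rounding
# is assumed to match the report's figures.

def ice_score(impact: int, confidence: int, effort: int) -> float:
    """Return the ICE score rounded to one decimal place."""
    return round((impact * confidence) / effort, 1)

directions = {
    "Direction 2: Curated Funnel": (10, 9, 6),
    "Direction 1: AI-First": (9, 7, 8),
    "Direction 3: Educational": (4, 8, 7),
}

# Rank directions by ICE score, highest priority first.
ranking = sorted(
    ((name, ice_score(*weights)) for name, weights in directions.items()),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, score in ranking:
    print(f"{name}: {score}")
# Direction 2: Curated Funnel: 15.0
# Direction 1: AI-First: 7.9
# Direction 3: Educational: 4.6
```

Because effort sits in the denominator, a cheap medium-impact bet can outrank an expensive high-impact one, which is exactly why the Curated Funnel wins here despite AI-First's comparable impact.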

Strategic Decision: Hybrid Approach

Primary: Direction 2 (Curated Funnel) - ICE Score 15.0

  • Highest impact (10/10) - addresses #1 user need for personalization
  • Highest confidence (9/10) - validated by ALL 5 interviews + stakeholder consensus + Intercom competitive analysis
  • Medium effort (6/10) - persona selection + domain paths are well-understood patterns

Secondary: Direction 1 (AI-First) - ICE Score 7.9

  • Use as supporting interface, not primary experience
  • Needs visual clarity improvements before becoming primary
  • High effort (8/10) requires significant AI/UX investment

Deprioritized: Direction 3 (Educational) - ICE Score 4.6

  • Low impact (4/10) - contradicts research findings
  • Move educational content to contextual help, not Day 0

Current Implementation (v1.2): Implements Direction 2 as primary (Persona + Domain selection), with Direction 1 elements (AI chat) as secondary interface. Direction 3 moved to contextual help.

💬 Stakeholder Feedback Synthesis

Ronak (PM)

Product Manager

Key Insight: "First time users would have already seen the product. So, the goal shouldn't be helping them learn about the product but quickly get them started to an agent"

Impact: This insight drives the design philosophy toward "Activation Over Education"

  • Remove educational/tour elements from Day 0
  • Focus exclusively on "Time to First Agent"
  • Assume product awareness, optimize for action
  • Combine agent creation paths into unified AI chat

On Personalization: "Knowing Persona and expertise in agent creation is definitely important but should be prioritized later when we bring in Customer Care specific journey"

  • Persona selection IS important for tailoring the experience
  • Domain-specific journeys (Customer Care, Procurement, etc.) can be phased in later
  • Focus current version (v1.2) on core persona-based experience, add advanced domain paths in future iterations

Frances (Manager)

Design Manager

Recommendation: "Blend of direction 1 and 2, where we ask specifying questions and use the chat to curate the user's experience"

  • Supports combining AI chat with structured guidance
  • Recognizes need to balance chat vs. other engagement methods
  • Use chat as primary interface with structured flows underneath

Steve (Design Leader)

Design Leadership

Framework: Defined "Aha Moments" for each persona with specific time targets and observable triggers

  • Builder: ≤5 min - First agent test succeeds
  • SME: ≤10 min - Template agent runs without code
  • Admin: ≤1 hour - First user logs in with correct access
  • End User: ≤5 min - First task completed

Robert (Lead)

Technical Lead

Framework: Global onboarding tasks + domain-specific paths

  • Global: Invite team, setup workspace, environment, domain
  • Customer Care: Contact center, support desk, human transfers
  • Procurement: Order processing system integrations
  • Domain-specific KPIs and metrics

Andy (Team Member)

Design Team

Key Question: "What are the major experiential milestones for each persona, and what do they need to support their primary goal?"

  • Different "first win" for each persona
  • Pooja: Clear path to creating solutions for business needs
  • Bob/Betty: Efficient path to build, test, monitor solutions

Intercom Analysis

Competitive Research

Key Patterns: Checklist-based progress, multi-purpose chat, progressive disclosure

  • "0/5 steps" progress tracking
  • One primary action at a time (expand current, collapse others)
  • Multi-purpose chat panel (Agent, Help, News, Support)
  • Short, skippable questionnaire (3-6 questions)
  • Separate core setup from optional "Go further" items

๐Ÿ† Competitive Insights from Intercom

Analysis of Intercom's Day 0 onboarding revealed several best practices that should be adopted:

🎯 Strategic Recommendations

Current Implementation (v1.2)

  • Design unified AI chat interface for agent creation (combine "from scratch" and "search catalog")
  • Add "Needs Attention" section to replace empty analytics - show blockers like connection setup, RBAC, guardrails
  • Remove Analytics/Assets from Day 0 - show key features instead for first-time users
  • Add tooltips to OOTB agents with contextual examples (e.g., "Invoice Extractor - Process 100 invoices/day")
  • Implement persona selection - persona and expertise level are important for tailoring the experience
  • Add progress milestones - "0/5 steps" checklist with clear completion tracking
  • Implement issue-first dashboard for experienced users (Sales Leaders, Account Managers)

Future Enhancements

  • Connection setup wizard - guided flow for ServiceNow, Salesforce, SAP, Workday integrations
  • RBAC quick setup - clarify admin vs builder roles, provide visibility into existing connections
  • Domain-specific paths - Customer Care onboarding with contact center integrations
  • KPI tracking - deflection %, containment, customer satisfaction metrics
  • Configuration guidance - in-context tooltips explaining Instructions vs Guardrails vs Behavior
  • Personalized templates - tune examples to user's industry and use case interests

Long-term Vision (v2.0+)

  • Full domain-specific onboarding - Procurement, HR, IT Service Management paths
  • Advanced personalization - AI-driven recommendations based on user behavior and goals
  • Comprehensive analytics - post-Day 0 dashboard with fleet health, usage patterns, ROI metrics
  • Team collaboration features - shared workspaces, agent templates, knowledge bases
  • Integration cookbooks - step-by-step guides for common system integrations
  • Multi-agent orchestration examples - showcase complex workflows and agent coordination

🎨 Design Principles

1. Activation Over Education

Users already know the product. Focus on helping them DO something, not learn about it.

2. Chat-First, Structure-Underneath

AI chat as primary interface with structured flows guiding the conversation.

3. Context Over Empty States

No empty analytics/assets. Show "Needs Attention" items, examples, and tooltips instead.

4. Persona-Aware Experience

Persona and expertise level are important for tailoring the experience. Domain-specific journeys (Customer Care, etc.) can be prioritized in later phases.

5. Milestone-Driven Progress

Clear progress indicators, celebrate "Aha moments", track time to value.

6. Issue-First for Experienced Users

Prioritize critical problems over general analytics for Sales Leaders and Account Managers.

📊 Success Metrics by Persona

| Persona | Time to First Agent | Key Metrics | Success Criteria |
|---|---|---|---|
| Bob/Betty (Builder) | <5 minutes | Test success rate >95%; agents deployed 3+ in first week | Agent executes with ≥1 tool/action; component reuse >50% |
| Pooja (Business SME) | <10 minutes | No-code success 100%; workflows created 2+ in first month | Workflow runs without writing code; IT ticket reduction measurable |
| Ava (Admin) | <1 hour | RBAC configuration complete day 1; zero security incidents | User logs in with correct access; smooth first login experience |
| Cassie (End User) | <5 minutes | Task completion rate >90%; daily usage 3+ days/week | Real work task completed end-to-end; time savings measurable |
| Dave (Exec) | <1 week | Deployed agents visible; usage metrics tracked | Governance + telemetry visible; ROI visibility clear |