Status: In development (local spike, disabled in staging and production)
Tier: Pro
Rollout: Local dev only until the Phase 4 measurement gate passes
Cannabis Coach is not available in staging or production. It exists only as a local development spike running against Anthropic’s Claude Managed Agents beta API. If you are not a High IQ engineer running the code locally with valid API keys, you cannot access this feature — the Ask AI entry point in shipped builds still routes to the existing Deep Research implementation.

This page documents what Coach is, how it’s designed to work, and how to test it in local dev. It does not describe a feature you can use right now.

Overview

Today, Ask AI in High IQ is a stateless chat. Every conversation starts from scratch. The AI doesn’t remember that you said last Tuesday you don’t like peppery strains, or that you have three Wedding Cake cartridges in your stash, or that you’ve been looking at sativas on Sunnyside for the last twenty minutes. Every time you open it, you explain yourself all over again.

Cannabis Coach fixes that. It’s a persistent AI companion powered by Claude Managed Agents — Anthropic’s new agent runtime that gives each user a dedicated container with a file system that survives across conversations. Every Coach session reads a workspace of notes the agent has written about you: your preferences, your tasting log, your goals, and a live snapshot of what you’re shopping. The agent updates those notes as it learns. It’s the difference between asking a stranger for a strain recommendation and asking a friend who’s been with you through every dispensary visit.

What it does

  • Remembers across conversations — Preferences, tasting notes, goals, and things that didn’t work are stored in the agent’s persistent workspace and loaded on every interaction
  • Knows what you’re shopping in real time — When you browse Sunnyside (or any supported dispensary) in the Safari or Chrome extension, the extension pushes your shopping context to Convex and Coach reads it during your next question
  • Uses the web for live research — Coach can search papers, read forum threads, look up new releases, and cite its sources
  • Queries your High IQ data — Coach has access to your stash, favorites, orders, Daily Stories history, and the full TIWIH strain database through a dedicated MCP server
  • Learns over time — As you correct, reject, or confirm its suggestions, it updates its internal notes about you so future answers get better
  • Future: proactive notifications — A scheduled v2 agent runs daily per Pro user, scans their state, and decides whether to push a proactive notification (“You’ve been looking at 3 sativas on Sunnyside — want me to compare them to your stash?”)

User value

The “aha moment” is asking Coach “what should I try next?” a month after you started using it and getting an answer that references your Tuesday tasting note from three weeks ago, your preference for balanced hybrids, and the two Gelato 41 options you looked at on Sunnyside this morning — all without you having to re-explain any of it.
Coach replaces the current Ask AI entry point in the mobile app for Pro users. Free users lose Ask AI entirely and see an upgrade prompt.

How it works

Coach is built on three Anthropic / TIWIH primitives working together:
  1. A Claude Managed Agents session is created once per Pro user and resumed across conversations. Each session has a container with a /workspace/ directory where the agent stores its notes about that user. The session’s file system is the memory store — Claude’s native read, write, and edit tools work directly against it.
  2. An MCP server (coach-context) exposes four tools to the agent: get_user_context, get_shopping_context, get_strain_details, and find_similar_strains. These proxy to Convex and Supabase so Coach has live access to the user’s data and the strain database.
  3. The browser extension pushes shopping context to a dedicated Hono endpoint (POST /api/v1/shopping/context), which upserts into a Convex shoppingContext table. The MCP server reads from this table at the start of each turn, and the agent writes the latest snapshot to /workspace/shopping-context.md.
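The shape of the payload the extension pushes can be sketched as a typed guard. The field names below match the example request later in this doc; the validation helper itself is a hypothetical illustration, not the shipped Hono handler.

```typescript
// Sketch of the shopping-context payload the extension POSTs to
// /api/v1/shopping/context. Field names are from this doc; the guard
// function is illustrative, not production code.
interface ViewedProduct {
  name: string;
  strainSlug: string;
  viewedAt: number; // epoch milliseconds
}

interface ShoppingContext {
  url: string;
  pageTitle: string;
  cartContents: unknown[];
  viewedProducts: ViewedProduct[];
}

// Narrow an untrusted request body to ShoppingContext before upserting.
function isShoppingContext(body: any): body is ShoppingContext {
  return (
    typeof body?.url === "string" &&
    typeof body?.pageTitle === "string" &&
    Array.isArray(body?.cartContents) &&
    Array.isArray(body?.viewedProducts) &&
    body.viewedProducts.every(
      (p: any) =>
        typeof p?.name === "string" &&
        typeof p?.strainSlug === "string" &&
        typeof p?.viewedAt === "number",
    )
  );
}
```

A handler would reject anything failing this guard before touching the Convex shoppingContext table.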
The workspace layout the agent uses:
/workspace/
├── README.md              # Agent's own notes on how it uses this workspace
├── preferences.md         # What the user likes, dislikes, and why
├── tasting-log.md         # Strains tried with notes, ratings, contexts
├── goals.md               # What they're optimizing for
├── avoid.md               # Things that didn't work and why
├── conversation-summary.md  # Rolling summary of past sessions
├── context-snapshot.md    # Latest Convex snapshot (auto-synced)
└── shopping-context.md    # Latest browser extension state (auto-synced)
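The layout above can also be expressed as a typed manifest, which is handy when client or server code needs to reference workspace files by name. The file list comes from this doc; the helper is hypothetical.

```typescript
// Illustrative manifest of the workspace files listed above.
// File names are from this doc; workspacePath is a hypothetical helper.
const WORKSPACE_FILES = [
  "README.md",
  "preferences.md",
  "tasting-log.md",
  "goals.md",
  "avoid.md",
  "conversation-summary.md",
  "context-snapshot.md",
  "shopping-context.md",
] as const;

type WorkspaceFile = (typeof WORKSPACE_FILES)[number];

// Build the absolute path inside the agent's container.
function workspacePath(file: WorkspaceFile): string {
  return `/workspace/${file}`;
}
```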

Local testing guide

These instructions apply to local development only. They assume you are a High IQ engineer with access to the monorepo, Anthropic API credentials with the Managed Agents beta enabled, and a Convex dev deployment. Do not try to follow these against staging or production.

Prerequisites

Before you can test Coach locally, you need:
  1. Anthropic API key with Managed Agents beta access — The beta header managed-agents-2026-04-01 must be allowed on your org. Check with:
    curl https://api.anthropic.com/v1/agents \
      -H "x-api-key: $ANTHROPIC_API_KEY_COACH" \
      -H "anthropic-version: 2023-06-01" \
      -H "anthropic-beta: managed-agents-2026-04-01"
    
    A 200 response (even with an empty list) means you have access. A 403 means you need to request beta access from Anthropic.
  2. A running Convex dev deployment — start it with pnpm turbo dev --filter=@highiq/mobile or cd apps/mobile && npx convex dev. The coach-context MCP server proxies user data through Convex, so the dev deployment must be reachable.
  3. Clerk test user with Pro tier — Use testuser+clerk_test@example.com with verification code 424242. Set subscriptionTier: 'pro' on the user row in Convex via the dashboard or a manual mutation. Without this, the Pro gate will return a 402 and you’ll never reach Coach.
  4. Environment variables set in apps/api/.env.local:
    ANTHROPIC_API_KEY_COACH=sk-ant-api03-...
    COACH_AGENT_ID=agent_...
    COACH_ENV_ID=env_...
    COACH_MCP_URL=http://localhost:3001/mcp/coach-context
    CONVEX_DEPLOYMENT_URL=https://your-dev-deployment.convex.cloud
    
  5. Feature flag enabled — In apps/mobile/src/_config/featureFlags.ts, set coach_enabled: true for the dev environment. This flag is off by default and should stay off in staging and production builds.

Phase 1: Infrastructure smoke test (curl only)

Before touching the mobile app or the API routes, confirm Coach’s infrastructure is reachable directly.
Step 1: Create the Coach agent

One-time setup. Creates the Agent definition that every session will reference.
curl https://api.anthropic.com/v1/agents \
  -X POST \
  -H "x-api-key: $ANTHROPIC_API_KEY_COACH" \
  -H "anthropic-version: 2023-06-01" \
  -H "anthropic-beta: managed-agents-2026-04-01" \
  -H "Content-Type: application/json" \
  -d @apps/api/src/services/coach/coach-agent.json
Save the returned id into COACH_AGENT_ID.
Step 2: Create the Coach environment

Also one-time. Defines the container template the agent runs in.
curl https://api.anthropic.com/v1/environments \
  -X POST \
  -H "x-api-key: $ANTHROPIC_API_KEY_COACH" \
  -H "anthropic-version: 2023-06-01" \
  -H "anthropic-beta: managed-agents-2026-04-01" \
  -H "Content-Type: application/json" \
  -d '{"runtime":"node-24","networking":"unrestricted"}'
Save the returned id into COACH_ENV_ID.
Step 3: Create a test session

curl https://api.anthropic.com/v1/sessions \
  -X POST \
  -H "x-api-key: $ANTHROPIC_API_KEY_COACH" \
  -H "anthropic-version: 2023-06-01" \
  -H "anthropic-beta: managed-agents-2026-04-01" \
  -H "Content-Type: application/json" \
  -d "{\"agent\":\"$COACH_AGENT_ID\",\"environment_id\":\"$COACH_ENV_ID\"}"
Save the returned session id into a shell variable $SESSION_ID.
Step 4: Send a test message and stream events

# Send the message
curl https://api.anthropic.com/v1/sessions/$SESSION_ID/events \
  -X POST \
  -H "x-api-key: $ANTHROPIC_API_KEY_COACH" \
  -H "anthropic-version: 2023-06-01" \
  -H "anthropic-beta: managed-agents-2026-04-01" \
  -H "Content-Type: application/json" \
  -d '{"events":[{"type":"user.message","content":[{"type":"text","text":"Hi, I am new here. What do you know about me?"}]}]}'

# Stream the agent's response
curl https://api.anthropic.com/v1/sessions/$SESSION_ID/events/stream \
  -H "x-api-key: $ANTHROPIC_API_KEY_COACH" \
  -H "anthropic-version: 2023-06-01" \
  -H "anthropic-beta: managed-agents-2026-04-01"
You should see agent.mcp_tool_use events as Coach calls get_user_context, followed by agent.tool_use events for workspace file writes, and finally agent.message events with the response text.
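When eyeballing a captured stream, it helps to collapse the events into phases and confirm they arrive in the order described above. The event type names are from this doc; the helper is a hypothetical debugging aid.

```typescript
// Hypothetical helper: collapse a captured event stream into phases so you
// can verify the order described above (MCP tool calls, then workspace
// writes, then the final message). Event type names are from this doc.
type CoachEvent = { type: string };

function summarizeEventOrder(events: CoachEvent[]): string[] {
  const interesting = new Set([
    "agent.mcp_tool_use",
    "agent.tool_use",
    "agent.message",
  ]);
  const phases: string[] = [];
  for (const e of events) {
    if (!interesting.has(e.type)) continue;
    // Collapse consecutive duplicates so the summary reads as phases.
    if (phases[phases.length - 1] !== e.type) phases.push(e.type);
  }
  return phases;
}
```

A healthy first turn should summarize to ["agent.mcp_tool_use", "agent.tool_use", "agent.message"].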

Phase 2: API integration test

With infrastructure confirmed, test the Hono API routes that the mobile client will hit.
Step 1: Start the Hono API

pnpm turbo dev --filter=@tiwih/api
API is now running on http://localhost:3001.
Step 2: Start a session for the Clerk test user

curl http://localhost:3001/api/v1/coach/start \
  -X POST \
  -H "Content-Type: application/json" \
  -H "x-clerk-user-id: user_2test..." \
  -d '{}'
Expected response: { "sessionId": "session_...", "isNew": true }. If you get a 402 pro_required response, the Clerk user doesn’t have subscriptionTier: 'pro' in Convex. Fix that first.
Step 3: Send a message and stream the response

curl http://localhost:3001/api/v1/coach/message \
  -X POST \
  -H "Content-Type: application/json" \
  -H "x-clerk-user-id: user_2test..." \
  --no-buffer \
  -d '{"sessionId":"session_...","content":"What is in my stash?"}'
You should see SSE-style events streaming back, culminating in a text response that references the user’s actual stash from Convex.
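If you capture that stream to a file, a few lines of parsing turn it back into event objects. This assumes the standard SSE "data: <json>" framing; the real payload shape may differ.

```typescript
// Sketch of parsing SSE-style output captured from /coach/message.
// Assumes standard "data: <json>" framing, which is an assumption —
// the real endpoint's framing may differ.
function parseSseData(raw: string): unknown[] {
  return raw
    .split("\n")
    .filter((line) => line.startsWith("data: "))
    .map((line) => JSON.parse(line.slice("data: ".length)));
}
```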
Step 4: Verify Convex session tracking

In the Convex dashboard, check the coachSessions table. There should be one row for the test user with turnCount: 1 and a recent lastActiveAt.

Phase 3: Mobile UI test flow

With the API confirmed, test the end-to-end mobile experience.
Step 1: Start the mobile app

cd apps/mobile && pnpm ios
Sign in as testuser+clerk_test@example.com with code 424242.
Step 2: Verify the Pro gate

Navigate to the Ask AI entry point.
  • If the Clerk user is on subscriptionTier: 'free', you should see the CoachUpgradePrompt with “Upgrade to Pro — $12.99/mo” as the only CTA
  • If the user is on subscriptionTier: 'pro', you should see the Coach chat UI
Step 3: Conversation round-trip

Send: “I tried Blue Dream last night. It was too heady for me.” Wait for the response. In the debug sidebar (visible in __DEV__ builds only), verify that /workspace/tasting-log.md now contains a Blue Dream entry.
Step 4: Shopping context injection

In a separate terminal, simulate a browser extension push:
curl http://localhost:3001/api/v1/shopping/context \
  -X POST \
  -H "Content-Type: application/json" \
  -H "x-clerk-user-id: user_2test..." \
  -d '{
    "url": "https://sunnyside.shop/strains/gelato-41",
    "pageTitle": "Gelato 41 | Sunnyside",
    "cartContents": [],
    "viewedProducts": [{"name": "Gelato 41 1/8oz", "strainSlug": "gelato-41", "viewedAt": 1712614800000}]
  }'
Now in the mobile app, ask: “What do you think about what I’m looking at?” Coach should reference Gelato 41 specifically.
Step 5: Cross-session memory test

Close the app. Wait at least a minute. Reopen. Ask: “What did I tell you about Blue Dream?” Coach should reference the tasting log entry from earlier, proving session resumption works.

Phase 4: Measurement

The whole point of the spike is to collect data on whether Coach is viable. Capture these numbers:
Metric | Where to find it | Target
--- | --- | ---
Active CPU seconds per interaction | session.stats.active_seconds from GET /v1/sessions/{id} | < 20s per typical question
Input tokens per turn | session.usage.input_tokens | < 30k (benefits from caching after turn 2)
Cache hit rate | cache_read_input_tokens / total_input_tokens | > 60% after the first turn
Session resumption latency | Time from /coach/resume to first agent.message event | < 3s
Cost per “heavy user day” | 5 interactions × measured cost | < $0.10/day
Cost per Pro user per month | Heavy day cost × 30 | < $3.00/month
Log all numbers to docs/superpowers/specs/2026-04-08-cannabis-coach-spike-results.md. If the monthly cost is under $3/Pro user, productionize. If it’s between $3 and $6, iterate (try Sonnet 4.6 instead of Opus, tighten the tool surface). If it’s over $6, kill the spike.
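The decision gate and the cache-hit-rate formula from the table can be written down as two pure functions. Thresholds are taken straight from this doc; the functions themselves are illustrative, not shipped code.

```typescript
// The Phase 4 gate as a pure function. Thresholds ($3 / $6 per Pro user
// per month) are from this doc; the function is illustrative only.
type GateDecision = "productionize" | "iterate" | "kill";

function costGate(monthlyCostPerProUser: number): GateDecision {
  if (monthlyCostPerProUser < 3) return "productionize";
  if (monthlyCostPerProUser <= 6) return "iterate";
  return "kill";
}

// Mirrors the cache_read_input_tokens / total_input_tokens formula above.
function cacheHitRate(cacheReadInputTokens: number, totalInputTokens: number): number {
  return totalInputTokens === 0 ? 0 : cacheReadInputTokens / totalInputTokens;
}
```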

Enabling and disabling

Coach is gated by the coach_enabled feature flag in apps/mobile/src/_config/featureFlags.ts.
export const FEATURE_FLAGS = {
  coach_enabled: {
    dev: true,       // enabled for local development
    staging: false,  // disabled in staging
    production: false, // disabled in production
  },
  // ...other flags
}
To turn Coach off entirely — even in local dev — flip the dev value to false. The Ask AI entry point will fall back to the existing Deep Research implementation for all users, Pro or free, and no Managed Agents sessions will be created.
This flag is also the production kill switch. If Coach ships and we need to roll it back fast, flipping production: false is enough — the mobile app will stop creating new sessions and stop routing Ask AI to the /coach/message endpoint. Existing sessions stay alive in Anthropic’s infrastructure but will idle out on their own schedule.
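The flag resolution mirrors the shape of the FEATURE_FLAGS snippet above; the isEnabled helper here is a hypothetical sketch of the lookup, not the app's actual flag reader.

```typescript
// Sketch of per-environment flag resolution, mirroring the FEATURE_FLAGS
// shape shown above. isEnabled is hypothetical, not the app's flag reader.
type Env = "dev" | "staging" | "production";

const FEATURE_FLAGS = {
  coach_enabled: { dev: true, staging: false, production: false },
} as const;

function isEnabled(flag: keyof typeof FEATURE_FLAGS, env: Env): boolean {
  return FEATURE_FLAGS[flag][env];
}
```

Flipping any environment's value to false is the whole kill switch: no new sessions, no Coach routing.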

Known limitations

  • Beta API — Claude Managed Agents is in beta (managed-agents-2026-04-01). The API surface can change between releases. Isolate all Coach-related code behind the apps/api/src/services/coach/ boundary so a swap is localized.
  • Vendor lock-in — Coach bypasses Vercel AI Gateway and calls Anthropic directly. No multi-provider fallback for this feature.
  • Cold start latency — Session resumption reads the container’s file system. First turn after a long idle may be slower than subsequent turns.
  • Session eviction — Anthropic may terminate idle sessions on their own schedule. When that happens, /api/v1/coach/resume creates a new session and returns wasReset: true. The mobile UI shows a toast: “Your Coach started fresh — previous history is not available.” Workspace files from the terminated session are NOT recovered in v1.
  • Privacy — Learned preferences and shopping context are stored on Anthropic’s infrastructure. Pro tier is explicit opt-in; a privacy notice should appear at enrollment (not yet implemented).
  • No proactive notifications — v1 is reactive only. The v2 design note for scheduled proactive notifications is in the spike plan but not implemented.
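The session-eviction behavior above implies a small piece of client logic: inspect the /coach/resume response and surface the reset toast when needed. The response shape and toast copy are from this doc; the helper is a hypothetical sketch of that handling.

```typescript
// Hypothetical client-side handling of the /coach/resume response
// described above. Field names (sessionId, wasReset) and the toast copy
// are from this doc; the helper itself is illustrative.
interface ResumeResponse {
  sessionId: string;
  wasReset?: boolean;
}

// Returns the toast to show, or null when the session resumed normally.
function toastForResume(res: ResumeResponse): string | null {
  return res.wasReset
    ? "Your Coach started fresh — previous history is not available."
    : null;
}
```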