Status: Brainstorm | Phase: Phase 5 (AI Media) | Tier: Pro / Studio

Overview

You just uploaded a dispensary receipt with three strains. High IQ generates a research report breaking down the terpene profiles, effects forecasts, High Family classifications, and similar strain recommendations for each one. Right now, that report is text. Detailed, useful, intelligent text — but still text. Video Reports turn that text into a narrated visual presentation. Instead of reading your report, you watch it. An AI narrator walks through the findings while data visualizations, terpene charts, High Family badges, and strain comparisons animate on screen. It is your personal cannabis briefing, produced on demand, every time you log an order.

Two formats serve two distinct use cases. The Share Clip is a 30-60 second highlight reel designed for Instagram Reels, TikTok, iMessage, and group chats — punchy, visual, and immediately interesting to anyone who sees it. The Full Briefing is a 2-3 minute deep dive that covers everything the text report covers, but in a format that is easier to absorb while doing something else. You can watch your briefing while rolling a joint, commuting, or waiting for your edible to kick in.

The critical insight is personalization. This is not a generic “Blue Dream strain review” video that could exist on YouTube. This video mentions you by name, references your purchase history, compares this strain to others you have tried, and highlights how this order fits into your broader consumption patterns. That level of personalization is what makes it shareable — “look at what my cannabis app just made for me” — and what makes it impossible for competitors to replicate without the same data infrastructure.

What It Does

Share Clip (30-60 seconds)

  • Opens with the user’s name and the order date: “John, here’s a quick look at your January 15th pickup”
  • Highlights the standout strain from the order — the one with the most interesting terpene profile or the highest match score
  • Shows a terpene visualization that animates the dominant terpenes
  • Displays the High Family badge with a one-sentence explanation
  • Ends with a branded outro and deep link back to the full report
  • Optimized for vertical video (9:16) for Stories and Reels
  • Includes subtitles/captions baked into the video for silent autoplay

Full Briefing (2-3 minutes)

  • Structured narration matching the report sections: Overview, Strain Breakdown, Terpene Analysis, Effects Forecast, Similar Strains, Medical Research
  • Each section has its own visual treatment — charts for terpenes, cards for similar strains, quotes for research highlights
  • Pauses on key insights: “This is interesting — your Blue Dream has unusually high Myrcene, which means…”
  • Compares to previous orders when history exists: “Last month you went heavy on sativas. This order is all indica-dominant — here’s what that shift means…”
  • Available in both vertical (mobile-first) and horizontal (full-screen) formats
  • Downloadable as MP4 for sharing or saving
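The two formats above can be pinned down as a small spec the pipeline reads from. This is an illustrative sketch — the type and field names are assumptions, not a confirmed implementation:

```typescript
// Hypothetical spec for the two video products described above.
// Durations and aspect ratios come from the feature description.
type AspectRatio = "9:16" | "16:9";

interface VideoFormatSpec {
  id: "share_clip" | "full_briefing";
  minSeconds: number;
  maxSeconds: number;
  aspectRatios: AspectRatio[];
  burnedInCaptions: boolean; // captions baked in for silent autoplay
  downloadable: boolean;     // MP4 export for sharing/saving
}

const FORMATS: VideoFormatSpec[] = [
  {
    id: "share_clip",
    minSeconds: 30,
    maxSeconds: 60,
    aspectRatios: ["9:16"],
    burnedInCaptions: true,
    downloadable: true,
  },
  {
    id: "full_briefing",
    minSeconds: 120,
    maxSeconds: 180,
    aspectRatios: ["9:16", "16:9"],
    burnedInCaptions: true,
    downloadable: true,
  },
];

// Look up the spec for a requested format id.
function formatSpec(id: VideoFormatSpec["id"]): VideoFormatSpec {
  const spec = FORMATS.find((f) => f.id === id);
  if (!spec) throw new Error(`unknown format: ${id}`);
  return spec;
}
```

Centralizing this keeps the script-generation prompt, the Remotion composition, and the final render step agreeing on duration and aspect-ratio targets.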

User Value

The “aha moment” is the first time the AI narrator says your name, mentions the exact strains you bought, and explains how they interact — this is not a generic video. It is a production made specifically for you, from your data, about your cannabis. That realization is what drives shares and what drives upgrades.

Technical Approach

Architecture

| Layer | Technology | Notes |
| --- | --- | --- |
| Script Generation | LLM (Claude or GPT-4) | Converts structured report data into a natural narration script with timing cues |
| Text-to-Speech | ElevenLabs or similar | Natural-sounding narration with personality; configurable voice/tone |
| Visual Rendering | Remotion or similar | Programmatic video generation from React components; renders charts, animations, text overlays |
| Data Visualization | Custom components | Terpene wheels, High Family badges, comparison cards, effect meters |
| Video Assembly | FFmpeg or cloud rendering | Combines narration audio with visual track, adds subtitles, renders final MP4 |
| Storage & CDN | Supabase Storage + CDN | Cached for repeat views; keyed by report ID |
| Delivery | HLS streaming or direct MP4 | Stream in-app; download for sharing |

Generation Pipeline

1. Report Completion Trigger

When a research report finishes generating, a background task is queued (Trigger.dev) for video generation. The user sees a “Video generating…” indicator with estimated time.
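The queued-job flow the UI reads from can be sketched as a minimal status tracker. In production this state would live in Trigger.dev and the database; everything here (names, fields, the in-memory map) is a hypothetical stand-in:

```typescript
// Minimal sketch of the job-status flow behind the "Video generating…"
// indicator. Statuses and the 180-second default are assumptions.
type JobStatus = "queued" | "rendering" | "ready" | "failed";

interface VideoJob {
  reportId: string;
  status: JobStatus;
  estimatedSeconds: number; // surfaced to the user as estimated time
}

const jobs = new Map<string, VideoJob>();

// Called when the research report finishes generating.
function enqueueVideoJob(reportId: string, estimatedSeconds = 180): VideoJob {
  const job: VideoJob = { reportId, status: "queued", estimatedSeconds };
  jobs.set(reportId, job);
  return job;
}

// Called by the background worker as it moves through the pipeline.
function advance(reportId: string, status: JobStatus): void {
  const job = jobs.get(reportId);
  if (!job) throw new Error(`no job for report ${reportId}`);
  job.status = status;
}
```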
2. Script Generation

The LLM receives the structured report data (not the prose text) and generates a narration script. The script includes timing annotations, emphasis markers, and visual cue references. For Share Clips, the prompt emphasizes brevity and hook-worthy opening lines. For Full Briefings, the prompt mirrors the report structure.
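One way to realize the format-specific prompting described above is a small prompt builder that switches instructions per format. The segment shape and prompt wording are illustrative assumptions:

```typescript
// Sketch of the narration-script shape the LLM is asked to emit.
// Field names are assumptions for illustration.
interface ScriptSegment {
  text: string;          // what the narrator says
  emphasis?: string[];   // words to stress in TTS
  visualCue?: string;    // e.g. "highlight:myrcene" for the renderer
  targetSeconds: number; // timing annotation for pacing
}

// Build the LLM prompt from structured report data (not the prose text).
function scriptPrompt(
  format: "share_clip" | "full_briefing",
  reportJson: object
): string {
  const style =
    format === "share_clip"
      ? "Be brief. Open with a hook line. Total length 30-60 seconds."
      : "Mirror the report sections. Use conversational transitions. Total length 2-3 minutes.";
  return [
    "You are writing a narration script from structured report data.",
    style,
    "Return JSON segments with text, emphasis, visualCue, targetSeconds.",
    `Report data: ${JSON.stringify(reportJson)}`,
  ].join("\n");
}
```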
3. Audio Rendering

The narration script is sent to TTS with voice parameters (warm, knowledgeable, slightly casual — think a friend who knows a lot about weed, not a documentary narrator). Audio is returned as WAV/MP3 with word-level timestamps.
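The word-level timestamps from the TTS step feed the burned-in captions directly. A sketch of that grouping, with an assumed timestamp shape (the exact field names depend on the TTS provider):

```typescript
// Word-level timestamps as returned by the TTS step (shape assumed),
// grouped into caption lines for burned-in subtitles.
interface WordStamp { word: string; startMs: number; endMs: number; }
interface Caption { text: string; startMs: number; endMs: number; }

function toCaptions(words: WordStamp[], maxWordsPerLine = 6): Caption[] {
  const captions: Caption[] = [];
  for (let i = 0; i < words.length; i += maxWordsPerLine) {
    const chunk = words.slice(i, i + maxWordsPerLine);
    captions.push({
      text: chunk.map((w) => w.word).join(" "),
      startMs: chunk[0].startMs,
      endMs: chunk[chunk.length - 1].endMs,
    });
  }
  return captions;
}
```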
4. Visual Assembly

A Remotion composition is rendered using the report data as props. Visual elements are synchronized to the audio timestamps — when the narrator says “Myrcene,” the terpene chart highlights Myrcene. Transitions, animations, and branded elements are applied.
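The audio-sync step above — highlighting the terpene chart at the moment the narrator says “Myrcene” — reduces to mapping a word timestamp to a frame number. A minimal sketch, reusing the assumed timestamp shape from the TTS step:

```typescript
// Find the frame at which a keyword is spoken so the matching chart
// highlight can start exactly there. fps default is an assumption.
interface WordStamp { word: string; startMs: number; endMs: number; }

function highlightFrame(
  words: WordStamp[],
  keyword: string,
  fps = 30
): number | null {
  const hit = words.find(
    (w) => w.word.toLowerCase().replace(/[^a-z]/g, "") === keyword.toLowerCase()
  );
  if (!hit) return null;
  // Convert the word's start time to a frame index for the composition.
  return Math.round((hit.startMs / 1000) * fps);
}
```

Inside a Remotion composition, this frame index would gate the highlight animation for the named terpene.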
5. Final Rendering

Audio and video tracks are muxed. Subtitles are burned in. The video is rendered in the target format (vertical 9:16 for Share Clip, both 9:16 and 16:9 for Full Briefing). Thumbnail frame is extracted.
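If FFmpeg handles the muxing and subtitle burn-in, the invocation can be assembled like this. The flags are standard FFmpeg options; the file paths are placeholders:

```typescript
// Build the FFmpeg argument list that muxes narration audio with the
// rendered visual track and burns in subtitles.
function muxArgs(
  videoIn: string,
  audioIn: string,
  srt: string,
  out: string
): string[] {
  return [
    "-i", videoIn,             // rendered visual track (e.g. from Remotion)
    "-i", audioIn,             // TTS narration audio
    "-vf", `subtitles=${srt}`, // burn captions into the video stream
    "-c:v", "libx264",         // re-encode video (required when filtering)
    "-c:a", "aac",             // encode the narration audio
    "-shortest",               // stop at the shorter of the two inputs
    out,
  ];
}
```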
6. Storage and Notification

The final video is uploaded to CDN storage. The user receives a push notification: “Your video report is ready.” The video appears on the report detail screen alongside the text report.
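Since the architecture table says videos are cached and keyed by report ID, a deterministic storage path makes cache hits and 90-day re-generation straightforward. Bucket name and layout here are assumptions:

```typescript
// Storage path convention keyed by report ID, so repeat views hit the
// CDN cache and expired videos can be re-generated to the same key.
function videoStoragePath(
  reportId: string,
  format: "share_clip" | "full_briefing",
  aspect: "9x16" | "16x9"
): string {
  return `video-reports/${reportId}/${format}-${aspect}.mp4`;
}
```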

Key Technical Challenges

  1. Generation time — Video rendering is slow. Even with cloud infrastructure, a 2-3 minute video might take 2-5 minutes to produce. The UX must set expectations (progress indicator, push notification when ready) rather than pretend it is instant.
  2. Visual quality — Programmatic video (Remotion-style) looks polished and consistent but cannot match cinematic AI video generation (Runway-style). The choice is between control and aesthetics. Starting with Remotion for v1 (reliable, deterministic) and exploring AI video for v2 (more impressive but less controllable) is the safer path.
  3. Narration naturalness — The script needs to sound like a person talking, not a report being read aloud. The LLM prompt must emphasize conversational transitions: “Now here’s what’s interesting…” rather than “Section 3: Effects Forecast.”
  4. Cost per video — A 2-minute video with TTS narration and cloud rendering could cost $0.50-$2.00 per generation. At 4 reports/month per active subscriber, that is $2-$8/month in video costs alone — significant but manageable within Pro pricing.
  5. Storage — Video files are large (10-50MB per clip). CDN costs accumulate with user base growth. Implement retention policies (keep for 90 days, re-generate on demand) and compression optimization.
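The cost math in challenge 4 can be made explicit. The per-video range and report frequency are the estimates from the text:

```typescript
// Worked version of the per-subscriber cost estimate: reports per month
// times the estimated per-video cost range ($0.50-$2.00 from the text).
function monthlyVideoCost(
  reportsPerMonth: number,
  perVideoLow = 0.5,
  perVideoHigh = 2.0
): [number, number] {
  return [reportsPerMonth * perVideoLow, reportsPerMonth * perVideoHigh];
}
```

At 4 reports/month this yields the $2-$8/month range cited above; the same function scales the estimate for Studio-tier multi-order compilations or pre-generation experiments.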

Tier Impact

| Tier | Access |
| --- | --- |
| Free | See a blurred video thumbnail on the report page. Tap to see a 5-second frozen preview frame with a “Pro” badge overlay. |
| Pro | Full access to Share Clips and Full Briefings for every report. Unlimited downloads and shares. |
| Studio | Priority rendering queue (videos ready in under 60 seconds), choice of narrator voice, custom intro/outro, extended formats (5+ minutes for multi-order compilations). |

Dependencies

  • Research report generation pipeline — built and live
  • Strain data and terpene profiles — built and live
  • Trigger.dev task infrastructure — built and live
  • Narration script generation prompt (LLM)
  • TTS integration (ElevenLabs or similar)
  • Remotion video rendering pipeline (or equivalent)
  • Data visualization components for video (terpene charts, badges, cards)
  • CDN storage for video files
  • Push notification for “video ready” event
  • Video player component in mobile app

Open Questions

  1. Programmatic video vs. AI video — Remotion gives us full control over visuals and consistency, but AI video generation (Runway, Pika) could produce more cinematic results. Which approach for v1? Recommendation: Remotion for reliability, explore AI video for v2.
  2. Narrator voice selection — One default voice, or let users choose? Multiple voices increase perceived personalization but add complexity. Could be a Studio tier feature.
  3. Share Clip length — 30 seconds for Reels/TikTok compatibility, or 60 seconds for more substance? Platform optimal lengths vary. Could offer both.
  4. Subtitle language — English only for v1? The narrator speaks English, but subtitles could be translated for non-English speakers at low cost. Worth exploring for international reach.
  5. Pre-generation vs. on-demand — Should videos auto-generate for every report (higher cost, zero wait time for user) or generate only when the user taps “Generate Video” (lower cost, wait time)? The flywheel argument favors pre-generation.
Related Features

  • AI Podcasts — Audio-only alternative; lower cost, different consumption context
  • AI Music Videos — Combines video with AI-generated music for premium format
  • Report Sharing — Infrastructure for distributing Share Clips
  • Share Cards — Static visual sharing format; video is the animated evolution
  • Blog AI Content — Same video pipeline applied to blog articles