Status: Brainstorm | Phase: Phase 5 (AI Media) | Tier: N/A (Platform Enhancement)

Overview

The TIWIH blog publishes articles about cannabis science, strain deep dives, terpene education, industry news, and platform updates. Each article is well-researched, well-written, and useful. But each article exists in exactly one format: text. In a media landscape where the same idea reaches different audiences through different channels — a TikTok for the scroller, a podcast for the commuter, a newsletter for the reader — publishing only text means leaving reach on the table.

Blog AI Content transforms every article into three additional formats: a 1-2 minute Video Summary that visually narrates the key points, a 3-5 minute Podcast Episode where two AI hosts discuss the article, and an optional Ambient Soundtrack that plays while reading. These formats are embedded directly in the blog article page as “Watch,” “Listen,” and “Read” tabs — the reader chooses how they want to consume the same information.

This is a content multiplication strategy. One piece of research produces four pieces of content. Each format lives on the article page (improving dwell time and page value), can be distributed independently (the video on TikTok, the podcast clip on Twitter/X, the audio on a podcast feed), and serves a different consumption context. Someone in the car listens to the podcast. Someone on Instagram watches the video. Someone at a desk reads the article. Same intelligence, three times the reach.

The infrastructure investment is minimal because Blog AI Content reuses the pipelines built for Video Reports (video generation), AI Podcasts (dialogue generation + TTS), and Session Music (audio generation). The blog-specific layer is just the content source — article text instead of report data — and the embedding UI.

What It Does

Embedded Formats

Video Summary

A short, visual narration of the article’s key points. An AI narrator reads a condensed version of the article while relevant visuals — strain images, terpene charts, data visualizations, and branded graphics — appear on screen. Designed for people who want the gist without reading 1,500 words, and for social media distribution where video outperforms text links.

The video is vertical (9:16) for mobile and social, with a horizontal (16:9) option for desktop embedding. Subtitles are burned in for silent autoplay.
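The two orientations above can be captured as render presets. A minimal sketch — the exact resolutions are assumptions (standard 1080p-class outputs), not confirmed specs:

```typescript
// Sketch: render presets for the two video orientations.
type Orientation = "vertical" | "horizontal";

interface RenderPreset {
  width: number;
  height: number;
  burnSubtitles: boolean; // subtitles burned in for silent autoplay
}

function renderPreset(orientation: Orientation): RenderPreset {
  return orientation === "vertical"
    ? { width: 1080, height: 1920, burnSubtitles: true } // 9:16 for mobile/social
    : { width: 1920, height: 1080, burnSubtitles: true }; // 16:9 for desktop embeds
}
```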

Page Layout

The article page gets a format selector at the top, below the title and above the article body:
┌─────────────────────────────────────┐
│  "The Science of Myrcene"           │
│  Published Jan 15, 2026             │
│                                     │
│  ┌──────┐  ┌──────┐  ┌──────┐     │
│  │ Read │  │Watch │  │Listen│     │
│  │  ✓   │  │ 1:42 │  │ 4:15 │     │
│  └──────┘  └──────┘  └──────┘     │
│                                     │
│  [Article content / Video / Audio]  │
└─────────────────────────────────────┘
“Read” is the default. “Watch” replaces the article body with the video player. “Listen” starts audio playback while keeping the article text visible (so users can read along if they choose).
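The tab behavior can be sketched as a tiny view-state function. Names here are hypothetical; the real component would live in the site’s frontend:

```typescript
// Sketch of the format-selector state: "Listen" overlays audio on the
// text view rather than replacing it, while "Watch" swaps the body out.
type Format = "read" | "watch" | "listen";

function viewState(format: Format) {
  return {
    showArticle: format !== "watch", // only "watch" hides the article body
    showVideo: format === "watch",
    playAudio: format === "listen", // audio plays while text stays visible
  };
}
```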

User Value

The same article reaching three audiences is three times the content marketing value for zero additional editorial effort. The video goes on TikTok. The podcast clip goes on Twitter/X. The article stays on the website. Different people discover TIWIH through different channels, all leading back to the same canonical article page.

How It Works

1. Article Published

When a new blog article is published (or an existing article is marked for media generation), a Trigger.dev task is queued to generate all three formats. This can be automatic (every article gets media) or selective (editor flags specific articles).

2. Article Analysis

The system reads the full article text, extracts key points, identifies any strain names mentioned (for terpene data lookup), and determines the article’s topic category and mood. This context feeds all three generation pipelines.
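A minimal sketch of this analysis step, assuming a fixed strain lookup table and a keyword heuristic for mood; the production pipeline would more likely use an LLM for extraction:

```typescript
// Sketch of the article-analysis step feeding all three pipelines.
interface ArticleContext {
  strains: string[];   // strain names found, for terpene data lookup
  wordCount: number;   // drives reading-time and soundtrack-length estimates
  mood: "educational" | "newsy" | "relaxed";
}

// Hypothetical lookup table; the real list would come from the strain database.
const KNOWN_STRAINS = ["Blue Dream", "OG Kush", "Granddaddy Purple"];

function analyzeArticle(text: string): ArticleContext {
  const strains = KNOWN_STRAINS.filter((s) => text.includes(s));
  const wordCount = text.trim().split(/\s+/).length;
  const mood = /sleep|calm|relax/i.test(text)
    ? "relaxed"
    : /announc|launch|update/i.test(text)
      ? "newsy"
      : "educational";
  return { strains, wordCount, mood };
}
```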
3

Parallel Generation

Three generation tasks run in parallel:
  • Video: Article text is condensed into a 1-2 minute narration script. TTS generates the audio. Remotion or similar renders visuals synchronized to the narration.
  • Podcast: Article text is converted into a two-host dialogue script. TTS renders both voices. Audio is assembled with intro/outro.
  • Soundtrack: Article topic and mood are mapped to musical parameters. Music generation API produces a track matching the estimated reading time.
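The topic/mood-to-music mapping in the soundtrack branch might look like the sketch below. The parameter names (tempo, genre) are assumptions about what a music generation API could accept, not any real provider’s schema:

```typescript
// Sketch: map an article's mood and estimated reading time to music parameters.
interface MusicParams {
  tempoBpm: number;
  genre: string;
  durationSec: number;
}

function soundtrackParams(mood: string, readingTimeMin: number): MusicParams {
  const base: Record<string, Omit<MusicParams, "durationSec">> = {
    relaxed: { tempoBpm: 70, genre: "ambient" },
    educational: { tempoBpm: 85, genre: "lo-fi" },
    newsy: { tempoBpm: 100, genre: "minimal electronic" },
  };
  const preset = base[mood] ?? base["educational"];
  // Pad 1.5x beyond the estimated reading time so slower readers aren't cut off.
  return { ...preset, durationSec: Math.ceil(readingTimeMin * 60 * 1.5) };
}
```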

4. Quality Review

Generated media is automatically checked: video duration matches target, podcast dialogue is natural (no hallucinated data), soundtrack does not have silence gaps. Flagged items are re-generated.
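The checks above can be expressed as pure functions over generated-media metadata. In this sketch the 25% duration tolerance and 2-second silence threshold are illustrative, not decided values:

```typescript
// Sketch of the automated QA gate for generated media.
interface MediaQA {
  durationSec: number;       // measured length of the rendered asset
  targetSec: number;         // length the script called for
  maxSilenceGapSec: number;  // longest stretch of dead air detected
}

function passesReview(qa: MediaQA): boolean {
  // Duration must land within 25% of target (illustrative tolerance).
  const durationOk =
    Math.abs(qa.durationSec - qa.targetSec) / qa.targetSec <= 0.25;
  // No silence gap longer than 2 seconds (illustrative threshold).
  const silenceOk = qa.maxSilenceGapSec <= 2;
  return durationOk && silenceOk; // failures get re-queued for generation
}
```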

5. Embedding

Media files are uploaded to CDN. The article page is updated with the format selector tabs and media player components. The article’s Open Graph meta tags are updated with the video thumbnail for social sharing.
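The Open Graph update can be as simple as emitting the standard `og:image` / `og:video` properties pointing at the CDN assets. A sketch (URLs are placeholders):

```typescript
// Sketch: build the Open Graph tags for an article with a rendered video.
function ogVideoTags(articleUrl: string, thumbUrl: string, videoUrl: string): string {
  return [
    `<meta property="og:url" content="${articleUrl}" />`,
    `<meta property="og:image" content="${thumbUrl}" />`,   // video thumbnail for social cards
    `<meta property="og:video" content="${videoUrl}" />`,
    `<meta property="og:video:type" content="video/mp4" />`,
  ].join("\n");
}
```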

Technical Approach

Pipeline Reuse

The key architectural insight is that Blog AI Content does not need its own pipelines — it reuses infrastructure from three other AI media features:
Blog Format | Reuses From | Adaptation
Video Summary | Video Reports | Article text replaces report data as input. Same TTS, same Remotion rendering, same video assembly.
Podcast Episode | AI Podcasts | Article text replaces strain data as input. Same dialogue generation prompt (adapted), same dual-voice TTS, same audio assembly.
Ambient Soundtrack | Session Music | Article topic/mood replaces terpene data as input. Same music generation API, same post-processing.
This means Blog AI Content should be built AFTER the three source features are operational. It is not a standalone build — it is a content-source adapter layer on top of existing pipelines.
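The adapter layer can be sketched as three small mapping functions from one article shape to each pipeline’s input. The input shapes here are assumptions; the real Video Reports, AI Podcasts, and Session Music interfaces would define them:

```typescript
// Sketch: content-source adapters turning one article into three pipeline inputs.
interface Article {
  title: string;
  body: string;
  mood: string;
}

function toVideoInput(a: Article) {
  return { scriptSource: a.body, title: a.title, format: "summary" as const };
}

function toPodcastInput(a: Article) {
  return { dialogueSource: a.body, episodeTitle: a.title, hosts: 2 };
}

function toSoundtrackInput(a: Article) {
  // ~250 words/minute reading speed, per the calibration note in Open Questions.
  return { mood: a.mood, targetMinutes: Math.ceil(a.body.split(/\s+/).length / 250) };
}
```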

Automation via Trigger.dev

blog-article-published (trigger)
  ├── generate-blog-video (task)
  │     └── video-reports pipeline with article input
  ├── generate-blog-podcast (task)
  │     └── ai-podcasts pipeline with article input
  └── generate-blog-soundtrack (task)
        └── session-music pipeline with article mood input

All three tasks run in parallel. Total generation time: 2-5 minutes (limited by the slowest pipeline, which is video).
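The fan-out can be sketched with stub pipelines standing in for the real Trigger.dev tasks; wall-clock time is bounded by the slowest branch:

```typescript
// Sketch: run the three generation branches concurrently and collect results.
type Pipeline = (articleId: string) => Promise<string>; // returns a media URL/id

async function generateAllFormats(
  articleId: string,
  video: Pipeline,
  podcast: Pipeline,
  soundtrack: Pipeline,
): Promise<{ video: string; podcast: string; soundtrack: string }> {
  // Promise.all runs all three branches in parallel; total time = slowest branch.
  const [v, p, s] = await Promise.all([
    video(articleId),
    podcast(articleId),
    soundtrack(articleId),
  ]);
  return { video: v, podcast: p, soundtrack: s };
}
```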

Distribution Strategy

Each format has a distribution life beyond the article page:
Format | On-Page | Distribution
Video | Embedded player in “Watch” tab | TikTok, Instagram Reels, YouTube Shorts, Twitter/X video, LinkedIn
Podcast | Audio player in “Listen” tab | Private RSS feed, Spotify (eventually), Twitter/X audio clips, Apple Podcasts
Soundtrack | Background audio in “Read” tab | N/A (article-specific, not distributed independently)
A single article about “The Top 5 Terpenes for Sleep” could generate:
  • A 90-second TikTok video with 50,000 views
  • A 4-minute podcast clip shared on Twitter/X with a link back to the full article
  • An ambient reading experience that increases average time on page by 30%

Cost Per Article

Format | Estimated Cost | Notes
Video Summary (90 sec) | $0.30 - $1.50 | TTS + Remotion rendering
Podcast Episode (4 min) | $0.10 - $0.40 | Dialogue LLM + dual TTS
Ambient Soundtrack (10 min) | $0.10 - $0.50 | Music generation
Total per article | $0.50 - $2.40 |
At 4-8 articles per month, the monthly cost is roughly $2 - $19. This is a marketing expense, not a per-user cost, so the economics are straightforward.
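The ranges above can be checked with a line of arithmetic (USD, per the cost table):

```typescript
// Arithmetic check on the quoted cost ranges.
const perArticleLow = 0.30 + 0.10 + 0.10;  // video + podcast + soundtrack, low end  -> $0.50
const perArticleHigh = 1.50 + 0.40 + 0.50; // high end                               -> $2.40
const monthlyLow = 4 * perArticleLow;      // 4 articles/month at low end  -> ~$2.00
const monthlyHigh = 8 * perArticleHigh;    // 8 articles/month at high end -> ~$19.20
```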

Tier Impact

Tier | Access
Free | All blog content (text, video, podcast, soundtrack) is freely available. The blog is a top-of-funnel content marketing channel — gating it would defeat the purpose.
Pro | Same access. Blog AI Content drives traffic and trust, which converts to Pro subscriptions through the main app funnel.
Blog AI Content is a content marketing investment, not a monetized feature. The ROI is measured in traffic, engagement, and brand perception — not direct subscription revenue.

Dependencies

  • Blog publishing system — built and live
  • Blog article content — built and live (multiple articles)
  • Trigger.dev task infrastructure — built and live
  • CDN storage — built and live
  • Video Reports pipeline (prerequisite) — see Video Reports
  • AI Podcasts pipeline (prerequisite) — see AI Podcasts
  • Session Music pipeline (prerequisite) — see Session Music
  • Blog article content-source adapter for each pipeline
  • Format selector tab UI component
  • Video player embedded in article page
  • Audio player embedded in article page
  • Open Graph video thumbnail integration
  • Trigger.dev task: blog-article-media-generation

Open Questions

  1. Auto-generate vs. selective — Should every blog article automatically get all three formats, or should an editor select which articles warrant the investment? Automatic is simpler and more complete. Selective saves cost on low-value posts. Recommendation: auto-generate for all feature articles and educational content; skip for changelog-style posts.
  2. Podcast show identity — Should blog podcast episodes be part of the same “show” as user-facing AI Podcasts (same hosts, same intro), or a separate show? Same show builds a cohesive brand. Separate show avoids confusion between personal and editorial content.
  3. Soundtrack length calibration — How do we match the soundtrack length to the article reading time? Average reading speed is ~250 words/minute. A 1,500-word article takes ~6 minutes to read. The soundtrack should be slightly longer (8-10 minutes) to accommodate slower readers and re-reading. But generating unnecessarily long tracks wastes cost.
  4. Social media posting automation — Should the blog media pipeline automatically post the video to TikTok, the clip to Twitter/X, etc.? Or should these be manually posted by the team? Automation saves time but reduces editorial control over timing and captions.
  5. Existing article backfill — The blog already has dozens of articles. Should we backfill all of them with AI media, or only generate for new articles going forward? Backfilling creates a more complete experience but is a one-time cost spike.

Related Features

  • AI Podcasts — Core podcast pipeline that blog episodes reuse
  • Video Reports — Core video pipeline that blog videos reuse
  • Session Music — Core music pipeline that blog soundtracks reuse
  • Strain Page Videos — Related platform enhancement for the website
  • Social Posts — Could auto-generate social captions for blog media distribution