Video Recording Data Architecture
How video recording data flows from the browser through storage and processing to admin display.
Overview
Video data in the platform flows through three phases:
- Capture — browser records video, detects device capabilities, selects codec
- Storage — video blob uploaded, response metadata saved to raw session registry
- Processing — FFmpeg transcoding, transcription, thumbnail generation, metadata enrichment
Each phase adds metadata. The admin dashboard displays the combined result.
Data Flow Diagram
Per-Phase Data Model
Phase 1: Session Creation
When a respondent opens a flow, a session is created with identity and device context.
```
FlowSession (Raw Registry)
├── sessionId              "session_1706123456789"
├── flowId                 "flow_abc123"
├── flowName               "Customer Testimonial"
├── userAgent              ← captured from browser UA string
│   ├── browser            "Chrome"
│   ├── browserVersion     "120.0"
│   ├── os                 "macOS"
│   ├── osVersion          "14.2.1"
│   ├── deviceType         "desktop"
│   └── rawUserAgent       "Mozilla/5.0 ..."
├── responses              [] (empty, filled per question)
├── createdByDeviceToken?  device ownership token
├── createdByUserId?       Clerk user ID
├── createdAt              ISO timestamp
└── updatedAt              ISO timestamp
```
File: packages/registries/src/server/rawVideoFlowSessionRegistry.ts
Type: FlowSession (lines 44–57)
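As a rough TypeScript sketch of that shape (the canonical FlowSession type lives in rawVideoFlowSessionRegistry.ts; the field types and the factory below are inferred from the tree above, not copied from the source):

```typescript
// Trimmed sketch of the raw-registry session shape. Assumptions: exact field
// types and the createFlowSession() helper are illustrative, not the real code.
interface UserAgentInfo {
  browser: string;
  browserVersion: string;
  os: string;
  osVersion: string;
  deviceType: string;
  rawUserAgent: string;
}

interface FlowSession {
  sessionId: string;
  flowId: string;
  flowName: string;
  userAgent: UserAgentInfo;      // parsed once per session from the UA string
  responses: unknown[];          // QuestionResponse[], filled per question
  createdByDeviceToken?: string; // device ownership token
  createdByUserId?: string;      // Clerk user ID
  createdAt: string;             // ISO timestamp
  updatedAt: string;             // ISO timestamp
}

// Mirrors Phase 1: when a respondent opens a flow, a session is created with
// identity and device context, and an empty responses array.
function createFlowSession(
  flowId: string,
  flowName: string,
  userAgent: UserAgentInfo,
): FlowSession {
  const now = new Date().toISOString();
  return {
    sessionId: `session_${Date.now()}`,
    flowId,
    flowName,
    userAgent,
    responses: [],
    createdAt: now,
    updatedAt: now,
  };
}
```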
Phase 2: Per-Question Recording & Upload
For each question answered, the browser records a video and uploads it. Device metadata is captured from the useVideoRecording() context at recording time.
```
QuestionResponse (added to Raw Registry)
├── questionId                "question1"
├── question                  "What do you love about our product?"
├── videoUrl                  "https://blob.store/flow_abc/question1/session_xyz.webm"
├── timestamp                 ISO timestamp (when response was stored)
├── recordingDeviceMetadata?  ← NEW: captured from MediaRecorder context
│   ├── browser               "Chrome"
│   ├── browserVersion        "120.0"
│   ├── deviceType            "desktop"
│   └── codec
│       ├── mimeType          "video/webm;codecs=vp8,opus"
│       ├── videoCodec        "vp8"
│       ├── audioCodec        "opus"
│       └── container         "webm"
└── processed                 false (set to true after processing)
```
Capture point: packages/app-video-flow/src/web/components/steps/questions/QuestionsScreen.tsx
builds metadata from useVideoRecording() → passes via onVideoRecorded(blob, index, metadata) →
video-flow-screen.tsx → use-video-upload.ts → addFlowSessionResponse server action → raw registry.
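The metadata construction can be sketched as a pure function. buildRecordingDeviceMetadata and RecordingDeviceMetadataPayload are named in the capture chain above, but this signature and the MIME-type parsing are assumptions for illustration, not the real implementation:

```typescript
// Hypothetical sketch of the metadata builder used by QuestionsScreen.tsx.
// The real code reads codec and device info from the useVideoRecording()
// context; here everything is passed in explicitly so the logic is testable.
interface CodecInfo {
  mimeType: string;
  videoCodec: string;
  audioCodec: string;
  container: string;
}

interface RecordingDeviceMetadataPayload {
  browser: string;
  browserVersion: string;
  deviceType: "desktop" | "mobile" | "tablet";
  codec: CodecInfo;
}

// Derive the codec fields from a MediaRecorder MIME type such as
// "video/webm;codecs=vp8,opus".
function parseCodecInfo(mimeType: string): CodecInfo {
  const container = mimeType.split(";")[0].split("/")[1] ?? "unknown";
  const codecsPart = /codecs=([^;]+)/.exec(mimeType)?.[1] ?? "";
  const [videoCodec = "unknown", audioCodec = "unknown"] = codecsPart
    .split(",")
    .map((c) => c.trim());
  return { mimeType, videoCodec, audioCodec, container };
}

function buildRecordingDeviceMetadata(
  browser: string,
  browserVersion: string,
  deviceType: RecordingDeviceMetadataPayload["deviceType"],
  mimeType: string,
): RecordingDeviceMetadataPayload {
  return { browser, browserVersion, deviceType, codec: parseCodecInfo(mimeType) };
}
```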
Phase 3: Processing Pipeline
After form submission, the webhook triggers async processing for each response.
```
QuestionResponse (Video Processing Registry — after processing)
├── questionId                "question1"
├── question                  "What do you love about our product?"
├── videoUrl                  "https://blob.store/.../session_xyz.webm"
├── timestamp                 ISO timestamp
│
├── ── Recording-time metadata (persisted from raw) ──
├── recordingDeviceMetadata?  { browser, codec, deviceType, ... }
│
├── ── FFmpeg-extracted metadata (added by processing) ──
├── metadataForBrowser?       VideoBrowserMetadata (lightweight)
│   ├── width                 1280
│   ├── height                720
│   ├── fps                   30
│   └── durationInSeconds     45.2
├── metadata?                 VideoMetadata (full FFmpeg output)
│   ├── displayOrientation    "landscape"
│   ├── actualOrientation     "landscape"
│   ├── rotation              0
│   └── format
│       ├── codec             ["vp8", "opus"]
│       ├── bitrate           "2500000"
│       ├── pixelFormat       "yuv420p"
│       └── aspectRatio       "16:9"
│
├── ── Transcription (added by processing) ──
├── transcript?
│   ├── raw                   { text, segments[] }
│   ├── corrected?            { text, segments[], corrections[] }
│   └── error?                string
├── answer?                   "I love the ease of use..."
│
├── ── Media assets (generated by processing) ──
├── thumbnailUrl?             "https://blob.store/.../thumb.jpg"
├── animatedThumbnailUrl?     "https://blob.store/.../thumb.gif"
├── audioUrl?                 "https://blob.store/.../audio.mp3"
├── silenceRanges?            [{ start, end }]
├── fillerRanges?             [{ start, end, word }]
│
├── ── Captions (generated by processing) ──
├── captionsByLanguage?       { en: Caption[], nl: Caption[], ... }
│
└── ── Processing metadata ──
    ├── processingVersion     "v3.2"
    └── lastProcessedAt       ISO timestamp
```
Key insight: processVideoResponse() uses { ...response, ...processedFields } spread,
so recordingDeviceMetadata from the raw response is automatically preserved through processing
without explicit handling.
File: packages/services/src/server/testimonials/process-video-response.ts
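The preservation behavior can be sketched in a few lines. The shapes below are trimmed stand-ins (the canonical ones live in video-processing-types.ts); only the spread order matters:

```typescript
// Minimal sketch of the spread-merge described above. Because processedFields
// comes second, its keys win (processed flips to true), while every raw field
// the processor never touches — e.g. recordingDeviceMetadata — survives as-is.
interface RawResponse {
  questionId: string;
  videoUrl: string;
  recordingDeviceMetadata?: { browser: string };
  processed: boolean;
}

interface ProcessedFields {
  processed: true;
  processingVersion: string;
  lastProcessedAt: string;
  answer?: string;
}

function mergeProcessed(response: RawResponse, processedFields: ProcessedFields) {
  return { ...response, ...processedFields };
}
```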
Data Overlap Analysis
Some fields exist at multiple levels. This table clarifies what's unique vs. overlapping:
| Field | UserAgentInfo (session) | RecordingDeviceMetadata (response) | VideoMetadata (response) | PostHog Only |
|---|---|---|---|---|
| Browser name | browser | browser | — | browser |
| Browser version | browserVersion | browserVersion | — | browser_version |
| OS | os | — | — | os |
| OS version | osVersion | — | — | osVersion |
| Device type | deviceType | deviceType | — | deviceType |
| Raw UA string | rawUserAgent | — | — | — |
| Recording codec | — | codec.videoCodec | — | video_codec |
| Recording audio codec | — | codec.audioCodec | — | audio_codec |
| Recording MIME type | — | codec.mimeType | — | mime_type |
| Container format | — | codec.container | — | container |
| Output codec (FFmpeg) | — | — | format.codec[] | — |
| Bitrate | — | — | format.bitrate | — |
| Resolution | — | — | width/height | video_width/height |
| FPS | — | — | streams[].fps | video_frame_rate |
| Pixel format | — | — | format.pixelFormat | — |
| Orientation | — | — | displayOrientation | — |
Overlap: Browser name, browser version, and device type appear in both UserAgentInfo (session-level) and RecordingDeviceMetadata (per-response). This is intentional:
- `UserAgentInfo` is parsed from the UA string and stored once per session
- `RecordingDeviceMetadata` comes from the MediaRecorder's `DeviceCapabilities` detection
Unique value of RecordingDeviceMetadata: The codec selection data (mimeType, videoCodec, audioCodec, container) is only available here. Previously this was only tracked in PostHog analytics and not persisted alongside the video.
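Codec selection itself is not spelled out above; a common MediaRecorder pattern, sketched here under assumptions, is to walk a preference list and take the first supported MIME type. The predicate is injected so the logic runs outside a browser (in the app it would be MediaRecorder.isTypeSupported), and the candidate list is illustrative:

```typescript
// Hypothetical codec-selection sketch. Assumptions: the candidate order and
// the pickRecordingMimeType() helper are illustrative, not the real code.
const CANDIDATE_MIME_TYPES = [
  "video/webm;codecs=vp9,opus",
  "video/webm;codecs=vp8,opus",
  "video/mp4;codecs=avc1.42E01E,mp4a.40.2",
];

function pickRecordingMimeType(
  isTypeSupported: (mimeType: string) => boolean,
  candidates: string[] = CANDIDATE_MIME_TYPES,
): string | undefined {
  // First supported candidate wins; undefined lets MediaRecorder fall back
  // to its platform default.
  return candidates.find(isTypeSupported);
}
```

The selected MIME type is exactly what later surfaces as codec.mimeType in RecordingDeviceMetadata.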
Callback Chain (Browser → Storage)
```
QuestionsScreen
│  useVideoRecording() → { codec, deviceCapabilities }
│  buildRecordingDeviceMetadata() → RecordingDeviceMetadataPayload
│
├─ onVideoRecorded(blob, questionIndex, metadata)
│
└─ video-flow-screen.tsx
   │  handleVideoRecorded(blob, questionIndex, metadata)
   │
   └─ use-video-upload.ts
      │  uploadVideoRecording(blob, questionIndex, metadata)
      │  ├─ uploadVideo(blob, flowId, questionId, sessionId) → videoUrl
      │  └─ addFlowSessionResponse(sessionId, {
      │       questionId, question, videoUrl,
      │       recordingDeviceMetadata   ← included here
      │     })
      │
      └─ Server Action (BVF actions.ts)
         │  addFlowSessionResponse(sessionId, response)
         │
         └─ flow-actions-client.ts
            │  storage.addResponse(sessionId, response)
            │
            └─ RawVideoFlowSessionRegistry
               │  Stores response with timestamp + recordingDeviceMetadata
               └─ Redis hash: flow-sessions → sessionId → JSON
```
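The last hop can be sketched as follows. A Map stands in for the Redis hash (field = sessionId, value = JSON-serialized session), and the shapes are trimmed to the fields this document discusses; the real addResponse lives in rawVideoFlowSessionRegistry.ts:

```typescript
// Sketch of storage.addResponse() against the flow-sessions hash. Assumptions:
// the Map stand-in for Redis (HGET/HSET) and the trimmed shapes below.
interface StoredResponse {
  questionId: string;
  videoUrl: string;
  timestamp: string;
  recordingDeviceMetadata?: unknown;
  processed: boolean;
}

interface StoredSession {
  sessionId: string;
  responses: StoredResponse[];
}

function addResponse(
  hash: Map<string, string>, // stand-in for the "flow-sessions" Redis hash
  sessionId: string,
  response: Omit<StoredResponse, "timestamp" | "processed">,
): void {
  const raw = hash.get(sessionId);
  if (!raw) throw new Error(`unknown session: ${sessionId}`);
  const session: StoredSession = JSON.parse(raw);
  session.responses.push({
    ...response,
    timestamp: new Date().toISOString(), // set at storage time, per Phase 2
    processed: false,                    // flipped to true by the pipeline
  });
  hash.set(sessionId, JSON.stringify(session));
}
```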
Admin Display Chain (Storage → UI)
```
VideoRespondentDashboardPage (Server Component)
│  getVideoProcessingRegistry().getRespondentData(flowId, respondentId)
│    → Session with responses[] (each has recordingDeviceMetadata + metadata)
│
│  enrichClipsWithMetadata(rawClips)
│    → Fills in browserMetadata (width, height, fps, duration)
│
└─ VideoRespondentDashboard (Client Component)
   │  respondentData.responses[0]
   │
   └─ VideoInfoCard
      ├─ browserMetadata → Resolution, FPS (Video section)
      ├─ videoMetadata → Codec, Bitrate, Pixel Format (Encoding section)
      ├─ videoMetadata.streams → Per-stream info (Streams section)
      ├─ recordingDeviceMetadata → Browser, OS, Device, Recording Codec (Recording Device section)
      └─ processingVersion, lastProcessedAt (Processing section)
```
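The enrichment step can be sketched as a projection. The real enrichClipsWithMetadata lives in VideoRespondentDashboardPage.tsx; the trimmed shapes below (in particular, which fields sit on the full metadata) are assumptions:

```typescript
// Hypothetical sketch of enrichClipsWithMetadata(): project the heavy FFmpeg
// metadata down to the lightweight browserMetadata the dashboard reads
// (Resolution / FPS in VideoInfoCard). Shapes are illustrative.
interface BrowserMetadata {
  width: number;
  height: number;
  fps: number;
  durationInSeconds: number;
}

// The full FFmpeg metadata carries more fields; only the projected ones matter here.
interface FfmpegMetadata extends BrowserMetadata {
  displayOrientation?: string;
}

interface Clip {
  videoUrl: string;
  metadata?: FfmpegMetadata;
  browserMetadata?: BrowserMetadata;
}

function enrichClipsWithMetadata(clips: Clip[]): Clip[] {
  return clips.map((clip) => {
    if (!clip.metadata) return clip; // unprocessed clip: nothing to project
    const { width, height, fps, durationInSeconds } = clip.metadata;
    return { ...clip, browserMetadata: { width, height, fps, durationInSeconds } };
  });
}
```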
PostHog Analytics (Parallel Track)
Recording metadata is also tracked via PostHog for aggregate analytics, separately from persistence:
| Event | When | Key Properties |
|---|---|---|
| recording_codec_selected | MediaRecorder initialized | mime_type, video/audio codec, resolution, device info |
| video_blob_ready | Recording stopped | blob_size, duration_seconds, blob_type |
| video_recorded | Upload succeeded | session_id, question_id, uploaded_extension |
| recording_audio_unavailable | Mic access failed | reason, browser, is_ios |
| recording_low_audio_detected | Audio level too low | duration_ms, threshold |
PostHog captures richer recording-time detail (audio sample rate, channel count, per-device capabilities) that is not persisted to the registry. The persisted RecordingDeviceMetadata is a curated subset focused on video quality correlation.
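A tracking call on this parallel track can be sketched with an injected capture function (in the app that would be posthog.capture); the property names follow the tables above, but this wrapper is illustrative:

```typescript
// Hypothetical sketch of the recording_codec_selected track call. Assumption:
// the trackRecordingCodecSelected() wrapper and its exact property set.
type CaptureFn = (event: string, properties: Record<string, unknown>) => void;

function trackRecordingCodecSelected(
  capture: CaptureFn, // posthog.capture in the app; injected here for testing
  props: {
    mime_type: string;
    video_codec: string;
    audio_codec: string;
    video_width: number;
    video_height: number;
  },
): void {
  capture("recording_codec_selected", props);
}
```

This fire-and-forget event is separate from persistence: losing it costs an analytics data point, not the video's metadata.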
Key Files
| Component | File | Purpose |
|---|---|---|
| MediaRecorder hook | packages/app-video-flow/src/web/video-recording/useMediaRecorder.ts | Recording, codec detection, device capabilities, device preference persistence (camera/mic choices survive across questions via refs) |
| Recording context | packages/app-video-flow/src/web/video-recording/VideoRecordingContext.tsx | React context exposing recorder state to all flow screens |
| Capabilities types | packages/app-video-flow/src/web/video-recording/capabilities.ts | CodecSupport, DeviceCapabilities, VideoQuality, facingMode detection for front/rear camera |
| Record screen | packages/app-video-flow/src/web/components/steps/questions/RecordScreen.tsx | Three-phase recording UI (permission gate → camera check → recording). Uses persistent video element across phases |
| Recording setup screen | packages/app-video-flow/src/web/components/screens/RecordingSetupScreen.tsx | Camera/mic check with live preview, audio meter, device selectors |
| Permission gate screen | packages/app-video-flow/src/web/components/screens/PermissionGateScreen.tsx | Pre-permission explanation, "Enable Camera & Microphone" CTA |
| Upload hook | packages/app-video-flow/src/web/hooks/use-video-upload.ts | Upload state, threads metadata to server action |
| Questions screen | packages/app-video-flow/src/web/components/steps/questions/QuestionsScreen.tsx | Captures metadata from context, passes to callback |
| Flow screen | packages/app-video-flow/src/web/components/screens/video-flow-screen.tsx | Orchestrates flow, forwards metadata |
| Flow actions | packages/app-video-flow/src/server/flow-actions-client.ts | Server-side session + response storage, webhook |
| BVF server actions | apps/branded-video-flow/app/flows/[flowId]/actions.ts | Next.js server actions wrapping flow actions |
| Raw registry | packages/registries/src/server/rawVideoFlowSessionRegistry.ts | Redis-backed raw session storage |
| Processing types | packages/registries/src/server/video-processing-types.ts | Canonical types for processed data |
| Processing registry | packages/registries/src/server/videoProcessingRegistry.ts | KV-backed processed session storage |
| Process response | packages/services/src/server/testimonials/process-video-response.ts | FFmpeg + transcription pipeline |
| User agent parser | packages/app-video-flow/src/server/utils/user-agent.ts | UA string → structured UserAgentInfo |
| PostHog analytics | packages/app-video-flow/src/web/analytics/posthog.ts | Event tracking for recording lifecycle |
| Admin page | apps/admin/components/features/videoRespondentDashboard/VideoRespondentDashboardPage.tsx | Server component, fetches + enriches data |
| Video Info card | apps/admin/components/features/videoRespondentDashboard/VideoInfoCard.tsx | Displays all metadata sections |
| Dashboard | apps/admin/components/features/videoRespondentDashboard/VideoRespondentDashboard.tsx | Main admin detail view |