Endpoints, Manifest Requests, and Monetization Opportunity in FAST
At a glance
Monetization starts with requests. If your stream doesn’t get requested reliably (or requests get cached/aggregated in ways your SSAI can’t personalize), your ad opportunity count collapses. This is the #1 “it should be making money but isn’t” blind spot for non-technical teams.
Who this is for
- Technical operations teams managing origins, CDNs, and SSAI
- Ad ops teams trying to reconcile “starts” vs “impressions”
- Channel operators needing a practical way to think about request volume
Key concepts (translated)
- Endpoint: The URL the player hits to start playback (usually returning a master manifest).
- Manifest (HLS/DASH): The playlist (HLS) or MPD (DASH) that tells the player which media segments to fetch, and in what order.
- Per-viewer manifest: SSAI often generates a unique manifest per viewer so ads can be personalized and tracked.
- CDN caching: Great for scale, dangerous for monetization if it caches the wrong thing.
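To make "manifest" concrete, here is a minimal sketch that parses a small HLS master playlist and lists its variant streams. The sample manifest and URIs are illustrative placeholders, and the parser ignores quoted attributes that real manifests can carry (e.g. CODECS):

```python
# Minimal sketch: parse a hypothetical HLS master playlist to list its
# variant (rendition) playlists. Real manifests carry more attributes;
# this only illustrates what "the manifest describes what to fetch" means.
SAMPLE_MASTER = """#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=2400000,RESOLUTION=1280x720
720p/playlist.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080
1080p/playlist.m3u8
"""

def parse_variants(master_text: str) -> list[dict]:
    """Return {bandwidth, resolution, uri} dicts from a master playlist."""
    variants, pending = [], None
    for line in master_text.splitlines():
        line = line.strip()
        if line.startswith("#EXT-X-STREAM-INF:"):
            # Attribute list follows the first colon (simplified split).
            pending = dict(
                kv.split("=", 1) for kv in line.split(":", 1)[1].split(",")
            )
        elif line and not line.startswith("#") and pending is not None:
            variants.append({
                "bandwidth": int(pending.get("BANDWIDTH", 0)),
                "resolution": pending.get("RESOLUTION", ""),
                "uri": line,
            })
            pending = None
    return variants
```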
Walkthrough: How endpoint requests become revenue
- Player requests the master manifest (HLS) or MPD (DASH).
- SSAI generates a session (often a session ID) and returns a personalized manifest.
- At an ad break, SSAI calls the ad server/SSP for ads sized to your pod.
- SSAI stitches ads into the manifest and the player downloads ad segments.
- Impression tracking fires (server-side beacons and/or hybrid methods).
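The steps above can be sketched end to end with stubbed components. Every name here (create_session, request_ads, stitch, fire_beacons) is hypothetical, not a real SSAI vendor API; the point is where the session, the ad decision, and the beacons sit in the flow:

```python
# Illustrative sketch of the walkthrough above, with stubbed components.
# All function names and URLs are hypothetical, not a real SSAI API.
import uuid

def create_session(channel_url: str) -> dict:
    """Steps 1-2: the SSAI assigns a session and a personalized manifest URL."""
    session_id = uuid.uuid4().hex[:8]
    return {
        "id": session_id,
        "manifest_url": f"https://ssai.example.com/session/{session_id}/master.m3u8",
    }

def request_ads(pod_seconds: int) -> list[str]:
    """Step 3: ask the ad server for creatives that fit the pod duration."""
    # Stub: pretend the ad server returned two 15-second creatives.
    return ["ad_creative_1.ts", "ad_creative_2.ts"] if pod_seconds >= 30 else []

def stitch(content_segments: list[str], ads: list[str], break_index: int) -> list[str]:
    """Step 4: splice ad segments into the segment list at the break point."""
    return content_segments[:break_index] + ads + content_segments[break_index:]

def fire_beacons(session: dict, ads: list[str]) -> int:
    """Step 5: server-side impression beacons, one per ad (simplified)."""
    return len(ads)

session = create_session("https://play.example.com/channel/master.m3u8")
ads = request_ads(pod_seconds=30)
playlist = stitch(["seg1.ts", "seg2.ts", "seg3.ts"], ads, break_index=2)
impressions = fire_beacons(session, ads)
```

Note that the ad decision happens per session: if two viewers share a session (see the caching failure below), steps 3-5 run once instead of twice.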
What can go wrong (and why it kills monetization)
| Failure | Why it happens | Impact |
|---|---|---|
| CDN caches personalized manifests | Cache rules too broad; missing cache-busting headers | Many viewers share one manifest → fewer ad decisions + broken tracking |
| Origin or SSAI endpoint instability | Timeouts, spikes, mis-sized capacity | Stream starts drop; ad opportunities never exist |
| Ad calls blocked | Network egress rules, DNS, TLS issues, vendor outages | Pods become no-fill; may fall back to slate |
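For the "endpoint instability" and "ad calls blocked" rows, a common mitigation is to guard the ad call with a timeout and retry, and fall back to slate rather than leaving the pod empty. A minimal sketch, assuming fetch_fn is any callable that raises TimeoutError or ConnectionError on failure and the slate segment name is a placeholder:

```python
# Sketch: guard an ad call with retries and fall back to slate.
# SLATE and the retry/backoff defaults are illustrative assumptions.
import time

SLATE = ["slate.ts"]  # hypothetical filler segment

def ad_call_with_fallback(fetch_fn, retries: int = 2, backoff_s: float = 0.0):
    """Return (segments, source): ad segments on success, slate otherwise."""
    for attempt in range(retries + 1):
        try:
            return fetch_fn(), "ads"
        except (TimeoutError, ConnectionError):
            if attempt < retries:
                time.sleep(backoff_s)
    return SLATE, "slate"
```

Slate keeps playback smooth, but it is unmonetized; track how often this path fires so "pods quietly falling back to slate" shows up in your metrics.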
Operational checks (do these weekly)
1) Request volume sanity checks
- Track: total manifest requests vs unique sessions.
- Track: sessions that reach at least one ad break.
- Segment by platform/device (Roku vs Samsung vs LG vs web players).
2) Cache correctness checks
- Confirm CDN does not cache per-viewer manifests unless explicitly designed to.
- Confirm ad segment URLs are cacheable appropriately (depends on SSAI strategy).
- Confirm query-string or session IDs aren’t being stripped by proxies.
3) Ad call success checks
- Log HTTP status codes and timeouts.
- Track “no-fill” reasons separately from errors.
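The three weekly checks above can be run as one pass over your request logs. A sketch, assuming a simplified log schema (the field names ts_hour, session_id, cache_status, ad_status are placeholders for whatever your logging actually emits):

```python
# Sketch of the weekly checks, run over simplified request-log records.
# Field names are assumptions about your logging, not a standard schema.
from collections import Counter

LOGS = [
    {"ts_hour": 14, "session_id": "a1", "cache_status": "MISS", "ad_status": "filled"},
    {"ts_hour": 14, "session_id": "a1", "cache_status": "MISS", "ad_status": "no_fill"},
    {"ts_hour": 14, "session_id": "b2", "cache_status": "HIT",  "ad_status": "error_timeout"},
]

def weekly_checks(logs):
    total = len(logs)
    unique_sessions = len({r["session_id"] for r in logs})
    # Per-viewer manifests should rarely be cache HITs at the edge.
    hit_ratio = sum(r["cache_status"] == "HIT" for r in logs) / total
    # Track no-fill separately from errors, per check 3 above.
    ad_outcomes = Counter(
        "error" if r["ad_status"].startswith("error") else r["ad_status"]
        for r in logs
    )
    return {
        "total_requests": total,
        "unique_sessions": unique_sessions,
        "cache_hit_ratio": round(hit_ratio, 2),
        "ad_outcomes": dict(ad_outcomes),
    }
```

A large gap between total_requests and unique_sessions, or a rising cache_hit_ratio on per-viewer manifests, is exactly the collapse described in the failure table.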
Real-world endpoint patterns (examples)
Example 1: SSAI sessionized playback URL (common)
Player hits: https://play.example.com/channel/master.m3u8
SSAI returns: https://ssai.example.com/session/9f3a.../master.m3u8?token=...
- Sessionized URLs are how SSAI personalizes ads and tracking per viewer.
- If your CDN caches the sessionized manifest incorrectly, multiple viewers can collapse into one “ad decision,” hurting revenue and measurement.
Example 2: Platform proxying requests
- Some platforms front your stream with their own request path.
- Operational takeaway: reconcile platform-reported starts with origin/CDN requests; they may not match 1:1.
Examples you can paste into tickets
Example: What to ask engineering for (minimum logging)
Manifest requests:
- total requests per hour
- unique viewer sessions per hour
- top 10 user agents (devices)
SSAI ad decisioning:
- ad opportunities created
- ad requests sent
- filled vs no-fill vs error
- top error reasons (VAST parse, timeout, policy reject)
CDN:
- cache hit ratio for manifests (should be low/controlled for per-viewer)
- cache headers observed at edge
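Once engineering delivers that logging, the ad-decisioning numbers form a funnel, and each drop-off between stages is a leak to chase. A sketch with illustrative numbers:

```python
# Sketch of the ad-decisioning funnel from the logging list above.
# Each stage is tracked separately so leaks are visible between stages.
def funnel_report(opportunities: int, requests: int, filled: int,
                  no_fill: int, errors: int) -> dict:
    """Compute drop-off between stages; any large gap is a leak to chase."""
    assert requests == filled + no_fill + errors, "stages must reconcile"
    return {
        "request_rate": round(requests / opportunities, 2) if opportunities else 0.0,
        "fill_rate": round(filled / requests, 2) if requests else 0.0,
        "error_rate": round(errors / requests, 2) if requests else 0.0,
    }
```

If opportunities are high but request_rate is low, the leak is before the ad call (cueing, blocked egress); if fill_rate is low, it is on the demand side.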
Example: “Revenue leak” debugging questions
- Are unique SSAI sessions close to platform starts? If not, where is session creation failing?
- Do ad opportunities per hour match expected break cadence? If not, ad-break cueing (e.g., SCTE-35 markers) or the schedule is broken.
- Are manifests being cached at the edge (unexpectedly high cache hit ratio)?
- Are ad call timeouts spiking for one SSP or one geo?
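The first question can be automated as a simple reconciliation alert. A sketch, where the 10% tolerance is an assumption you should tune to your normal platform-vs-SSAI variance:

```python
# Sketch: flag when SSAI sessions undercount platform-reported starts
# by more than a tolerance. The 10% default is an assumption, not a
# standard; tune it to your observed baseline variance.
def session_gap_alert(platform_starts: int, ssai_sessions: int,
                      tolerance: float = 0.10) -> bool:
    """True if SSAI sessions lag platform starts by more than tolerance."""
    if platform_starts == 0:
        return False
    gap = (platform_starts - ssai_sessions) / platform_starts
    return gap > tolerance
```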
Example: “Symptoms → likely root causes”
| Symptom | Likely cause |
|---|---|
| Views rising, impressions flat | Cached manifests, breaks not detected, ad calls blocked |
| Impressions rising, revenue flat | Low CPM demand mix, floors too low/high, measurement rejection |
| High ad errors on one platform only | Device playback limitations, codec mismatch, policy enforcement differences |
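The symptom table can also live in a runbook script as a small triage helper. This is the same shortlist as the table, not a diagnosis engine; the symptom keys are made-up labels:

```python
# The symptom table above as a tiny triage helper. Causes mirror the
# table; symptom keys are hypothetical labels for a runbook script.
LIKELY_CAUSES = {
    "views_up_impressions_flat": [
        "cached manifests", "breaks not detected", "ad calls blocked",
    ],
    "impressions_up_revenue_flat": [
        "low-CPM demand mix", "floors misconfigured", "measurement rejection",
    ],
    "errors_on_one_platform": [
        "device playback limitations", "codec mismatch",
        "policy enforcement differences",
    ],
}

def triage(symptom: str) -> list[str]:
    return LIKELY_CAUSES.get(symptom, ["unknown symptom: check raw logs"])
```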
Best practices (labelled)
- Best practice: treat per-viewer manifests as sensitive; cache deliberately, not accidentally.
- Best practice: instrument request volume and ad opportunity volume separately so you can see where revenue is leaking.
- Best practice: keep a “golden test device set” and validate breaks weekly.
Sources
- IAB Tech Lab – VAST 4.2 (PDF)
- IAB Tech Lab – OpenRTB 2.6 (PDF)
- IAB Tech Lab – OpenRTB SupplyChain Object (schain) (GitHub)
- IAB Tech Lab – ads.txt overview
- IAB Tech Lab – ads.txt 1.1 (PDF)
- IAB Tech Lab – app-ads.txt 1.0 (PDF)
- IAB Tech Lab – sellers.json overview
- IAB Tech Lab – sellers.json (PDF)
- IAB – Digital Video Ad Measurement Guidelines
- MRC – SSAI & OTT Guidance (PDF)
- MRC – Viewable Ad Impression Measurement Guidelines v2.0 (PDF)
- Google Ad Manager – Connected TV ads
- Google Ad Manager – Dynamic Ad Insertion (DAI) for Developers
- Google Ad Manager – Full-service DAI
- AWS Elemental MediaTailor Documentation
- AWS MediaTailor – SSAI CDN architecture overview
- Apple Developer – HTTP Live Streaming (HLS)
- Apple Developer – HLS Authoring Specification
- ISO – ISO/IEC 23009-1:2022 (MPEG-DASH) page
- MPEG – Standards overview (includes MPEG-DASH)
- Roku Developer – Integrating Roku Ad Framework (RAF)
- Roku Developer – Implementing SSAI using Roku adapters
- SCTE – SCTE-35 catalog page
- ANSI preview – ANSI/SCTE 35 2017 (preview PDF)
- Comscore – CTV Measurement
- Nielsen – Connected TV insights (May 2025)
- FreeWheel – Publisher Suite overview
- FreeWheel Enterprise API docs
- IAB Tech Lab – VPAID (deprecated) page
- AWS MediaTailor – generating per-viewer manifests