Creative Review Pipeline
Brand Safety Workflow
61,146 creatives classified · automated + human review · pre-bid enforcement
The principle: venue context is the brand-safety contract
DOOH brand safety differs structurally from web and CTV brand safety. On the web, a brand can choose to appear on safe-content pages and exclude unsafe-content pages — the unit of brand safety is the page. On DOOH, the unit is the venue. A creative for a sports-betting brand might be perfectly appropriate on a screen inside a casino or a sports bar, and entirely inappropriate on a screen inside a pediatric clinic. The same creative, the same brand, two completely different verdicts.
That structural difference means brand safety on DOOH cannot be a property of the creative alone. It has to be a function of the creative AND the venue. The Trillboards approach is to classify the creative once, across eight independent content dimensions plus an age-rating tier, and to evaluate eligibility per venue at pre-bid time using the venue's own rules. The creative classification is global; the eligibility verdict is local.
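The global-classification, local-verdict split can be sketched in a few lines. This is a minimal illustration, not the production schema — the per-creative flagged-dimension map and the per-venue excluded-dimension set are assumed shapes:

```python
# Minimal sketch: classification is global, the eligibility verdict is local.
# `creative_dims` and `venue_excluded` are illustrative shapes, not the
# production schema.

def is_eligible(creative_dims: dict[str, bool], venue_excluded: set[str]) -> bool:
    """Eligible only if no flagged dimension is excluded at this venue."""
    flagged = {dim for dim, hit in creative_dims.items() if hit}
    return flagged.isdisjoint(venue_excluded)

betting_creative = {"gambling": True, "alcohol": False}

# Same creative, same classification, two venues, two verdicts:
casino_ok = is_eligible(betting_creative, set())         # casino allows gambling
clinic_ok = is_eligible(betting_creative, {"gambling"})  # pediatric clinic does not
```

The creative is classified exactly once; only the second argument changes per screen.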
Eight independent content dimensions, plus age-rating
Each creative is scored independently against eight content dimensions, plus a composite age-rating tier. Independence means the dimensions don't depend on each other — an alcohol creative isn't automatically political; a political creative isn't automatically violent. The classifier produces a confidence score and a binary decision for each dimension.
| Dimension | Detection scope |
|---|---|
| alcohol | Visible alcoholic beverages, branding, or consumption context. |
| tobacco | Cigarettes, vaping, smokeless tobacco; brand or generic. |
| gambling | Casino, sportsbook, lottery, fantasy-sports money games. |
| political | Candidates, ballot measures, advocacy organizations. |
| religious | Religious institutions, theology, faith-affiliated services. |
| adult_content | Sexually explicit material; not suitable for any DOOH venue. |
| violence | Depictions of physical harm, weapons used to harm, gore. |
| profanity | Explicit language in audio track or on-screen text. |
| age_rating | Composite tier — G, PG, PG-13, R-equivalent for DOOH context. |
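One plausible shape for the record the classifier emits per creative — a sketch only; the field names and types are assumptions, not the production schema:

```python
from dataclasses import dataclass

# Hypothetical record shape: one independent (confidence, decision) pair per
# content dimension, plus the composite age-rating tier.
CONTENT_DIMENSIONS = (
    "alcohol", "tobacco", "gambling", "political",
    "religious", "adult_content", "violence", "profanity",
)

@dataclass(frozen=True)
class DimensionVerdict:
    confidence: float  # calibrated score in 0.0-1.0
    flagged: bool      # binary decision

@dataclass(frozen=True)
class CreativeClassification:
    creative_id: str
    dimensions: dict[str, DimensionVerdict]  # keyed by CONTENT_DIMENSIONS
    age_rating: str                          # "G" | "PG" | "PG-13" | "R"
```

The key property is that each dimension carries its own independent verdict and confidence; nothing in one entry implies anything about another.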
Classifier pipeline: multimodal LLM, image classifiers, audio transcript
The classification pipeline runs in three layers. First, a multimodal large language model takes the creative's video frames, audio transcript, and metadata (advertiser, campaign label, declared category) and produces a structured score for each of the eight content dimensions plus the age-rating tier. The LLM is the broadest layer — it can spot context-dependent issues that pixel-level classifiers would miss.
Second, dedicated image classifiers run on every video frame for the dimensions where pixel-level recognition is reliable — alcohol bottles, cigarettes, weapons, explicit imagery. Each classifier is independently trained and validated against public datasets plus an internal validation set. The classifier outputs are then cross-checked against the LLM verdict; agreement increases confidence.
Third, the audio track is transcribed (we use the same diarized speech pipeline that powers our audience-signal layer) and screened for profanity, brand mentions, and policy-flagged terms. Audio is the channel where political and religious cues are often strongest — a creative whose video is generic stock footage can still carry a strong political message in the voiceover.
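One way the cross-check between layers can raise or lower confidence — a sketch with an assumed blending rule and threshold, not the production calibration:

```python
def combine_layers(llm_score: float, image_score: float,
                   low_conf_threshold: float = 0.7) -> tuple[bool, float, bool]:
    """Blend two layer scores for one dimension (illustrative rule).

    Agreement between layers yields a confident verdict; disagreement yields
    a low-confidence verdict that gets routed to a human reviewer.
    """
    flagged = (llm_score + image_score) / 2 >= 0.5
    confidence = 1.0 - abs(llm_score - image_score)  # agreement -> confidence
    needs_review = confidence < low_conf_threshold
    return flagged, confidence, needs_review

# Layers agree: confident verdict, no review needed.
agree = combine_layers(0.9, 0.9)
# Layers disagree: low confidence, route to a human.
disagree = combine_layers(1.0, 0.25)
```

The real pipeline blends three layers and per-dimension calibrated thresholds; the point here is only that cross-layer disagreement is itself the low-confidence signal.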
Human review where it matters
The classifier verdicts go through human review on two paths. First, every low-confidence verdict (any dimension where the classifier disagrees with itself across the three layers, or where the confidence score is below a calibrated threshold) is routed to a reviewer. Second, any creative flagged on a high-risk dimension (political, adult_content, violence above a threshold) is reviewed regardless of confidence.
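The two routing paths above can be expressed as a single predicate. The high-risk dimension set and the confidence threshold below are illustrative assumptions:

```python
# Hypothetical routing predicate for the two human-review paths.
HIGH_RISK = {"political", "adult_content", "violence"}  # assumed set
CONF_THRESHOLD = 0.7                                    # assumed threshold

def needs_human_review(verdicts: dict[str, tuple[float, bool]]) -> bool:
    """verdicts maps dimension -> (confidence, flagged)."""
    for dim, (confidence, flagged) in verdicts.items():
        if confidence < CONF_THRESHOLD:
            return True  # path 1: any low-confidence verdict
        if flagged and dim in HIGH_RISK:
            return True  # path 2: flagged on a high-risk dimension,
                         # regardless of confidence
    return False
```

Note the asymmetry: a confident alcohol flag skips review, but a confident political flag never does.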
The reviewer can override the classifier verdict per dimension, set a final age-rating, and add a free-form note that becomes part of the creative's review record. The human verdict is then the authoritative one — the classifier output is preserved for retraining but does not override the human call. This is the asymmetry we want: classifiers handle volume, humans handle nuance, and the final verdict is the human's when they touched it.
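The override asymmetry is a simple merge rule. A minimal sketch, with illustrative names — the classifier output is kept intact for retraining, but any dimension the reviewer touched takes the human value:

```python
def final_verdicts(classifier: dict[str, bool],
                   human_overrides: dict[str, bool]) -> dict[str, bool]:
    """Per-dimension merge; the human verdict wins wherever one exists."""
    return {**classifier, **human_overrides}

machine = {"political": True, "alcohol": False}
final = final_verdicts(machine, {"political": False})  # reviewer cleared it
# `machine` is untouched (preserved for retraining); `final` is authoritative.
```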
Pre-bid enforcement: venue rules gate eligibility
The decisions feed our ad-decision service's pre-bid filters. Each venue category carries a rule set — a quick-service restaurant excludes alcohol and tobacco by default, a family-pediatric clinic excludes all R-equivalent creatives, a sports-betting-allowed casino opens up the gambling dimension that would be closed elsewhere. The rules are venue-category-level, with per-venue overrides where the operator has chosen a stricter posture.
At bid-decision time, before the auction even runs, the ad-decision service drops every creative whose classification violates any rule active on the screen's venue. The dropped creatives never enter the auction; the auction runs only among the eligible-by-venue set. This is the pre-bid filtering layer — it's structurally faster and more reliable than post-auction filtering because we never need to roll back a winning bid for a brand-safety reason.
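A minimal version of that pre-bid gate, assuming per-creative flagged-dimension maps and a per-venue rule set (field names and the age-rank ordering are illustrative):

```python
# Assumed ordering of age-rating tiers, most to least restrictive-friendly.
AGE_RANK = {"G": 0, "PG": 1, "PG-13": 2, "R": 3}

def prebid_filter(candidates: list[dict], excluded_dims: set[str],
                  max_age_rating: str) -> list[dict]:
    """Drop ineligible creatives before the auction ever runs."""
    eligible = []
    for creative in candidates:
        flagged = {d for d, hit in creative["dims"].items() if hit}
        if flagged & excluded_dims:
            continue  # violates an active venue content rule
        if AGE_RANK[creative["age_rating"]] > AGE_RANK[max_age_rating]:
            continue  # too mature for this venue
        eligible.append(creative)
    return eligible  # only these enter the auction
```

Because the drop happens before the auction, there is never a winning bid to roll back on brand-safety grounds.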
The filter outcomes are themselves logged and exported. Aggregate counts of rejection rates by venue category, by content dimension, and by age-rating tier are surfaced in our internal admin dashboards (for operations) and rolled up into the State of DOOH 2026 brand-safety chapter (for industry transparency). Per-page category breakdowns live in /data/brand-safety/ (cross-linked below as the category pages come online).
IAB content category alignment
The eight independent content dimensions and the age-rating tier are deliberately aligned with the IAB Tech Lab's content category taxonomy, the same vocabulary buyers and DSPs already use for web and CTV. Alignment means a buyer who has already declared a brand-safety posture against IAB content categories on their web or CTV plan can apply that same posture to Trillboards DOOH inventory without re-mapping or rewriting the rules.
We also tag every creative with the OpenRTB cat array (creative category codes) and the attr array (creative attribute codes, e.g. audio-on, autoplay) so DSP buyers can self-filter on the bid response side if they have additional rules beyond what we enforce. The combination — IAB cat / attr on every creative plus venue-context pre-bid filtering on every screen — gives the highest density of actionable brand-safety signal in the DOOH ecosystem.
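Illustratively, response-side self-filtering on cat / attr might look like the following. The specific codes are examples only — consult the OpenRTB specification and the IAB taxonomy for the authoritative lists:

```python
# Illustrative bid fragment: `cat` carries IAB content-category codes and
# `attr` carries integer creative-attribute codes. The codes below are
# examples, not a statement of what any real creative carries.
bid = {
    "id": "bid-001",
    "adid": "creative-42",
    "cat": ["IAB17"],  # e.g. a sports-related content category
    "attr": [6],       # e.g. an auto-play video creative attribute
}

# A buyer's extra response-side rule, beyond what the exchange enforces:
BLOCKED_CATS = {"IAB17"}  # hypothetical block list

def dsp_accepts(bid: dict) -> bool:
    """Reject any bid whose categories intersect the buyer's block list."""
    return not set(bid.get("cat", [])) & BLOCKED_CATS
```

This is the layering the paragraph describes: exchange-side venue enforcement first, then the buyer's own posture on top.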
IAB content category alignment is forward-compatible. As the IAB Tech Lab publishes new categories (the 2025 update added several sub-categories under health/wellness and under climate/sustainability), we pick up the new IDs in our classifier's taxonomy package and the new categories flow through to the same pre-bid filter layer. No code change is required per category update; the source-of-truth is the taxonomy package version.
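The taxonomy-package pattern can be sketched as data, not code: filter rules key off whatever category IDs the loaded package declares, so a taxonomy update is a version bump. The package shape and the placeholder IDs below are hypothetical:

```python
# Hypothetical versioned taxonomy packages; IDs are placeholders.
TAXONOMY_2024 = {"version": "2024.2", "categories": {"cat-health"}}
TAXONOMY_2025 = {"version": "2025.1",
                 "categories": {"cat-health", "cat-health-sub", "cat-climate"}}

def filterable_categories(taxonomy: dict) -> set[str]:
    # Same code path for every version; new IDs flow straight through to the
    # pre-bid filter layer with no per-category code change.
    return set(taxonomy["categories"])
```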
Reviewer governance and audit
The reviewer team operates under a documented governance regime. Reviewers are trained against a published rubric covering each of the eight content dimensions, with calibration sessions every quarter using a held-out set of creatives that exercise edge cases (politically-adjacent satire, gambling-adjacent fantasy sports, alcohol-adjacent zero-proof beverages). Calibration agreement is tracked per reviewer; sustained disagreement triggers retraining.
Every review action is audit-logged. The audit log captures the reviewer's decision, the dimension(s) overridden, any free-form note, and the timestamp. Audit logs are retained per our data-retention policy and are referenced when a creative's classification is questioned downstream. The audit chain is what makes the human-review layer accountable.
Aggregate audit telemetry — reviewer agreement rates, dimension-level override rates, time-to-review distribution — is itself reviewed monthly to spot drift. If a dimension starts seeing systematic overrides in one direction, that's a signal that the classifier's calibration has shifted and needs retraining; if the reviewer-vs-classifier agreement on a dimension drops below the governance threshold, the dimension is held for human-only review until the classifier is recalibrated.
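The monthly drift check described above can be sketched as two conditions over per-dimension counts. The thresholds and counter names are illustrative assumptions:

```python
def drift_check(overrides_to_flag: int, overrides_to_clear: int,
                agreements: int, skew_threshold: float = 0.8,
                agreement_threshold: float = 0.9) -> dict[str, bool]:
    """Evaluate one dimension's monthly telemetry (illustrative rules)."""
    total_overrides = overrides_to_flag + overrides_to_clear
    total = total_overrides + agreements
    # Condition 1: overrides skew systematically in one direction.
    skewed = (total_overrides > 0 and
              max(overrides_to_flag, overrides_to_clear) / total_overrides
              >= skew_threshold)
    # Condition 2: reviewer-vs-classifier agreement falls below governance bar.
    agreement_rate = agreements / total if total else 1.0
    return {
        "recalibrate": skewed,                              # retrain classifier
        "human_only": agreement_rate < agreement_threshold,  # hold dimension
    }
```

A dimension that trips the second condition stays human-only until the classifier is recalibrated and agreement recovers.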
Related reading
- DOOH demand ecosystem at Trillboards — cornerstone covering the protocol, multi-SSP supply chain, and audience-signal layer that pre-bid filtering sits on top of.
- State of DOOH 2026: Brand Safety & Creative Review — industry-level rollup with full content-category distribution and review-outcome breakdown.
- /data/brand-safety/<category>/ — per-category brand-safety pages (alcohol, tobacco, gambling, political, religious, adult-content, violence, profanity) coming online with the next data-pages release.
Cite this page: Trillboards (2026). DOOH Brand Safety: How Trillboards Reviews Creatives. Trillboards Network Data, observed 2026-01-01 through 2026-05-11. Retrieved from https://trillboards.com/data/demand-ecosystem/brand-safety-workflow/