Two-Namespace Audience Taxonomy

Audience Signals

IAB AT 1.1 (1,558 segments) + Trillboards segtax=600 namespace

IAB Audience Taxonomy 1.1 — 1,558 segments (segtax=4)

The IAB Tech Lab's Audience Taxonomy 1.1 is the broadest cross-vendor audience vocabulary the programmatic industry has standardized. It covers interests, demographics, life stage, in-market, B2B, and dozens of other facets across a 1,558-node hierarchy. Every modern DSP can target against IAB AT 1.1 without a custom integration — the taxonomy ID is a first-class field on the OpenRTB segments array, identified by segtax=4 in the bid request.
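As a concrete illustration, a single AT 1.1 entry on the wire, and the namespace check a DSP might run against it, can be sketched as follows. The object shape is a simplification of the OpenRTB segment structures, and the segment ID "782" is a placeholder, not a real AT 1.1 node:

```typescript
// Hedged sketch: one audience-segment entry as it might appear on an
// OpenRTB bid request. segtax=4 identifies IAB Audience Taxonomy 1.1;
// the id value is an illustrative placeholder, not a real AT 1.1 node.
interface SegmentEntry {
  segtax: number;            // taxonomy namespace (4 = IAB AT 1.1)
  segment: { id: string }[]; // node IDs within that taxonomy
}

const iabEntry: SegmentEntry = {
  segtax: 4,
  segment: [{ id: "782" }], // placeholder AT 1.1 node ID
};

// A DSP reads the namespace first, then matches node IDs within it.
function namespaceOf(e: SegmentEntry): "iab-at-1.1" | "trillboards" | "unknown" {
  if (e.segtax === 4) return "iab-at-1.1";
  if (e.segtax === 600) return "trillboards";
  return "unknown";
}
```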

Trillboards exposes the full IAB AT 1.1 catalog. Every segment a buyer can target against a static panel-based audience on web or CTV is also targetable on Trillboards DOOH inventory, with the caveat that DOOH audiences are aggregate-public rather than single-user. The signal density isn't the same as for a logged-in CTV viewer, but the segment vocabulary is identical, which means buyers can extend an existing IAB-AT-1.1-based campaign onto DOOH without re-mapping segments or building a parallel targeting plan.

The taxonomy itself is versioned. We follow IAB Tech Lab's release cadence: AT 1.0 was deprecated in 2024, AT 1.1 is the current production version, and we will adopt AT 2.0 when it ships by bumping the segtax code on the bid request (the next version will use segtax=7) and running both versions in parallel until downstream buyers migrate. The taxonomy source of truth is the @trillboards/iab-taxonomy workspace package; we do not maintain hand-rolled parallel lookups in service code.
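The parallel-run migration described above can be sketched as a dual-emission step. Everything here is an illustrative assumption: the mapping table, the helper names, and the premise that AT 2.0 lands on segtax=7 are placeholders, and real cross-version mappings would come from the @trillboards/iab-taxonomy package, the stated source of truth:

```typescript
// Hedged sketch of the parallel-emission migration plan: each audience
// signal is emitted under both segtax codes until buyers migrate.
// The mapping and segtax values here are illustrative, not shipped code.
type Seg = { segtax: number; segment: { id: string }[] };

const SEGTAX_AT_1_1 = 4;
const SEGTAX_AT_2_0 = 7; // assumed value per the migration plan above

// Hypothetical cross-version node mapping; real mappings would come
// from the @trillboards/iab-taxonomy workspace package.
function mapToV2(v1Id: string): string | null {
  const table: Record<string, string> = { "782": "1104" }; // illustrative
  return table[v1Id] ?? null;
}

function emitBothVersions(v1Ids: string[]): Seg[] {
  const v2Ids = v1Ids.map(mapToV2).filter((id): id is string => id !== null);
  const out: Seg[] = [
    { segtax: SEGTAX_AT_1_1, segment: v1Ids.map((id) => ({ id })) },
  ];
  if (v2Ids.length > 0) {
    out.push({ segtax: SEGTAX_AT_2_0, segment: v2Ids.map((id) => ({ id })) });
  }
  return out;
}
```

Nodes with no forward mapping simply stay on the old namespace until the new taxonomy covers them, so downstream buyers never see a half-translated entry.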

Trillboards segtax=600 — CV-unique signals with no IAB equivalent

Sensing-enabled Trillboards screens emit a second class of audience signals that IAB AT 1.1 does not cover, because they describe a public-space cohort rather than an individual web/CTV user. We publish these under our own taxonomy namespace, identified by segtax=600 on the OpenRTB segments array. The segtax code itself is recognized by every DOOH-aware DSP as a publisher-namespace extension; buyers who want to target against it reference our published documentation.

The five declared classes in the Trillboards namespace are listed below. Each is emitted as a structured field with a constrained enum vocabulary — no free-form fields except the engagement-narrative class, which is a structured string capped at 280 characters. All signals are derived from aggregate scene context, never tied to a person, and never persisted as a face attribute.

audience_group_composition

The group structure of who is in front of the screen, aggregated over the dwell window — never per-person.

Values: solo · pair · small_group · mixed_group · family_unit · coworkers

audience_intent_stage

Where the observed audience appears to sit in the marketing funnel based on their venue, posture, and activity. Coarse-grained, aggregate-only.

Values: awareness · consideration · decision · post_purchase · unknown

audience_attire_archetype

What the audience is dressed for. Useful for context-aware creative routing — fitness brands toward athleisure-heavy contexts, premium brands toward business / formal contexts.

Values: business · casual · athleisure · formal · uniform · streetwear · outdoor

audience_activity_macro

The macro-activity context the audience is in. Derived from venue + posture + motion patterns, never from explicit identification.

Values: commuting · dining · shopping · leisure · waiting · working · transit

audience_engagement_narrative

Short structured rationale string capped at 280 characters describing the engagement context. Useful for similar-moments retrieval and creative-fit reasoning. Always derived from aggregate scene context, never from individuals.

Values: Free-text up to 280 chars, structured per declared schema
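The five classes above can be summarized as a TypeScript type mirroring the declared enum vocabularies. The type and field names here are an illustrative sketch, not the published schema:

```typescript
// Hedged sketch of the Trillboards segtax=600 signal classes as a type.
// The enum values mirror the declared vocabularies; the names of the
// type and the validator are illustrative, not the published schema.
type GroupComposition =
  "solo" | "pair" | "small_group" | "mixed_group" | "family_unit" | "coworkers";
type IntentStage =
  "awareness" | "consideration" | "decision" | "post_purchase" | "unknown";
type AttireArchetype =
  "business" | "casual" | "athleisure" | "formal" | "uniform" | "streetwear" | "outdoor";
type ActivityMacro =
  "commuting" | "dining" | "shopping" | "leisure" | "waiting" | "working" | "transit";

interface TrillboardsSignals {
  audience_group_composition?: GroupComposition;
  audience_intent_stage?: IntentStage;
  audience_attire_archetype?: AttireArchetype;
  audience_activity_macro?: ActivityMacro;
  audience_engagement_narrative?: string; // free text, capped at 280 chars
}

// The narrative is the only free-form field; enforce its 280-char cap.
function validNarrative(s: string): boolean {
  return s.length <= 280;
}
```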

How signals get onto the OpenRTB bid request

Every bid request our ad-decision service constructs includes a structured segments array on the impression object. The array carries one entry per active audience signal, with the segtax code identifying the taxonomy namespace (4 = IAB AT 1.1, 600 = Trillboards) and the segment ID identifying the specific node. A buyer's DSP reads the array, matches against its own targeting rules, and decides whether to bid and at what price.

The signal density on any one bid request depends on what the screen and the sensing layer can observe at that moment. A non-sensing screen emits only the venue / geo / dwell context plus any pre-mapped IAB AT 1.1 segments inherited from venue type. A sensing-enabled screen with a recent observation adds the segtax=600 signals on top, structured into the same OpenRTB segments array. No special protocol extension is needed — segtax codes are first-class in OpenRTB 2.6.
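A hedged sketch of how both namespaces might ride the same segments array on one impression follows. The segment IDs and the class=value encoding used for the segtax=600 nodes are illustrative assumptions, not the documented wire encoding:

```typescript
// Hedged sketch: audience entries from both namespaces riding the same
// structured array on one impression. All IDs are illustrative.
type Seg = { segtax: number; segment: { id: string }[] };

const impressionSegments: Seg[] = [
  // Venue-inherited IAB AT 1.1 segments (segtax=4); placeholder IDs.
  { segtax: 4, segment: [{ id: "782" }, { id: "441" }] },
  // Sensing-derived Trillboards signals (segtax=600), shown here with
  // an assumed class=value node encoding.
  {
    segtax: 600,
    segment: [
      { id: "audience_activity_macro=commuting" },
      { id: "audience_group_composition=pair" },
    ],
  },
];

// A buyer-side reader splits by namespace before applying its rules.
const byNamespace = (segs: Seg[], tax: number): Seg[] =>
  segs.filter((s) => s.segtax === tax);
```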

The segments are aggregate by construction. We don't persist per-person attributes, we don't maintain individual-level profiles, and we don't tie audience signals to any device identifier or IP. The signal's scope is the public-space context observed during the dwell window; once the bid request is sent, the underlying observation is decoupled from the wire-format segment code.

Privacy posture, briefly

Trillboards' approach to audience-signal generation: face data is processed ephemerally for cohort statistics (count, composition, attention) and is never persisted as a face vector or face image. The on-device sensing pipeline blurs faces before any cloud transmission, and the cloud Gemini call that produces the CV-derived semantic signals receives only the blurred frames plus the edge-derived numerical signals. No per-person identifier ever leaves the screen.

Audience signals are an audience-cohort property, not an audience-member property. When a sensing-enabled screen emits audience_group_composition=family_unit, that's a description of the public-space context, not a record about any specific family. The same cohort five minutes later might emit a different composition; the signal is observation-window-scoped.

For more on the audience-archetype distribution across the network, see the State of DOOH 2026: Audience Archetypes chapter.

What buyers actually do with these signals

The most common buyer workflow is a hybrid plan that mixes IAB AT 1.1 segments (familiar from web and CTV campaigns) with one or two Trillboards segtax=600 classes to layer DOOH-specific context on top. A coffee brand running a national audience-extension campaign might target the IAB segments for "In-Market: Specialty Coffee" and "Demographic: 25–34 Urban Professional", and add segtax=600 audience_activity_macro=commuting to narrow the DOOH inventory to screens where the audience is in a context that matches the creative.
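That hybrid plan can be sketched as a simple buyer-side rule. The segment IDs, the class=value encoding, and the rule shape are all illustrative assumptions rather than any DSP's actual targeting engine:

```typescript
// Hedged sketch of the hybrid workflow above: require an IAB AT 1.1
// segment AND a Trillboards activity context before bidding.
type Seg = { segtax: number; segment: { id: string }[] };

function hasSegment(segs: Seg[], segtax: number, id: string): boolean {
  return segs.some(
    (s) => s.segtax === segtax && s.segment.some((n) => n.id === id),
  );
}

// Bid only when both the familiar IAB segment and the DOOH context match.
function shouldBid(segs: Seg[]): boolean {
  const inMarketCoffee = hasSegment(segs, 4, "782"); // placeholder AT 1.1 ID
  const commuting = hasSegment(segs, 600, "audience_activity_macro=commuting");
  return inMarketCoffee && commuting;
}
```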

A second pattern is contextual-only targeting, where the buyer doesn't want to pay for audience-data overlays and instead targets purely on venue + context + activity. This is the cleanest use of the segtax=600 namespace: a fitness brand pointing at audience_attire_archetype=athleisure to find gym-adjacent moments without needing to compose a complex IAB stack. The signals are a stand-alone targeting axis.

A third pattern, increasingly common in 2026 as buyers seek more rationale for their DOOH spend, is engagement-narrative search. A buyer surfaces the audience_engagement_narrative field on a sample of bids, reads the structured rationale, and uses it to refine the campaign. The narrative is the only free-text field in the namespace — capped at 280 characters, structured against a declared schema — and it's indexable for similar-moments retrieval.

Across all three patterns, the buyer-side workflow is exactly the same as for IAB AT 1.1 segments: read the segtax-coded entries on the OpenRTB segments array, apply targeting rules, decide bid price. The two namespaces interoperate cleanly at the protocol level because they ride the same array, distinguished only by their segtax code.

How the signals are delivered aggregate-only

The audience-signal pipeline aggregates by construction at every stage. The on-device sensing layer aggregates per-face observations into per-cohort statistics before any cloud transmission; the cloud Gemini call receives only the aggregates plus the blurred frames; the audience-signal write-back into our observation store happens at the cohort level, not the face level; the OpenRTB segments array carries the same cohort-level segtax IDs. Downstream of the on-device aggregation, there is no point in the chain where a per-person attribute exists.
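The aggregate-by-construction step can be sketched as a single function boundary. The field names are illustrative; the point is that only cohort-level statistics survive past it:

```typescript
// Hedged sketch of aggregate-by-construction: per-observation detections
// are collapsed into cohort statistics on-device, and only the cohort
// summary leaves this function. Field names are illustrative.
interface Detection {
  attending: boolean; // ephemeral, on-device only; never transmitted
}

interface CohortStats {
  count: number;           // how many people, not who
  attention_ratio: number; // fraction of the cohort facing the screen
}

function aggregate(detections: Detection[]): CohortStats {
  const count = detections.length;
  const attending = detections.filter((d) => d.attending).length;
  // The Detection[] input is dropped after this returns; only the
  // CohortStats summary is ever transmitted or persisted.
  return {
    count,
    attention_ratio: count === 0 ? 0 : attending / count,
  };
}
```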

The cohort-level granularity is what makes the signal useful to buyers. A signal tied to a specific person would carry strict privacy obligations and would require user-consent flows that no DOOH venue supports. The cohort-level signal carries no such obligation because it describes a public-space context, not a person. The IAB Tech Lab's OpenRTB DOOH extension was designed around this framing: the bid request carries no user object, only a dooh object, and the segments array describes that context.

Buyers asking for finer granularity (per-person, per-device) are asking for something the architecture does not support and will not be modified to support. The strict aggregate floor is non-negotiable; it is what makes the signal layer legally durable across our 22-country footprint and what gives buyers a stable contract they can plan campaigns against.


Cite this page: Trillboards (2026). Audience Signals Available to Buyers. Trillboards Network Data, observed 2026-01-01 through 2026-05-11. Retrieved from https://trillboards.com/data/demand-ecosystem/audience-signals/