
Sensing SDK

Trillboards' on-device computer-vision package — face detection, attention, audience composition. All inference runs locally; no faces leave the device.

The Sensing SDK is Trillboards' on-device computer-vision package, the engine behind every buyer-grade audience signal we ship. The SDK runs continuously while the screen is playing ads. Each frame from the screen's front-facing camera passes through a detection pipeline: face detection, person counting, head-pose regression, gaze estimation, and (for screens with sufficient compute) emotion classification and demographic estimation.
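The per-frame pipeline above can be sketched as a sequence of stages. This is a hypothetical illustration, not the SDK's actual API: the stage names mirror the prose, the detectors are stubbed out, and the 30° yaw threshold for "gazing" is an invented proxy.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Face:
    yaw_deg: float  # head-pose regression output: 0 means facing the camera

@dataclass
class FrameSignals:
    face_count: int
    person_count: int
    gazing_count: int  # faces whose head pose suggests attention to the screen

def detect_faces(frame) -> List[Face]:
    # Stub: a real build runs an on-device face detector here.
    return [Face(yaw_deg=y) for y in frame.get("yaws", [])]

def count_people(frame) -> int:
    # Stub: person counting is a separate stage because face detection
    # misses people facing away from the screen.
    return frame.get("people", 0)

def process_frame(frame) -> FrameSignals:
    faces = detect_faces(frame)
    # Crude gaze proxy: a face within 30 degrees of frontal counts as gazing.
    gazing = sum(1 for f in faces if abs(f.yaw_deg) < 30.0)
    return FrameSignals(len(faces), count_people(frame), gazing)

# One synthetic "frame": three people visible, two faces detected,
# one of them roughly facing the screen.
print(process_frame({"people": 3, "yaws": [5.0, 70.0]}))
```

On Full-profile screens the same loop would append emotion classification and demographic estimation stages after head pose.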

Every inference is local. The raw frames never leave the device — only aggregate signals (face count, attention level, cohort composition) get uploaded to the Trillboards API. This is the GDPR-clean architecture: the device sees faces, the cloud sees counts. Buyers get the audience signal; viewers' biometric data stays on the screen.
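The "device sees faces, cloud sees counts" split means the upload step only ever touches window-level aggregates. A minimal sketch of that aggregation, assuming invented bucket thresholds (the source defines the low/medium/high levels but not their cutoffs):

```python
def attention_bucket(gaze_ratio: float) -> str:
    # Hypothetical thresholds for the low/medium/high attention buckets.
    if gaze_ratio >= 0.6:
        return "high"
    if gaze_ratio >= 0.2:
        return "medium"
    return "low"

def aggregate_window(per_frame):
    """per_frame: list of (face_count, gazing_count) pairs for one
    reporting window. The returned dict is the only data that leaves
    the device; raw frames and per-face results are never retained."""
    faces = sum(f for f, _ in per_frame)
    gazing = sum(g for _, g in per_frame)
    ratio = gazing / faces if faces else 0.0
    return {
        "face_count": round(faces / len(per_frame)) if per_frame else 0,
        "attention_level": attention_bucket(ratio),
    }

payload = aggregate_window([(2, 1), (3, 2), (2, 2)])
print(payload)  # counts and a bucket; nothing biometric
```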

The model stack is intentionally modular. Small screens (budget Android tablets, the long tail of Trillboards inventory) run a Lite profile: face detect + person count + attention bucket. High-spec screens run the Full profile, which adds FaceXFormer-based age/gender estimation, emotion classification, and gaze direction. The platform supports OTA model download, so model upgrades roll out without firmware updates.
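The Lite/Full split can be expressed as two capability sets with a gate on device compute. The source only says "sufficient compute" decides the profile, so the RAM and NPU checks below are invented placeholders:

```python
# Capability sets for the two model profiles described above.
LITE = {"face_detect", "person_count", "attention_bucket"}
FULL = LITE | {"age_gender_estimation", "emotion_classification", "gaze_direction"}

def select_profile(ram_mb: int, has_npu: bool) -> set:
    # Hypothetical gate: any NPU, or >= 4 GB RAM, qualifies as
    # "sufficient compute" for the Full profile.
    return FULL if has_npu or ram_mb >= 4096 else LITE

print(select_profile(2048, False))  # budget tablet -> Lite
print(select_profile(8192, True))   # high-spec screen -> Full
```

Because profiles are just model sets, an OTA model download can promote a screen from Lite to Full without touching firmware.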

Output schema: the device emits face_count, person_count, attention_level (low/medium/high), dwell_seconds, and gaze_seconds; the cloud adds cohort composition, attire archetype, and intent stage. See /support/developers/sensing-sdk for the full field reference.
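A hypothetical device-emitted record using the schema fields above (values are illustrative, not real output; the cohort, attire, and intent fields are attached later in the cloud and so do not appear here):

```python
# Illustrative device-side record with the fields named in the schema.
record = {
    "face_count": 4,
    "person_count": 6,            # can exceed face_count (people facing away)
    "attention_level": "medium",  # one of: low / medium / high
    "dwell_seconds": 12.4,        # time people spent near the screen
    "gaze_seconds": 3.1,          # subset of dwell spent looking at it
}

# Basic invariant: you cannot gaze longer than you dwell.
assert record["gaze_seconds"] <= record["dwell_seconds"]
print(record)
```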

Authoritative reference

IAB — Computer Vision for DOOH

See also

Reference docs

Building against Trillboards?

Our developer reference covers the DSP API, partner SDK, proof-of-play verification, and the sensing pipeline that powers buyer-grade audience signals.
