Independent measurement means something specific: the entity measuring the program has no financial interest in the outcome of the measurement.

That definition is harder to satisfy than it sounds. Most of the measurement infrastructure available to advertisers today is built by, sold by, or structurally connected to the platforms and data businesses whose work it measures. Understanding how that happens — and how to identify it — determines whether what you’re getting is actually measurement, or something closer to a performance report authored by the performer.

This piece extends Choose a Lane: Measure or Target, which covers the structural division between measurement and targeting functions. Here we go further: the specific architectures that compromise independence, and what verifiably independent measurement looks like in contrast.

The DMP as Measurement Vendor

Data Management Platforms — TransUnion TruAudience (formerly Neustar), LiveRamp, and others — offer measurement and attribution alongside their core identity and audience targeting products. The bundled offer makes surface sense: the same identity graph that resolves audiences for targeting can also resolve journeys for measurement. One platform, one integration, one vendor relationship.

The problem is structural, not incidental. Data services contracts commonly include “platform improvement” provisions that authorize the use of client data to improve the underlying product. In a DMP, that product is the targeting audience and identity resolution infrastructure — the same infrastructure that serves every other advertiser on the platform, potentially including your competitors. A brand that engages a DMP for measurement contributes its conversion signals — who bought what, when, on which device — to the system that improves targeting across the full client base. There is no opt-out from platform improvement in a measurement-only engagement with a company whose core revenue comes from the identity graph that measurement events improve.

The measurement function and the targeting business are not structurally separate in an integrated DMP. The question worth asking is whether the contract reflects that clearly.

How to verify this

Ask your measurement vendor: does your measurement data contribute, in any form, to platform improvement, model training, or audience enrichment?

Then read the “platform improvement” and “data use” provisions of the MSA directly — not the privacy policy. The privacy policy describes what the vendor tells consumers; the MSA describes what the vendor does with your data. These are frequently different documents with different disclosures.

Sandboxes and Clean Rooms

Platform-provided clean rooms and measurement sandboxes — Amazon Marketing Cloud, the now-discontinued Google Privacy Sandbox, and their successors — are useful analytical tools. The limitations deserve clear framing, because the marketing language tends to obscure them.

Amazon Marketing Cloud’s own documentation states that “insights that AMC unlocks stay within the walls of AMC” and that advertisers cannot access Amazon Advertising event-level data directly. Advertisers bring first-party data — conversion events, TV attribution signals, customer journey data — into Amazon’s infrastructure. Amazon’s underlying event-level data doesn’t flow back out. This is the architecture, not a limitation of a specific implementation. The clean room is designed to serve Amazon’s advertising ecosystem. Insights derived from AMC are insights about Amazon media performance, produced within Amazon’s infrastructure, using Amazon’s methodology.

That’s a useful input. It’s not independent measurement. The two can and should coexist — the problem emerges when the distinction isn’t maintained.

The designed purpose matters because it determines what the data is optimized to show. A clean room built by a platform to help advertisers measure that platform’s effectiveness has an inherent structural orientation — not necessarily from bad intent, but because the methodology, the data access, and the infrastructure all belong to the entity being measured. Understanding the designed purpose allows you to use the output honestly: as directional input with a known structural orientation, useful within those constraints, not as confirmation of an independent conclusion.

The November 2024 FTC guidance was direct: clean rooms “are not rooms, do not clean data, and have complicated implications for user privacy.” The FTC’s concern was primarily the gap between how these tools are marketed and what they actually do. The measurement concern is adjacent: the gap between “independent measurement infrastructure” and “platform-provided reporting environment” is significant, and the product names don’t reliably signal which one you’re in.

C3 doesn’t participate in measurement programs that require replacing our methodology with a platform’s methodology. Where sandbox data is available and relevant, we treat it the way rigorous analysis treats data produced by an interested party: as a source with a known orientation, informative within that frame but not determinative on its own.

The Consent Category Test

There’s a concrete, verifiable way to check where a vendor’s data collection sits on the measurement-versus-targeting spectrum: the consent category assigned to its tag by the consent management platform on your site.

Consent management platforms present users with layered choices — essential/functional, measurement/analytics, and marketing/targeting. These categories are assigned by the CMP based on how the tag’s data is actually used, not self-reported by the vendor, and the category a tag lands in determines how many users consent to its data collection.

C3’s tag lands in the measurement/analytics category. It’s used for attribution and for nothing else — no targeting audience builds, no identity graph enrichment, no off-platform data use. A DMP’s tag lands in the marketing/targeting category, because the underlying product is targeting and the CMP classifies accordingly.

The practical implication in markets with meaningful consent enforcement: a marketing/targeting tag has a substantially lower pass-through rate than a measurement/analytics tag. The data pool powering a DMP’s “measurement” is therefore filtered through the population willing to accept marketing tracking — a different and smaller population than the one that accepts measurement analytics.
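The arithmetic behind that filtering is straightforward to sketch. All rates below are illustrative assumptions, not vendor benchmarks: the accept-all figure echoes the roughly 31% global average from 2024 consent benchmarks, and the measurement-only uplift is hypothetical.

```python
# Illustrative consent pass-through arithmetic. The rates are assumptions
# for the sketch: ~31% echoes the 2024 global accept-all average cited in
# the research notes; the measurement-only share is hypothetical.
SITE_VISITORS = 1_000_000

ACCEPT_ALL_RATE = 0.31        # users who accept every category, incl. marketing/targeting
MEASUREMENT_ONLY_RATE = 0.24  # hypothetical: users who accept analytics but reject marketing

# A marketing/targeting tag fires only for users who accepted that category.
marketing_pool = int(SITE_VISITORS * ACCEPT_ALL_RATE)

# A measurement/analytics tag fires for accept-all users AND measurement-only users.
measurement_pool = int(SITE_VISITORS * (ACCEPT_ALL_RATE + MEASUREMENT_ONLY_RATE))

print(f"marketing/targeting tag sees:   {marketing_pool:,} visitors")
print(f"measurement/analytics tag sees: {measurement_pool:,} visitors")
print(f"difference: {measurement_pool - marketing_pool:,} visitors "
      f"({measurement_pool / marketing_pool - 1:.0%} larger pool)")
```

Under these assumed rates, the measurement tag observes a pool roughly three-quarters larger than the marketing tag on the same site; the exact gap depends on your market's consent behavior, but the direction does not.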

How to verify this

Any advertiser running a consent management platform can audit the category assigned to any tag deployed on their site. Your CMP vendor can provide this breakdown. The category reflects how the technology infrastructure — not the vendor’s marketing materials — classifies what the tag does.

If your measurement vendor’s tag sits in the marketing/targeting category, that’s not a configuration error. It’s an accurate description of what the tag does with the data it collects.
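One concrete basis for this kind of classification is the IAB TCF v2 purpose taxonomy, in which purposes 3 and 4 cover building and using personalized-ad profiles while purposes 7 and 8 cover performance measurement. Here is a minimal sketch of the mapping, with hypothetical tag declarations (real CMPs classify from vendor-list disclosures, not hand-written sets):

```python
# Sketch of a CMP-style tag classification using the IAB TCF v2 purpose
# taxonomy. The purpose numbers are real TCF purposes; the example tags
# and their declared purposes are hypothetical.
TARGETING_PURPOSES = {3, 4}    # create / select personalized advertising
MEASUREMENT_PURPOSES = {7, 8}  # measure ad / content performance

def classify_tag(declared_purposes: set[int]) -> str:
    """Assign the strictest consent category the declared purposes require."""
    if declared_purposes & TARGETING_PURPOSES:
        # Any targeting purpose forces the tag into marketing/targeting,
        # regardless of what else it declares.
        return "marketing/targeting"
    if declared_purposes & MEASUREMENT_PURPOSES:
        return "measurement/analytics"
    return "essential/functional"

print(classify_tag({7, 8}))        # measurement-only attribution tag
print(classify_tag({3, 4, 7, 8}))  # DMP tag: measures AND builds audiences
```

The asymmetry is the point: a single targeting purpose is enough to move a tag into the marketing/targeting category, no matter how much measurement it also performs.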

What Verifiable Independence Looks Like

The two structural patterns described above — platform improvement clauses embedded in DMP contracts, and platform-built reporting environments that cannot produce independent output — both point toward the same question: does the measurement vendor have a financial interest in the outcome of the measurement? The answer is either verifiable or it isn’t.

Verify: No targeting revenue

No DMP, no identity graph sold to advertisers, no audience activation product. A revenue model that depends on the platforms it measures performing well has a structural conflict, whether or not it acts on it. Verifiable in public company filings or direct inquiry.

Verify: Measurement data stays in the client relationship

No platform improvement provisions, no cross-client data pools, no data assets that extend beyond the specific engagement. Verifiable in the MSA — not just asserted in a one-pager or a privacy policy.

Verify: Transparent, auditable methodology

The chain of custody — what was observed, at what confidence, attributed how — is documented and available for review. A record with the assumptions visible, not a model output with the assumptions baked in. The Signal Manifest and Attribution Manifest are how C3 satisfies this criterion.

C3’s programs satisfy all three. The revenue model is measurement, not targeting. Client data stays in the client relationship. The methodology chain is documented in the Signal Manifest and Attribution Manifest. None of that requires taking our word for it — which is, in a sense, the point of having the documentation.

Related reading

This piece covers the structural conflicts in the measurement vendor landscape. For the methodological question of how proof, estimates, and disclosed assumptions work together in attribution, see Showing the Work. For the specific question of what offline match rates are actually measuring, see What Your Vendor’s Match Rate Is Actually Measuring.

Research Notes
  • FTC clean room guidance (Nov 2024): Clean rooms “are not rooms, do not clean data, and have complicated implications for user privacy.” (FTC)
  • Amazon Marketing Cloud: “The insights that AMC unlocks stay within the walls of AMC” — advertisers bring first-party data in; Amazon event-level data does not flow out. (Amazon Ads)
  • Consent accept-all rates: Global average approximately 31% (2024). Measurement/analytics tags pass through a substantially larger share than marketing/tracking tags. (CookieYes)
  • Google Privacy Sandbox discontinued: The IAB Tech Lab found it would “place smaller media companies and brands at a significant competitive disadvantage,” and Google’s own CMA submission reported that 85% of conversions were inaccurate by 60–100%. The program was subsequently abandoned. (Clearcode)