The Platform Attribution Window Problem

Finding

Meta's default attribution window is 7-day click, 1-day view; Google's is 30-day click. Both platforms claim conversions that fall inside their respective windows — including many of the same conversions.

Source

Platform documentation from both Meta Ads Manager and Google Ads. These are each platform's own default settings, publicly documented in their help centers.

Why It Matters for Measurement

When a consumer sees a Meta ad on Tuesday, sees a Google search ad on Thursday, and converts on Friday — Meta claims the conversion (within its 7-day window), Google claims the conversion (within its 30-day window), and the advertiser's aggregate platform reporting has counted one sale twice. Neither platform is wrong by its own rules. The rules are just designed by each platform to maximize the conversions it can claim. Independent attribution assigns that conversion once. The difference between the two approaches, summed across a full campaign flight, is not marginal. It is the difference between what the platforms tell you and what actually happened.
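The double-count in that journey can be sketched in a few lines. This is an illustrative model, not either platform's API: the window lengths are the documented defaults, while the dates, touch log, and "last eligible touch" single-assignment rule are assumptions for the example.

```python
from datetime import datetime, timedelta

# Documented default lookback windows (click-based, simplified).
WINDOWS = {"meta": timedelta(days=7), "google": timedelta(days=30)}

touches = [                           # (platform, ad interaction time)
    ("meta", datetime(2025, 6, 3)),   # Meta ad, Tuesday
    ("google", datetime(2025, 6, 5)), # Google search ad, Thursday
]
conversion = datetime(2025, 6, 6)     # purchase, Friday

# Platform-side view: each platform claims the conversion if one of its
# own touches falls inside its own window, so claims can exceed sales.
platform_claims = {
    p: sum(1 for plat, t in touches
           if plat == p and timedelta(0) <= conversion - t <= WINDOWS[p])
    for p in WINDOWS
}
# platform_claims == {'meta': 1, 'google': 1}: one sale, two claims.

# Independent view: assign the one conversion once, here to the last
# eligible touch across platforms (one of several possible rules).
eligible = [(plat, t) for plat, t in touches
            if timedelta(0) <= conversion - t <= WINDOWS[plat]]
credited = max(eligible, key=lambda x: x[1])[0]
# credited == 'google': one sale, one claim.
```

The rule used for the single assignment is a choice; what is not a choice is that the claims must sum to the number of sales.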

Last-Touch Attribution Over-Credits the Final Touchpoint

Finding

Last-touch attribution systematically over-credits the final touchpoint before conversion — which, in most digital programs, is paid search — and systematically under-credits every upstream channel that built the intent that made the search happen.

Source

This is a structural characteristic of the model, not a study — but it is documented in the academic and industry literature on attribution methodology, and acknowledged in Google's own documentation on data-driven attribution as a reason to move away from last-touch. The fact that Google advocates for moving away from last-touch is worth noting on its own.

Why It Matters for Measurement

If you optimize toward last-touch attribution, you will over-invest in search and under-invest in awareness, upper-funnel, and mid-funnel channels — because those channels will appear to drive fewer conversions than they actually influence. The consumer who was reached by TV, saw a display ad, and then searched is counted as a search conversion. The TV and display placements look like they did nothing. Programs optimized under last-touch tend toward a self-fulfilling contraction: cut the channels that look inefficient, watch the search performance decline, fail to understand why. The multi-touch record shows the upstream exposure. Last-touch hides it.
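The over-crediting is mechanical, which makes it easy to see in code. The journey and the even split used as the multi-touch contrast are illustrative assumptions; real multi-touch models weight touches differently.

```python
# Illustrative credit-assignment rules, not a specific vendor's model.

def last_touch(touchpoints):
    """Assign 100% of credit to the final touchpoint before conversion."""
    return {ch: (1.0 if i == len(touchpoints) - 1 else 0.0)
            for i, ch in enumerate(touchpoints)}

def linear_multi_touch(touchpoints):
    """One simple multi-touch rule: split credit evenly across touches."""
    share = 1.0 / len(touchpoints)
    return {ch: share for ch in touchpoints}

journey = ["tv", "display", "paid_search"]   # exposures in order

last_touch(journey)          # tv: 0.0, display: 0.0, paid_search: 1.0
linear_multi_touch(journey)  # each channel ~0.333: upstream credit visible
```

Under the first rule, cutting "tv" looks free; under the second, it visibly removes a third of the credited influence.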

Google's Own Conversion Model Had an 85% Error Rate

Finding

Google's own submission to the UK Competition and Markets Authority found that 85% of advertising conversions within its Privacy Sandbox were inaccurate by 60-100%.

Source

Google's CMA report, produced as part of the Privacy Sandbox regulatory review in the UK. This is Google's own data, submitted to a regulator, not a critic's characterization.

Why It Matters for Measurement

The Privacy Sandbox was designed to replace third-party cookie measurement with privacy-preserving alternatives — specifically, to maintain measurement capability while eliminating third-party cookies. Google's own data showed that the replacement methodology produced conversions that were inaccurate by 60-100% in 85% of cases. Google subsequently killed the program. The implication is not that measurement is impossible without third-party cookies — it is that platform-controlled measurement infrastructure, even from the largest platform in digital advertising, can fail in ways that are not visible to the advertisers relying on it.

The Programmatic Waste Problem — What Happened When Someone Started Measuring It

Finding

The ANA's 2023 baseline found that 36 cents of every DSP dollar reached a real consumer, with 21% of impressions going to MFA sites. By Q3 2025, under sustained transparency pressure, MFA exposure had fallen to 0.8% and the share reaching publishers rose to 47.1% — but $26.8 billion in annual programmatic waste remains.

Source

ANA Programmatic Media Supply Chain Transparency Study (December 2023, $123M in log-level spend across 21 named brands including State Farm, Mondelez, and Discover) and ANA Quarterly Programmatic Transparency Benchmarks through Q3 2025. Ongoing, publicly available, with an interactive benchmark tool now open to ANA and TAG members.

Why It Matters for Measurement

The trajectory is the argument. MFA exposure fell 95% in two years because someone measured it and published the findings. The supply chain responded to accountability. That is also what $26.8 billion in remaining annual waste means: the problem has not been solved; it has been partially corrected under pressure. The need for fraud detection that identifies and removes invalid traffic before any efficiency calculation is not a historical argument; it is the live condition of every open-web programmatic program running today.
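The gap between the 2023 baseline and the Q3 2025 benchmark can be made concrete with simple arithmetic. The shares are the ANA figures cited above; the $100 spend is a hypothetical input, and note the two shares measure slightly different things (2023: reached a real consumer; 2025: reached publishers).

```python
# Back-of-envelope working-media split using the ANA benchmark shares.

def working_media(spend, working_share):
    """Split programmatic spend into working media and everything else
    (fees, MFA inventory, invalid traffic, unattributed supply cost)."""
    working = spend * working_share
    return working, spend - working

spend = 100.0                              # hypothetical DSP spend, $
w23, lost23 = working_media(spend, 0.36)   # 2023 baseline
w25, lost25 = working_media(spend, 0.471)  # Q3 2025 benchmark

# lost23 -> ~$64 of every $100; lost25 -> ~$52.90: corrected, not solved.
```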

The Trade Desk and the Agencies — Transparency Is the Language, Margin Is the Currency

Finding

In 2025, WPP and Dentsu exited The Trade Desk's OpenPath initiative over undisclosed fees and placement transparency concerns; Publicis followed with an audit alleging improper DSP fee application and billing for unauthorized tools; Omnicom launched its own audit. Three of the six major holding companies were in open dispute with the largest independent DSP simultaneously.

Source

Adweek exclusive reporting (February 2025); Publicis audit findings as reported by Digiday and Campaign US; Omnicom audit reported by IDComms. The Trade Desk disputed the characterization and stated OpenPath fees were disclosed. The dispute is ongoing and documented across multiple trade outlets.

Why It Matters for Measurement

Digiday's framing is precise: "transparency is the language of the dispute — but margin is the underlying currency." OpenPath routes around agency trading desks, compressing the margin agencies extract from supply-path optimization. The agencies' complaints about fee visibility are real — and also structurally motivated by the fact that transparent direct-to-publisher buying threatens the opacity their own businesses depend on. This is not a story about The Trade Desk specifically. It is a live demonstration of the structural conflict that exists whenever the entity responsible for media planning and buying also controls the measurement of that media's performance. Independent measurement exists precisely because this conflict exists.

iSpot v. EDO — When a Measurement Company Uses Data Beyond Its License

Finding

A jury ordered EDO to pay $18.3 million to iSpot after finding that EDO had used iSpot's licensed TV ad data beyond contractual terms to build a competing product — exceeding data access limits and violating multiple contract provisions.

Source

AdExchanger, reporting on the jury verdict in the iSpot v. EDO case. The suit was originally filed in 2022; iSpot sought $47 million. The jury found multiple contract violations and awarded $18.3 million. EDO is a TV measurement company co-founded by Edward Norton.

Why It Matters for Measurement

This is a data chain-of-custody case between two measurement companies. EDO licensed data for one purpose and used it for another — to build a product that competed with the company that provided the data. The structural parallel to the DMP "platform improvement" clause is direct: data contributed for a defined purpose, used beyond the scope of what the contributor understood or authorized. The jury found the violation real and material enough to award $18.3 million. The measurement industry's own conduct is subject to the same scrutiny it applies to others. Data use beyond the license is not a gray area — it is a breach, and a jury said so.

The Walled Garden Measurement Gap Is Getting Wider, Not Smaller

Finding

A December 2024 IAB survey found 64% of US ad buyers plan to focus significantly more on cross-platform measurement in the coming year — not because the problem is solved, but because it isn't.

Source

Interactive Advertising Bureau, December 2024. The IAB represents the buy side, the sell side, and the platform side of digital advertising. When 64% of buyers say they need better cross-platform measurement, that is the market reporting that the current infrastructure is failing to produce it.

Why It Matters for Measurement

Cross-platform measurement requires connecting signals across platforms that don't share them. Google's attribution data does not connect to Meta's. Meta's does not connect to Amazon's. The IAB figure is a demand signal — it reflects advertisers experiencing the gap between what cross-channel measurement should produce and what they're actually getting. The gap is not theoretical.

Only 8% of Companies Have Unified App Measurement

Finding

A 2025 Branch survey found that only 8% of companies have a fully unified view of app marketing performance across channels. The remaining 92% are working with partial or incomparable data.

Source

Branch Technologies, 2025 Mobile Growth Report. Branch is an app attribution platform with direct visibility into how advertisers are measuring mobile performance across their programs.

Why It Matters for Measurement

Mobile represents a growing share of consumer journeys and conversion events. The 92% figure suggests that cross-channel attribution — which depends on connecting mobile touchpoints to other channel touchpoints — is operating on incomplete data for the overwhelming majority of programs. A high reported multi-touch rate in this environment is not evidence that journeys are being fully observed. It may be evidence that the gaps are being modeled rather than measured.

When AI Gives the Answer, There's No Click to Attribute

Finding

Google AI Overviews — the generated summaries now appearing on nearly 50% of all search results — reduced organic CTR on affected informational queries by 61% between mid-2024 and late 2025, with paid CTR on those same queries down 68%.

Source

Seer Interactive, September 2025 update. The study analyzed 3,119 informational queries across 42 organizations, covering 25.1 million organic impressions and 1.1 million paid impressions over 15 months. This is the most comprehensive independent study on the topic to date — multi-organization, longitudinal, large sample. AI Overview prevalence in search results grew from 19% of queries in Q3 2024 to nearly 50% by Q4 2025, per Advanced Web Ranking.

Why It Matters for Measurement

Attribution depends on a click, a session, or an observable event. When the consumer gets the answer in the search interface itself, there is no click, no session, and nothing for the measurement program to record. The consumer was influenced; the program saw nothing. For programs with significant branded or informational search exposure, the AI Overview footprint means that a growing share of search's actual influence on consumer decisions is structurally invisible to click-based measurement. The case for impression-level measurement and cross-channel attribution is not getting weaker as AI search expands. It is getting stronger.

Platform Offline Conversion Matching — From the Platform Itself

Finding

Google's own documentation reports offline conversion import match rates of 40-50% for well-structured implementations using GCLID or hashed email matching.

Source

Google Ads Help Center — offline conversion documentation. This is Google's own characterization of what its offline conversion matching achieves.

Why It Matters for Measurement

Platform offline match rates measure how many uploaded offline conversion events the platform can match to its own ad interactions — a self-reported figure, Google attributing conversions to Google ads, matched against Google's user graph. It is not the same as connecting an offline conversion to an individually attributed multi-channel digital journey. The 40-50% figure is accurate for what it measures. The question worth asking is exactly what that is.
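The mechanic behind those match rates can be sketched as a set intersection over hashed identifiers. The normalization rule, sample emails, and stand-in "user graph" below are illustrative assumptions, not Google's implementation; the point is that both the matching and the graph it matches against belong to the platform.

```python
import hashlib

def normalize_and_hash(email: str) -> str:
    """Trim and lowercase, then SHA-256: typical pre-upload hashing."""
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

# Advertiser side: offline conversions keyed by customer email.
offline_conversions = ["Ana@example.com ", "bob@example.com", "cy@example.com"]

# Platform side: hashes the platform can resolve to its own ad
# interactions. In practice this set is opaque to the advertiser.
platform_graph = {normalize_and_hash(e)
                  for e in ["ana@example.com", "cy@example.com"]}

matched = [e for e in offline_conversions
           if normalize_and_hash(e) in platform_graph]
match_rate = len(matched) / len(offline_conversions)  # 2 of 3, ~67%
```

A "match" here only establishes that the platform recognizes the hashed identifier; it says nothing about what the rest of the journey looked like.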

TV Measurement Currency Is Fracturing — And the Industry Knows It

Finding

For 2025-26, the Joint Industry Committee certified Comscore, iSpot, and VideoAmp as alternative national TV measurement currencies alongside Nielsen — and 85% of US brand and agency buyers say alternatives are "as or more effective" than the legacy currency. Nielsen's own MRC accreditation faces uncertainty again as of late 2025.

Source

JIC currency certifications reported by Variety, AdExchanger, and Marketing Dive (July 2025); 85% buyer preference figure from Advertiser Perceptions 2024 survey; Nielsen MRC accreditation uncertainty reported by Marketing Brew (October 2025).

Why It Matters for Measurement

GRP-based panel measurement was the TV currency for fifty years because there was no alternative. There are now alternatives — certified, audited, and backed by buyer preference. The fragmentation reflects the underlying problem: panel-based measurement of a landscape that now spans linear, streaming, connected TV, and addressable inventory was already straining before the alternatives existed. Programs that can connect TV and broadcast exposure to downstream digital behavior and conversion — independently of whichever currency is used to buy the media — are measuring something none of the currencies, old or new, can produce on their own.

What the FTC Actually Said About Clean Rooms

Finding

The FTC issued guidance in November 2024 concluding that clean rooms "are not rooms, do not clean data, and have complicated implications for user privacy."

Source

Federal Trade Commission, November 2024. The FTC is the US agency responsible for consumer protection and data privacy enforcement — not a measurement industry critic. When the FTC says the marketing language around clean rooms misrepresents what the technology does, that is not a vendor opinion.

Why It Matters for Measurement

Clean rooms are increasingly positioned as the privacy-compliant path to independent measurement inside platform ecosystems. The FTC's guidance addresses the gap between how clean rooms are marketed and how they actually handle data. That gap is at least as large in the measurement context: a clean room built by a platform provides measurement inside that platform's infrastructure, using that platform's methodology. Understanding what that means — and what it doesn't — requires more precision than the marketing language typically provides.

Agency Media Transparency — The ANA's Finding

Finding

The ANA's landmark Media Transparency report found that non-transparent practices — including undisclosed rebates, principal transactions, and non-disclosed inventory — were "pervasive" in the US agency media buying market.

Source

ANA / K2 Intelligence, 2016 Media Transparency Initiative. This report was commissioned by the ANA (the advertiser's trade association) and conducted by an independent investigative firm. The 2023 follow-up found that while some practices had shifted in form, the structural conflicts had not been resolved.

Why It Matters for Measurement

If the agency buying the media has undisclosed financial relationships with the media it recommends, the measurement of that media's performance is not operating at arm's length. An agency that earns volume bonuses or principal media profits from a particular platform has a financial interest in that platform appearing to perform well. Independent measurement — from a vendor with no relationship to the media being measured, no revenue from the platforms, and no financial interest in the outcome — is not an optional feature of a well-run program. It is the precondition for trusting the numbers.