Multi-touch attribution (MTA) promises to show you how consumers move through your media mix before they convert. The question worth asking any vendor is: how do you know?

Not what the number is. How they know. A complete answer to that question covers three things: what was proven, how the proof was constructed, and where the methodology relied on estimates rather than evidence. All three together produce a number you can interrogate. Any one of them missing, and you’re working with less information than you think.

The Principle: Proof Over Assumption

When C3 reports a multi-touch journey, it means we have demonstrated, on an individually attributed basis, that a specific consumer was exposed to more than one media touchpoint before converting. We saw the Originator. We saw the Assist. We saw the Converter. We have the data to show it.

When we cannot demonstrate that — when we can see the converting touch but not the upstream exposure — we report a single-touch journey. Not because we assume the upstream exposure didn’t happen. Because we cannot prove that it did.
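In code terms, the distinction reduces to a simple labeling rule over the touchpoints that were actually observed. A minimal Python sketch, using the role names from this section — the function itself is illustrative, not C3's documented algorithm:

```python
def label_roles(touchpoints):
    """Label an ordered journey of *observed* touchpoints with roles.

    One observed touchpoint -> a single-touch journey (Converter only).
    Two or more -> a provable multi-touch journey: first touch is the
    Originator, last is the Converter, anything between is an Assist.
    """
    if len(touchpoints) == 1:
        return [(touchpoints[0], "Converter")]  # reported as single-touch
    roles = [(touchpoints[0], "Originator")]
    roles += [(t, "Assist") for t in touchpoints[1:-1]]
    roles.append((touchpoints[-1], "Converter"))
    return roles
```

Note that the input is only what was observed: a journey with unseen upstream exposure arrives here as a one-element list and is labeled single-touch, exactly as described above.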

Whatever percentage that proven rate comes to in a given program, it marks a floor on what we can verify — not a ceiling on what actually happened. The real-world multi-touch rate almost certainly runs higher. But we report what we can see, labeled as such, not an estimate of what we think probably occurred.

Showing the work means more than the proven rate itself. It means documenting the methodology behind it and being explicit about where estimates appear — as first-class information, not footnotes. A sophisticated buyer shouldn’t need to ask. The Signal Manifest and Attribution Manifest exist so the chain of custody, including any probabilistic steps in that chain, stays visible without anyone having to dig for it.

This transparency requirement — proof, methodology, and disclosed assumptions — applies equally to the channels we can measure directly and the ones we cannot. All three together make the number defensible.

Why a High Multi-Touch Rate Is Structurally Impossible to Prove

This is worth stating plainly, because it explains why any vendor reporting very high multi-touch rates should be asked exactly one question: how?

There are real, structural reasons why provably demonstrated multi-touch journeys will always be a fraction of total journeys — regardless of how good your measurement infrastructure is.

Most touchpoints are genuinely first-touch or only-touch. Organic search, direct visits, word-of-mouth referrals — these are by definition single-event interactions. No upstream paid media exposure can be attributed to them, because there isn’t one. Any program with meaningful organic traffic will show a significant portion of single-touch journeys, because those journeys are single-touch.

Walled gardens don’t share connecting signals. Google’s attribution data does not connect to Meta’s. Meta’s does not connect to Amazon’s. Inside each platform, the chain of events is partially visible; across platforms, the chain breaks. A consumer who saw a Meta ad, then a YouTube pre-roll, then searched and converted — that journey cannot be reconstructed from any data source that relies on platform-reported touchpoints. A December 2024 IAB survey found that 64% of US ad buyers plan to focus significantly more on cross-platform measurement — precisely because existing systems are failing to produce it.

Mobile has the same problem. SDK-based mobile measurement samples event data rather than recording complete event chains. It can tell you aggregate patterns; it cannot prove individual cross-channel journeys with the specificity that multi-touch attribution requires. A 2025 Branch survey found that only 8% of companies have a fully unified view of app marketing performance across channels — the remaining 92% are working with partial, incomparable data.

Offline conversion matching is always constrained — and the marketed rates aren’t measuring what you think. Vendors often market offline match rates of 70%, 80%, or higher. What those numbers typically reflect is online-to-CRM matching or platform offline conversion imports (matching a click ID to a purchase file) — not the connection of a physical conversion event to a specific individual’s attributed digital journey. When you measure the thing those vendors are implying they can measure, the actual range under real-world conditions is 4% to 20%. The constraints are structural: households share IP addresses, people use multiple devices across multiple networks daily, approximately 6% of US households remain entirely offline, and there is no deterministic signal connecting a register transaction to a cookie. Anyone claiming materially higher rates for true offline conversion attribution is measuring something else and calling it the same thing. Ask exactly which measurement they mean.

Consent categories matter — and C3 collects first-party, not third-party. The tag a measurement vendor places on your site is categorized by consent management platforms according to what it actually does with data. A tag used for audience targeting or identity resolution lands in the marketing/tracking consent bucket — the category users decline at the highest rates. C3’s tag is categorized as measurement and analytics. It is first-party to the advertiser’s site, used exclusively for attribution, and never used to build targeting audiences or enrich an identity graph. In markets with meaningful consent enforcement, that distinction has a material effect on the data pool available for measurement — and on whether the measurement is operating on a representative population or the fraction willing to accept marketing tracking.

Taken together, these constraints set a ceiling that no measurement system can exceed through better execution alone. The ceiling is structural — not a limitation of C3’s methodology, or anyone else’s. It is a feature of the data environment: organic journeys without upstream paid signals, walled gardens that don’t share connecting data, mobile sampling that can’t prove individual paths, and consent frameworks that limit who can be observed at all. A vendor that reports dramatically higher proven multi-touch rates than these constraints allow hasn’t achieved better measurement. They’ve made a different methodological choice — probabilistic extension, panel augmentation, definitional expansion — and presented that output as a measured rate. When the ceiling is structural rather than technical, a reported rate far above it means the number was constructed, not found. That’s worth asking about directly, because the vendor’s slide won’t make that distinction for you.

The Deduplication Advantage

There is an argument for multi-touch attribution that doesn’t require any of the above to be resolved: deduplication.

Every platform claims every conversion it touched. Google claims the conversion. Meta claims it. Bing claims it. Each platform’s self-reported numbers count the same purchase, the same form submission, the same app install. Sum those claims and you’ve counted the same conversion three or four times. The advertiser’s aggregate “total conversions” from platform reporting is a fiction.

Multi-touch attribution assigns each conversion exactly once. One consumer, one conversion, fractional credit distributed across the touchpoints that contributed. The arithmetic is more honest before a single journey is reconstructed.
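A toy example makes the double-counting concrete. The conversion IDs and the even credit split below are illustrative assumptions, not any vendor's actual model:

```python
# Hypothetical data: three platforms each claim every conversion they touched.
platform_claims = {
    "google": ["c1", "c2", "c3"],
    "meta":   ["c1", "c2"],
    "bing":   ["c1"],
}

# Summing self-reported numbers counts the same conversion repeatedly.
self_reported_total = sum(len(c) for c in platform_claims.values())  # 6

# Deduplicated: each conversion counted exactly once.
unique_conversions = set().union(*platform_claims.values())
deduped_total = len(unique_conversions)  # 3

# Fractional credit: split each conversion across the platforms that
# touched it (an even split is an illustrative choice, not C3's model).
credit = {p: 0.0 for p in platform_claims}
for conv in unique_conversions:
    touchers = [p for p, claims in platform_claims.items() if conv in claims]
    for p in touchers:
        credit[p] += 1 / len(touchers)

# Total credit always sums to the real number of conversions.
assert abs(sum(credit.values()) - deduped_total) < 1e-9
```

Three real conversions become six in platform self-reporting; the deduplicated fractional credit still sums to exactly three, however it is distributed.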

This means MTA is more accurate than self-reported platform data — not primarily because of what it reveals about journey complexity, but because of what it eliminates in double-counting. Even a program where the majority of attributed journeys are single-touch is producing cleaner conversion counts than any combination of platform self-reporting. The deduplication alone justifies the methodology.

What Competitors Are Doing When They Show Higher Numbers

There are legitimate paths to a higher reported multi-touch rate. None of them are the same thing as proving the journeys.

Probabilistic extension. Rather than requiring observed evidence of each touchpoint, probabilistic models infer likely exposure based on similar audiences, campaign timing, or behavioral patterns. The resulting “journey” is a modeled reconstruction — potentially accurate in aggregate, but not verifiable at the individual level.

Panel-based augmentation. Some vendors supplement individual attribution with panel data — using aggregate behavioral patterns from a sample to estimate what the full population probably did. This can add directional value. But a modeled journey and a proven journey are different things, and the distinction matters for how you act on the output.

Definitional expansion. Some vendors count a “multi-touch journey” at the program level — if the campaign included multiple channels, all conversions are classified as multi-touch. High rates by definition, not by observation.

Implicit assumption presentation. The most common version: the methodology embeds assumptions about multi-touch exposure without surfacing them to the client. The number reflects a model. The model’s assumptions are not labeled — they’re just baked in.

None of these approaches is necessarily dishonest. Some are useful. But treating any of them as equivalent to proving that a specific consumer had a multi-touch journey is where the market misleads buyers.
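The gap between an observed multi-touch rate and a definitionally expanded one is easy to see with toy numbers. The journey data below is hypothetical, and the rates are illustrations, not benchmarks:

```python
# Hypothetical journeys: each is the ordered list of *observed*
# touchpoints preceding one conversion.
journeys = [
    ["meta", "search"],     # provably multi-touch
    ["search"],             # single observed touch
    ["organic"],            # organic-only: no upstream paid media exists
    ["display", "search"],  # provably multi-touch
    ["direct"],
]

# Proven rate: journeys where more than one touchpoint was observed.
proven_multi = sum(1 for j in journeys if len(j) > 1)
proven_rate = proven_multi / len(journeys)  # 2/5 = 40%

# Definitional expansion: the program ran multiple channels, so every
# conversion is classified "multi-touch" regardless of observation.
program_channels = {"meta", "search", "display"}
definitional_rate = 1.0 if len(program_channels) > 1 else 0.0  # 100%
```

Same data, same conversions: one method reports 40% because it counted observations, the other reports 100% because it counted channels on the media plan.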

A note on our own probabilistic data use

Some channels are inherently probabilistic. Linear TV and DOOH don’t emit individual impression signals — the best available data for these channels involves panel inference, modeled reach, or broadcast schedules mapped to device signals. Using probabilistic methods here is not a compromise; it is the correct methodology for what is being measured. The question is whether the probabilistic step is disclosed.

C3 uses probabilistic methods where the channel requires them — and labels every instance explicitly. What the Attribution Manifest documents for a TV placement is different from what it documents for a verified digital touchpoint, and that difference is visible to the client. What we don’t do is apply probabilistic extension to digital journeys that could be proven with individual-level data but fall short of a full observed chain. Panel augmentation to inflate multi-touch rates, behavioral inference to fill attribution gaps, classification of program-level multi-channel spend as individual-level multi-touch — these are the moves that produce impressive numbers by assumption, not by observation. The distinction is whether the methods are disclosed and whether they’re matched to what the data can actually support.

Why Showing the Work Is Better Than Inflating the Number

The alternative to proving journeys is assuming them. C3 could produce a higher multi-touch rate — by extending matches probabilistically, by augmenting with panel data, by classifying program-level multi-channel spend as individual-level multi-touch. Those are real methodological choices, and some of them would produce numbers that look better in a slide.

What they would not produce is a methodology you can defend when someone asks to see the work.

The Signal Manifest and Attribution Manifest are how we show the work. They document the chain of custody — what was observed, at what point in the journey, with what confidence — so that the reported number is not a model output with the assumptions hidden. It is a record with the assumptions visible. The multi-touch rate in that record is the proven rate. What remains unobserved is labeled as such, not averaged away.

The result is a methodology that can answer the question — how do you know? — with something more than a reference to the model. That conversation is harder than presenting a 70% rate and hoping no one asks. But it is the one that holds up when a sophisticated buyer, a CFO, or an auditor looks closely. The other kind doesn't.

The Structural Argument in One Paragraph

Independent multi-touch attribution is more defensible than last-touch measurement — closer to consumer reality, built on deduplicated conversion counts, and transparent about where the proof ends. The limitations are real: walled gardens prevent cross-platform stitching, mobile environments sample rather than record, organic touchpoints have no upstream media to attribute, and offline conversion matching has a structural ceiling well below what most vendors market. Acknowledging those constraints is the methodology. Pretending they don’t exist — by inflating match rates through assumptions, probabilistic modeling, or definitional expansion — produces a number that looks better and means less. We show you what we can prove.

Related reading

This piece covers the multi-touch methodology specifically. For the parallel question of why the vendor matters as much as the methodology — data that flows into a targeting platform and data that stays in the measurement lane are two different things — see Choose a Lane: Measure or Target. For a close look at what offline match rates are actually measuring, see What Your Vendor’s Match Rate Is Actually Measuring.

Research Notes
  • Walled garden attribution gap: December 2024 IAB survey — 64% of US ad buyers planning significantly more focus on cross-platform measurement. (eMarketer / IAB)
  • Mobile unified measurement: Branch (2025) — only 8% of companies have a fully unified view of app marketing across channels. (Basis / Branch)
  • US offline households: ~6% of US households remain entirely offline. (Pew Research)
  • Consent accept-all rates: Global average accept-all rate approximately 31% (2024) — marketing/tracking tags are declined at substantially higher rates than measurement/analytics tags. (CookieYes)
  • MTA market adoption: 52% of marketers using multi-touch attribution in 2024, 57% calling it crucial. (Ruler Analytics)