When a measurement vendor tells you they achieve an 80% offline match rate, that number is often technically defensible. It is also probably not measuring what you think it is.

The number is real. The question is: match rate of what, exactly?

The measurement industry uses “offline match rate” to describe at least three distinct things. They require different infrastructure, prove different claims, and have structurally different ceilings. Presenting them under a common label, which is standard in vendor materials, produces numbers that look impressive while obscuring what’s actually being accomplished. Understanding the distinction is the foundation of any honest conversation about offline measurement.

Three Different Things Called “Match Rate”

1. Online-to-CRM matching
   What’s being matched: Device IDs, hashed emails, or consent-based identifiers → client CRM records
   Typical ceiling: 40–70%
   What it proves: Your digital audience overlaps with your customer base. Does not connect any digital journey to an offline conversion event.

2. Platform offline conversion import
   What’s being matched: Purchase file upload → platform user graph (GCLID, hashed email, device signal)
   Typical ceiling: 40–50%
   What it proves: Some offline buyers were also exposed to that platform’s ads. Self-reported by the platform being measured. No cross-channel sequence; no independent verification.

3. Independent offline attribution
   What’s being matched: Physical conversion event → individually attributed digital journey, across all channels, by an independent party
   Typical ceiling: 4–20%
   What it proves: A specific consumer’s verified digital journey led to a verified offline conversion. The full measurement the first two imply, but rarely deliver.
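The difference between these definitions is easiest to see in how each rate is computed. A minimal sketch of the first category, online-to-CRM matching, under the common convention of joining SHA-256 hashes of normalized email addresses (all sample data here is hypothetical):

```python
import hashlib

def hashed(email: str) -> str:
    # Normalize (lowercase, strip whitespace), then SHA-256: the usual
    # convention for hashed-email matching.
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

# Hypothetical identifiers collected online vs. the client's CRM file.
online_ids = {hashed(e) for e in ["a@x.com", "b@x.com", "c@x.com", "d@x.com", "e@x.com"]}
crm_ids    = {hashed(e) for e in ["a@x.com", "b@x.com", "c@x.com", "z@x.com"]}

overlap = online_ids & crm_ids
match_rate = len(overlap) / len(crm_ids)
print(f"CRM match rate: {match_rate:.0%}")  # 3 of 4 CRM records matched: 75%
```

Note what the computation proves: set overlap between two identifier pools. No conversion event, no journey, and no sequence appears anywhere in it, which is exactly why this rate can run so much higher than independent attribution.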

The third measurement is what multi-touch attribution methodology is actually attempting. The 4–20% ceiling is not a failure — it reflects the structural reality of matching across shared IP addresses, fragmented device environments, and an offline transaction layer with no inherent digital signal. The marketed 80% figures reflect the first two categories, which are solving easier problems. They’re often sold to imply the third.

Why the Ceiling Is Where It Is

The constraints on true independent offline attribution are worth stating explicitly, because any vendor claiming dramatically higher rates is either measuring something different or working from undisclosed assumptions.

Shared identity signals. Households share IP addresses. The same individual uses a home desktop, a work laptop, a mobile device, and public WiFi across the same day — each on a different network. The digital signal that arrives at a conversion event frequently can’t be resolved to a specific individual with the precision that offline attribution requires. It can be inferred probabilistically, which is a different claim entirely.

Device and network fragmentation. A consumer who saw a display ad on a home desktop, clicked a search ad on a work laptop, and converted in-store represents a three-device journey with no deterministic signal connecting those devices unless there’s a logged-in identity across all three. The majority of digital interactions don’t include one.
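The deterministic constraint can be made concrete. A sketch, with a hypothetical event log for the three-device journey described above, where a logged-in identity exists on only one device:

```python
# Hypothetical event log: one consumer, three devices. Deterministic
# linkage is only possible where a logged-in identity is present.
events = [
    {"device": "home-desktop", "channel": "display", "user_id": None},
    {"device": "work-laptop",  "channel": "search",  "user_id": "u123"},
    {"device": "in-store-pos", "channel": "offline", "user_id": None},
]

# Join only on shared user_id; everything else stays an orphaned fragment.
journeys: dict[str, list[dict]] = {}
orphans = []
for e in events:
    if e["user_id"] is not None:
        journeys.setdefault(e["user_id"], []).append(e)
    else:
        orphans.append(e)

print(len(journeys), "linkable journey(s);", len(orphans), "orphaned touchpoints")
# Without a logged-in identity on all three devices, neither the display
# ad nor the in-store purchase can be deterministically tied to u123.
```

Connecting the orphaned fragments anyway requires probabilistic inference, which is the different, weaker claim the previous section describes.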

Offline-only households. Approximately 6% of US households have no digital identity to match against. They buy things. That share of conversions cannot be connected to digital journeys by any vendor at any match rate — it puts a hard ceiling on what’s achievable.

No deterministic offline event signal. A point-of-sale transaction doesn’t emit a digital identifier. Connecting it to a digital journey requires either a loyalty program linking the purchase to a known identity, a file upload to a platform (which reintroduces the self-reporting problem), or probabilistic inference. The deterministic connection that “match rate” implies often doesn’t exist at the transaction level.
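A sketch of the loyalty-program path, the one case where a transaction does carry a known identity. All IDs and data below are hypothetical; the point is how unmatched transactions are counted rather than forced into a probabilistic match:

```python
# Hypothetical data: POS transactions carry a loyalty ID only when the
# shopper identified themselves; digital journeys are keyed on the same ID.
transactions = [
    {"txn": "t1", "loyalty_id": "L-9"},
    {"txn": "t2", "loyalty_id": None},   # anonymous cash sale
    {"txn": "t3", "loyalty_id": "L-4"},
    {"txn": "t4", "loyalty_id": "L-7"},  # loyalty member, no digital trail
]
digital_journeys = {"L-9": ["display", "search"], "L-4": ["email", "search"]}

attributed, unattributed = [], []
for t in transactions:
    lid = t["loyalty_id"]
    if lid is not None and lid in digital_journeys:
        attributed.append((t["txn"], digital_journeys[lid]))
    else:
        # Counted as a conversion without an attributed journey,
        # not extended probabilistically to inflate the rate.
        unattributed.append(t["txn"])

rate = len(attributed) / len(transactions)
print(f"Attributed: {len(attributed)}/{len(transactions)} ({rate:.0%})")
```

Even in this toy case, two of four transactions are deterministically attributable; real programs sit lower because loyalty coverage, identity carry-through, and cross-device linkage all take their own cut.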

The gap between a marketed 80% match rate and the 4–20% range for true independent attribution is not a gap between vendors. It’s a gap between measurements. The first two categories are genuinely useful. They’re just different from the third — and the distinction matters when you’re making budget decisions.

The Question to Ask

This is directly researchable. If your measurement vendor claims a high offline match rate, the one productive question is: a match rate of what, exactly?

The audit question

Ask your vendor: Is the match connecting offline conversion events to individually attributed digital journeys, independent of the platforms being measured? Or is it connecting CRM records to device IDs? Or matching a purchase file upload against a platform’s own user graph?

The first produces true independent attribution. The second and third produce higher rates because they’re solving structurally simpler problems. All three are useful — as long as you know which one you’re buying.

Vendors doing the first tend to describe the methodology in detail. The methodology is the product. Vendors doing the second or third tend to lead with the number. When a vendor leads with the rate, ask what the rate is measuring.

What C3 Measures and Why

C3’s offline attribution connects individual conversion events to individual attributed digital journeys, independent of the platforms being measured. That is the third category in the table above — the hardest version — and the match rate for this measurement in our programs falls within the 4–20% structural range, varying by program, market, and the proportion of conversions that carry individual identity signals through to the purchase event.

Every match rate figure is documented in the Attribution Manifest, labeled as proven attribution rather than modeled inference. Consumers whose offline conversions we cannot link to a digital journey are counted as conversions without an attributed digital journey — because that’s accurate. Forcing a match through probabilistic extension would inflate the number without improving the measurement.

Other vendors can attempt this measurement. Few report it the way it actually works. The match rate in our Attribution Manifest is the rate for the measurement that matters, documented as such.

The Broader Pattern

Offline match rate is one instance of a dynamic that runs throughout measurement vendor materials: a technically accurate number that describes a simpler problem, presented in a context that implies it proves a harder one. The number is real. The implication is the issue.

The remedy is the same in every case: ask what exactly is being matched, to what, using which identity signals, attributed by which party, and verified how. A methodology that can be explained precisely is a methodology that can be trusted. A number without a methodology is just a number.

Related reading

This piece covers the match rate question specifically. For the upstream methodological question — what it means to prove a journey vs. estimate one, and how the two differ across all channels — see Showing the Work: Proof, Methodology, and Disclosed Assumptions. For the question of whether your measurement vendor has a structural interest in the outcome, see The Measurement Companies That Forgot to Measure Themselves.

Research Notes
  • Platform offline match rates: Google and Meta offline conversion imports (GCLID-based, hashed email) achieve 40–50% in well-structured implementations. These are platform-reported figures matched against the platform’s own user graph — not independent attribution. Google Ads Help →
  • US offline-only households: ~6% of US households remain entirely offline with no digital identity to match. Pew Research →
  • Identity fragmentation: A 2025 Branch survey found only 8% of companies have a fully unified cross-channel view of app marketing performance. The fragmentation problem is endemic before offline conversion is introduced. Basis / Branch →
  • Self-reporting problem: Platform-reported offline match rates are produced by the platform attributing to itself. When the platform sets the matching rules and controls the verification, the reported figure reflects the platform’s methodology — not an independent measurement of it.