When a measurement vendor tells you they achieve an 80% offline match rate, that number is often technically defensible. It is also probably not measuring what you think it is.
The number is real. The question is: match rate of what, exactly?
The measurement industry uses “offline match rate” to describe at least three distinct things. They require different infrastructure, prove different claims, and have structurally different ceilings. Presenting them under a common label — which is standard in vendor materials — produces numbers that look impressive and obscure what’s actually being accomplished. Understanding the distinction is the foundation of any honest conversation about offline measurement.
## Three Different Things Called “Match Rate”
| What’s being matched | Typical ceiling | What it proves |
|---|---|---|
| Online-to-CRM matching: device IDs, hashed emails, or consent-based identifiers → client CRM records | 40–70% | Your digital audience overlaps with your customer base. Does not connect any digital journey to an offline conversion event. |
| Platform offline conversion import: purchase file upload → platform user graph (GCLID, hashed email, device signal) | 40–50% | Some offline buyers were also exposed to that platform’s ads. Self-reported by the platform being measured. No cross-channel sequence; no independent verification. |
| Independent offline attribution: physical conversion event → individually attributed digital journey, across all channels, by an independent party | 4–20% | A specific consumer’s verified digital journey led to a verified offline conversion. The full measurement the first two imply but rarely deliver. |
The third measurement is what multi-touch attribution methodology is actually attempting. The 4–20% ceiling is not a failure — it reflects the structural reality of matching across shared IP addresses, fragmented device environments, and an offline transaction layer with no inherent digital signal. The marketed 80% figures reflect the first two categories, which are solving easier problems. They’re often sold to imply the third.
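The first category is easy to make concrete. Here is a minimal sketch of online-to-CRM matching on normalized hashed emails; every record is hypothetical, and the only point is what the resulting percentage does and does not prove:

```python
import hashlib

def sha256_email(email: str) -> str:
    # Normalize before hashing (lowercase, strip whitespace) --
    # the common convention for hashed-email matching.
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

# Hypothetical CRM records and hypothetical online identifiers.
crm_emails = ["ann@example.com", "bob@example.com", "cei@example.com",
              "dee@example.com", "eve@example.com"]
online_hashes = {sha256_email(e) for e in
                 ["ann@example.com", "bob@example.com", "zed@example.com"]}

matched = sum(1 for e in crm_emails if sha256_email(e) in online_hashes)
match_rate = matched / len(crm_emails)
print(f"CRM match rate: {match_rate:.0%}")  # 2 of 5 records match -> 40%
```

Note that no conversion event appears anywhere in the computation. The rate proves audience overlap and nothing more, which is exactly the distinction the table draws.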
## Why the Ceiling Is Where It Is
The constraints on true independent offline attribution are worth stating explicitly, because any vendor claiming dramatically higher rates is either measuring something different or working from undisclosed assumptions.
Shared identity signals. Households share IP addresses. The same individual uses a home desktop, a work laptop, a mobile device, and public WiFi across the same day — each on a different network. The digital signal that arrives at a conversion event frequently can’t be resolved to a specific individual with the precision that offline attribution requires. It can be inferred probabilistically, which is a different claim entirely.
Device and network fragmentation. A consumer who saw a display ad on a home desktop, clicked a search ad on a work laptop, and converted in-store represents a three-device journey with no deterministic signal connecting those devices unless there’s a logged-in identity across all three. The majority of digital interactions don’t include one.
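The dependence on a logged-in identity can be shown in a toy example. All events and IDs below are invented; the sketch only illustrates that deterministic linkage keeps exactly the touchpoints tied to one authenticated identity and drops the rest:

```python
# Hypothetical touchpoint events: user_id is present only when the
# session was authenticated (logged in).
events = [
    {"device": "home_desktop", "channel": "display", "user_id": None},
    {"device": "work_laptop",  "channel": "search",  "user_id": "u123"},
    {"device": "in_store_pos", "channel": "offline", "user_id": "u123"},
]

def deterministic_journey(events):
    # Deterministic linkage: keep only events tied to a single
    # authenticated identity. Everything else requires probabilistic
    # inference, which is a weaker claim.
    ids = {e["user_id"] for e in events if e["user_id"]}
    if len(ids) != 1:
        return None
    uid = ids.pop()
    return [e for e in events if e["user_id"] == uid]

journey = deterministic_journey(events)
print(len(journey))  # the unauthenticated display touch drops out: 2 of 3 events link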
Offline-only households. Approximately 6% of US households have no digital identity to match against. They buy things. That share of conversions cannot be connected to digital journeys by any vendor at any match rate — it’s a hard floor on what’s achievable.
No deterministic offline event signal. A point-of-sale transaction doesn’t emit a digital identifier. Connecting it to a digital journey requires either a loyalty program linking the purchase to a known identity, a file upload to a platform (which reintroduces the self-reporting problem), or probabilistic inference. The deterministic connection that “match rate” implies often doesn’t exist at the transaction level.
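The loyalty-program path can be sketched in a few lines. The data is hypothetical; the point is where the losses occur: transactions without a loyalty scan, and loyalty members with no resolvable digital identity.

```python
# Hypothetical POS transactions and a hypothetical loyalty-to-digital map.
transactions = [
    {"txn": 1, "loyalty_id": "L9"},
    {"txn": 2, "loyalty_id": None},   # cash sale, no loyalty scan
    {"txn": 3, "loyalty_id": "L4"},
    {"txn": 4, "loyalty_id": "L7"},
]
# L9 resolves to a digital journey; L4 enrolled offline only; L7 is unknown.
digital_identity = {"L9": "journey-a", "L4": None}

attributed = [
    t for t in transactions
    if t["loyalty_id"] and digital_identity.get(t["loyalty_id"])
]
print(f"{len(attributed)}/{len(transactions)} transactions link to a digital journey")
```

Each filter in that comprehension is a real-world loss point, and the losses compound, which is why the deterministic rate at the transaction level sits so far below the marketed figures.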
The gap between a marketed 80% match rate and the 4–20% range for true independent attribution is not a gap between vendors. It’s a gap between measurements. The first two categories are genuinely useful. They’re just different from the third — and the distinction matters when you’re making budget decisions.
## The Question to Ask
This is directly researchable. If your measurement vendor claims a high offline match rate, the one productive question is: a match rate of what, exactly?
Ask your vendor: Is the match connecting offline conversion events to individually attributed digital journeys, independent of the platforms being measured? Or is it connecting CRM records to device IDs? Or matching a purchase file upload against a platform’s own user graph?
The first produces true independent attribution. The second and third produce higher rates because they’re solving structurally simpler problems. All three are useful — as long as you know which one you’re buying.
Vendors doing the first tend to describe the methodology in detail. The methodology is the product. Vendors doing the second or third tend to lead with the number. When a vendor leads with the rate, ask what the rate is measuring.
## The Conversion Architecture: Sequencing by Confidence
Match rate clarity is one part of a larger question: how should different types of conversion data be assembled into a coherent attribution model?
Every measurement program has a conversion architecture — whether it was designed or not. Most inherit theirs by default: count digital completions, import platform conversion files, combine and report. The architecture accumulated rather than being decided. The problem with accidental architecture is that each conversion type carries a different level of certainty. Blending them without sequencing produces a number derived from data but undefined by methodology.
Digital online conversions carry the highest certainty — directly observed at the session level, tied to individual journeys through first-party and authenticated data, deterministic in their connection to specific touchpoints. These form the foundation.
Independently matched offline conversions — phone applications, dealer visits, branch transactions, policy applications — are real outcomes and belong in the model. They enter at the right confidence level: 4–20% match rate for true independent attribution, labeled as proven attribution, with gaps disclosed rather than papered over. Consumers whose offline conversions cannot be matched appear as conversions without an attributed digital journey. That is accurate. Forcing a match inflates the number without improving the measurement.
Platform self-reported conversions are variable in quality. Some are sampled. Some double-count — Meta’s 7-day click window and Google’s 30-day window claim many of the same conversions simultaneously. They belong in the model, but only after deduplication and with appropriate confidence flags applied. The deduplication that matters operates holistically across all three types — not within each category separately.
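A minimal sketch of that holistic deduplication, assuming a shared order identifier is available across platform exports (all records below are invented):

```python
from datetime import datetime

# Hypothetical platform-reported conversions. Meta's 7-day click window
# and Google's 30-day window can both claim the same purchase.
claims = [
    {"platform": "meta",   "order_id": "o-100", "ts": datetime(2025, 3, 1)},
    {"platform": "google", "order_id": "o-100", "ts": datetime(2025, 3, 1)},
    {"platform": "google", "order_id": "o-101", "ts": datetime(2025, 3, 2)},
]

def dedupe(claims):
    # Holistic dedup: one conversion per order, regardless of which
    # platform claims it. Which claim to keep is itself a methodology
    # decision; this sketch keeps the first seen.
    seen, kept = set(), []
    for c in sorted(claims, key=lambda c: c["ts"]):
        if c["order_id"] not in seen:
            seen.add(c["order_id"])
            kept.append(c)
    return kept

print(len(dedupe(claims)))  # 3 platform claims -> 2 unique conversions
```

In practice the join key and the keep-or-drop rule are the methodology; a clean shared order ID is the optimistic case, and the dedup must run across all conversion types at once, not within each platform's file.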
## The Proxy Insight
In many programs, a digital conversion serves as a strong proxy for a downstream offline conversion. A completed online insurance quote is a strong signal for a phone-completed policy application. An online dealer inquiry is a strong signal for a vehicle purchase. When the independent offline match rate for the downstream event is low — 6%, 8% — the digital proxy often provides more complete and more reliable coverage of the same underlying consumer behavior.
The analytically rigorous approach runs both models and shows the difference: the base model using digital proxies, and the extended model including directly matched offline conversions. The delta between them is informative. Where it is small, the proxy is adequate for most decisions. Where it is large, the offline matching is revealing something the digital signal missed.
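Running both models and inspecting the delta can be as simple as the following sketch, using hypothetical channel-level counts:

```python
# Hypothetical counts for one program: digital proxy conversions
# (e.g. completed online quotes) vs. independently matched offline ones.
base_model = {"search": 120, "display": 40, "social": 60}   # proxy conversions
offline_matched = {"search": 9, "display": 1, "social": 4}  # matched offline

# Extended model: base plus directly matched offline conversions.
extended = {ch: base_model[ch] + offline_matched.get(ch, 0) for ch in base_model}

for ch in base_model:
    delta = extended[ch] / base_model[ch] - 1
    print(f"{ch}: base={base_model[ch]} extended={extended[ch]} delta={delta:+.1%}")
# A small delta says the proxy covers the behavior; a large one says
# offline matching is surfacing journeys the digital signal missed.
```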
## The Answer Varies. One Approach Is Always Wrong.
The conversion architecture question has different answers in every industry and for every program within an industry. Insurance has agent-bound policy applications and direct-mail-triggered calls. Automotive has dealer visits and F&I transactions. Financial services has branch openings and advisory relationships. Pharma DTC has prescription fills and patient enrollment. Retail has POS transactions and loyalty redemptions.
Each requires its own matching mechanism, confidence level, and deduplication approach. A program that applies a uniform conversion methodology across all of them is not solving the architecture problem — it is bypassing it. The only wrong answer is the universal one.
## What C3 Measures and Why
C3’s offline attribution connects individual conversion events to individual attributed digital journeys, independent of the platforms being measured. That is the third category in the table above — the hardest version — and the match rate for this measurement in our programs falls within the 4–20% structural range, varying by program, market, and the proportion of conversions that carry individual identity signals through to the purchase event.
Every match rate figure is documented in the Attribution Manifest, labeled as proven attribution rather than modeled inference. Consumers whose offline conversions we cannot link to a digital journey are counted as conversions without an attributed digital journey — because that’s accurate. Forcing a match through probabilistic extension would inflate the number without improving the measurement.
Other vendors can attempt this measurement; few report it the way it actually works. The match rate in our Attribution Manifest is the rate for the measurement that matters, documented as such.
## The Broader Pattern
Offline match rate is one instance of a dynamic that runs throughout measurement vendor materials: a technically accurate number that describes a simpler problem, presented in a context that implies it proves a harder one. The number is real. The implication is the issue.
The remedy is the same in every case: ask what exactly is being matched, to what, using which identity signals, attributed by which party, and verified how. A methodology that can be explained precisely is a methodology that can be trusted. A number without a methodology is just a number.
This piece covers the match rate question specifically. For the upstream methodological question — what it means to prove a journey vs. estimate one, and how the two differ across all channels — see Showing the Work: Proof, Methodology, and Disclosed Assumptions. For the question of whether your measurement vendor has a structural interest in the outcome, see The Measurement Companies That Forgot to Measure Themselves.
## Sources

- Platform offline match rates: Google and Meta offline conversion imports (GCLID-based, hashed email) achieve 40–50% in well-structured implementations. These are platform-reported figures matched against the platform’s own user graph — not independent attribution. Google Ads Help →
- US offline-only households: ~6% of US households remain entirely offline with no digital identity to match. Pew Research →
- Identity fragmentation: A 2025 Branch survey found only 8% of companies have a fully unified cross-channel view of app marketing performance. The fragmentation problem is endemic before offline conversion is introduced. Basis / Branch →
- Self-reporting problem: Platform-reported offline match rates are produced by the platform attributing to itself. When the platform sets the matching rules and controls the verification, the reported figure reflects the platform’s methodology — not an independent measurement of it.