Reference Library

Checklists and frameworks for measurement decisions.

Structured reference documents for the questions senior measurement people get asked: how to evaluate a vendor, how to verify independence, when to use which methodology, how to audit programmatic delivery, where to recover search allocation efficiency, how to sequence conversion data by confidence, and how to read AI search visibility. Built for the conversation you take into a CFO or operating partner meeting.

Each reference document is a 3-page PDF: 12 questions plus an at-a-glance summary on the final page. Bring them to any vendor conversation, internal review, or board discussion — including ours.

Independence · 3 pages · 12 questions

Verifying Measurement Vendor Independence

A 4-part audit checklist.

Independence is a structural property, not a marketing claim. This document covers data origin, environment control, consent and data use, and economic independence. The verifiable mechanism: how the vendor's tag is classified by the Consent Management Platform on your own site — a check that takes under a minute and reveals what the vendor's brochure cannot.

Answers questions like: How do I verify a measurement vendor is actually independent? What does the CMP category say about a vendor's posture? Where does the vendor's data actually go?

Methodology · 3 pages · 12 questions

Choosing an Attribution Methodology

A decision framework for MMM, MTA, and incrementality.

Each methodology answers a different question and assumes a different set of conditions. The decision framework surfaces methodology fit as a deliberate choice rather than a default. The page-3 grid maps eight decision criteria across MMM, MTA, incrementality, and combined approaches. When methodologies disagree, the disagreement is information.

Answers questions like: When do I use MMM vs MTA vs incrementality? What's the X-Factor in my measurement? How do I read the gap between attribution and incrementality reads on the same channel?
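
The gap-reading question has a simple arithmetic core. A minimal sketch with hypothetical figures — the invented numbers matter only for the ratio they produce:

```python
# One channel, two reads; both figures are hypothetical.
attributed_conversions = 1_200   # what MTA credits to the channel
incremental_conversions = 480    # what a holdout test says the channel caused

# The ratio of the two reads: how much of the attributed credit survives
# a causal read. A value well below 1 is information, not a bug.
multiplier = incremental_conversions / attributed_conversions
```

Here 40% of the attributed volume reads as causal; the rest would likely have converted anyway. The disagreement is the finding.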

Programmatic Audit · 3 pages · 12 questions

Auditing Programmatic Delivery Quality

A 12-question framework + four-signal pattern reference.

Platform-reported invalid traffic metrics describe what the platform's quality systems chose to surface. Independent delivery-quality auditing reads the raw signal — impression logs, beacon fires, conversion records — for patterns that platform reporting does not flag. Includes the four-signal pattern C3 looks for in client engagements: viewthrough beacon ratio, impression-spike timing, peer-volume outliers, and CPM-weighted fraud cost.

Answers questions like: How do I audit programmatic delivery quality independently? What patterns indicate fraud the DSP missed? How do I size dollar-cost fraud vs. rate-only fraud?
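
The four signals above can be sketched in a few lines of arithmetic. Every field name, number, and threshold here is an invented assumption, not C3's production logic:

```python
from statistics import median

# Hypothetical hourly impression-log rows; the schema and numbers are invented.
impressions = [
    {"hour": 0, "count": 120, "cpm": 4.50},
    {"hour": 1, "count": 115, "cpm": 4.50},
    {"hour": 2, "count": 980, "cpm": 4.50},  # anomalous spike
    {"hour": 3, "count": 130, "cpm": 4.50},
    {"hour": 4, "count": 118, "cpm": 4.50},
]
viewthrough_beacons = 880  # beacon fires recorded without a matching view event
total = sum(r["count"] for r in impressions)

# Signal 1: viewthrough beacon ratio -- beacons fired per impression served.
beacon_ratio = viewthrough_beacons / total

# Signal 2: impression-spike timing -- hours far above the typical hourly volume.
baseline = median(r["count"] for r in impressions)
spike_hours = [r["hour"] for r in impressions if r["count"] > 3 * baseline]

# Signal 3: peer-volume outliers -- placements far outside their peer group.
peer_volumes = {"placement_a": 1_463, "placement_b": 1_510, "placement_c": 9_800}
peer_baseline = median(peer_volumes.values())
outliers = [p for p, v in peer_volumes.items() if v > 3 * peer_baseline]

# Signal 4: CPM-weighted fraud cost -- price the flagged volume in dollars,
# rather than reporting an invalid-traffic rate alone.
fraud_cost = sum(
    r["count"] / 1000 * r["cpm"] for r in impressions if r["hour"] in spike_hours
)
```

The median-based threshold is one robust choice among many; the point is that each signal is computable from raw logs without the platform's quality systems in the loop.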

Search Allocation · 3 pages · 12 questions

Cross-Platform Search Allocation

A methodology framework + reference of typical findings.

Each search platform reports conversions on its own attribution basis. The cost-per-conversion figures are internally coherent but structurally not comparable across platforms. This framework describes what a credible cross-platform allocation methodology requires, plus a reference grid of typical findings: independently attributed CPA differentials of 50–80%+, same-spend efficiency recovery of 6–23%, and day-of-week CPC differentials of up to 140%+.

Answers questions like: Is my search spend allocated efficiently across Google and Bing? How big is the cross-platform allocation gap typically? How do I act on the finding without overcorrecting?
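
A minimal sketch of the efficiency-recovery arithmetic, with hypothetical spend figures and a locally linear response to spend — an assumption any real reallocation must validate before acting on:

```python
# Hypothetical per-platform reads, measured on one independent attribution basis.
platforms = {
    "google": {"spend": 80_000, "conversions": 1_600},  # CPA $50.00
    "bing":   {"spend": 20_000, "conversions": 650},    # CPA ~$30.77
}

cpa = {name: d["spend"] / d["conversions"] for name, d in platforms.items()}
baseline = sum(d["conversions"] for d in platforms.values())

# Shift 15% of total spend from the high-CPA to the low-CPA platform.
shift = 0.15 * sum(d["spend"] for d in platforms.values())
high, low = max(cpa, key=cpa.get), min(cpa, key=cpa.get)
reallocated = baseline - shift / cpa[high] + shift / cpa[low]

# Same-spend efficiency recovery: extra conversions as a share of baseline.
recovery = (reallocated - baseline) / baseline
```

With these invented numbers the CPA differential is about 62% and the same-spend recovery about 8% — inside the typical-findings bands above, but the bands come from engagements, not from this toy arithmetic.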

Conversion Architecture · 3 pages · 12 questions

Conversion Architecture

Sequencing online and offline conversions in attribution.

Conversion data arrives from several sources with different confidence levels. Digital deterministic conversions sit at the foundation; independent offline attribution carries a structural ceiling of 4–20%; online-to-CRM matches sit in the middle; platform offline imports and modeled inference fill out the lower tiers. This framework sequences the tiers by confidence and surfaces the structural ceilings every measurement vendor's match-rate claims should disclose.

Answers questions like: What does my vendor's match rate actually measure? How should online and offline conversions be sequenced? What's the structural ceiling on true independent offline attribution?
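
Mechanically, sequencing by confidence means ranking sources and keeping the highest-confidence record when the same conversion arrives from several feeds. A sketch — the tier order paraphrases the document, and the helper and field names are hypothetical:

```python
# Rank: lower number = higher confidence. Ordering is illustrative.
TIER_RANK = {
    "digital deterministic": 1,
    "independent offline attribution": 2,  # 4-20% structural coverage ceiling
    "online-to-CRM matched": 3,
    "platform offline import": 4,
    "modeled inference": 5,
}

def dedupe_by_confidence(records):
    """Keep one record per order_id: the one from the highest-confidence tier."""
    best = {}
    for r in records:
        key = r["order_id"]
        if key not in best or TIER_RANK[r["source"]] < TIER_RANK[best[key]["source"]]:
            best[key] = r
    return list(best.values())

feeds = [
    {"order_id": "A-100", "source": "modeled inference"},
    {"order_id": "A-100", "source": "digital deterministic"},
    {"order_id": "B-200", "source": "platform offline import"},
]
kept = dedupe_by_confidence(feeds)
# A-100 keeps its deterministic record; B-200 keeps its only record.
```

The structural ceilings live outside this logic: a tier's rank says nothing about its coverage, which is exactly why match-rate claims need the ceiling disclosed alongside them.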

AI Measurement · 3 pages · 12 questions

AI Measurement Surface Inventory

A two-layer reference for tracking AI search in attribution.

AI search surfaces show up in measurement at two distinct layers: click-through traffic captured at the site tag, and citation visibility captured by publisher-side tools. Standard reporting covers parts of each layer, with reliability that varies by surface. The page-3 inventory maps current AI surfaces — ChatGPT, Copilot, Bing AI summaries, Gemini, Perplexity, Claude, Apple Intelligence — against the reporting access available for each.

Answers questions like: Where do AI search referrals show up in my reporting? What does the Bing AI Performance Report cover that Google Search Console doesn't? How do I baseline AI citation visibility?
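
The click-through layer of that inventory is, mechanically, referrer classification at the site tag. A minimal sketch — the hostname list is an assumption to validate against live tag data, since AI surfaces change how (and whether) they pass referrers:

```python
from urllib.parse import urlparse

# Hostname-to-surface map is an assumption, not a maintained registry.
AI_SURFACES = {
    "chatgpt.com": "ChatGPT",
    "copilot.microsoft.com": "Copilot",
    "gemini.google.com": "Gemini",
    "perplexity.ai": "Perplexity",
    "claude.ai": "Claude",
}

def classify_referrer(url: str) -> str:
    """Map a referrer URL to an AI surface, or 'non-AI' if unrecognized."""
    host = (urlparse(url).hostname or "").removeprefix("www.")
    return AI_SURFACES.get(host, "non-AI")

classify_referrer("https://chatgpt.com/c/123")        # "ChatGPT"
classify_referrer("https://www.perplexity.ai/search") # "Perplexity"
classify_referrer("https://www.google.com/")          # "non-AI"
```

This only sees the click-through layer; citation visibility without a click never reaches the tag, which is why the inventory treats the two layers separately.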

Want a conversation about your program?

The reference documents describe the questions; a direct conversation with our team applies them to your specific program, spend, and channel mix. No deck. The relevant document above is the agenda.

Talk this through →