The decision chain in current measurement discourse runs like this. Attribution reports 3.8x ROAS on paid social. Incrementality reports 1.1x on the same channel, same spend, same window. Treat the larger number as inflated and the smaller as real. Shift budgets accordingly. Cut the channels attribution credited. Push the channels incrementality credited. The shortcut is common. It also produces worse decisions than either number would have supported alone. The real insight sits in the difference between the two, not in choosing between them.

What the Two Numbers Measure

Attribution distributes credit for conversions that happened. It answers the question: across the touchpoints in a consumer's journey, how should credit be assigned? The output describes which channels contributed to outcomes that did occur.

Incrementality measures behavioral lift against a counterfactual. It answers the question: of the conversions that happened, how many would have happened without the spend? The output estimates the incremental conversions produced by the spend.

Different questions, different numbers. A 3.8x attribution result and a 1.1x incrementality result do not conflict, because they are not describing the same phenomenon. Comparing them is the measurement equivalent of comparing the circumference of a circle to the area of a square.
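The distinction can be made concrete with a toy calculation. The figures below are hypothetical, chosen only to reproduce the 3.8x and 1.1x from the opening example; the point is that the two numbers come from different formulas over different inputs, not from disagreement about the same quantity:

```python
# Illustrative only: hypothetical figures, not real campaign data.
spend = 100_000.0

# Attribution distributes credit for conversions that happened.
attributed_revenue = 380_000.0  # revenue the attribution model assigned to this channel
attribution_roas = attributed_revenue / spend

# Incrementality estimates lift against a counterfactual.
treatment_revenue = 400_000.0       # revenue observed with the spend running
counterfactual_revenue = 290_000.0  # holdout-based estimate of revenue without the spend
incremental_revenue = treatment_revenue - counterfactual_revenue
incremental_roas = incremental_revenue / spend

print(f"attribution ROAS: {attribution_roas:.1f}x")  # credit assignment
print(f"incremental ROAS: {incremental_roas:.1f}x")  # estimated lift
```

Both numbers divide by the same spend, but the numerators answer different questions: one is assigned revenue, the other is estimated lift over a counterfactual.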

The Difference Is the Signal

The difference between the two numbers is where the information sits. The gap can be composed of many things: attribution double-counting across platforms, measurement window mismatches, selection bias in the holdout, differences in what each method treats as a conversion, or the channel taking credit for conversions that would have happened anyway. The insight comes from evaluating those possibilities in isolated contexts. Neither number alone answers the allocation question. The relationship between them, and the work of decomposing that relationship, is the signal.

"Measurement in isolation is a smaller product than the service of making sense of it."

The ROAS Problem

A deeper problem sits underneath the comparison. Both numbers are ROAS: revenue divided by ad spend. The formula asks whether advertising returns itself in revenue. The actual question facing the advertiser is whether advertising produces a benefit, which requires comparing incremental profit to spend, not revenue to spend. A 3x ROAS on a product with a 20 percent gross margin means the advertiser spent a dollar to generate sixty cents of gross profit, a loss of forty cents per dollar before any other operating cost. ROAS is useful as shorthand. Used as evidence that advertising created benefit, it can prove the opposite.
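The margin arithmetic is worth working through explicitly. A minimal sketch using the figures from the paragraph above:

```python
# Hypothetical margin arithmetic: ROAS measures revenue, not benefit.
spend = 1.00          # one dollar of ad spend
roas = 3.0            # reported revenue-over-spend
gross_margin = 0.20   # 20 percent gross margin

revenue = roas * spend                 # $3.00 of attributed revenue
gross_profit = revenue * gross_margin  # $0.60 of gross profit
net = gross_profit - spend             # -$0.40 per dollar, before other operating costs

# The break-even ROAS on gross profit is the reciprocal of the margin:
break_even_roas = 1 / gross_margin     # 5.0x needed just to break even
```

At a 20 percent margin, any ROAS below 5x loses gross profit on every dollar of spend, which is why a 3x headline can describe a losing program.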

ROAS is also constructed from attribution choices. Which revenue. Attributed how. Over what window. With what deduplication. The 3.8x did not fall from the sky. It was produced by a model that made specific decisions about how to assign revenue to touchpoints. The 1.1x was produced by a different model that made different decisions about what to count as incremental. Comparing the two as if they were observations rather than constructions compounds the original category error.

On Causal Measurement

Incrementality is often framed as causal measurement in a stronger sense than it can deliver. A well-designed incrementality test can produce a statistical estimate of causal effect, in the sense that a change in spend under randomization is associated with a change in outcome. What it cannot produce is an understanding of why. The mechanism of decision, what actually happens between a consumer's exposure and their action, is not accessible to any measurement methodology currently in use. Claims of "causal measurement" that imply understanding of the driver go further than the methodology supports.

The Structural Limits

Incrementality also has structural limits. The methodology requires a holdout, and channels where a clean holdout is difficult face additional layers of uncertainty. Television is the paradigm case. Linear and connected TV both rely on signals that infer attention rather than verify it. Probabilistic exposure matching, automated content recognition, device-level impression delivery: all of these detect presence, not attention. Incrementality run on TV therefore layers attention-inference assumptions on top of the causality assumptions every incrementality test already carries. A framework that elevates incrementality as the single correct measurement implicitly dismisses the dimensions of uncertainty that make TV measurement hard.

Within the channels where a holdout is possible, there is still an isolation fallacy. Holdout testing asks what happens when a channel is removed from the mix, with the assumption that everything else stays constant. Nothing does. The holdout group continues to encounter organic search, direct mail, competitive advertising, and brand awareness built from prior spend. Media channels also do not operate independently. TV drives branded search. Social amplifies awareness. Removing one channel measures the residual value of the whole system minus that component, with the interaction effects invisible. The question a holdout answers is: what happens if we remove this channel? The question an advertiser actually needs answered is different: what is this channel contributing to outcomes within the mix as it runs?
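The isolation fallacy can be sketched with a toy additive model. The numbers and the single interaction term are hypothetical; a real mix has many interactions, none directly observable:

```python
# Toy decomposition, assuming additive main effects plus one interaction term.
search_alone = 100  # conversions search would drive with no TV running
tv_alone = 80       # conversions TV would drive with no search running
interaction = 40    # e.g. TV-driven branded search, present only when both run

full_mix = search_alone + tv_alone + interaction  # what the advertiser observes

# A TV holdout observes the mix without TV: the interaction disappears too.
without_tv = search_alone
measured_tv_lift = full_mix - without_tv  # tv_alone + interaction, bundled together
```

The holdout's "TV lift" is the channel's standalone effect plus every interaction that requires it, with no way to tell the two apart from the test alone.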

Incrementality results are also extrapolations, not observations. The methodology derives an estimate of lift from a sample, whether a holdout group, a geo test, or a matched market, and projects that estimate to the full program. The smaller the sample relative to the program, the larger the extrapolation error. Marketing cannot be tested incrementally as a whole; advertising alone even less so, because advertising interacts with brand, product, pricing, PR, and market conditions in ways no holdout can isolate.
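The sample-size point can be sketched with standard two-sample error arithmetic. The rates and counts below are hypothetical, and real geo or matched-market designs need variance estimates tied to the design itself, but the scaling holds: a smaller holdout means a wider error bar on the lift estimate, and that error bar is projected along with the estimate:

```python
import math

def lift_standard_error(p_treat, n_treat, p_hold, n_hold):
    """Standard error of the difference between two conversion rates."""
    var = p_treat * (1 - p_treat) / n_treat + p_hold * (1 - p_hold) / n_hold
    return math.sqrt(var)

# Same treatment group, two hypothetical holdout sizes.
se_small_holdout = lift_standard_error(0.021, 50_000, 0.019, 5_000)
se_large_holdout = lift_standard_error(0.021, 50_000, 0.019, 50_000)

# The smaller holdout carries the larger standard error, and projecting
# its lift estimate to a full program scales the uncertainty with it.
```

Multiplying a noisy per-user lift by a program of millions multiplies the noise along with the signal, which is why small-sample tests extrapolated to whole programs overstate their own precision.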

One Truth Is a Tell

Any vendor telling you their methodology produces a single truth is marketing, not measuring. Incrementality has multiple methodological paths, each producing different numbers on the same data. Attribution has multiple paths. MMM has multiple paths. The honest frame is that measurement is a family of methodologies, each producing partial answers, and a rigorous program understands the relationship between them. The dishonest frame is that one method is correct and the others wrong.

What You Can Actually Buy

"You cannot buy conversions. You can buy advertising."

Everything downstream is consumer behavior responding to many inputs, including your spend. All measurement is inference about which inputs contributed to which outputs. Honest methodologies disclose their limits. Dishonest ones market their method as the answer.

Three Habits for Rigorous Measurement

Three habits separate rigorous measurement from its marketing.

  1. Choose a lane. Declare which question you are answering, whether credit distribution, incremental lift, or media mix decomposition, and stay inside it rather than hopping between frames.
  2. Show your work. Expose the methodological choices that produced the number, including the uncertain ones, so a serious reader can evaluate the output on its merits.
  3. Say "I don't know" when you do not. Acknowledge where the methodology runs out of confidence rather than covering uncertainty with a single confident number.

A program built on those habits produces numbers a CFO can interrogate. A program that claims to have found the one truth produces numbers that collapse the moment the framing is examined.

C3 Metrics Perspective

C3's measurement program is built around the service of decomposing the gap between methodologies, not around a single authoritative number. Multi-touch attribution, incrementality signal, MMM context, and conversion architecture analysis each answer different questions, and the rigorous program understands where they agree and where they diverge. The Signal Manifest™ and Attribution Manifest make the methodological choices visible at every step, so the output can be interrogated rather than accepted on faith.

A companion piece in the Data Lab addresses a related but distinct question: whether an incrementality result is strong enough to be signal rather than noise. See X-Factor: 1 — Incrementality: 0.