Tariffs get announced over a weekend. A geopolitical conflict reshapes energy prices and consumer confidence. An economic shock changes purchase patterns in days, not quarters. These aren't tail events anymore — they're the operating environment. Markets move fast, and they move in ways that don't follow historical playbooks.

For advertisers, disruption creates an immediate measurement problem. The models that guided budget decisions last quarter may now be describing a world that no longer exists. And different methodologies fail in different ways when real-world conditions break the assumptions they're built on. Understanding how each methodology fails under disruption, and why one doesn't, is among the most important questions advertisers overlook when evaluating measurement partners.

The Methodology Dependency Problem

Every measurement methodology has a critical dependency. Marketing Mix Modeling (MMM) depends on historical patterns remaining representative. Incrementality testing depends on stable, isolated control groups. Multi-Touch Attribution (MTA) depends on neither; it observes what's actually happening right now. That distinction becomes decisive exactly when you need your measurement most: during disruption.

Why MMM Breaks Under Disruption

Marketing Mix Modeling is a statistical method built on historical time series data. It works by finding patterns in how spend levels across channels have historically correlated with business outcomes — accounting for seasonality, competitive activity, macroeconomic factors, and promotional calendars. The model's value depends entirely on the assumption that the future will behave enough like the past to make historical relationships meaningful.
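
To make that dependency concrete, here's a minimal sketch of the regression at the heart of an MMM. Everything in it is hypothetical (the channels, the spend figures, the coefficients), and production models add adstock, saturation curves, seasonality, and macroeconomic controls on top of this core.

```python
# Minimal illustrative MMM core: regress weekly revenue on channel spend.
# All numbers are simulated; real MMMs add adstock, saturation, seasonality.
import numpy as np

rng = np.random.default_rng(42)
weeks = 104  # two years of weekly history: the model's entire worldview

# Simulated historical spend (columns: search, social, tv), in $000s
spend = rng.uniform(50, 200, size=(weeks, 3))

# The "true" historical channel effects the regression tries to recover
true_coefs = np.array([2.0, 1.2, 0.6])
revenue = 500.0 + spend @ true_coefs + rng.normal(0, 25, weeks)

# Fit ordinary least squares with an intercept
X = np.column_stack([np.ones(weeks), spend])
coefs, *_ = np.linalg.lstsq(X, revenue, rcond=None)
print("Estimated channel effects:", coefs[1:].round(2))

# The fragility: every recommendation extrapolates those historical
# coefficients. If a shock changes how spend converts to revenue, the
# model keeps projecting the old relationship until enough new, stable
# data accrues to refit it.
next_week_spend = np.array([120.0, 90.0, 150.0])
forecast = coefs[0] + next_week_spend @ coefs[1:]
print(f"Forecast under historical assumptions: {forecast:.0f}")
```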

Any significant market disruption breaks that assumption. When tariffs shift category costs overnight, when geopolitical instability changes consumer sentiment in ways that have no historical analog, when a new technology reshapes how consumers research and purchase — MMM's historical relationships stop describing the current world. The model is still running, the reports still arrive, but the underlying patterns being extrapolated no longer exist.

Rebuilding an MMM requires 12 to 18 months of new, stable data to re-establish valid baselines. In a disruption environment, that means operating without reliable strategic budget guidance precisely when you need it most.

Why Incrementality Loses Its Control Groups

Incrementality testing measures the true causal lift of a campaign by comparing an exposed group to an unexposed control group. The logic is clean: if the two groups are otherwise equivalent, the difference in outcomes can be attributed to the campaign.
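
The arithmetic behind that comparison is simple. Here's a minimal sketch with hypothetical numbers, including the assumption the subtraction quietly relies on:

```python
# Illustrative incrementality lift calculation (hypothetical numbers).
# Lift is only causal if exposed and control differ ONLY by exposure.

exposed_users, exposed_conversions = 100_000, 2_400
control_users, control_conversions = 100_000, 2_000

exposed_rate = exposed_conversions / exposed_users   # 2.4%
control_rate = control_conversions / control_users   # 2.0%

incremental_rate = exposed_rate - control_rate
relative_lift = incremental_rate / control_rate

print(f"Incremental conversion rate: {incremental_rate:.2%}")  # 0.40%
print(f"Relative lift: {relative_lift:.1%}")                   # 20.0%

# The hidden assumption: control_rate stands in for "what the exposed
# group would have done anyway." A market-wide shock moves both groups
# at once, so the subtraction no longer isolates the campaign.
```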

The problem is that meaningful market disruptions affect everyone — exposed and unexposed alike. When category-wide demand shifts, when external events change consumer behavior across the board, when macroeconomic conditions affect all segments simultaneously, there is no clean control group. The fundamental question that incrementality testing asks — "would these conversions have happened anyway?" — becomes unanswerable when "anyway" means a world that no longer exists.

Incrementality also depends on stable conditions over the test window. A test that starts under normal market conditions and ends in the middle of a significant disruption produces outputs that measure the disruption, not the campaign.

Why MTA Keeps Working

Multi-Touch Attribution doesn't rely on historical baselines. It measures what's happening right now — the actual consumer journeys occurring today, the touchpoints preceding today's conversions, the fractional credit earned by each channel in the current environment.
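
As an illustration of how fractional credit can be assigned, here's a minimal sketch using a generic position-based rule; it's one textbook weighting among many, not C3's model or any specific vendor's implementation. Note what's absent: no historical baseline, just the journeys being observed now.

```python
# Illustrative fractional-credit assignment for a single conversion path,
# using a generic position-based (U-shaped) rule: 40% of credit to the
# first touch, 40% to the last, the remainder split across the middle.
# A textbook rule for illustration only, not any vendor's actual model.
from collections import defaultdict

def position_based_credit(path, first=0.4, last=0.4):
    if len(path) == 1:
        return {path[0]: 1.0}
    credit = defaultdict(float)
    middle = path[1:-1]
    if not middle:              # two-touch path: split evenly
        first = last = 0.5
    credit[path[0]] += first
    credit[path[-1]] += last
    for touch in middle:
        credit[touch] += (1.0 - first - last) / len(middle)
    return dict(credit)

# A hypothetical journey observed today; no historical baseline required
journey = ["paid_search", "social", "email", "retargeting"]
print(position_based_credit(journey))
# paid_search and retargeting earn 0.4 each; social and email ~0.1 each
```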

When consumer behavior shifts, MTA programs adapt within weeks rather than months. As conditions change — wherever and however they change — the MTA model reflects those changes in real time, because it's observing actual current behavior rather than extrapolating from a prior state of the world. The methodology doesn't require stability to produce valid outputs. That's not a feature designed for disruption. It's a structural property that becomes decisive during disruption.

This is not a minor technical distinction. Advertisers with active MTA programs during market disruptions can see which channels are working in the new environment, and which aren't — and reallocate accordingly. Advertisers relying on MMM or incrementality are flying without instruments.

COVID: The Ultimate Stress Test

The most extreme proof case is March 2020. Almost overnight, consumer behavior shifted in ways that invalidated years of historical modeling data — not gradually, not partially, but completely. Seasonality patterns collapsed. Category demand transformed in days. Consumer media behavior changed in ways that had no historical analog.

MMMs built on 2017–2019 data were describing a world that no longer existed. Incrementality control groups were contaminated simultaneously across every category: lockdowns, stimulus, category-wide demand shifts, and supply chain disruptions affected everyone, exposed and unexposed alike. For most advertisers, reliable MMM guidance was unavailable for the better part of two years.

Advertisers who maintained active MTA programs through COVID received actionable channel intelligence at the exact moment every other measurement signal had gone dark. MTA adapted because it had nothing to re-establish. It was already measuring the current world.

COVID was the extreme outlier. The principle it proved applies at every disruption below it — tariffs, war, recession, platform shifts, category shocks. Any event that breaks the relationship between past patterns and current behavior breaks MMM. Any event that contaminates exposed/unexposed equivalence breaks incrementality. MTA keeps working because it never assumed the world would stay the same.

The Second Disruption: Vendor Instability

External market shocks are one kind of disruption. But there's a second kind that receives far less attention: the disruption caused by changes inside your measurement vendor.

When a measurement provider is acquired by a data infrastructure company, a media network, or a platform with competing interests, the model doesn't stay still. Methodologies get revised to align with the new owner's product strategy. Historical baselines get reindexed against new data architectures. The ROAS benchmarks your team has been using to make budget decisions are now being computed differently — and the comparisons across periods become unreliable.

Advertisers rarely receive a clear disclosure when this happens. The reports keep arriving. The dashboards stay live. But the model underneath has changed, and the trend lines your team is reading may be measuring a different reality than they were six months ago.

"Measurement continuity is a precondition for measurement confidence. If the model changes, the historical comparisons change with it — and you lose the baseline that makes attribution meaningful."

Stability Is a Measurement Quality Signal

The lesson from COVID — and from every disruption since, and from the wave of measurement vendor consolidation that preceded and followed it — is that methodology matters, but so does continuity. A measurement program is an investment that compounds over time. The longer a model runs in a stable environment, the more reliable its outputs become. Disruption, whether from an external shock or an internal reorganization, resets that compounding.

For advertisers evaluating measurement partners, vendor stability deserves explicit consideration alongside methodology and coverage. Who owns the company? Has the ownership or leadership changed? Has the methodology changed? Have the model architecture or data partnerships changed in ways that affect historical comparability?

These are not routine due diligence questions. They are questions about whether the measurement program you're building today will still be measuring the same thing three years from now — in whatever environment that turns out to be.

The C3 Metrics Approach

C3 Metrics has operated under consistent leadership and consistent methodology since 2019 — through cookie deprecation, COVID, platform consolidation, and AI emergence. No pivot. No acquisition. MTA programs running on C3's platform today produce outputs that are directly comparable to outputs from 2021, because the model architecture, the measurement philosophy, and the team behind it have not changed.