Solved – Multi-Channel Attribution Models: How to Measure Accuracy

marketing, modeling

What methods are there, if any, that measure or approximate the accuracy of attribution models?

I'm looking for something purely based on (real) data; preferably something analogous to the cross-validation typically used in machine learning.

I realize that there is probably no perfect way to do this, since the underlying truth is so obscure, but as a statistician I'd like to be able to compare one model to another in a better way than 'this intuitively seems more accurate'. Everybody says first/last-click attribution is the worst, but can that actually be proven?

There are three answers that I have observed throughout my online research:

  1. No comments on measuring accuracy at all (by far the most prevalent)
  2. Testing in practice (model the attribution, act on the results by adjusting marketing, see if ROI increases, repeat)
  3. Artificially modeling click-streams and outcomes, then running attribution models and comparing their results with the underlying parameters of the click-stream models (highly biased; see the sketch after this list)
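
To make option #3 concrete, here is a minimal sketch in Python of what such a simulation study could look like. The channel names, the per-touch effect sizes, and the additive conversion model are all invented assumptions; the point is only the mechanic of scoring attribution rules against a known data-generating process.

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical ground truth: the per-touch lift in conversion probability
# for each channel. Channel names and effect sizes are invented.
TRUE_EFFECT = {"search": 0.08, "display": 0.01, "email": 0.04, "social": 0.02}
CHANNELS = list(TRUE_EFFECT)

def simulate_journey():
    """One user: 1-6 random touches; conversion prob. = sum of touch effects."""
    touches = [random.choice(CHANNELS) for _ in range(random.randint(1, 6))]
    p_convert = min(1.0, sum(TRUE_EFFECT[c] for c in touches))
    return touches, random.random() < p_convert

def credit_shares(journeys, rule):
    """Fractional conversion credit per channel under a given attribution rule."""
    credit = Counter()
    for touches, converted in journeys:
        if not converted:
            continue
        if rule == "first":
            credit[touches[0]] += 1.0
        elif rule == "last":
            credit[touches[-1]] += 1.0
        else:  # "linear": split the conversion evenly across all touches
            for c in touches:
                credit[c] += 1.0 / len(touches)
    total = sum(credit.values())
    return {c: credit[c] / total for c in CHANNELS}

def rounded(d):
    return {c: round(v, 3) for c, v in d.items()}

journeys = [simulate_journey() for _ in range(100_000)]

# "Truth" here = each channel's share of the total causal effect.
truth = {c: e / sum(TRUE_EFFECT.values()) for c, e in TRUE_EFFECT.items()}
print("true effect shares:", rounded(truth))
for rule in ("first", "last", "linear"):
    shares = credit_shares(journeys, rule)
    mae = sum(abs(shares[c] - truth[c]) for c in CHANNELS) / len(CHANNELS)
    print(f"{rule:>6}: {rounded(shares)}  MAE={mae:.4f}")
```

Note that whichever rule "wins" here depends entirely on the assumed data-generating process (random channel order, additive effects), which is precisely the bias flagged above.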

Best Answer

Attribution models are a classic example of "mathiness" in quantification, for several reasons:

  - There is no ground truth against which an evaluation is possible.
  - There are literally dozens of models out there, as well as vendors selling smoke, mirrors and snake oil, all of questionable credibility.
  - A nonignorable share of impressions and click-throughs is generated by bots, crawlers and other non-human sources, as well as outright fraud; estimates of this non-eyeball error run as high as 40% of online activity.
  - Very few models attempt to integrate traditional advertising (TV, print, billboards, etc.) into their equations, even though traditional vehicles still take the dominant share of most marketing budgets -- a fact digital marketers sweep under the rug.
  - Cross-platform exposure is a problem of its own: verified links across devices at the household level are few and far between; most links are probabilistic. I've seen the numbers on this but can't find the reference at the moment.

Given all of that, attribution models aren't all that different from astrology or divination, despite appearing to rest on an underlying empirical reality. Regardless, hundreds of millions of dollars are spent based on their results every year.

So, to answer your question about evaluating competing models, and your point #1: there is an inherent reluctance among marketers to publish information about model error, confidence intervals or anything else that might weaken their stated claims. The industry colludes in this practice for a welter of reasons, including innumeracy, a nearly total lack of healthy skepticism, and risk aversion. It's an industry that thrives on hype and on jumping on the bandwagon of the "next big thing."

Points #2 and #3 are the right ways to go about an evaluation. Since there is no ground truth against which to benchmark a model's results, the only way forward is to constantly test and retest a model's predictions against real-time or in-market performance. While an academic marketing scientist might have little incentive to game the system, most vendors would. And as Kaiser Fung has observed in his book Numbers Rule Your World, there is virtually no way to evaluate the "truth" of a vendor's claims without getting into the trenches with the analyst(s) who did the work. The hard part would be building multiple, competing models -- a challenge only an academic would willingly take on.
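
One way to operationalize that test-and-retest loop, sketched below with invented numbers: treat incrementality experiments (channel holdouts, geo tests) as the closest available proxy for ground truth, and score each model by how closely its credit shares track the measured lift shares. Everything here -- channel names, shares, the choice of mean absolute error as the score -- is an assumption for illustration.

```python
# Hypothetical inputs, invented for illustration: credit shares produced by
# two attribution models, and incremental-lift shares measured in holdout
# experiments (e.g., pausing one channel in matched geographies).
measured_lift = {"search": 0.45, "display": 0.05, "email": 0.35, "social": 0.15}
model_credit = {
    "last_click": {"search": 0.70, "display": 0.02, "email": 0.20, "social": 0.08},
    "linear":     {"search": 0.40, "display": 0.15, "email": 0.30, "social": 0.15},
}

def mae(credit: dict, lift: dict) -> float:
    """Mean absolute gap between attributed share and measured lift share."""
    return sum(abs(credit[c] - lift[c]) for c in lift) / len(lift)

# Lower MAE = the model's credit assignment tracks measured incrementality better.
for name, credit in model_credit.items():
    print(f"{name}: MAE vs. measured lift = {mae(credit, measured_lift):.3f}")
```

Rank agreement (e.g., Spearman correlation) would work just as well as MAE; the essential move is conceding that the experiment, not the model, is the arbiter.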

Simulations would be a cost-efficient method for model comparison, since the many limitations of real-world data can be replaced with plausible assumptions (the sketch above is the basic mechanic). I stand to be corrected, but I'm not aware of any published research that has taken this approach.

That said, none of this would constitute "proof" in any meaningful, Popperian sense. The results would, at best, be directional... which is more than enough for marketers.
