Q Thoughts

ownerIQ's staff (aka – "The Q") shares our insights and opinions on how marketers can more effectively impact today's shopper along the digital path to purchase. "The Q and A" provides honest and practical answers to the questions and challenges facing digital advertisers in the areas of second-party data, programmatic buying, shopper marketing, co-operative marketing, attribution, and emerging media.

What Weather Prediction Tells Us About Programmatic


As I write this, a major snowstorm is bearing down on New England, where I live. The storm could hit us with a wallop or pass with a whimper. Which is more likely? That depends on which weather model you believe.

The National Oceanic and Atmospheric Administration (NOAA) makes the results of multiple weather models available free of charge. The Global Forecast System, or GFS, is the primary US weather forecasting model, but NOAA also operates the North American Mesoscale (NAM) Forecast System. Not to be outdone, the Canadians produce forecasts from their own model, and a coalition of 34 European countries operates the ECMWF model. There are hundreds of additional smaller-scale and experimental weather models in operation around the globe.

Each of these weather models has different strengths and weaknesses. Some are designed for short-range forecast accuracy, while others attempt to get the long view right. NOAA's GFS model uses a 28-kilometer grid, while the European ECMWF uses a grid with a shorter side length and calculates more points. Some of the models are best applied to specific forecasting problems, or specific areas, while others are general purpose and try to get the broad picture more or less right.

Wallop or whimper? Only time will tell. All of these models have access to exactly the same raw weather data at massive scale, yet they all produce different forecasting results, and the variability among those forecasts can be significant. Tremendous resources are deployed trying to interpret the data. NOAA claims 6,773 scientists and engineers on its staff. The department within NOAA responsible for the GFS and NAM models has a multi-billion-dollar annual budget and some of the largest supercomputing facilities in the world. The combined total spent annually on weather modeling across the globe easily exceeds $10 billion. Clearly, lots of shared data, lots of PhDs, lots of computing resources, and heaps of money are required to interpret and use the raw weather data.

The important, and profound, takeaway for marketers is this: despite the current worship of raw data among marketers and investors, whether at the church of data scale or the church of data quality, the ability to interpret data and transform it into useful, predictive information is the more important trick, and perhaps the one that adds the most value.

The corollary conclusion is that interpreting data and creating useful information isn’t easy.

In the early days of getting ownerIQ off the ground, I came to this conclusion myself, the hard way. We had then (and still have) direct data on consumers' behavior in retail. If an advertiser wanted a particular outcome, say, finding consumers who are likely to buy washing machines, I was sure that all I had to do was find the consumers who were browsing washing machines, put an ad in front of them, and everything would turn out great for my advertiser.

I was wrong. The simplistic technique of looking for a single signal about a consumer's behavior and showing them an ad just doesn't work very well. When it didn't work, we asked the obvious questions. Were our data signals polluted? Did we segment our users incorrectly? No. We found over and over again that rigidly segmenting users by a single behavior, or some combination of behaviors, and then indiscriminately targeting that segment was not a recipe for programmatic success.

The fundamental media problem is sorting through billions of ad placement opportunities appearing in front of hundreds of millions of users and using a limited budget to find the combination of opportunities, users, and ad frequency that produces the best result for the advertiser. Only in extremely limited cases is a single signal, and a rigid segment built on that behavior, sufficient to winnow the possible opportunity-user-frequency combinations down to a manageable number and produce great outcomes. The technique works only in the presence of both a single, highly predictive signal and a very small user population.

That scenario, a highly predictive signal and a small user population, describes only one common tactic: retargeting. In fact, retargeting is the only situation where we find the simplistic technique works reliably. This explains why site retargeting was the first widely adopted programmatic use case, and it also explains why any programmatic provider can make retargeting work.
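To make the contrast concrete, here is a minimal sketch in Python of the single-signal technique described above, with hypothetical field and signal names. It builds a rigid segment from one observed behavior and keeps every opportunity-user-frequency combination that falls into it; nothing about the placement or the frequency enters the decision.

```python
# A sketch of rigid, single-signal segmentation (data shapes are hypothetical).
# Each candidate is one opportunity-user-frequency combination.

candidates = [
    {"user": {"id": "u1", "signals": {"browsed_washing_machines"}}, "placement": "p9", "frequency": 3},
    {"user": {"id": "u2", "signals": {"browsed_dishwashers"}},      "placement": "p2", "frequency": 1},
    {"user": {"id": "u3", "signals": {"visited_advertiser_site"}},  "placement": "p7", "frequency": 0},
]

def single_signal_filter(cands, signal):
    """Keep every combination whose user carries the one targeted behavior."""
    return [c for c in cands if signal in c["user"]["signals"]]

# A broad behavioral signal still leaves an unmanageable number of combinations
# at real scale, and says nothing about which placements or frequencies to buy.
in_market = single_signal_filter(candidates, "browsed_washing_machines")

# A narrow, highly predictive signal over a small population is effectively
# retargeting: the one case where this technique reliably works.
site_visitors = single_signal_filter(candidates, "visited_advertiser_site")
```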

In all other cases, the simplistic technique fails. Data alone, of any type or at any scale, will not deliver the marketing outcome, because the data signal by itself is not sufficiently predictive to overcome the noise inherent in the programmatic landscape. This is where we started carefully building models that combine the highly predictive data we were bringing to market, including consumer behavior at retail, with a meticulous evaluation of each placement opportunity.

Noise comes in many forms. Fraud and brand safety are the obvious ones, but there are many others. Publishers who load up a page with lots of ad units. Publishers whose content is so eye-catching that it overwhelms any ads placed on the page. Users who seem to have nothing better to do than surf the web and explore everything, so much so that each of their observed behaviors is of marginal importance. Users who never intended to be browsing washing machines and were really looking for dishwashers. Users who share a family computer with their teenage daughter and therefore present a confusing bag of behaviors.

Marketers who have bought into the idea of raw data as the most important, or only, determining factor in media placement are now finding this out to their dismay. A DSP software license combined with data from third-party aggregators, or even with large quantities of first-party data, cannot produce great results in the absence of models that interpret that data in the context of the specific marketing problems to be solved.

Just as in weather forecasting, the marketer needs programmatic forecasting models that score each of the billions of possible opportunity-user-frequency combinations according to the business use case and predict what is going to happen if the advertiser's media is placed in a given opportunity. And, it turns out, the quality of these models and how they are built add more value for the marketer than the data alone. Recall that in weather forecasting everybody has access to the same data, yet the predictions are anything but the same.
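As a rough illustration only, with entirely hypothetical signals, weights, and scoring form (not ownerIQ's actual model), such a forecasting model might look something like this: every opportunity-user-frequency combination gets a predicted-value score, and the budget goes to the highest-scoring combinations rather than to everything that matches a single signal.

```python
# A minimal scoring sketch. Signal names, weights, and the scoring form are
# illustrative assumptions, not any vendor's actual model.

def score(opportunity, user, frequency):
    """Predict the relative value of placing this ad, for this user, in this spot, now."""
    behavior = user.get("washer_intent", 0.0)                                   # behavioral signal, 0..1
    placement = opportunity.get("viewability", 0.0) * (1.0 - opportunity.get("clutter", 0.0))
    fatigue = max(0.0, 1.0 - 0.2 * frequency)                                   # diminishing returns
    return behavior * placement * fatigue

def plan_buys(candidates, budget):
    """Rank all combinations by predicted value and buy until the budget runs out."""
    ranked = sorted(candidates,
                    key=lambda c: score(c["opportunity"], c["user"], c["frequency"]),
                    reverse=True)
    buys, spend = [], 0.0
    for c in ranked:
        price = c["opportunity"].get("price", 0.0)
        if spend + price > budget:
            break
        buys.append(c)
        spend += price
    return buys

# Example: one candidate combination scored in isolation.
example = {"opportunity": {"viewability": 0.7, "clutter": 0.2, "price": 2.50},
           "user": {"washer_intent": 0.9},
           "frequency": 1}
print(round(score(example["opportunity"], example["user"], example["frequency"]), 3))  # 0.403
```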

Data isn't unimportant, to be sure. False signals will mislead any model. Some signals are more predictive than others. Raw data inputs do matter. But when it comes to programmatic excellence, data isn't the only thing that matters. Raw data deployed without a model to sort through the billions of opportunity-user-frequency combinations is really just a baby step up from the often-derided spray-and-pray techniques. Models matter too, and marketers should be as focused on how their data is going to be interpreted and used – the model – as they are on the data inputs themselves.

Oh, and that snowstorm? Whimper. The Canadian model got it right. NOAA got it wrong. If you cancelled your vacation expecting the wallop, you lost out. Same data, different prediction, different outcome. The model matters.

This article was recently featured in AdExchanger.


