Case studies are typically organized by industry and company. Here, however, we present them according to the insights that generated the results we’ve seen. Hopefully they’ll suggest some avenues for you to explore. We’d love to help!
A global food company with nine-figure sales said, “We have media mix models for each of our brands. We’ve optimized our spend as far as we can, using them. How can we get more from our marketing buck?”
We suggested, “Well, the mix models give you a ten-thousand-foot view of performance. If you go still higher and look across your products, we’re sure you’ll see some opportunities to reallocate investments.”
We gathered up the different brands’ media mix model reports. Each report had “response curves” for each channel. Response curves describe the past relationship between investment and incremental sales, from which you can read the Return on Ad Spend (ROAS) at any given spend level. (Ideally, they would also predict the future, but most don’t yet go that far.) We taped the pages with the individual response curves for all of the brands on one big wall, which we called our “trading board”. Immediately we noticed that, from this more global view, there were opportunities to move money profitably among brands.
Of course we had constraints, like contractual limitations in our media buys, and strategic considerations (some newer brands were getting extra support early in life). We also had to consider halo effects across the brands, which further shaped the available trades. But in all, this macro view produced half of the mid-eight-figure annual profit gain we found.
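To make the trading-board idea concrete, here is a minimal sketch of the reallocation logic, in Python rather than tape and paper. Everything here is hypothetical – the brand names, the curve shapes, the floors standing in for our contractual constraints – but the mechanic is the one we used: keep moving small increments of budget from the cell with the lowest marginal return to the cell with the highest, until the marginal returns roughly equalize.

```python
import math

# Hypothetical response curves: incremental sales as a function of spend.
# Real curves come from each brand's media mix model; these diminishing-
# returns shapes and numbers are purely illustrative.
curves = {
    ("brand_a", "tv"):      lambda s: 3.0e6 * math.log1p(s / 1e6),
    ("brand_a", "digital"): lambda s: 1.2e6 * math.log1p(s / 2e5),
    ("brand_b", "tv"):      lambda s: 0.8e6 * math.log1p(s / 1e6),
    ("brand_b", "digital"): lambda s: 2.5e6 * math.log1p(s / 2e5),
}
spend  = {cell: 1_000_000.0 for cell in curves}  # current allocation
floors = {cell: 250_000.0 for cell in curves}    # stand-in contractual minimums

STEP = 50_000.0  # trade in small increments

def marginal(cell):
    """Approximate return on the next STEP dollars in this cell."""
    f, s = curves[cell], spend[cell]
    return (f(s + STEP) - f(s)) / STEP

for _ in range(200):  # trade until marginal returns roughly equalize
    donors = [c for c in curves if spend[c] - STEP >= floors[c]]
    donor = min(donors, key=marginal)
    taker = max(curves, key=marginal)
    if donor == taker or marginal(taker) - marginal(donor) < 0.01:
        break  # nothing left to gain from another trade
    spend[donor] -= STEP
    spend[taker] += STEP

for cell, amount in sorted(spend.items()):
    print(cell, f"${amount:,.0f}")
```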
—
We next suggested diving down to “the deck”. We dug into each model and its data at more granular levels, cut by geo and publisher. Again we started by using whatever data was readily at hand, rather than asking for anything custom. This meant reports by state / DMA (Designated Market Area) and by publisher by week. So, for example, on the input / investment side, we got (or allocated) things like the amount we spent in different cities, or on different media properties. On the output side, we got measures like sales, but also in some cases brand development / category development index (“BDI/CDI”) scores for different DMAs.
Now we asked questions like, “Why are sales so high in a place where we don’t spend a lot of money, but lower in this other place where we do?” In one case we discovered what we called a “snowbird pattern”, where consumers took their brand preferences with them when they went south for the winter. We figured, “Maybe the ad spend should follow, to let others like them know their favorite product is available there as well!” We followed quickly with tests that validated the predicted impact, then scaled this and other winners as our hypotheses proved out.
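A first-pass screen for those questions can be as simple as comparing each market’s share of sales to its share of spend. A minimal sketch, assuming weekly spend and sales extracts by DMA (the file and column names here are made up):

```python
import pandas as pd

# Hypothetical extracts: weekly media spend and weekly sales, by DMA.
spend = pd.read_csv("spend_by_dma_week.csv")  # columns: dma, week, spend
sales = pd.read_csv("sales_by_dma_week.csv")  # columns: dma, week, sales

df = (spend.groupby("dma")["spend"].sum().to_frame()
           .join(sales.groupby("dma")["sales"].sum()))

# Share of national sales vs. share of national spend: a crude BDI-style ratio.
df["sales_share"] = df["sales"] / df["sales"].sum()
df["spend_share"] = df["spend"] / df["spend"].sum()
df["sales_to_spend"] = df["sales_share"] / df["spend_share"]

# High ratios: strong sales on little support (think snowbird markets).
# Low ratios: heavy support with little to show for it. Both invite questions.
print(df.sort_values("sales_to_spend", ascending=False).head(10))
print(df.sort_values("sales_to_spend").head(10))
```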
We did these things in the scrappiest ways, using free or cheap data sources wherever we needed more than the company already had. Requirements then flowed from this on-the-ground experience. We knew what specific integrations we needed, what testing tools and processes we’d use, and the scope and extent of both. The solutions that emerged were highly focused, simple to understand, operationally practical, proven by in-market tests, and ultimately sustainable. They also emerged with momentum to fund the necessary investments and reorganizations.
—
Time is another fruitful optimization dimension. A global beverage company we advised was shifting much of its marketing budget into digital media in one of its biggest markets. And yet, marketing ops teams for these channels were thin on the ground. The optimization approach for a typical campaign was, “Launch it, let it run, read the reports when it’s finished, and think about what to do better next time.”
We bet that during the course of the campaign, on any given day or week, if we drilled down to the level of the individual ad units we were running (defined by ad format, channel, or creative version), we would find winners and losers. Some of the performance variation among them was likely signal – likely to persist – and not noise. If we could get a common view of that performance (by matching tags) across all of those assets, we could find opportunities to move money out of the low performers and into the better ones. Plus, if we left a little money behind in the “losers”, we’d have a pretty good control group (targeted at the same people at the same time), so we could judge how much lift we got from each of those re-allocations.
We started by plotting this data visually, pulled all of the relevant folks into a room every few days during the campaign, and made manual decisions about how to move the money among the different ad units. Then we wrote a macro in Excel (because that’s what the client team was comfortable with at the time, and we wanted to meet them where they were) that would look beyond the obvious trades in the chart and suggest others we could make, working both ends to the middle. (That is, it paired the lowest performer with the highest, moved money until one side was exhausted, then went on to the next pair.) This idea, conceived and piloted in just weeks by a handful of people, produced a 15% ROI increase on the relevant budgets, worth $6M a year. The client subsequently rolled it out to twelve of its biggest markets globally.
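The Excel macro itself is long gone, but here is a minimal sketch of the “both ends to the middle” logic in Python. The ad units, budgets, and ROI figures are invented; the holdback implements the leave-a-little-behind control group described above.

```python
# Hypothetical in-flight readings: (ad unit, current budget, observed ROI).
units = [
    ("video_a",  20_000, 4.1),
    ("banner_b", 30_000, 2.8),
    ("native_c", 25_000, 1.9),
    ("banner_d", 15_000, 0.9),
    ("video_e",  10_000, 0.4),
]
HOLDBACK = 0.10  # leave 10% behind in each loser as a control group

ranked = sorted(units, key=lambda u: u[2])  # worst performers first
lo, hi = 0, len(ranked) - 1
trades = []

# Work both ends to the middle: the worst unit funds the best, the
# second-worst funds the second-best, and so on.
while lo < hi:
    name, budget, _ = ranked[lo]
    movable = budget * (1 - HOLDBACK)  # everything but the holdback
    trades.append((name, ranked[hi][0], movable))
    lo += 1
    hi -= 1

for src, dst, amount in trades:
    print(f"move ${amount:,.0f} from {src} to {dst}")
```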
In another example, a big pharma company spread its search ad spending for one of its skin medications evenly across the year. We looked at traffic to its branded and disease awareness sites. We noticed a consistent multi-day spike in visits several weeks before each major holiday. We determined that folks were looking for help to clear up their skin conditions before being out with people, on dates and vacations, and at parties and family reunions. We asked, “What if we time-shift our search spend – pulse it when the traffic spikes typically happen?” In other words, what if we amplify outreach during times of natural interest? Many people stop at seasonal strategies, and leave a lot of coins between the sofa cushions by not looking at time more closely.
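The underlying analysis doesn’t have to be fancy. A sketch of the spike detection, assuming a daily extract of site visits (the file and column names are ours, not the client’s):

```python
import pandas as pd

# Hypothetical extract: daily visits to the branded and awareness sites.
traffic = pd.read_csv("site_visits_daily.csv", parse_dates=["date"])
weekly = traffic.set_index("date")["visits"].resample("W").sum()

# Compare each week to its trailing two-month baseline.
baseline = weekly.rolling(8, min_periods=4).median()
ratio = weekly / baseline

# Weeks running 30%+ above trend are candidates for pulsed search spend.
for week, r in ratio[ratio > 1.3].items():
    print(f"{week.date()}: {r:.0%} of trailing baseline")
```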
—
Another opportunity seam is process. We once worked with an iconic toy retailer, with a few flagship stores that served as branding beacons, but an even more important catalog and ecommerce business. The company had upgraded its web site to help grow sales, including adding a sophisticated recommendation engine. But as we did a mystery shop, from online ads and search through to the site and this feature, we saw the engine produce weird results when we asked for birthday present suggestions for a ten-year-old girl.
We started with the configuration file for the retailer’s specific implementation. We expected to find some combination of “most popular” and “most profitable” as the sort order settings for the recommendations. Instead we found the value still set to the default: “order in which products were added to the database”. Two character changes later, we had quadrupled the productivity of this feature. No lack of smarts on anyone’s part, just a missed communication between the folks who added the feature and the team responsible for managing the site. But the process-based tour revealed the opportunity.
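We’re not reproducing the retailer’s actual file here, but the class of bug looks something like this hypothetical configuration:

```python
# Purely hypothetical illustration; the real system was a vendor config file.
recommendation_config = {
    # What we found: the installation default, untouched since launch.
    "sort_order": "insertion",  # order products were added to the database
    # What anyone would have wanted, and what a tiny edit effectively got us:
    # "sort_order": "popularity_then_margin",
}
```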
Shortly after, we noticed that only a third of the emails in the company’s house list were going out. A quick trace of the process revealed that the ISP hadn’t whitelisted the company’s domain. Again, a flip of the switch tripled outgoing email volume, and with it, sales from that channel.
—
Marketing analysts have suggested a rule of thumb that creative explains twice as much of the variation in marketing budget performance as media weight and mix do. That’s a fancy way of saying that “good” creative really matters. The opportunities to better understand the impact of creative on marketing effectiveness are expanding rapidly.
One firm, Real Eyes, extracts and infers metadata from each frame of a video ad and compares it with the frame-by-frame emotional reactions of a panel of opted-in viewers. Then they tie those reactions to sales data via integrations with buyer data panels. With this, they can tell you things like, “When you showed a clip of a puppy three seconds into the peanut butter ad aimed at kids, mothers smiled, and eventually bought more peanut butter than mothers who didn’t see the ad. So run creative that gets people smiling.”
That’s pretty cool, and you should call them. But we’ve also done simpler versions of this. For one retailer, we grabbed email subject lines, did some rudimentary “semantic analysis” to classify them by theme – “sale”, “recommendation”, “holiday”, “new”, and so forth – and compared these themes on metrics like open rate, clickthrough rate, conversion rate, and average order value (AOV). Curiously, performance along these thematic lines turned out to be exactly the opposite of what the merchants and channel managers expected (“recommendation” vastly outperformed “sale”). With some adjustments, we doubled the productivity of the channel.
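Our “semantic analysis” was genuinely rudimentary. A sketch of the kind of keyword tagging involved, with hypothetical themes, keywords, and file names:

```python
import pandas as pd

# Crude keyword rules standing in for our "semantic analysis". The themes
# and keywords here are illustrative, not the retailer's actual taxonomy.
THEMES = {
    "sale":           ["sale", "% off", "deal", "clearance"],
    "recommendation": ["picks for you", "recommended", "you may like"],
    "holiday":        ["holiday", "gift guide", "mother's day"],
    "new":            ["new arrival", "just in", "introducing"],
}

def classify(subject: str) -> str:
    s = subject.lower()
    for theme, keywords in THEMES.items():
        if any(k in s for k in keywords):
            return theme
    return "other"

# One row per send: the subject line plus its channel metrics.
emails = pd.read_csv("email_sends.csv")
emails["theme"] = emails["subject"].map(classify)
print(emails.groupby("theme")[["open_rate", "ctr", "conv_rate", "aov"]].mean())
```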
—
In another engagement, with a consumer products company, we were trying to improve their logic for targeting audiences with ads in programmatic media. They had a house list of consumer records eight figures deep, which was good. But the targeting logic they were using was fairly basic: “So, we’re thinking of doing a campaign centered on soccer. Let’s grab all the records where folks had responded to a sports-based past campaign, and use that as the seed list for our lookalike buy.”
We suggested using machine learning to match a broader set of customer attributes to campaign characteristics. So, something like, “Consumers who look like A, B, and C will be more likely to respond to campaigns with themes D, E, and F.” (The fancy term for this is a “propensity model”.) Of course this required good documentation about which campaigns had which themes. Perfect for AI, you’d say, and of course you’d be right – show an AI some words, sounds, and images, and it can do a good job of telling you what they’re about, in ways you can anticipate (“Product X vs. Product Y”) and maybe didn’t (“Day vs. Night”). But when there are only a couple of dozen campaigns and a couple of hundred creative assets on hand, you can do as we did while spooling up the AI pipeline: order some pizzas and hand-classify. This effort produced a 100% lift in target consumer engagement vs. the prior approach, and even beat the engagement rate of the AI-generated audiences offered by the media partner.
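A minimal sketch of the propensity model, assuming a history table of consumers crossed with the hand-labeled campaigns. Every column and file name here is hypothetical, and a real version would want proper validation:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical training data: one row per (consumer, past campaign), with
# consumer attributes, the campaign's hand-labeled theme, and whether the
# consumer responded.
df = pd.read_csv("consumer_campaign_history.csv")
X = pd.get_dummies(
    df[["age_band", "region", "past_purchases", "campaign_theme"]],
    dtype=float)
y = df["responded"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))

# To seed a new soccer-themed campaign, score every row as if it had
# received that theme, and take the top decile by predicted propensity.
scored = X.copy()
theme_cols = [c for c in scored if c.startswith("campaign_theme_")]
scored[theme_cols] = 0.0
scored["campaign_theme_soccer"] = 1.0  # assumes "soccer" appeared in history
seed = df.assign(score=model.predict_proba(scored)[:, 1]).nlargest(
    len(df) // 10, "score")
```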
—
Of course you can use machines to consider some of these dimensions in combination, to extract even more explanatory and predictive power, but as any good data scientist will tell you, always start by exploring the data you have!