We’ve spent the last twenty years helping big companies use big data to spend big ad budgets better.
We’ve worked from strategy to execution, with a dozen iconic national and global brands, in retail, consumer products, financial services, telecommunications, and health care.
We’ve spanned the full purchase funnel, from brand awareness to conversion and retention across traditional and cutting-edge digital channels.
Cumulatively we’ve produced nearly a billion dollars in annual profit improvement — 15-20% ROI gains, from both growth and efficiencies — validated through rigorous incrementality analysis.
What have we learned? How do we do this?
—
Most marketing organizations approach getting results exactly backward. They mistake the means for the ends. They think, “If we build it, they will come.”
What works, especially in today’s turbulent, uncertain world, is the opposite:
govern through business results, not implementation schedules;
emphasize momentum over vision (momentum is strategic);
measure and adapt continuously;
scope and manage end-to-end, across people and process as well as technology; and
start and keep things as simple as possible (though no simpler).
—
This perspective is well established.
One of the best articles we’ve seen across the past four decades appeared in the January-February 1992 issue of the Harvard Business Review. Written by Robert Schaffer and Harvey Thomson, it was titled, “Successful Change Programs Begin With Results.” The language is dated, and 1985 called and wants its programs (TQM!) back. But the research, conclusions, and prescriptions it presents are timeless. The first call-out text in the article reads,
Results-driven programs bypass lengthy preparations and aim for quick, measurable gains within a few months.
The ideas here are adapted from and corroborated by seventy years of behavioral psychology studies. This research shows that change happens faster and sticks better with a reinforcement schedule that is initially focused and short. Shifting up through the gears to generalize capabilities happens as the learner gains confidence, not at the teacher’s pace.
Coincidentally, this is how technology is best developed: scratching specific “itches”, in short sprints, close to the user.
—
So that’s our philosophy. What do we actually do? We apply a playbook we call “DYNAMO”, short for “DYNAmic Marketing Optimization”. That’s a fancy way of saying that we integrate a bunch of information into a framework we’ve found everyone can understand, and then use that information to identify and execute “trades” out of less effective investments and into better ones. Then we wash, rinse, and repeat, as long as we see value.
The key to DYNAMO is that it starts “outside-in”, from the customer, before considering trades “inside-out”, or from your marketing plan. Rather than starting by asking, “How do our marketing-mix (MMM) and multi-touch attribution (MTA) models suggest we should move money around?” we start by asking,
Who are we targeting?
How do they buy? What is their dominant or preferred journey?
How do our marketing investments map onto this reality – and where are there gaps?
How are our marketing and other investments performing along this journey:
What are the outcomes?
What outputs have we bought or generated?
What are the costs of these outputs?
What’s the conversion rate from each step to the next, along the path?
Where, for each target customer segment and journey, is the bottleneck?
Bottlenecks can be assessed with trends: “Wow, CPC has really jumped, much faster than conversion rates!”
They can also be assessed with benchmarks: “Gee, our email click-through rates are half our competitors’.”
At each bottleneck, when we drill in along various dimensions (separately or together), we ask, “What are the positive and negative outliers?”
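To make this arithmetic concrete, here is a minimal sketch in Python. The stage names, volumes, spend, and benchmark rates are all hypothetical, invented for illustration; in practice they come from your analytics stack and from competitive or historical benchmarks like the ones above.

```python
# Minimal sketch of the funnel arithmetic above. Stage names, volumes,
# spend, and benchmarks are all hypothetical; in practice they come from
# your analytics stack and from competitive or historical benchmarks.

funnel = [
    # (stage, volume reached, spend attributed to producing that volume)
    ("impressions", 5_000_000, 250_000.0),
    ("clicks",         60_000,       0.0),
    ("visits",         45_000,       0.0),
    ("purchases",       1_800,       0.0),
]

# Benchmark conversion rate for each step (competitor or historical).
benchmarks = {
    ("impressions", "clicks"): 0.020,  # 2.0% click-through benchmark
    ("clicks", "visits"):      0.800,
    ("visits", "purchases"):   0.050,
}

def funnel_report(stages, benchmarks):
    """Print cost per output and step conversion rates, then flag the
    step with the biggest shortfall versus benchmark as the bottleneck."""
    worst_step, worst_ratio = None, float("inf")
    for (name, vol, spend), (nxt, nxt_vol, _) in zip(stages, stages[1:]):
        conv = nxt_vol / vol                    # step-to-step conversion
        ratio = conv / benchmarks[(name, nxt)]  # < 1.0 means underperforming
        line = f"{name} -> {nxt}: conv {conv:.2%} ({ratio:.0%} of benchmark)"
        if spend:
            line += f", cost per {name.rstrip('s')} ${spend / vol:.4f}"
        print(line)
        if ratio < worst_ratio:
            worst_step, worst_ratio = f"{name} -> {nxt}", ratio

    print(f"Candidate bottleneck: {worst_step} ({worst_ratio:.0%} of benchmark)")

funnel_report(funnel, benchmarks)
```

Even a crude report like this is usually enough to point the “So what?” discussion at the right step of the journey.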
—
Once we’ve assembled the first, crudest versions of this information, we bring marketers from all disciplines — creatives, quants, builders — in to understand it, refine it, and ask, “So what?”
Out of this come action plans for things to test in the market, plans for deeper research where the stakes and uncertainty are higher, and requirements for incremental capabilities (skills, tools), now informed by hands-on experience.
Over time we move from “descriptive” to “predictive” capabilities that enable us to ask, “What if?”, and design smarter tests.
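As a toy illustration of the “What if?” step, the sketch below predicts the payoff of trading budget from one channel to another. The channel names, spend levels, and square-root response curves are assumptions invented for this example, not our actual models; in practice, response curves are fitted from MMM/MTA estimates and validated with in-market tests.

```python
# Hypothetical "what if?" sketch: predict purchases if we trade budget
# between two channels, using assumed diminishing-returns response
# curves (square-root shaped here; real curves would be fitted from
# MMM/MTA models and experiments).

import math

def predicted_conversions(spend, scale):
    """Toy response curve: conversions grow with the square root of spend."""
    return scale * math.sqrt(spend)

# (current spend, assumed response-curve scale) per channel
channels = {"display": (200_000.0, 1.5), "search": (100_000.0, 4.0)}

def what_if(trade):
    """Total predicted conversions after moving `trade` dollars
    from display into search."""
    (d_spend, d_scale), (s_spend, s_scale) = channels["display"], channels["search"]
    return (predicted_conversions(d_spend - trade, d_scale)
            + predicted_conversions(s_spend + trade, s_scale))

baseline = what_if(0)
for trade in (0, 25_000, 50_000, 100_000):
    lift = what_if(trade) - baseline
    print(f"trade ${trade:>7,} display -> search: {lift:+.0f} predicted conversions")
```

Concave curves like these are what make trades worthwhile: the last dollar in an over-funded channel buys less than that same dollar moved into an under-funded one.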
We then keep turning the crank in short, accountable sprints until we see diminishing returns and teams in place that can carry on the work.