The annual AMEC Summit – with its theme of “Data: Analytics, Algorithms & Augmentation” – has come and gone in Prague, but the conversations about exciting technological advances in the measurement field are ongoing. Spurred by presentations from in-house teams at Prudential and Adobe, big data brands like Facebook, and leaders on both the supplier and agency sides of measurement, three key trends emerged from the summit:
Onward: Great measurement is a journey (and not always a linear one).
Discussions of journeys are often a feature of the AMEC summit, but this year saw refreshingly honest conversations about how that journey manifests in organizations – from planning to data capture to analysis.
One key thread through this was the importance of education and bringing key stakeholders along, particularly if you’re looking to make big changes, such as moving away from a focus on reach toward measures like visibility or economic value. (Tools like the Measurement Maturity Mapper can be helpful in these discussions by providing a benchmark for where you are compared to others in your discipline.)
And of course it’s not just our own journeys that matter. One might argue that user journeys matter more. Acknowledging that people are rarely linear in their approach to decision making complicates measurement, but that awareness reiterates the importance of starting with data at the planning phase, not just at the end of campaigns – and not being afraid to try new things when examining the data. Which leads nicely into the next topic…
#FailForward: The majority of data programs will fail. That’s a good thing.
This was a bold point made by Allyson Hugley to start the summit, but also emphasized by Facebook’s Daniel Stauber in the challenges of avoiding system bias: “Success is a lousy teacher.”
So what can we do? First and foremost, we have to be willing to fail forward. Take risks and look at new metrics, or novel combinations of data. Know that not every new effort will work out. But some will, and those with a willingness to fail and experiment are going to have a much higher rate of innovation in their use of data.
Measurement that really looks out for our blind spots and digs into what isn’t working well won’t always show a rosy picture. But it will help brands improve their content and reputations over time.
Finally: Balancing central needs for simplicity vs. market specificity.
If the top of the program was about inspiration and failing forward, the workshops really looked at the practicalities of data programs. What does the right balance look like between central needs for focus vs. local market coverage?
Unfortunately, there isn’t a simple answer for this one. Every organization is different, and the level of granularity required will depend on your business objectives and, ultimately, budget.
We did, however, see some powerful evidence for simplicity from Dr. Jennifer Bruce. When Adobe audited all of their historical coverage, they found that key indicators of comms success were all driven by their top tier media — including 77 percent of adobe.com visits. For them, that was more than enough data to produce a clear recommendation: focus on the set of key media that matter most.