Have you ever had the feeling that a campaign just wasn’t given a fair chance, despite failing metrics that were painstakingly picked? Even with the data pointing to failure, the results can feel invalid or irrelevant.
Say you’ve run a sequence of outreach touches that begins with a few value-add pieces and slowly works toward a phone call. The open rate on the emails is there, but virtually nobody gets on the phone. After a few weeks the decision is made to stop the campaign because the value-add pieces just aren’t translating into sales progress. You might feel something is amiss: the value-add emails did get good traction, and given a little more time something might have developed.
Often that feeling points to the failure not coming from the campaign itself, but from the setup or execution of the campaign. During execution, would you have known if something went wrong at any of the steps?
Perhaps one of the nurture resources sent out in the emails went offline, a template was broken, or some of the emails were going to spam. As anyone who has run a multi-channel campaign can attest, there is much that can go wrong.
Without some indicator that these problems are happening during execution, a failure can appear strategic rather than tactical.
Jidoka (自働化) is one of the primary principles of the Toyota Production System. Taiichi Ohno and Eiji Toyoda saw the value in supervising automated processes in such a way that problems can be detected and addressed as they occur. Their idea was that the process itself should raise a flag to indicate a problem, rather than relying on very careful watchers at every step. If you went to business school, you have probably read all about this in a management context already.
This allows the process to be developed and matured to the point where its true potential can be realized. Only after the system is running smoothly can its value be assessed.
If the example above had these sorts of checks in place, there would be a clear answer as to whether the campaign was failing due to tactical or strategic error.
Reporting can add visibility into many of these issues; a live dashboard that shows flow through the funnel and key activity metrics (link clicks, responses, etc.) provides good indicators of the structural health of the campaign.
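To make that concrete, here’s a minimal sketch of the stage-to-stage view such a dashboard boils down to. The stage names and counts are made up for illustration; substitute whatever your email and CRM tooling actually reports.

```python
# Hypothetical funnel stages and counts; replace with your own reporting data.
funnel = [
    ("emails_delivered", 1000),
    ("emails_opened", 700),
    ("links_clicked", 240),
    ("replies", 60),
    ("calls_booked", 8),
]

# Print the conversion rate between each consecutive pair of stages.
for (stage, count), (next_stage, next_count) in zip(funnel, funnel[1:]):
    rate = next_count / count if count else 0.0
    print(f"{stage} -> {next_stage}: {next_count}/{count} ({rate:.0%})")
```

Even this crude view makes the shape of the problem obvious: strong engagement up top, a cliff right before the call.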
It is easy to go over the top with analytics and tracking. Ideally you aren’t stalking prospects or drowning in data. Just as you carefully pick which metrics describe the success or failure of the campaign, each step should have carefully chosen indicators that tie into those metrics.
These metrics should be intent-focused. In our example, a prospect who reaches the end of the campaign and schedules a call without interacting with any of the provided resources signals a very different intent than one who engages with the content and communicates back with the sales team.
Going into the campaign you need to know what your ask is and what indicators suggest a prospect will accept that ask. In the example, the ask is a call. It’s not enough to just say it’s a call, though; it is an intro call to assess whether you’re right for the buyer (and they for you).
They may be ready for that kind of call if they are interacting with comparison content, and even more so if they’re writing back to ask about specific details of the product.
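One lightweight way to express this is to weight the signals you care about. The signal names and weights below are assumptions for illustration, not a prescription; the point is only that a reply asking about product details should count for far more than an email open.

```python
# Hypothetical intent signals and weights -- tune these to your own campaign.
INTENT_WEIGHTS = {
    "opened_email": 1,
    "clicked_resource": 3,
    "viewed_comparison_page": 5,
    "replied_with_question": 8,
}

def intent_score(events):
    """Sum the weights of the intent signals a prospect has triggered."""
    return sum(INTENT_WEIGHTS.get(event, 0) for event in events)

# A prospect who clicked a resource and wrote back with a question
print(intent_score(["opened_email", "clicked_resource", "replied_with_question"]))  # 12
```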
When a metric falls below an expected threshold, the campaign should be stopped and analyzed; you’re potentially sending costly leads down a broken funnel. The thresholds themselves are where things get ambiguous: like pipeline metrics, they are different for every team.
Say you’re measuring email engagement at ~70% across your emails, but only 5% of prospects end up on a call. That sounds like something’s broken, and it’s not something you want to discover after letting the campaign run for a couple of weeks. Many issues will not be so dramatic, but they can have similarly dramatic effects on the result.
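In code, the jidoka-style check can be as simple as comparing each stage’s rate to its floor and flagging anything that falls below it before more leads are pushed into the funnel. The threshold values here are made up; every team will set its own.

```python
# Assumed minimum acceptable rates -- every team will have different floors.
THRESHOLDS = {
    "open_rate": 0.40,
    "click_rate": 0.10,
    "call_rate": 0.15,  # of engaged prospects, how many book a call
}

def check_campaign(metrics):
    """Return the metrics that have fallen below their expected thresholds."""
    return [
        (name, metrics.get(name, 0.0), floor)
        for name, floor in THRESHOLDS.items()
        if metrics.get(name, 0.0) < floor
    ]

# Mirrors the example above: strong engagement, but calls are way under target.
alerts = check_campaign({"open_rate": 0.70, "click_rate": 0.35, "call_rate": 0.05})
for name, value, floor in alerts:
    print(f"STOP: {name} at {value:.0%} (expected at least {floor:.0%})")
```

The output here is a flag for a human, not an automated fix; the whole point is that the process itself tells you to stop and look.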
As you build this cycle of stopping, analyzing, and fixing into your workflow, you will learn a lot about the inefficiencies in your campaigns.
You should determine whether a campaign has been a success only after these issues have been addressed and the campaign has had time to settle.
To sum up, adding metrics throughout the campaign and watching them during execution provides a far more granular view of its success.
That granularity lets you tell whether a campaign is failing for tactical reasons or strategic ones, and being able to respond in real time can prevent a good campaign from falling flat.
In this article I’ve used campaigns as an example of what can suffer from this type of issue and how the principles behind Jidoka can help; however, those same principles can be used to validate your pipeline, keep an eye on your sales process (think handoffs), or monitor really any similar process.
It can seem as if instituting a system like this requires tracking prospects at a very granular level, but that is not the case. The campaign can be measured through aggregate metrics while retaining most of the benefits of per-user tracking.
You do lose the ability to say there’s a specific person who did X, Y, and Z and didn’t end up on a call, but the aggregate metrics will tell the story of X, Y, and Z having high engagement while call volume is low.
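Here is a sketch of what that aggregate-only measurement can look like: anonymous counters per step, fed by whatever events your tooling emits, with no per-prospect history kept. The event names are hypothetical.

```python
from collections import Counter

# Anonymous, per-step counters -- no prospect identifiers are stored.
step_counts = Counter()

def record(step):
    """Increment the aggregate counter for a campaign step."""
    step_counts[step] += 1

# Fed from your email/CRM webhooks, this tells the "X, Y, and Z have high
# engagement but call volume is low" story without tracking individuals.
for event in ["clicked_resource", "clicked_resource", "replied", "call_booked"]:
    record(event)

print(step_counts)  # Counter({'clicked_resource': 2, 'replied': 1, 'call_booked': 1})
```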