Digital Marketing Insights
September 11, 2012 | Kara Trivunovic
Here's an article I wrote for ClickZ:
Email marketing is often the unsung hero of marketing portfolio success. Instead of praise, email marketing gets a bad rap. Questions like "Is it dead?" "Does it work?" "Will it go on?" persist everywhere…except among email marketers. Go figure, right?
What is comical is that the naysayers' exploration of new channels is often funded by…revenue generated from email marketing. The biggest challenge facing most email marketers (and marketers in general) is attribution. There is no reliable way to attribute the proper revenue to email, mobile, social, apps - and the list continues to grow. So I ask, "Where have all the control groups gone?"
In an age where the mindset around email marketing is quantity over quality, the idea of a "control" group has all but disappeared - which is unfortunate, because it's one of the most telling methodologies to leverage in attribution modeling.
Where to Begin
First, you need to understand your entire marketing mix and map it out. There are going to be elements outside of your control, such as mass advertising opportunities that cannot be tracked to the individual, but it's important to understand that they exist and play a role in the communication of a message or brand to the customer. Categorize your efforts in groups of targeted and mass media to see what dials you have the opportunity to turn.
Finding the Audience
Once you have determined where you will be messaging, focus on the targeted efforts and begin adjusting the "who." It's imperative that you keep two things in mind. First, the "select" for the test must be random and representative of the entire file. No segmenting geographically or taking an alphabetical data set and slicing it after every 10,000 customers - it needs to be truly random. Second, it needs to be statistically valid: each cell of the test must meet the requirements for a statistically valid sample size. Often 10 percent works; depending on the size of your data set you may be able to get away with a smaller percentage - but better safe than sorry.
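The two rules above - truly random selection, then evenly sized cells - can be sketched in a few lines of Python. This is a minimal illustration, not a production tool; the function name, the 10 percent holdout default, and the idea of passing subscriber IDs as a plain list are all assumptions for the example.

```python
import random

def assign_cells(subscriber_ids, cell_names, holdout_pct=0.10, seed=42):
    """Randomly pull a holdout sample and split it evenly across test cells.

    The sample is drawn at random from the whole list - no geographic
    slicing, no alphabetical chunks - then dealt round-robin into cells.
    A fixed seed keeps the assignment reproducible for auditing.
    """
    rng = random.Random(seed)
    sample_size = int(len(subscriber_ids) * holdout_pct)
    sample = rng.sample(subscriber_ids, sample_size)  # uniform, no repeats
    cells = {name: [] for name in cell_names}
    for i, subscriber in enumerate(sample):
        cells[cell_names[i % len(cell_names)]].append(subscriber)
    return cells
```

Because every subscriber in the sample had an equal chance of landing in any cell, differences you later measure between cells can be credited to the treatment rather than to how the list was cut.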
Determine the Mix
Now that you have the "who," you need to figure out exactly what they're going to get. I suggest executing these types of tests over a period of time, especially with attribution. A one-time proof does not a point make. But consistently exposing the same audience to a similar marketing mix will truly help you determine what impact each channel has on the likelihood to convert.
Let's take an example here. Say you have a database of 1 million subscribers who have given proper permission to receive marketing messaging in all four of your targetable channels: direct mail, email, SMS, and push-app notifications. The hypothesis is that email is the strongest contributor to the portfolio. To keep it manageable, break the audience into four categories: email only, email + direct mail, email + direct mail + SMS, and all four channels. This allows you to measure the impact of adding each additional channel to the communication flow. Clearly the messaging needs to be on point and the timing considerations are important - but now you have something to measure.
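The four-category split above can be sketched as a single shuffle-and-deal. This is an illustrative sketch only - the category names, channel labels, and round-robin assignment are assumptions made for the example, not a prescribed implementation.

```python
import random

# Hypothetical channel "ladder" from the example: each category adds a channel.
CATEGORIES = {
    "email_only":   ["email"],
    "email_dm":     ["email", "direct_mail"],
    "email_dm_sms": ["email", "direct_mail", "sms"],
    "all_four":     ["email", "direct_mail", "sms", "push"],
}

def split_audience(subscriber_ids, seed=7):
    """Shuffle the full list once, then deal subscribers into the four
    categories round-robin so every cell is random and equal-sized."""
    rng = random.Random(seed)
    shuffled = subscriber_ids[:]          # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    names = list(CATEGORIES)
    assignment = {name: [] for name in names}
    for i, subscriber in enumerate(shuffled):
        assignment[names[i % len(names)]].append(subscriber)
    return assignment
```

With 1 million permissioned subscribers this yields four cells of roughly 250,000 each - large enough that each cell comfortably clears a statistically valid sample size.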
As you get everything ready to go, be sure each audience is held out in its assigned category for the duration of the test and that each category has its own set of tracking tags and categorizations so that you can easily determine who is part of which data set. While tests like this can get complex to manage, your diligence and attention to detail through the process will certainly simplify matters.
It's important to remain the master of the test - there are likely going to be scenarios where someone wants to "just get a message out to everyone via all channels," but these exceptions will certainly corrupt your test, and ultimately the ability to determine attribution and lift by channels.
Posted by: Kara Trivunovic at 9:56 AM
July 23, 2012 | Amanda Hinkle
Here's an article I wrote for BtoB:
Understanding how an email campaign influenced your audience is not always easy. There are many factors that could contribute to why your customers engage at any particular time. Let's take the example of a conference for which you are trying to get registrations.
Customer Y is considering registering for the conference. He may be using all kinds of information to make his decision—a colleague who is also going, a brochure you sent to his office, an email you sent or all three.
At some point, he makes the decision to attend, but hasn't gotten around to registering yet. Then your “Last Chance to Register” email lands in his inbox. It's got an incentive—a 10%-off registration code. He submits his RSVP. Do you attribute his registration to the last-chance email? Likely so.
However, what you don't know is that Customer Y was invited by one of the guest speakers and was planning to register anyway. By coincidence, the 10%-off registration code arrived moments before he was going to register; it didn't change his course of action, but it did save his company a few bucks.
This story illustrates the potential pitfalls of trying to accurately attribute the impact of each of your marketing tactics. Is there anything we can do to mitigate this problem?
One technique that email marketers can employ is to leverage distinct test and control groups within your audience to measure the impact of an email in the inbox. You may be familiar with control groups from clinical drug trials: one group receives the actual drug; the other (control) group receives a placebo.
In email marketing terms, a control group receives the same treatment that it would have received if no tests were being conducted. They receive the same number of emails, at the same pace with the same content as your last conference campaign. Conversely, the experimental group receives a campaign in which a new variable is introduced—different content, a different number of messages, different offers. Ideally you would change one variable at a time within each experimental group and, if your audience is large enough, you can run multiple tests across multiple groups simultaneously.
Insight can be drawn when you measure the response of the test group versus the control group. What was the impact of your email campaign within each group? Did the experimental groups perform differently than the control? Did more people register from the test group who received a 15%-off registration code than from the control group that received the standard 10%-off incentive?
With other variables held constant, test and control groups allow you to more accurately measure the impact of your email campaign with greater confidence.
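The comparison above boils down to two conversion rates and the relative lift between them. A minimal sketch, with the registration counts and audience sizes invented purely for illustration:

```python
def conversion_rate(conversions, audience_size):
    """Share of the cell that converted (e.g. registered)."""
    return conversions / audience_size

def lift(test_conversions, test_size, control_conversions, control_size):
    """Relative lift of the test cell over the control cell."""
    test_rate = conversion_rate(test_conversions, test_size)
    control_rate = conversion_rate(control_conversions, control_size)
    return (test_rate - control_rate) / control_rate

# Hypothetical numbers: 600 of 10,000 test recipients (15% off) registered,
# versus 500 of 10,000 control recipients (standard 10% off).
relative_lift = lift(600, 10_000, 500, 10_000)  # roughly 0.20, a 20% lift
```

Before acting on a lift like this, check that the difference is statistically significant for your cell sizes - a small gap between small cells can easily be noise.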