This article originally appeared in BtoB on July 16, 2012.
Understanding how an email campaign influenced your audience is not always easy. There are many factors that could contribute to why your customers engage at any particular time. Let's take the example of a conference for which you are trying to get registrations.
Customer Y is considering registering for the conference. He may be using all kinds of information to make his decision—a colleague who is also going, a brochure you sent to his office, an email you sent or all three.
At some point, he makes the decision to attend, but hasn't gotten around to registering yet. Then your “Last Chance to Register” email lands in his inbox. It's got an incentive—a 10%-off registration code. He submits his RSVP. Do you attribute his registration to the last-chance email? Likely so.
However, what you don't know is that Customer Y was invited by one of the guest speakers and was planning to register anyway. By coincidence, the 10%-off registration code arrived moments before he was going to register; it didn't change his course of action, but it did save his company a few bucks.
This story illustrates the pitfalls of trying to accurately attribute impact to each of your marketing tactics. Is there anything we can do to mitigate this problem?
One technique email marketers can employ is using distinct test and control groups within an audience to measure the impact of an email in the inbox. You may be familiar with control groups from clinical drug trials: one group receives the actual drug, while the other (control) group receives a placebo.
In email marketing terms, the control group receives the same treatment it would have received if no test were being conducted: the same number of emails, at the same pace, with the same content as your last conference campaign. Conversely, each experimental group receives a campaign in which a new variable is introduced: different content, a different number of messages, or different offers. Ideally, you would change one variable at a time within each experimental group, and, if your audience is large enough, you can run multiple tests across multiple groups simultaneously.
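The split described above can be sketched in a few lines of code. This is a minimal illustration, not a prescribed implementation: the subscriber list, the variant names, and the 50/50 control split are all hypothetical, and a real email platform would handle assignment for you.

```python
import random

def assign_groups(subscribers, test_variants, control_fraction=0.5, seed=42):
    """Randomly split an audience into one control group plus one
    group per experimental variant (one changed variable each)."""
    rng = random.Random(seed)  # fixed seed makes the split reproducible
    shuffled = subscribers[:]
    rng.shuffle(shuffled)
    n_control = int(len(shuffled) * control_fraction)
    groups = {"control": shuffled[:n_control]}
    # Divide the remainder evenly across the test variants.
    remainder = shuffled[n_control:]
    per_variant = len(remainder) // len(test_variants)
    for i, name in enumerate(test_variants):
        groups[name] = remainder[i * per_variant:(i + 1) * per_variant]
    return groups

# Hypothetical audience of 1,000 subscribers, two single-variable tests
audience = [f"subscriber_{i}" for i in range(1000)]
groups = assign_groups(audience, ["15_percent_off", "extra_reminder"])
```

Random assignment matters here: it is what makes the control group comparable to the test groups, so differences in response can be credited to the variable you changed rather than to who happened to be in which group.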
Insight comes from measuring the response of the test groups against that of the control group. What was the impact of your email campaign within each group? Did the experimental groups perform differently from the control? Did more people register from the test group that received a 15%-off registration code than from the control group that received the 10%-off incentive?
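One common way to answer that last question is to compare conversion rates and check whether the difference is statistically meaningful. The sketch below uses a standard two-proportion z-test; the registration counts are made up purely for illustration.

```python
from math import sqrt, erf

def lift_and_significance(ctrl_conv, ctrl_n, test_conv, test_n):
    """Compare test vs. control conversion rates.

    Returns the relative lift of the test group over the control
    and a two-sided p-value from a two-proportion z-test.
    """
    p_ctrl = ctrl_conv / ctrl_n
    p_test = test_conv / test_n
    lift = (p_test - p_ctrl) / p_ctrl
    # Pooled proportion and standard error for the z statistic
    p_pool = (ctrl_conv + test_conv) / (ctrl_n + test_n)
    se = sqrt(p_pool * (1 - p_pool) * (1 / ctrl_n + 1 / test_n))
    z = (p_test - p_ctrl) / se
    # Two-sided p-value via the normal CDF: Phi(z) = 0.5 * (1 + erf(z / sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return lift, p_value

# Hypothetical results: 10%-off control vs. 15%-off test, 500 recipients each
lift, p_value = lift_and_significance(ctrl_conv=40, ctrl_n=500,
                                      test_conv=60, test_n=500)
```

With these invented numbers, the test group converts at 12% versus 8% for the control, a 50% relative lift, and the p-value falls below the conventional 0.05 threshold. With smaller groups, the same lift might not be significant, which is why audience size matters when running multiple simultaneous tests.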
With other variables held constant, test and control groups let you measure the impact of your email campaign with greater accuracy and confidence.