Digital Marketing Insights

The limitations of your current email tests

Here's an article I wrote for iMedia Connection:

For your next email marketing test, go crazy.

The traditional mantra of A/B and multivariate testing says to change just one variable of your campaign (call-to-action placement, for example) while keeping everything else consistent across test cases. This approach probably derives from the scientific method we learned in grade school -- form a question, postulate a hypothesis, test the hypothesis, and analyze the results. And that's great! The method is aptly applied to help marketers understand how changes to an email campaign affect results.

But it's not the only way to test. For your next test, consider this: Don't change just one thing, change everything.

Create dramatically different creative. Make everything different -- the layout, the tone, the image placement, the number of sections, etc. (Note: You might want to stay within the confines of your brand image, but don't let that deter you from taking a risk. Your "brand" is more flexible than you believe.) Swing for the fences.

The problem with rigidly controlled tests is that they are linked to a hypothesis (explicitly or implicitly). That hypothesis comes from a preexisting understanding of what might cause a change in response. For example, it is "understood" that subject lines, banners, call-to-action placement, and content length are all variables that can affect response rate.

What if your understanding of what you can and should test is wrong, or incomplete? If you continue to follow "scientific" A/B testing methodology, you'll never branch out from your particular incomplete understanding. And there are insights to be had outside your understanding.

Dramatically different test cases force you out of the norm. They force you to consider variables you never would have considered (e.g., an email five pages long vs. one with one giant image and "alt text" only). The departure might bring to light variables or practices your previous understanding wouldn't allow you to consider.

Early in my career, I attempted just such a test with a client selling training courses. The newsletter-style email went out every week, like clockwork, with one variable slightly altered as a test each time. The response rate eventually plateaued at around 4 percent.

So we tried something drastically different. For the next send, 80 percent of the list received the traditional newsletter format and 20 percent got a new format: all text, no images, written as a personal letter. The letter format contained nothing but the letter itself, while the newsletter format had images and five different articles.

The newsletter's conversion rate stayed consistent at about 4 percent -- the letter got 12 percent. We kept sending letter-format emails, and their conversion rate eventually leveled off at around 9 percent, more than double the newsletter's steady rate. Had we followed the rigid testing method, we probably never would have tested the imageless letter format at all, and we never would have seen how well it resonated with our audience.

But how can you justify throwing traditional wisdom to the wind?

As I mentioned, the traditional testing approach derives from the scientific method. Scientists seek to understand the mechanisms underlying the natural world in finer and finer detail. That kind of understanding requires strict isolation of the variables involved, which is possible only in a carefully controlled lab.

Our subscribers don't live in a lab, and it is impossible for us to control all the variables that affect their response rates.

Email and cross-channel marketers are dealing with human beings -- a "system" composed of far too many variables for us to control. Traditional testing gives us an idea of what influences response rates, but that idea will always be incomplete, because our brains are limited and our subscribers are always changing.

So why not try something totally different to see if you're missing anything? You can always control potential backlash by making the test group only 5-20 percent of your total list. But don't be afraid to break out of the confines of your traditional tests -- you might find something surprising.
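If you want to try this with your own list, here is a minimal sketch (in Python) of the kind of random holdout split described above. The function name, the 20 percent default, and the example addresses are purely illustrative -- any ESP or analytics tool that can randomly sample your list does the same job.

    import random

    def split_for_test(subscribers, test_fraction=0.2, seed=42):
        # Copy the list so the original stays untouched.
        shuffled = list(subscribers)
        # A fixed seed keeps the split reproducible between runs.
        random.Random(seed).shuffle(shuffled)
        cutoff = int(len(shuffled) * test_fraction)
        test_group = shuffled[:cutoff]       # receives the radically different creative
        control_group = shuffled[cutoff:]    # receives the usual format
        return control_group, test_group

    # Hypothetical usage with a five-address list:
    control, test = split_for_test(
        ["a@example.com", "b@example.com", "c@example.com",
         "d@example.com", "e@example.com"],
        test_fraction=0.2)
    print(len(control), len(test))  # prints "4 1"

Keeping the split random matters: if you let people opt in, or segment by engagement, any difference in conversion rate between the two formats could simply reflect who ended up in each group rather than the creative itself.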

Posted by: Justin Williams at 9:55 AM
Categories: email, email marketing, email tests, testing
