Performance metrics are the secret sauce of digital marketing, allowing marketers to directly measure the results of their campaigns. But why settle for evaluating performance metrics after a marketing campaign has run? Why not use those metrics to your advantage — to evaluate, adjust and improve performance during a campaign?
That’s the promise of A/B testing: Sending two variants of an email to a portion of your list to determine which performs better. By following these best practices, you can use A/B testing to drive email opens and clicks and improve the results of your energy utility’s marketing campaigns.
What is an A/B test?
An A/B test, also known as a split test, is a digital marketing tactic that involves testing two versions of a campaign asset to determine which performs better. In some cases, the “winning” asset may be immediately deployed; in other cases, the asset may be further tested against another variation in an iterative process to optimize several different campaign elements.
A/B testing can be used to evaluate any type of digital marketing asset, but it is commonly associated with automated email marketing. In an email campaign, the test is sent to a small percentage of the list — say, 10% of the list receives version A and 10% receives version B. After a period of time, the better-performing version is determined and the email platform automatically deploys the “winner” to the remaining 80% of the list.
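Most email platforms handle this split automatically, but the mechanics are simple. Here is a minimal sketch in Python, assuming a plain list of addresses and the 10/10/80 proportions described above (the function name and list sizes are illustrative, not from any particular platform):

```python
import random

def split_for_ab_test(recipients, test_fraction=0.10, seed=42):
    """Randomly split a mailing list into version A, version B and a holdout group.

    With test_fraction=0.10, 10% of the list receives version A, 10% receives
    version B, and the remaining 80% is held back for the winning version.
    """
    shuffled = recipients[:]               # copy so the original list is untouched
    random.Random(seed).shuffle(shuffled)  # fixed seed keeps the split reproducible

    test_size = int(len(shuffled) * test_fraction)
    group_a = shuffled[:test_size]
    group_b = shuffled[test_size:2 * test_size]
    holdout = shuffled[2 * test_size:]     # receives the winner after the test window
    return group_a, group_b, holdout

# Example: a 10,000-address list yields two 1,000-address test groups and an 8,000-address holdout.
a, b, rest = split_for_ab_test([f"user{i}@example.com" for i in range(10_000)])
print(len(a), len(b), len(rest))  # 1000 1000 8000
```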
What elements of an email campaign can be tested?
Nearly any aspect of an email can be tested, but it is critical to test only one element at a time. If there is more than one difference between version A and version B, it will be impossible to determine why one performs better than the other.
Email campaigns commonly A/B test one of these elements:
- Subject line: What message prompts the higher open rate?
- Sender: Should the email come from a company, person or other brand name?
- Call-to-action: Which button color, placement or active verb drives more clicks?
- Headline: Which title pulls recipients into the message and results in conversions?
- Imagery: Do recipients respond to a photo, illustration or particular design treatment?
What are the benefits of testing a subject line?
The subject line is the most common element tested in an email campaign. It is the single biggest driver of email opens, and if recipients don’t open your emails, your campaign has no chance of success.
A subject line test lets you see which message resonates better with your audience so you can optimize results. Questline Digital’s performance metrics show that emails with A/B-tested subject lines achieve 7% higher open rates.
What are the benefits of testing a call-to-action?
While email opens are obviously a critical first step, your campaign’s call-to-action is what drives results. Without clicks on a CTA button or link, your email won’t achieve its conversion goals. A/B testing can optimize those clicks.
Emails with A/B-tested call-to-action placements improved click-through rates by 16%, according to Questline Digital performance metrics. Depending on your message’s design, we recommend testing the size, color or placement of a CTA button and the text used in the call-to-action.
What A/B test sample size works best?
There isn’t a hard-and-fast rule for how big your A/B test sample should be. The variables to consider include the total size of your list and the expected response rate. In short, you want to send the test to enough recipients that the results are statistically valid and arrive in a timely fashion. Accounting for these factors, sending the test to between 10% and 20% of your list is usually sufficient.
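If you want a rough number rather than a rule of thumb, a standard two-proportion sample-size calculation can be sketched in a few lines of Python. The baseline and expected open rates below are assumptions for illustration; your email platform or an online calculator should give comparable figures:

```python
import math

def sample_size_per_variant(p_a, p_b, z_alpha=1.96, z_power=0.8416):
    """Approximate recipients needed per variant to reliably detect a lift from p_a to p_b.

    Uses the standard two-proportion formula with 95% confidence (z_alpha = 1.96)
    and 80% power (z_power = 0.8416).
    """
    p_bar = (p_a + p_b) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_power * math.sqrt(p_a * (1 - p_a) + p_b * (1 - p_b))) ** 2
    return math.ceil(numerator / (p_a - p_b) ** 2)

# Example: detecting a lift from a 20% to a 26% open rate needs roughly 770 recipients
# per variant, so a 10% test split per version is plenty on a list of 10,000 or more.
print(sample_size_per_variant(0.20, 0.26))
```

Note that the smaller the difference you hope to detect, the more recipients each variant needs.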
How long should you run an A/B test?
As with list size, there isn’t an easy answer to how long a test should run. For a large list, 24 hours is usually sufficient. If you have a small list (and time to wait), running an A/B test for a full week has the advantage of eliminating fluctuations caused by the time or day you send.
How do you determine the winner of an A/B test?
The variable that a test measures is determined by the element you are testing and your campaign goals — typically open rate, click-through rate or conversion rate. These parameters are defined when setting up an automated A/B test; for example, the “winner” is the subject line with the higher open rate.
When testing the following elements of an email campaign, these are the metrics typically evaluated to determine a winner:
- Subject line: Open rate or click-to-open rate
- Sender: Open rate or click-to-open rate
- Call-to-action: Click-through rate or conversion rate
- Headline: Click-through rate or conversion rate
- Imagery: Click-through rate or conversion rate
To rule out random chance and measurement error, it’s important to check the statistical significance of the test. A good rule of thumb is to look for 95% confidence between the variants; depending on the sample size, this typically translates to a 25% to 35% relative difference in performance metrics.
For example, if subject line A earns a 20% open rate and subject line B has a 22% open rate, you may not be able to determine with statistical significance that the subject line is the cause of version B’s performance. But if subject line A has an open rate of 20% and subject line B drives an open rate of 26% — an increase of 30% — you can say with statistical significance that subject line B is the winner of your A/B test.
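To make that example concrete, here is a hedged sketch of a two-proportion z-test in Python. The 1,000-recipient group sizes are an assumption for illustration; most email platforms run an equivalent check automatically:

```python
import math
from statistics import NormalDist

def two_proportion_z_test(opens_a, sends_a, opens_b, sends_b):
    """Return the z-statistic and two-sided p-value for a difference in open rates."""
    p_a, p_b = opens_a / sends_a, opens_b / sends_b
    p_pool = (opens_a + opens_b) / (sends_a + sends_b)   # pooled open rate under the null hypothesis
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / sends_a + 1 / sends_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))         # two-sided test
    return z, p_value

# Example from above: 20% vs. 26% open rates with 1,000 test recipients per version.
z, p = two_proportion_z_test(opens_a=200, sends_a=1000, opens_b=260, sends_b=1000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05, so version B wins with better than 95% confidence
```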
Reach your marketing goals with A/B testing
Don’t just rely on digital performance metrics to analyze marketing campaigns after the fact. Use performance metrics to your advantage to optimize results during a campaign. With A/B testing, your email campaigns will deploy higher-performing subject lines, CTAs, messaging and content, boosting results and helping your energy utility reach its marketing goals.