You’ve spun up an impressive email marketing engine, mastering both the basics of email marketing and intermediate techniques. Now it’s time to go even bigger with advanced testing.

The A/Bs of Testing

Testing your emails is one of the most important steps in email marketing. Why? When practiced correctly, careful testing often drives dramatically improved deliverability, open rates, and click-through rates.

One of the simplest forms of testing is known as A/B or split testing, and its simplicity drives its power. In an A/B test, you create two variations of one email—differing in only one particular respect—and send them to a small percentage of your total recipients. Half of the test group is sent Version A. The other half receives Version B. The version that earns the most opens or clicks wins, and that winning email is then sent to your wider body of remaining subscribers.

Elements to Test

Which elements should you test? There is a range of possibilities—the crucial thing to remember is that you must test only one variable at a time. Some of the most common elements for testing are the “from” name, email address, subject line, and pre-header (a message placed at the top of an email, complementing your subject line and often visible from the inbox). These are the elements that impact whether recipients open the email in the first place.

Other useful elements to test include:

  • Timing—both day of the week and time of day
  • Design—for desktop, tablet, and mobile
  • Copy
  • Calls-to-action
  • Images
  • Offers
  • Social media links
  • Footer
  • Landing pages

Advanced Email Marketing A/B Testing

An A/B test is successful when you find a statistically significant variance in the performance of your emails. Use this insight to power the remainder of that email campaign with the element that resonated most with your audience, while using the lessons learned about these elements in your future emails.

There will be times, however, when an A/B test doesn’t produce a statistically significant difference. This means the test couldn’t detect a meaningful gap between the two versions—as far as your audience is concerned, both performed comparably. Don’t worry if this happens. You have still learned something and can now design a new A/B test that compares a different element of your emails.
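To make “statistically significant” concrete, here is an illustrative sketch (not part of the original article’s workflow) of a two-proportion z-test comparing the open rates of two email versions. The function name and all numbers below are hypothetical; a p-value below 0.05 is a common, though not universal, threshold for significance.

```python
import math

def two_proportion_z_test(opens_a, sent_a, opens_b, sent_b):
    """Return the z-statistic and two-sided p-value for the difference
    between two open rates, using the normal approximation."""
    p_a = opens_a / sent_a
    p_b = opens_b / sent_b
    # Pooled open rate under the null hypothesis of "no real difference"
    pooled = (opens_a + opens_b) / (sent_a + sent_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical test: 500 recipients per variation
z, p = two_proportion_z_test(opens_a=120, sent_a=500, opens_b=90, sent_b=500)
print(f"z = {z:.2f}, p = {p:.3f}")
```

If the p-value stays above your threshold, treat the test as inconclusive rather than as proof that both versions are equally good.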

SEE ALSO: How to Grow Your Email Contacts Organically

Six Rules for A/B Testing Emails

1. Start small.

If you’re new to email testing, start with simpler elements such as the “from” name and email address, or the subject line. These elements don’t require much time or creative work to generate different versions, but can provide valuable insights into your audience that may be useful when testing more complex elements.

2. Test one element at a time, and send on the same day of the week and time of day.

In order to know what caused changes in your results, you need to know which element is responsible for the changes. If you test more than one element at a time, you won’t know whether the change is caused by one of the elements, the combination of the elements, or none of them.

Be sure to send emails being tested on the same day of the week and time of day, as these can influence performance. If you test which day of the week or time of day is best to send emails (very valuable knowledge to gain about your audience), again, test one element at a time and keep all other elements unchanged.

3. Test representative samples of your list.

Instead of performing tests by splitting your entire list into segments, use representative sample segments to determine which email performs best. Then, send the “winning” email to the majority of your list. The sample segments should be small, yet large enough for the results to apply to your full list. Aim for a sample size of approximately 10%-15% of your list. Choose the contacts in your sample segments randomly so results aren’t biased.
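The sampling described above—draw roughly 10% of the list at random, split it evenly into groups A and B, and hold the rest back for the winner—can be sketched as follows. This is a minimal illustration; the function name and the example contact list are hypothetical.

```python
import random

def split_for_ab_test(contacts, sample_fraction=0.10, seed=None):
    """Randomly draw a test sample (default 10% of the list) and split
    it evenly into groups A and B; the remainder waits for the winner."""
    rng = random.Random(seed)
    shuffled = contacts[:]          # copy so the original list is untouched
    rng.shuffle(shuffled)           # random order removes selection bias
    sample_size = max(2, int(len(shuffled) * sample_fraction))
    sample = shuffled[:sample_size]
    half = sample_size // 2
    group_a, group_b = sample[:half], sample[half:]
    remainder = shuffled[sample_size:]  # receives the winning version later
    return group_a, group_b, remainder

# Hypothetical list of 1,000 subscriber addresses
contacts = [f"subscriber{i}@example.com" for i in range(1000)]
a, b, rest = split_for_ab_test(contacts, sample_fraction=0.10, seed=42)
print(len(a), len(b), len(rest))  # 50 50 900
```

Shuffling before slicing is what keeps the sample representative: every contact has an equal chance of landing in either test group.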

4. Give tests time before analyzing results.

Wait for responses to your email to peak and begin to decline before drawing conclusions. Depending on the timing of your email, the behaviors of your audience, and other factors, results can vary dramatically within the first 24-48 hours.

Some elements of your email can have a significant impact on when results will peak. For example, if your email contains an offer that can only be redeemed within 24 hours, it’s likely that most email recipients will act immediately or shortly after receiving it. However, if your email contains an offer that can be redeemed within one month, email recipients are likely to take longer to act.

5. Take all results into account, not just specific metrics.

It’s important not to focus solely on the metrics you want to improve, such as open rate. If your open rate increased but spam complaints and unsubscribes also increased, that is important to note and correct in future emails.

6. Keep track of tests.

Keep a record of the elements tested and the results. Refer to previous tests and identify trends. Categorizing your tests makes it easy to search or filter by type; for example, subject line could be one category. A spreadsheet is a good tool for this, and some email marketing tools also enable you to keep records of tests.
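If a spreadsheet feels like overkill, even a simple CSV log works. The sketch below is one hypothetical way to structure such a record—the column names, file name, and example test are all illustrative assumptions, not a prescribed format.

```python
import csv
import os
from datetime import date

# Hypothetical column layout for the test log
LOG_FIELDS = ["date", "category", "version_a", "version_b",
              "open_rate_a", "open_rate_b", "winner"]

def log_test(path, category, version_a, version_b,
             open_rate_a, open_rate_b, winner):
    """Append one A/B test result to a CSV log, writing the header
    row if the file doesn't exist yet."""
    write_header = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=LOG_FIELDS)
        if write_header:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "category": category,
            "version_a": version_a,
            "version_b": version_b,
            "open_rate_a": open_rate_a,
            "open_rate_b": open_rate_b,
            "winner": winner,
        })

# Hypothetical subject-line test
log_test("ab_tests.csv", "subject line",
         "Free guide inside", "Your guide is ready",
         0.24, 0.18, "A")
```

Because every row carries a category, you can later filter the file (in a spreadsheet or with a few lines of code) to spot trends across all of your subject-line tests at once.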

There are a number of benchmarks out there about the best time of day to send emails or the optimal length of copy. But A/B or split testing provides valuable insights into your firm’s contacts and should be an integral part of any strong email marketing strategy.

To learn more about how to improve your firm’s email marketing campaigns for higher impact, check out the free Email Marketing Guide for Professional Services Firms.

On Twitter or LinkedIn? Follow us @HingeMarketing and join us on LinkedIn.