We had a flood of great questions come in at the end of our webcast, Stellar Email Marketing: A/B Testing and CTAs,
that our presenters Carly Brantz and Jillian Wohlfarth didn't have time to answer. If you missed it, Jillian answered the CTA-focused questions in her latest post,
and to wrap everything up, Carly has answered the A/B testing questions for us below!
Q. What’s a baseline for sample size, i.e., it should be a minimum of "x" knowing that the current customer base is small?
When you are running an A/B test on your website or on a landing page, you have the advantage of time: you simply wait until enough people have visited for the results to reach statistical significance, and then you can call the test complete. In email, however, you are tied to your list size, how many people you plan to send to, and your baseline conversion or click rate, all of which determine whether the experiment is worthwhile. The key to selecting your sample size is to send your test message to the smallest portion of your list that will still reach statistical significance. Then, you send the winning email variation to the rest of your list. There are many online sample size calculators that can help you determine the specific sample size you need.
At the end of the day, I recommend working with what you’ve got. If you have a smaller list, you will need to send to a higher percentage of that list. For those with very small lists, you can simply split it up evenly and monitor the success of each group. Remember to outline your goals and what specific metrics you are driving towards before you begin each test.
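For readers curious about what those calculators do under the hood, here is a rough sketch in Python of the standard two-proportion sample-size formula; the baseline and expected click rates below are made-up numbers for illustration, not figures from the webcast:

```python
import math
from statistics import NormalDist

def sample_size_per_variation(baseline_rate, expected_rate,
                              alpha=0.05, power=0.80):
    """Approximate subscribers needed per variation to detect a change
    from baseline_rate to expected_rate (two-sided test)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha=0.05
    z_power = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    p_bar = (baseline_rate + expected_rate) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_power * math.sqrt(baseline_rate * (1 - baseline_rate)
                                       + expected_rate * (1 - expected_rate))) ** 2
    return math.ceil(numerator / (baseline_rate - expected_rate) ** 2)

# Made-up example: detecting a lift from a 3% to a 4% click rate
# takes roughly 5,300 subscribers per variation.
print(sample_size_per_variation(0.03, 0.04))
```

Notice how quickly the required sample grows as the lift you want to detect shrinks, which is why small lists can only reliably detect large differences.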
Q. We often email smaller lists—under 1,000 people. Do you have recommendations for A/B testing in those cases?
Again, I would recommend plugging your numbers into one of the calculators above. For a list of 1,000, your typical delivery and conversion rates determine how many people should be allocated to each variation in the A/B test.
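To make that concrete, here is a rough sketch using the same standard two-proportion formula those calculators rely on, turned around to ask: given a 500/500 split of a 1,000-person list, what is the smallest winning rate the test can reliably detect? The 3% baseline click rate is a made-up number for illustration:

```python
import math
from statistics import NormalDist

def minimum_detectable_rate(baseline_rate, n_per_variation,
                            alpha=0.05, power=0.80):
    """Smallest variation rate a two-variation test of this size can
    reliably distinguish from baseline_rate (coarse grid search)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    rate = baseline_rate
    while rate < 1.0:
        rate += 0.001
        p_bar = (baseline_rate + rate) / 2
        needed = ((z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                   + z_power * math.sqrt(baseline_rate * (1 - baseline_rate)
                                         + rate * (1 - rate))) ** 2
                  / (rate - baseline_rate) ** 2)
        if needed <= n_per_variation:
            return rate
    return None

# Made-up example: with 500 subscribers per variation and a 3% baseline
# click rate, only lifts to roughly a 7% rate or higher are detectable.
print(round(minimum_detectable_rate(0.03, 500), 3))
```

In other words, a 1,000-person list can still settle big questions (a dramatically different subject line or offer), but subtle tweaks will usually be lost in the noise.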
Q. Do you find any value in doing A/B/C/D-type testing, or do you just stick to two variations?
Absolutely! Especially with design, I prefer to do more than two variations. However, you need to make sure that you are sending each variation to a large enough sample size to get statistically significant results. There is nothing worse than spending time and resources on a test that doesn't produce results you can trust.
Q. How does your team compile or organize all of the findings you've collected through your various tests?
For our team, it really depends on the type of email we are testing. For larger blasts, we include any findings in our recap report, which summarizes engagement and conversion results. For tests within our many nurture campaigns, a separate spreadsheet outlines all the tests we are running and the winning variations. Once a quarter, we meet as a team to share results from all channels: online advertising, content, landing pages, and so on. While a result from a landing page or a banner ad may not carry over directly to email, there is always something to learn from every test we run.
Q. You said to test the same variable multiple times...did you mean to test the SAME email multiple times or test that variable on future emails?
I was referring to testing the variables in future emails. Because your audience and business change over time, what once resonated with your customer base may not win over new offers in the future. For example, our ideal customer profile has changed over the last few years: where we once primarily targeted developers, we now target both developers and marketers. Therefore, the design, CTA, content, etc. that we tested a few years back to determine winners with our developer audience are most likely not the variations that would win today. Just when you think you are done testing…time to test again!