Conversion rate optimization (CRO) is the process of improving digital experiences and encouraging users to take action. Common CRO tactics include A/B testing and user behavior analyses across web pages, emails, and digital advertising campaigns.

CRO is a way to improve customer acquisition in a digital world with increased automation and minimal to no human interaction.

Remember (or imagine) the days before the internet. You’d walk into a store, and a salesperson might give you information about the products you were interested in, offer discounts, point out a super easy return policy, and encourage you to make a purchase today instead of waiting until tomorrow.

Even just a salesperson’s mannerisms, warmth, and smile could affect your purchasing decision.

This is everything CRO attempts to do!

When people shop online, CRO acts as the digital salesperson who focuses on constant improvement, trying out new tactics to provide the right information, at the right time, to the right people, and encouraging an action like “try it on,” “sign up for a newsletter,” or “buy it now.”

This post covers everything you need to know about CRO including:

  1. What is a conversion rate?
  2. Why should I care about CRO?
  3. What is CRO not?
  4. How does CRO apply to email?
  5. A/B testing strategies
  6. Statistical significance and sample size forecasting
  7. Not reaching statistical significance on any of your tests? Here’s what to do
  8. 11 patterns and best practices for CRO
  9. Now what?

What is a conversion rate?

To break it down into its elements, a conversion rate is simply:

Conversion rate = (number of people who take the desired action) ÷ (total number of people who could have acted)

The goal of CRO is to increase the numerator of that equation (number of people acting). While CRO is somewhat analytical, and looks at these numbers factually, it’s equally important to understand the psychology behind human behavior and decision making. A few of these psychological elements are discussed in the best practices section below.
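
A quick sketch of that arithmetic with hypothetical numbers; the uplift helper expresses the relative change between two rates, a figure that also shows up later as the “minimum detectable effect”:

```python
def conversion_rate(conversions: int, visitors: int) -> float:
    """Share of visitors who took the desired action."""
    return conversions / visitors

def relative_uplift(baseline: float, variant: float) -> float:
    """Relative change of the variant rate over the baseline rate."""
    return (variant - baseline) / baseline

# Hypothetical numbers: 17 of 30 visitors convert.
rate = conversion_rate(17, 30)        # ~0.567, i.e. about 57%
uplift = relative_uplift(0.50, 0.57)  # ~0.14, i.e. a 14% relative lift
```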

Why should I care about CRO?

Frankly, businesses can’t survive nowadays if they’re not thinking about CRO. Most digital marketers already think about it without even knowing it. Having best practices in your pocket will give you better success at what you’re already trying to instinctively do.

Harvey Mackay’s famous line, often cited in sales articles, applies here: “Everyone is in sales. To me, job titles don’t matter. Everyone has to be thinking about sales. It’s the only way any company can stay in business.”

According to Statista, the e-commerce share of total retail sales has grown steadily from 2013 to today, currently sitting at around 11%. Even for in-store purchases, studies show that 70–80% of people research a company online before visiting the business or making a purchase.

You don’t need to see these stats to know we live in a digital age. Optimizing your digital assets to improve your sales is a no-brainer, and that’s exactly what CRO is. Trying to get more people to call your company? Optimizing for that phone number placement on your site is part of CRO.

Another reason to care about CRO is if you’re spending money on advertising. Why would you spend resources to bring users into your store or onto your website, but then leave them completely to their own devices? Focusing more on turning those visitors into conversions will bring you greater ROI for your marketing campaigns.

Long story short, CRO is essential, and having a structured plan for it will help.

What is CRO not?

There’s a lot of bad advice out there about how to do CRO. Some people might consider ways to “trick” the user into clicking or purchasing. But dishonesty or trickery has no place in conversion rate optimization. These are what we call dark patterns. Trickery, although it may work in the short term, leads to bad reviews, bad reputation, and high churn.

An example might be promising free information in exchange for an email address, then hounding the subscriber with emails demanding a credit card number and a purchase.

You’ll also see examples across the internet of 50%+ lifts in conversion rates simply from changing a button color. And while Google is famous for having tested 41 shades of blue, it is not reasonable or realistic to expect users to behave drastically differently because of such a minor change.

One shade of blue versus another is not going to make or break you. For a company like Google, with millions of visitors a day, even a 0.01% uplift can have a significant revenue impact. For the rest of us, though, CRO doesn’t focus on such minutiae. To have a big impact, you have to make big, thematic changes.

How does CRO apply to email?

CRO can be used successfully across all digital channels, from social advertising to website pages to email campaigns. The metrics of focus and the strategies might change slightly, but the principles are the same.

Every email has a purpose, and it’s important to optimize the design layout and copy toward that purpose. The primary areas to optimize for email often include:

  • Subject line – One of the first things recipients see
  • Preheader and full body email copy – Not just what you say but how much you say
  • CTA buttons – color, copy, size, location, etc.

Here’s an example guide to email A/B testing and how to optimize CTAs in your email campaigns. And here’s your top email A/B testing questions, answered.

With email, there are a few specific optimizations that don’t apply to other digital channels. For example, the time of day to send the email is an important consideration. Also, making sure your emails land in the inbox and your sender reputation isn’t damaged by sending to false email addresses is another important email-specific consideration. Get tips on how to optimize your sending with email validation.

A/B testing strategies

How do we go about testing across other digital channels? CRO often uses a method called A/B testing. This is where you create two versions of a webpage or email, and expose a random half of your audience to one versus the other. It takes the opinion out of design and lets the numbers speak for themselves. A/B testing software tools can help you launch these experiments and record the results. Many email programs also offer A/B testing features.
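
Under the hood, many tools split traffic deterministically rather than flipping a literal coin on each visit. Here’s a minimal sketch of that idea (the function and experiment names are hypothetical): hashing the user ID keeps the assignment stable, so a returning user always sees the same variant.

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "homepage-hero") -> str:
    """Deterministically assign a user to variant A or B.

    Hashing the (experiment, user) pair rather than calling a random
    number generator means the same user lands in the same bucket on
    every visit, while the split stays roughly 50/50 overall.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# The same user is always assigned the same variant.
assert assign_variant("user-42") == assign_variant("user-42")
```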

What works for one industry or company may not work for yours. For example, for lower-priced products, adding a .99 or .95 onto the price could keep conversion rates the same but generate more revenue. On the flip side, for higher-priced products, this might be seen as cheap, and whole-number prices often encourage more conversions. It’s always important to test against your own audience to understand what will work for you.

To run your own tests, start with a well-formed hypothesis. This is key to your CRO strategy. Unless you want to keep throwing spaghetti at the wall to see what sticks, you have to go about things more systematically:

Research and ideate

Research your users through actual user interviews or through user behavior analyses in Hotjar or Google Analytics to help you form hypotheses. Perhaps you find that users are not understanding your pricing page because there’s too much information. You could hypothesize that removing some information to make the design more simple and straightforward will improve conversion rates.

Be sure your hypothesis centers around a larger theory, and is testable and measurable. If you have a well developed hypothesis, even a losing test can provide you with winning insights. Instead of having the result “design B failed against design A,” you can conclude “less information and simplicity did not encourage more conversions.” This result is much easier to apply across other pages, and can help you plan your next experiment.

Prioritize and plan

Once you’ve gathered a list of hypotheses, estimating the revenue impact of each change and its likelihood of success can help you prioritize which tests to launch first. Run only one test at a time on a given page with a given audience; otherwise you risk confounding variables.

For a good prioritization framework, you can use the RICE scoring model. Classify effort and impact in buckets – say, effort from 1 (small) to 5 (extra large). You can also create a list of desirable outcomes (solves a problem, improves overall signups, increases revenue per user, gives the user a personalized experience, helps attract new users, etc.). Impact is then scored by how many of these outcomes the test will influence (1–5).

Your final RICE scores (Reach × Impact × Confidence ÷ Effort), along with your own intuition, can give you a roadmap for what to test first.
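
As a sketch, that scoring might look like the following (the backlog items, reach figures, and scores are entirely hypothetical):

```python
def rice_score(reach: int, impact: int, confidence: float, effort: int) -> float:
    """RICE = (Reach x Impact x Confidence) / Effort."""
    return reach * impact * confidence / effort

# Hypothetical backlog: (idea, monthly reach, impact 1-5, confidence 0-1, effort 1-5)
backlog = [
    ("Simplify pricing page", 8000, 4, 0.8, 3),
    ("Rewrite hero headline", 12000, 2, 0.5, 1),
    ("Add testimonials section", 5000, 3, 0.7, 2),
]

# Highest RICE score first: a starting point for the testing roadmap.
ranked = sorted(backlog, key=lambda t: rice_score(*t[1:]), reverse=True)
for name, *scores in ranked:
    print(f"{name}: {rice_score(*scores):,.0f}")
```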

Test, report, and iterate

There’s a joke that A/B testing stands for “always be testing,” and there is some truth to that. Using the losing or winning results to ideate on another variation could eventually result in even bigger wins. Don’t settle for where you are; there’s always opportunity for improvement.

Tracking your testing activities, screenshots of what was tested, and the numerical outcomes will help you and your team base long-term plans on historical patterns. Keeping a spreadsheet for this purpose might seem onerous at first, but it will save you a lot of trouble when a new team member comes on board and wants to know what was tested and what worked before.

If you follow the steps from researching, to reporting and iterating, you’ll be in a really great place with your testing plan and strategy. But, how do you ultimately determine winning experiments? How long should you run a test to see the results?

Statistical significance and sample size forecasting

You may have heard about the concept of statistical significance, but how important is this for your testing efforts?

Consider a scenario where you flip a coin 30 times and it comes up heads 17 times. Would you think the coin was weighted toward heads, or would you chalk it up to coincidence?

Statistical significance is the likelihood that the result you saw during a test reflects a true difference rather than random chance. We don’t want to spend company resources building out designs that have no positive impact, so it’s important to only make changes when statistical significance is high.

The good news is, you don’t need to understand exactly how to calculate statistical significance and p-values; tools like Optimizely or an online “statistical significance calculator” can do a rough estimate for you. However, you do need to understand what statistical significance means.

Neil Patel’s significance calculator puts the above coin-flip scenario at a 70% confidence level, assuming the comparison coin was fair and converted to heads at a 50% rate (15 out of 30). In other words, the calculator is 70% confident that the second coin is unfairly weighted toward heads, rather than the result coming about by chance.

The industry standard significance level is 95%, meaning the fairness of the coin cannot yet be determined without further flips. A company willing to take more risk, and act on information that may be due to random chance, will sometimes use a 90% level instead.
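
To make the coin example concrete, here is a rough sketch of one common way a significance calculator estimates confidence: a one-sided, pooled two-proportion z-test. (This is an illustration, not the exact method any particular calculator uses; the figure reported depends on the test chosen.)

```python
from math import sqrt
from statistics import NormalDist

def confidence_level(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """One-sided confidence that B's true rate exceeds A's,
    via a pooled two-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return NormalDist().cdf(z)

# Fair coin (15/30 heads) vs. the suspect coin (17/30 heads):
print(round(confidence_level(15, 30, 17, 30), 2))  # ≈ 0.70
```

Run on the coin-flip numbers, this test lands right around the 70% confidence level cited above: suggestive, but well short of the 90–95% bar.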

So, how many more flips would you need to determine the fairness of the coin? Sample size calculators like this one can help you determine that. The baseline conversion rate is what normally happens without tampering.

In this case, a normal coin has a 50% chance of landing on heads. The fact that we’re seeing a 57% rate of heads (17/30) means the minimum detectable effect is 14% (the relative change from 50 to 57).

At a confidence level of 90%, we would need at least 440 flips of this coin, compared against 440 flips of a fair coin, to realistically determine whether it is unfairly weighted. If the percentage differential you expect to see (14%) changes, so will the sample size needed. One note of caution: allow for a couple of weeks of testing even if significance is reached right away, as business cycles can really change outcomes over the course of a week.
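
Here’s a sketch of the standard two-proportion sample-size formula behind such calculators. The exact result depends on assumptions like one- vs. two-sided testing and the power level, which is why different calculators report somewhat different numbers:

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variant(baseline: float, relative_mde: float,
                            confidence: float = 0.90,
                            power: float = 0.80) -> int:
    """Visitors needed per variant to detect a relative lift over the
    baseline rate, using a one-sided test at the given confidence/power."""
    p1 = baseline
    p2 = baseline * (1 + relative_mde)
    z_alpha = NormalDist().inv_cdf(confidence)  # one-sided significance
    z_beta = NormalDist().inv_cdf(power)
    numerator = (z_alpha * sqrt(2 * p1 * (1 - p1))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# 50% baseline, 14% relative MDE, 90% confidence:
print(sample_size_per_variant(0.50, 0.14))
```

Under these particular assumptions (one-sided test, 80% power), the coin example works out to roughly 460 flips per coin, the same ballpark as the ~440 cited above. Shrink the minimum detectable effect and the required sample size grows quickly.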

I use this example of coin flips because we all know that most coins are fair and that seeing just a few more heads than tails doesn’t necessarily mean the coin is weighted towards heads.

Keep this example front of mind when you’re reviewing your experiments. Many marketers will see a positive uplift in version B with a 70% confidence level and call that a great success.

This is dangerous: if you’re acting on information that is likely due to chance (like the coin-flip example at a 70% confidence level), you’re wasting time and resources building out an experience that may have no positive effect, and could even have a negative one.

What to do if you’re not reaching statistical significance

This is an age-old problem in CRO. If we need to wait for a 90%, 95%, or 99% confidence level, what happens if we perpetually sit around 40–70% confidence? How can we ever determine a real winning result?

First, decide when it makes sense to run a test. If the sample size calculator indicates that you need 40,000 visitors to see statistical significance on your experiment and the page being tested only gets around 2,000 visitors a month, then it does not make sense to run a test. Are you going to keep the test running for 20 months? Probably not.

Here are 5 things you can do in that situation:

  1. Just Implement – If your test is a no-brainer, just implement it. If you’re following best practice patterns (see below for examples), or if your users have clearly indicated in qualitative interviews that this change will improve their experience, just make the change. You don’t always need to test everything if it’s common sense. CRO can be employed directly and there doesn’t always need to be an experiment. Just be sure to monitor the conversion stats after implementation to make sure there isn’t a drastic decline.
  2. Think Bigger – If you’re testing something that will not have a significant impact on user behavior, think of a larger thematic change to test, and abandon the small idea. Rather than testing one word in the headline, think about the tone of the entire page and change all copy to reflect that new tone. Instead of testing one image change, test the idea of product imagery versus people-focused imagery across the whole page. Bigger thematic changes like these will have more impact than modifying one word or one image, and their results can be easier to track and notice.
  3. Test Across Multiple Pages – If the test is really worthwhile, but the traffic is just too small, see if you can test the same hypothesis across multiple pages. If you can test the same idea on every page of your website, that will boost the traffic to this experiment and get you statistically significant results faster. (Just add the version A and version B numbers together across your multiple iterations.)
  4. Use Micro-Conversions – Try using micro-conversions as your success metric rather than macro-conversions. If you think a change should be increasing your signup rates, but signup rates are too low to observe statistical significance, look at metrics that are highly correlated with signups. CTA button clicks and signup pageviews are the steps before a “signup” actually occurs. There is always some drop-off along the acquisition funnel, but if you can prove with statistical significance that your change is having a positive effect on CTA button clicks, then you can extrapolate that it will potentially produce an uplift in signups eventually. This isn’t 100% scientific, since an increase in one funnel metric does not always lead to an increase in the rest of the funnel, but it’s better than nothing.
  5. Move On – Finally, if your users are just indifferent, move on. It could be that users absolutely don’t care one way or the other whether a certain content piece, like a video, is included on the page. Maybe it doesn’t change their behavior whether it’s there or not. That’s when you get to decide which version to implement. Then, just move on to testing other ideas that will have a larger influence on your users’ behavior, like many of the ones listed below.

11 CRO patterns and best practice tips

Even though a certain pattern may work well for other companies, that doesn’t mean it will work for your audience. Testing these best practices against your own audience is ideal. But when all else fails (like the statistical significance problem above), these are generally good CRO strategies that tend to increase conversions:

1. Remove barriers: emphasize ease and low commitment

Wherever you can, make conversion easy. If your signup form has 10 fields, try to trim that down to 5. Each field removed can result in higher conversions. If you have a money back guarantee, or if you don’t require a credit card, say that very clearly. Emphasize anything that makes the choice to convert less of a risk. Highlight the word “free” in big, bold letters.

2. Personalize your content

This is difficult to do without a lot of data or a personalization software tool. But industry-wide, appropriate personalization can increase revenue by around 5–15%. There is a line where it becomes creepy or annoying, but generally, adding the user’s first name to the subject line will increase your open rates, as long as you don’t do it on every single email. Offering products based on a user’s browsing behavior will increase conversions. Sections that say “recommended for you” or “reasons why we can help in your industry” usually get pretty good engagement. Moving from generalities to “you” helps the user feel understood and gives them the information they need.

3. Pay close attention to CTAs

Call-to-action buttons are primary ways users convert. It’s very common to see buttons for “sign up now” or “view plans and pricing,” so users understand how to use them. Make sure they’re always visible or easy to find. For landing pages, CTAs right in the hero at the top are ideal. For blogs, a “see more articles” CTA might be better suited further down the page. Be sure they’re short and action-oriented. Start with a verb. If you have multiple CTAs, choose the one that is primary and highlight that one in a bolder color.

4. Consider complexity vs simplicity (long vs short form)

Generally speaking, simplicity makes things easier and ties back to #1 (emphasizing ease). Even removing options in a navigation can help focus your users and avoid decision paralysis. However, audiences really differ on how much content they prefer. If your users are anxious about making a purchase, as with a healthcare or financial product, a long-form email and extra information on a landing page might provide confidence and encourage conversion. If users are just looking to buy a consumer item, short form and simplicity often convert better.

5. Create a (balanced) sense of urgency

This is a tricky one. Urgency almost always increases conversions, but be careful not to cause too much anxiety or create a false sense of urgency. Users often sit in the realm of analysis paralysis. They’re not sure which product to go with, and so they just keep analyzing instead of making a decision. Limited time offers or countdown timers to a certain event can be the catalyst that drives users into finally converting. As long as we’re not creating a false sense of urgency, this does benefit users in helping them take that leap, especially if what you’re offering is a free trial. Just get them to try it out already.

6. Use social proof and testimonials

If you have recognizable customers, try to display their logos on your website. If you have good quotes from satisfied customers, be sure to display those. If there are really positive reviews of your company on Yelp or other rating sites, point that out with a star rating. People often look to the community to validate a purchasing decision. Display that community right in front of them to give them the confidence they need to move forward.

7. Test styling: high contrast, highlighting, arrows, capitals, bolding

To draw a user’s attention, elements that are high contrast, bold, large, or have arrows pointing toward them will stand out. Generally, if you follow these practices for your CTAs, for example, you will see better results. There’s really no need to test the difference between light pink and deep blue on a white background: the higher-contrast color, while still being on brand and not too over-the-top, will usually draw more clicks.

8. Experiment with progress bars and transparency

Giving the users insight into what they can expect once they start completing actions (like clicking on a button or filling out a form) usually increases conversions. Progress bars show the user how much more effort is left until they’re done. Users can recalibrate their expectations and won’t be as likely to drop off on the second to last step. Even just letting them know that they will receive an email soon with further instructions allows them to expect and look out for that next step, which they are then more likely to engage with.

9. Provide site security assurances

Even if your product is not sensitive and you’re not asking for personally identifiable information, people still feel a lot safer engaging with a product that seems secure and seems to care about privacy. Displaying security logos on your website, like “McAfee protected,” or even just an image of a lock with copy that says “your security is our priority,” will often increase conversions.

10. Improve page load speed

The faster your pages load, the more likely users are to stay on your page, giving them a higher probability of converting. Industry-wide, the effect of each additional full second lag on page load is estimated to be around a 7% decrease in conversions. Lazy loading is one easy way to improve your page speed. It’s the practice of loading only the visible elements of a page at a time. As most users don’t scroll to the bottom of a page, the images at the bottom of the page will not load until a user scrolls. This allows the top of the page to load faster, improving the experience for the majority of users.

11. Display your value propositions

There’s a rule of three that usually works in marketing: three key value propositions are easy to read and digestible, and help emphasize your company or product. But oftentimes a longer list of the features included in your packages will be perceived as more valuable. Test how and where to talk about your value, but remember to discuss it early and often. What makes you special compared to your competitors? That’s what users want to know.

For more experimentation ideas, here are 5 email marketing A/B testing ideas to get you started.

Now what?

Many companies are realizing the huge impact that CRO can provide and are hiring for roles completely focused on this within their marketing teams.

Wouldn’t it be magical if that thousand dollars you plan to spend on social media advertising this month could produce twice the amount of revenue for you as it did last month? Conversion rate optimization helps you achieve these lifts without spending any extra dollars. With a little time, some analysis, and a software tool to help you experiment, you can do it!

For an email marketing testing tool that can help with experimentation, find out how Twilio SendGrid’s Marketing Campaigns product can run A/B tests for you. Good luck testing and optimizing!



Clare Fletcher
Clare is the Conversion Manager at Twilio SendGrid, where she uses her background in CRO, analytics, and design to develop winning digital experiences. When she's not optimizing the next A/B test, you can find her optimizing her lindy hop skills on the dance floor, her foreign language fluency in order to travel the world, and her endurance to keep up with the active runners, hikers, and climbers in Colorado.