People say they want rankings, but they really want traffic and conversions.
At Distilled, we get a lot of leads from companies wanting SEO. Press them on what they want, and they say, “high rankings”, but is that what people really want? Make a query uncommon and long enough and I’m sure a solid page can rank at the top. But, if no one is searching for a query, who cares what position it ranks in? What people want is traffic, for which rankings are often used as a proxy, but even so, traffic is really only one part of the equation.
What good is web traffic if visitors don’t perform the actions you want them to perform?
Outside of the websites that make money from the number of eyeballs on a page, just about everybody wants a visitor to do something, whether that be downloading a whitepaper or buying a product. This is where conversion rate optimization (CRO) comes in, but there’s far more to CRO than just the A/B testing that people have grown to love.
What CRO actually is vs. what people think it is
A decade ago, very few people knew what SEO was, but that began to change as digital marketers learned how lucrative it could be. Nowadays, it seems like most companies have at least one person who has a working knowledge of SEO. I like to think that CRO is sort of at the stage SEO was a few years ago, with most companies not having a dedicated CRO expert to lead the charge in optimization efforts.
This isn’t to say that no one is doing CRO. In fact, A/B testing is on the rise and has quickly become a buzzword in the industry. For comparison’s sake, it’s interesting to note that the money-making part of digital marketing (CRO) still lags considerably behind SEO in terms of search interest.
Imagine how much better websites would be if they optimized for conversion as much as they tried to lure organic traffic. As the data clearly shows, “A/B testing” is the more searched term in Google when compared to “conversion rate optimization”. I use “conversion rate optimization” here instead of “CRO” because that acronym has too many other meanings that generate search volume that are not relevant to this particular piece.
But what exactly is the difference between A/B testing and CRO?
An A/B test is a singular action: a comparison between two different ideas to see how users react. A common example is making a buy-now button green or red. Countless internet sources will say definitively that green is the best color, or that red is, but the reality is that there are no hard and fast rules when it comes to CRO. Depending on your niche, the best color for you may be contrary to what the internet says; you’ll just need to test to find out.
CRO on the other hand, is not a singular action, but an educated process.
A/B testing is great and can serve a phenomenal purpose, but there needs to be a process in place to identify what one should be A/B testing in the first place. Depending on how much traffic you have, you might even be able to do multivariate testing, but that’s a topic for another time.
CRO analyzes both qualitative and quantitative factors to determine areas of weakness and strength, with the ultimate goal of generating a prioritized roadmap of specific tests on specific pages, perhaps even on specific devices, browsers, or networks. If you haven’t analyzed points of friction on your site, surveyed your users, and monitored how they interact, chances are you’re leaving conversions on the table.
Current state of CRO
CRO is still young, so most companies don’t have a full-time optimizer the way they might have a social media strategist or an SEO. This laissez-faire attitude towards CRO can lead to inexperienced people trying to do the right thing by testing, but ultimately drawing the wrong insights from a flawed test or misread results. Making business decisions based on incorrect data? That’s entirely possible when someone tests without a process or experience.
Case Study: South African Hotels
We hypothesized that this hotel booking site had issues with conversions, so Distilled used session replays to identify bottlenecks in the conversion process. This served as the basis for our tests, and we were able to come up with a solution that increased the number of people who began the booking process by 290% and the number who completed a booking by 70%.
Results from our CRO work for South African Hotels.
What can I do?
Are you squirming yet, thinking about all those visitors who come to your site and don’t convert, but you don’t know why? Here are some tips to get you on your way. As I mentioned before, conversion rate optimization is both quantitative and qualitative, which means you’ll need data from technical analyses as well as user surveys and testing.
This does not necessarily need to come from Google Analytics, but more often than not it probably will. For the SEOs reading this, it might be a refreshing change to focus on all user acquisition channels instead of just organic. A good place to start is analyzing which conversion pages have the lowest goal completion rate. For example, if you sell multiple products, which page has the lowest conversion rate?
Another place to explore could be conversion funnels.
In Google Analytics, if you go to Conversions > Goals > Funnel Visualization, you’ll be able to analyze the goal funnels that have been set up. I’ll caution you that this isn’t 100% accurate as skipped steps through the funnel (e.g. going from step 1 straight to step 3) will mess with the data (for more on common GA misconceptions, read here). That said, it is a good first step for finding friction within your funnel. For example, if a healthy percentage of people passed from step 1 to 3, but then suffered a significant drop off at step 4, before seeing the percentage of people continuing on the path rise again, we’d have identified step 4 as a potential problem worthy of deeper analysis.
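If you export the step counts from a funnel like that, spotting the problem step is a simple calculation. Here's a minimal sketch; the step names and visit counts are entirely made up for illustration:

```python
# Hypothetical step counts exported from a GA goal funnel (all numbers invented).
funnel = [
    ("1. Cart", 1000),
    ("2. Shipping", 620),
    ("3. Payment", 540),
    ("4. Review", 180),   # sharp drop-off: a candidate for deeper analysis
    ("5. Confirm", 150),
]

def dropoff_report(steps):
    """Return (step_name, continuation_rate) for each step-to-step transition."""
    report = []
    for (_, count_a), (name_b, count_b) in zip(steps, steps[1:]):
        rate = count_b / count_a if count_a else 0.0
        report.append((name_b, round(rate, 2)))
    return report

for step, rate in dropoff_report(funnel):
    print(f"{step}: {rate:.0%} continued")
```

With these invented numbers, the "Review" step retains only about a third of the visitors who reached "Payment", which is exactly the kind of outlier worth investigating with session replays or surveys.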
Sitting around an office and discussing your website with co-workers all day can potentially lead to internal biases or a “curse of knowledge” when it comes to how someone should interact with your site. It’s very possible that you think a section of the site is easy to navigate or not confusing, but in reality, your audience is having a very different experience.
User surveys are great because they can grant us insight and access into the mind of our ideal audience. Of course, this only will be useful if we remember the following things:
- Only ask questions that serve a purpose; don’t just copy the competition (they probably don’t know what they’re doing)
- Survey the right people, at the right time
- Survey the correct quantity of people
- Try to avoid bringing biases into the data analysis
You do not know what your competition is trying to glean from whatever surveys they may be running nor do you know if they know what they’re doing – don’t merely copy them and think of them as “cutting edge”. Every question you ask should serve a specific purpose such as identifying pain points within the conversion funnel, common traits shared by a specific cohort of visitors or information that is deemed vital for a conversion, but is missing from the site.
There are multiple types of surveys: they can run on-site using a service like Qualaroo, or be sent to users via email with a service like Typeform or Google Forms. If you’d like to use an on-site survey, keep in mind which pages it appears on, because it could work against you and distract users, causing a drop in conversion rate. If you’re going to send surveys to a specific cohort, you could target:
- Those who have put items in a cart, but not bought anything
- Those who recently purchased something in the past week (experience will be fresh in their minds)
- Those who have purchased, but have not in over a year
Depending on what type of organization you have, your cohorts will change. Perhaps you aren’t an e-commerce site, but you still want to know why people have unsubscribed. Above all else, surveys allow us to uncover important insights like:
- Where are there bottlenecks or UX issues?
- Why do people abandon the cart?
- Who exactly is the consumer and what is their intent?
- What type of language does the audience use or expect to see?
Once we’ve completed the technical analysis and user surveys, we need to draw insights and propose tests. This is the most important part of this piece, so I’ll repeat it: only after we complete the technical analysis and user surveys can we begin to identify what and where to test.
Before beginning the tests, create some sort of prioritization sheet that contains all of your test ideas, their expected impact, the required resources, and a weighted composite score of these factors (or any others that matter in your case). This will help keep you on task and help you communicate effectively with stakeholders at your company, as well as external ones if you’re an agency or a freelancer.
If you’re looking for a place to get started, you can make a copy of this CRO task prioritization template with mock tasks, owners, and metrics. The (made-up) tasks are weighted and sorted in descending order, so the data tells us where to find the greatest impact for the least resource requirement. This is just a template, and you may find that your version needs to be more robust for one reason or another.
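The weighted-score-and-sort logic behind a sheet like that is easy to reproduce anywhere. Here's a small sketch; the test ideas, 1-10 scores, and weights are all assumptions you would replace with your own:

```python
# Invented scoring factors and weights; tune these to your own situation.
WEIGHTS = {"impact": 0.5, "confidence": 0.3, "ease": 0.2}

ideas = [
    {"name": "Shorten checkout form", "impact": 8, "confidence": 7, "ease": 5},
    {"name": "New hero headline",     "impact": 6, "confidence": 5, "ease": 9},
    {"name": "Trust badges on cart",  "impact": 5, "confidence": 6, "ease": 8},
]

def composite(idea):
    """Weighted composite score across the factors defined in WEIGHTS."""
    return sum(idea[factor] * weight for factor, weight in WEIGHTS.items())

# Sort descending so the highest-leverage test floats to the top of the roadmap.
ranked = sorted(ideas, key=composite, reverse=True)
for idea in ranked:
    print(f"{composite(idea):.1f}  {idea['name']}")
```

Weighting impact most heavily is a design choice, not a rule; an agency with scarce development time might weight ease higher instead.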
Can’t-Miss Optimization Tools
Now that the foundation has been set, it’s time to play. Two of the best A/B testing tools are Optimizely and VWO. You can’t go wrong with either, but depending on your needs and budget, you may find one a bit more advantageous than the other, so make sure to check out their pricing structures.
Hotjar and Clicktale are great tools for generating heatmaps, scroll-tracking analyses, session replays, polls, and more, but Clicktale is much more expensive than Hotjar. Heatmaps, scroll-tracking, and session replays show you where visitors are spending the majority of their time, where they are looking, and what they are clicking on. This can reveal very interesting insights about things you believe to be important, but your visitors do not, or vice versa.
If you do not know how to code and find it difficult to get into your development or UX teams’ queue, Unbounce might be your new best friend for creating landing pages.
While it integrates well with some third-party software like Google Analytics, MailChimp, and Campaign Monitor, it does lack some integrations you might find useful if you need to automatically push leads into a CRM like Pardot. Pardot does some things well, but landing page development is not one of them, and Unbounce reigns supreme here. Quickly and easily generate new pages from a template or from scratch, by yourself or with help from their real, live customer support team. Seriously, these guys are amazing.
Common CRO Questions
Q: How many tests should I run at a time?
A: Bigger sites are able to do more, but running multiple tests on the same audience may generate false positives or negatives. If you’re running multiple A/B tests, try to minimize the overlap between the audiences.
Q: How does A/B testing differ from multivariate testing?
A: A/B testing shows one segment of your audience a control and another segment a variant. Multivariate testing shows a control versus at least two variants. Usually, A/B testing is used for bigger changes with more dramatic results, while multivariate testing focuses on smaller changes with subtler effects. Multivariate testing comes with the additional risk of needing to wait longer to achieve statistical significance, because each variant needs to receive a certain number of visits, and that number varies based on your baseline conversion rate and the minimum detectable effect you select. More on this here.
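To make "a certain number of visits" concrete, here's a sketch of the standard two-proportion sample size formula using only the Python standard library. The 5% baseline and 1-point lift in the example are arbitrary illustration values:

```python
from statistics import NormalDist

def sample_size_per_variant(baseline, mde, alpha=0.05, power=0.8):
    """Approximate visitors needed per variant for a two-sided
    two-proportion z-test.
    baseline: current conversion rate (e.g. 0.05)
    mde: minimum detectable effect as an absolute lift (e.g. 0.01)"""
    p1, p2 = baseline, baseline + mde
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # significance threshold
    z_beta = NormalDist().inv_cdf(power)            # power threshold
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(numerator / (p2 - p1) ** 2) + 1

# A 5% baseline and a 1-point absolute lift need thousands of visits per arm,
# which is why each extra variant in a multivariate test stretches the timeline.
print(sample_size_per_variant(0.05, 0.01))
```

Halving the minimum detectable effect roughly quadruples the required sample, which is why small-effect multivariate tests are slow on low-traffic sites.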
Q: How long should I run a test for?
A: There is no one answer, as it depends on how much traffic your site receives, what its conversion rate is, and what minimum detectable effect you’re looking for. I’d also recommend waiting at least two full business cycles to avoid seasonality of sorts. For example, if you’re an ecommerce site where people buy at the beginning of each month, run a test for two months and compare the data, rather than comparing the beginning of the month to other times during the same month.
It’s important to wait until you’ve achieved statistical significance, because you may be reporting false positives if you call the test early. Statistical significance reflects the probability that the difference between the control and a variant is not due to chance; declaring a winner at 95% significance means there’s only a 5% chance the observed winner was actually random. A good rule of thumb is to shoot for 90–95% statistical significance (Optimizely declares winners at 90%) to give yourself the best chance of calling a true winner and not a fluke.
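Most testing tools compute this for you, but the underlying check is a simple two-proportion z-test. Here's a sketch with made-up conversion counts (200 of 4,000 control visitors vs. 250 of 4,000 variant visitors):

```python
from statistics import NormalDist

def significance(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-sided two-proportion z-test.
    Returns (z_score, confidence), where confidence = 1 - p_value."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    std_err = (p_pool * (1 - p_pool)
               * (1 / visitors_a + 1 / visitors_b)) ** 0.5
    z = (p_b - p_a) / std_err
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, 1 - p_value

# Invented counts: 5.0% control rate vs. 6.25% variant rate.
z, confidence = significance(200, 4000, 250, 4000)
print(f"confidence: {confidence:.1%}")
```

With these invented counts the variant clears the 95% threshold, so a tool using that cutoff would declare it a winner; with half the traffic, the same rates would not be conclusive yet.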
Leave it to the experts (Distilled)
If you haven’t already performed the necessary qualitative and quantitative analyses on your site, chances are good your site is leaking potential conversions somewhere along the way. CRO can be highly rewarding for your business, but it also requires diligent preparation, knowledge and resources (time and money). While no one can guarantee positive results on every single test, a conversion rate optimization expert can be the most efficient use of resources to get ROI as soon as possible and fatten your bottom line.
If you don’t want to risk potential flaws in your data or spend time on the job learning CRO when you’re supposed to be doing the actual job your company pays you for, drop us a line and we’ll see how we can help.