How to use A/B testing to skyrocket your SaaS CTR

We discuss how you can apply A/B testing for your SaaS with insights and examples from growth marketing experts and SaaS founders.

💡 Note: SaaSwrites is a curated growth marketing hub and resource built to help SaaS founders grow their products. We sincerely thank all our experts for their constant value addition to this world.
Updated on: 01/04/2022

What is SaaS customer activation and CTR?

You can measure your SaaS activation simply through Click-Through Rate (CTR). Your CTR can come from any view (ads, content, cold email, etc.): every impression your SaaS makes in front of a customer, and how that customer reacts to it, is measured through CTR.
Let’s understand CTR from the perspective of ads, as explained by Ads Alchemist.
CTR is the percentage of times people saw your ad or impression and performed any interaction with it. This is actually the definition of CTR (All).
Since it accounts for any interaction, this metric is typically higher than the other CTR metric.
The interactions that are included in CTR (All) are:
  • Clicking on the post and performing any action (liking, commenting, sharing, etc.)
  • Link click
  • Clicking on the profile pic or name
  • Clicking to expand the creative
The other type of CTR is called CTR (Link Click-Through).
CTR (Link Click-Through) is the percentage of times people saw your ad and clicked the link. This is the more important metric to follow because it focuses on the people who saw your ad and went to your website.
It's important that you look at the unique version of this metric, so you aren't double-counting individuals who clicked your link more than once. This is the most accurate version of your CTR and the one you should use to make decisions.
CTR (Link Click-Through) is a better metric than CTR (All) when you are running conversion campaigns, because your sole goal is to drive people from Facebook to your website and have them sign up for something, perform a specific action on your site, or purchase.
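To make the difference between these metrics concrete, here's a minimal sketch of computing all three versions of CTR from raw counts. The event numbers are hypothetical, not from any real campaign:

```python
# Hypothetical counts pulled from an ad platform's reporting export.
impressions = 25_000
all_interactions = 750    # likes, comments, shares, profile clicks, link clicks
link_clicks = 300         # every link click, including repeat clicks
unique_link_clicks = 260  # one per person; the number to make decisions with

ctr_all = all_interactions / impressions * 100
ctr_link = link_clicks / impressions * 100
ctr_unique_link = unique_link_clicks / impressions * 100

print(f"CTR (All): {ctr_all:.2f}%")                  # 3.00%
print(f"CTR (Link Click-Through): {ctr_link:.2f}%")  # 1.20%
print(f"Unique CTR (Link): {ctr_unique_link:.2f}%")  # 1.04%
```

As expected, CTR (All) is the largest of the three, and the unique link CTR is the one to base decisions on.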

How A/B testing can improve your SaaS CTR

We've consolidated learnings from running 1000s of A/B tests for companies like Segment, Microsoft, and Tovala.
A/B testing = the science of testing changes to see if they improve conversion.
We'll cover:
  1. Deciding what to A/B test
  2. Prioritizing valuable tests
  3. Tracking and recording your results
Every day of the year, a test should be running—or you're letting traffic go to waste. A/B testing isn't about striving for perfection with each variant. It's about iteration.

The A/B testing process

  1. Decide on and prioritize high-leverage changes
  2. Show some % of your visitors the change (see the bucketing sketch after this list)
  3. Run it until you reach a statistically significant sample size
  4. Implement changes that improve conversion
  5. Log design/results to inform future tests.
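Step 2, showing some % of your visitors the change, is normally handled by your testing tool, but under the hood it usually comes down to deterministic bucketing. A minimal sketch, assuming each visitor carries a stable ID such as a cookie:

```python
import hashlib

def assign_variant(visitor_id: str, test_name: str, traffic_split: float = 0.5) -> str:
    """Deterministically assign a visitor to 'control' or 'variant'.

    Hashing visitor_id together with test_name means the same visitor
    always sees the same version, and each test splits independently.
    """
    digest = hashlib.sha256(f"{test_name}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform value in [0, 1]
    return "variant" if bucket < traffic_split else "control"

# Same visitor + same test -> same assignment on every visit.
print(assign_variant("cookie-abc123", "homepage-headline"))
```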

How to source test ideas

  • Survey users. Ask what they love about your product.
  • Use tools like Hotjar or FullStory to find engagement patterns: What are they clicking vs ignoring?
  • Your best ads have value props, text, and imagery that can be repurposed for A/B tests.
  • Mine competitors' sites for inspiration. Do they structure their content differently? Do they talk to visitors differently?
  • Your support/sales teams interact with customers & know best what appeals to them.
  • Revisit past A/B tests for new ideas.

Prioritizing tests

  1. Micro variants are small, quick changes: changing a CTA button color
  2. Macro variants are significant changes: completely rewriting your landing page
Prioritize macro changes because they're higher leverage: they often result in large conversion swings.
You'll more often A/B test earlier parts of the funnel—for two reasons:
  1. Earlier steps have larger sample sizes, and you need a sufficient sample size to finish a test.
  2. It's easier to change ads, pages, and emails than down-funnel assets like the in-product experience.

Other prioritization questions

  • How confident are you the test will succeed?
  • If a test succeeds, will it significantly increase conversion?
  • How easy is it to implement?
  • Is your test similar to an old test that failed?
Start with low-effort, high-leverage changes. The sketch below turns these questions into a rough score.
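Those questions map naturally onto a simple scoring model, similar in spirit to the ICE framework (Impact, Confidence, Ease). A minimal sketch; the ideas, scores, and penalty weight are all made up for illustration:

```python
# Score each idea 1-10 on confidence, impact, and ease, and penalize
# ideas that resemble past failures.
ideas = [
    {"name": "Rewrite hero headline", "confidence": 7, "impact": 8, "ease": 9, "failed_before": False},
    {"name": "Change CTA button color", "confidence": 4, "impact": 2, "ease": 10, "failed_before": True},
    {"name": "Add social proof section", "confidence": 6, "impact": 7, "ease": 6, "failed_before": False},
]

def score(idea: dict) -> float:
    base = idea["confidence"] * idea["impact"] * idea["ease"]
    return base * 0.5 if idea["failed_before"] else base

# Highest-scoring ideas are tested first.
for idea in sorted(ideas, key=score, reverse=True):
    print(f"{score(idea):>6.0f}  {idea['name']}")
```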

Keys to setting up tests

  1. Run one A/B test at a time. Otherwise, visitors can criss-cross through multiple tests when changing devices (e.g. mobile to desktop) across sessions.
  2. Run A/B variants in parallel. Otherwise, the varying traffic sources will invalidate your results.

Tools for running tests

  1. Google Optimize: a free A/B testing tool that integrates with Google Analytics and Ads.
  2. Optimizely: better flexibility and insights.
We suggest starting with Google Optimize, then getting a demo from Optimizely to see if it's worth the upgrade.

Statistically validate tests

You need:
  • 1,000+ visits to validate a 6.3%+ conversion increase
  • 10,000+ visits to validate a 2%+ increase
Without lots of traffic, focus on macro variants over micro ones: macros can produce 10-20%+ improvements, versus 1-5% for micros.
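If you want to sanity-check your tool's verdict rather than trust it blindly, the standard way to compare two conversion rates is a two-proportion z-test. A minimal sketch using only Python's standard library; the visit and conversion counts are hypothetical:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Return the two-sided p-value for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical test: 5,000 visits per variant, 5% vs 6% conversion.
p = two_proportion_z_test(conv_a=250, n_a=5000, conv_b=300, n_b=5000)
print(f"p-value: {p:.3f}")  # ~0.03, below 0.05, so the lift is unlikely to be noise
```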

Sample sizes and revenue

The closer an experiment's conversion objective is to revenue, the more worthwhile it may be to confirm small conversion boosts.
E.g., a 2% improvement in purchase conversion is more impactful than a 2% improvement in "learn more" CTA clicks.
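A back-of-the-envelope calculation shows why. The visitor, conversion, and price figures below are made up, and the 2% is treated as a relative lift:

```python
monthly_visitors = 20_000
purchase_rate = 0.03  # 3% of visitors purchase
price = 50            # dollars per month

baseline = monthly_visitors * purchase_rate * price
lifted = monthly_visitors * (purchase_rate * 1.02) * price
print(f"Extra monthly revenue from a 2% lift: ${lifted - baseline:,.0f}")  # $600

# A 2% lift in "learn more" clicks matters only if those clicks eventually
# convert, so its revenue impact is diluted at every later funnel step.
```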

Track tests

Mark the following in a tool like ClickUp:
  • Conversion target you're optimizing for: Clicks, views, etc.
  • Before & after: Screenshots + descriptions of what's being tested.
  • Reasoning: Why is this test worth running? Use your prioritization framework here.

When each test is finished, make note of:

  • Start & end dates
  • Sample size reported by your tool
  • Results: The change in conversion, and whether the result was neutral, success, or failure.
If it was a success, note whether the variant was implemented.
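However you track it, each log entry only needs a handful of fields. A minimal sketch of one record as a structured type; every value is a placeholder:

```python
from dataclasses import dataclass

@dataclass
class ABTestRecord:
    name: str
    conversion_target: str    # clicks, views, signups, ...
    reasoning: str            # why the test was worth running
    start_date: str
    end_date: str
    sample_size: int
    conversion_change: float  # relative change, e.g. 0.12 for +12%
    outcome: str              # "success", "failure", or "neutral"
    implemented: bool

record = ABTestRecord(
    name="Homepage headline rewrite",
    conversion_target="signup clicks",
    reasoning="High confidence, high impact, low effort",
    start_date="2022-03-01",
    end_date="2022-03-21",
    sample_size=12_400,
    conversion_change=0.12,
    outcome="success",
    implemented=True,
)
```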

Then ask: What can we learn from the test?

  • Use heatmaps to figure out why your variant won. E.g., maybe users were distracted by a misleading CTA in one variant.
  • Survey customers who were impacted by the test.
Figuring out why each variant wins will inform future tests.

Conclusion

A/B testing is higher-leverage and cheaper than most other marketing initiatives. Focus on macro variants until you run out of bold ideas.
Diligently track your A/B test results and reference them when ideating future tests. Learn from your past mistakes.

Written by

Qayyum Rajan

Qayyum ("Q") is a serial builder with more than 5 startups to his name and 3 exits. He specializes in shipping products fast and early, with a focus on driving traffic across the marketing funnel.