When it comes to split testing in professional services, you may have millions of ideas on what you’d like to test.  Coming up with testing hypotheses isn’t the problem; it’s knowing when you’ve succeeded. Many marketers will analyze their split testing results and assign one indicator a higher level of success than another. They’ll run tests “until they reach statistical significance,” and stopping a test the moment it crosses that threshold can skew the results toward false positives.

You need concrete metrics that present a clear, easily understandable result: one that grows your business and boosts your conversion rate.

So what exactly are the metrics that guide split testing? And which ones should you pay the most attention to?  Let’s take a closer look.


The Hypothesis

This is the core of your test – your hypothesis and the guiding force behind conducting the test.  You may be testing based on a gut feeling (would human faces work better than abstract images?) or based on other case studies in your industry. Defining the parameters of your test ensures that there are no surprises that could botch up the results.


One Variable at a Time

In A/B testing, you’re only testing one element against another.  All other points remain constant. In the above example, you’d swap only the abstract image for a human face on the challenger page and leave everything else alone. There is a type of test known as multivariate (or Taguchi) testing that lets you test several elements on a page at once; however, multivariate results can take much longer to reach statistical significance, if they ever reach it at all.

By split testing one variable at a time, you’ll be able to make a more confident conclusion that can then lend itself to your other tests, in a constant cycle of improvement.
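As a quick illustration of the one-variable rule, here’s a minimal sketch (the page fields and values are hypothetical) in which the challenger differs from the control in exactly one element:

```python
# Hypothetical page configurations for an A/B test.
control = {
    "headline": "Grow Your Firm",
    "hero_image": "abstract.png",
    "cta_text": "Request a Consultation",
}

# The challenger copies the control and swaps ONE element only.
challenger = dict(control, hero_image="human_face.png")

# Sanity check: exactly one field differs between the variants.
changed = [k for k in control if control[k] != challenger[k]]
assert changed == ["hero_image"]
```

If that assertion ever fails, more than one variable changed and the test can no longer attribute the result to a single element.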

Success Metric

Your success metric is the measurement that determines whether a test succeeded. It can be the number of subscribers, the number of email opens or clicks, or even how many people filled out a form on your website for a free consultation, for example. Whatever your measurement, it must be consistent and cannot be changed after the test has begun.
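To make this concrete, here’s a small sketch (with hypothetical visitor and conversion counts) that pins the success metric down as one consistent calculation, in this case the form-completion rate per variant:

```python
# Hypothetical event counts for each variant. The success metric here is
# the form-completion (conversion) rate, fixed before the test begins.
visitors = {"control": 1200, "challenger": 1180}
conversions = {"control": 48, "challenger": 71}

def conversion_rate(variant):
    """The single, pre-agreed success metric for this test."""
    return conversions[variant] / visitors[variant]

for v in visitors:
    print(f"{v}: {conversion_rate(v):.1%}")  # control: 4.0%, challenger: 6.0%
```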

One common pitfall in measuring success is to look at test results after the fact and assign a “reason” to the winner – i.e., that it was more successful because of some specific factor. A/B testing is very much a quantitative venture, and although it can be tempting to assign an emotional rationale to a result, try not to do this.  Numbers don’t lie.

Statistical Significance

This is where things get tricky. For an A/B test to be conclusive, you need a high enough volume of traffic to make the result “statistically significant.” But what exactly does that mean? At its core, statistical significance is a measurement of reliability.  Random flukes and blips on the radar will show up in any test; significance tells you how unlikely it is that the difference you observed is just one of those blips. The larger your sample size (see below), the less those random blips can sway the outcome, and the more confident you can be that the winner didn’t win by chance.  In short, it gives more credibility to your tests.
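To ground the idea, here is one common way to check significance for conversion rates, a two-proportion z-test (the choice of test and the counts are illustrative, not prescribed above); a p-value below 0.05 is the conventional threshold:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in two conversion rates.
    Returns (z, p_value); p < 0.05 is the usual significance cutoff."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))        # two-sided normal tail
    return z, p_value

# Hypothetical counts: 48/1200 conversions on control, 71/1180 on challenger.
z, p = two_proportion_z_test(48, 1200, 71, 1180)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these illustrative numbers the p-value lands below 0.05, so the challenger’s lift would be unlikely to be a random blip; with a fraction of the traffic, the same rates would not be significant.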

Defining the Test Group

I’ve previously mentioned volume as an important factor in statistical significance as it applies to A/B testing, but volume also matters with your test group. It’s very common with split testing to apply an even, 50/50 split to the test group. However, there are times when you might not want to make the test so cut and dried.

For instance, if you have a control page that’s converting extremely well, but you want to test a minor point on a challenger, it’s simply not worth sacrificing 50% of your traffic (and potential conversions and sales) on that minor change.  In this case, running a 90/10 test or an 80/20 test might be better, since you’ll still get a result (although it may take longer), and you won’t lose all the valuable traffic in the process.
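An uneven split like 90/10 is easy to implement deterministically, for example by hashing each visitor’s ID into a bucket (the hashing approach, function names, and weights here are illustrative assumptions, not a prescribed method):

```python
import hashlib

def assign_variant(user_id, control_weight=0.9):
    """Deterministically assign a visitor to control or challenger.
    control_weight=0.9 gives the 90/10 split described above."""
    digest = hashlib.md5(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 10000 / 10000   # uniform value in [0, 1)
    return "control" if bucket < control_weight else "challenger"

# The same visitor always lands in the same bucket on every visit:
assert assign_variant("visitor-42") == assign_variant("visitor-42")
```

Hashing the visitor ID, rather than randomizing each pageview, keeps every visitor in the same variant for the duration of the test.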

The Bottom Line on a Solid Split Test

When selling professional services, it can be all too tempting to rely solely on the numbers – and for making pivotal decisions, you should.  However, services are a different animal online, one that requires an approach that goes beyond analysis.  The truth is that many facets of a professional services firm can’t be measured solely in numbers:  great service, brand awareness, client loyalty… all of the things that make a firm what it is.

And although you can use these metrics to guide you in crafting tests that convert, the ultimate goal is to drive new business and referrals, increase repeat business, and deliver an exceptional result. A split test by itself can’t measure these things, but combined with other strategic marketing efforts, it can help your firm get there.

For more information on using content marketing for different stages of the sales cycle, check out the free Content Marketing Guide for Professional Services.