SIMAEC.NET WEB PUBLISHING

AB Testing

An introduction to AB testing for on-page UX elements. As a disclaimer, we don't recommend running AB tests: they don't provide better insight but may be used as a misleading argument for or against new features.

You must ask a user for permission before they participate in a test.

Set Up the Test

Set up the experiment carefully in order to prevent bias. It is a complex and tedious task. Fortunately, over time you may be able to take advantage of features developed for previous tests.

  • Prepare a new version of a page or of an element on a page and create a copy of the current version. The test should reveal whether the new version performs better than, equal to, or worse than the current version.
  • Determine the action that will be measured. Choose an on-page indicator and not a session KPI, e.g. click on a link, scroll down, submit a form, time on page, etc.
  • Add an event trigger to the measured action and set up tracking of the event with external or internal tracking. The test should count exposures to Variant A, exposures to Variant B, conversions for Variant A, and conversions for Variant B (see the sketch after this list).
  • Guarantee that a returning visitor sees the same variant on each visit or is excluded from participating again.
  • Make sure that the probability of being exposed to Variant A or to Variant B is the same.
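
As an illustration of the last three points, here is a minimal browser-side sketch of sticky, evenly split variant assignment and event counting. The storage key, the /track endpoint, the test id and the #cta selector are assumptions for the example, not part of any particular tracking product.

    // Assign a visitor to Variant A or B with equal probability and keep the
    // assignment sticky across visits via localStorage (illustrative key name).
    function getVariant(testId: string): "A" | "B" {
      const key = `ab-${testId}`;
      const stored = localStorage.getItem(key);
      if (stored === "A" || stored === "B") return stored; // returning visitor: same variant
      const variant = Math.random() < 0.5 ? "A" : "B";     // 50/50 split
      localStorage.setItem(key, variant);
      return variant;
    }

    // Report an exposure or conversion event to your own tracking endpoint
    // (the /track URL is an assumption; replace with your internal or external tracker).
    function track(testId: string, variant: "A" | "B", event: "exposure" | "conversion"): void {
      navigator.sendBeacon("/track", JSON.stringify({ testId, variant, event, ts: Date.now() }));
    }

    // Example wiring: count the exposure on page load and the conversion on the
    // measured action (here: a click on a hypothetical #cta link).
    const variant = getVariant("new-headline");
    track("new-headline", variant, "exposure");
    document.querySelector("#cta")?.addEventListener("click", () => {
      track("new-headline", variant, "conversion");
    });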

Determine Duration

What is the conversion increase you are expecting?

Based on the rate at which the measured action normally converts and the improvement you expect (in percent), you can determine how many views of the test setup are required.

For example, around 1000 views of Variant A and 1000 views of Variant B can be enough if the action normally converts at 10% and you expect a 30% relative increase in the conversion rate for the new variant; the exact number depends on the significance level and statistical power you aim for.
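
As a sketch of the arithmetic behind this, the common two-proportion z-test approximation gives the required views per variant. The function name and the fixed z-values (95% confidence, 80% power) are assumptions for the example; other choices give a smaller or larger number.

    // Required views per variant for a two-proportion z-test (normal approximation).
    // baselineRate: conversion rate of the current version (e.g. 0.10)
    // relativeLift: expected relative improvement (e.g. 0.30 for +30%)
    // Assumes a two-sided 5% significance level and 80% power (z-values hard-coded).
    function viewsPerVariant(baselineRate: number, relativeLift: number): number {
      const zAlpha = 1.96; // 95% confidence, two-sided
      const zBeta = 0.84;  // 80% power
      const p1 = baselineRate;
      const p2 = baselineRate * (1 + relativeLift);
      const pBar = (p1 + p2) / 2;
      const numerator =
        zAlpha * Math.sqrt(2 * pBar * (1 - pBar)) +
        zBeta * Math.sqrt(p1 * (1 - p1) + p2 * (1 - p2));
      return Math.ceil((numerator ** 2) / ((p2 - p1) ** 2));
    }

    // 10% baseline, +30% expected lift -> on the order of 1000-2000 views per
    // variant, depending on the confidence and power you choose.
    console.log(viewsPerVariant(0.10, 0.30));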

The number of views required for a significant result, together with your traffic, determines how long (hours, days) the experiment has to run. An experiment should conclude within a short period of time, not weeks or months.

Finally, once the test is running, don't stop it before the end of the previously determined test duration.

Conclusions & Results

Compare data points such as timestamp, user agent, and location of the counted test views and conversions between the two data sets, Variant A vs. Variant B. Are browser brands and versions equally distributed? Is there an hour-of-day bias between the two data sets? A location bias? You should discard the data sets if you detect bias in data points other than the conversion you are testing (a sketch of such a check follows).
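
A minimal sketch of such a bias check, assuming exposure records with a variant, browser, hour and country field; the record shape and the 5-percentage-point threshold are illustrative assumptions.

    // Compare how exposures are distributed over a dimension (browser, hour of
    // day, country, ...) between the two variants.
    interface Exposure { variant: "A" | "B"; browser: string; hour: number; country: string; }

    function shareByKey(rows: Exposure[], key: (r: Exposure) => string): Map<string, number> {
      const counts = new Map<string, number>();
      for (const r of rows) counts.set(key(r), (counts.get(key(r)) ?? 0) + 1);
      const shares = new Map<string, number>();
      for (const [k, n] of counts) shares.set(k, n / rows.length);
      return shares;
    }

    // Flag categories whose share differs by more than 5 percentage points
    // between Variant A and Variant B.
    function flagBias(rows: Exposure[], key: (r: Exposure) => string): string[] {
      const a = shareByKey(rows.filter(r => r.variant === "A"), key);
      const b = shareByKey(rows.filter(r => r.variant === "B"), key);
      const flags: string[] = [];
      for (const k of new Set([...a.keys(), ...b.keys()])) {
        if (Math.abs((a.get(k) ?? 0) - (b.get(k) ?? 0)) > 0.05) flags.push(k);
      }
      return flags;
    }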

Is the Variant A conversion rate the same as or close to the expected value? If not, your Variant A isn't reflecting the current situation. You may still obtain an understanding of which one, Variant A or Variant B, performs better, but you won't be sure whether the improvement of Variant B over Variant A also applies to the current live version.

All concerns can be dismissed and you got a statistically significant higher conversion rate for the new variant? Congratulations.
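
For the final read-out, a sketch of a two-proportion z-test on the counted views and conversions; the function name and example numbers are assumptions, and |z| > 1.96 corresponds to significance at the 5% level (two-sided).

    // z statistic for the difference in conversion rates between the variants.
    function zScore(viewsA: number, convA: number, viewsB: number, convB: number): number {
      const pA = convA / viewsA;
      const pB = convB / viewsB;
      const pooled = (convA + convB) / (viewsA + viewsB);
      const se = Math.sqrt(pooled * (1 - pooled) * (1 / viewsA + 1 / viewsB));
      return (pB - pA) / se;
    }

    // Example: 1000 views each, 100 vs 130 conversions -> z ≈ 2.1,
    // significant at the 5% level.
    console.log(zScore(1000, 100, 1000, 130));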

Resources

AB Test Calc