If you are running an A/B test to select the best template or subject line, you can access the detailed analysis under the Reports > A/B Test Results tab of the campaign studio. Depending on the metric you are optimizing for (for example, clicks, impressions, or a custom goal), the report shows which variation performed best relative to the control, along with the confidence level of the results. Blueshift also selects the best-performing variation as the winner if the results are statistically significant (that is, the confidence level is above 90%).

To learn more about A/B testing, or to set up A/B testing for a trigger, see A/B Testing.

View A/B test report

To view the A/B test report, go to the Reports tab for the campaign.

  • Select the Trigger for which you want to view the A/B test report.
  • Select the Control variation against which other variations are to be compared. By default, the first variation is selected as the control and all the other variations are compared to it.
  • Select the Metric that you want to optimize for. By default, the unique click rate is used as the metric to compare the performance of the different variations.

You can also view the A/B Test results for archived triggers by selecting the Show Archived Triggers option.

Example:

In the following example, the A/B Test results for the trigger Send an email are displayed. The trigger has four variations of which Hello World is selected as the baseline or control variation. The Metric that is selected for the report is Clicks.

The details for all four variations are displayed. We are the World is the winning variation.

[Image: A/B test report (jb_ABTest_report.png)]

A/B test report details

The following details are available in the A/B test report.

Start time

This is the campaign start time. If the campaign start time is earlier than 10.01.2019, then 10.01.2019 is selected as the start time for any A/B test analysis.

End time

For an active campaign, this is the current date (today's date). For a campaign that is paused or completed, this is the campaign end date + 7 days.

Total Users

The total number of users to whom the particular variation was sent by the end of the selected time period.

For example, if the time window for the campaign is from June 1 to July 31, the count includes all unique users who were sent the particular variation from the start of the campaign until July 31.

In the example, the variation Hello World was sent to 5699 unique users, whereas the variation We are the World was sent to 377885 unique users. 

Total Completed

The number of unique users to whom the particular variation was sent and who completed the goal event during the time period.

For example, if the time window for the campaign is from June 1 to July 31, the count includes all unique users to whom the particular variation was sent and who completed the goal event from June 1 to July 31.

In the example, 155 unique users who received the variation Hello World clicked on it, compared to 11865 unique users who received the variation We are the World.

Conversion

The percentage of users who converted (that is, completed the goal event) out of the total number of users in the group.

Conversion = (Total Completed / Total Users) × 100

For example, if the time window for the campaign is from June 1 to July 31, the Total Completed count (x) includes all unique users to whom the particular variation was sent and who completed the goal event from June 1 to July 31. Conversion for this time window is x/Total Users.

In the example, for the variation Hello World, of the 5699 unique users who received the variation, only 155 unique users clicked on it. Hence Conversion = (155/5699) * 100 = 2.72%.

For the variation We are the World, of the 377885 unique users who received the variation, 11865 unique users clicked on it. Hence Conversion = (11865/377885) * 100 = 3.14%.

Lift %

The Lift% for a variation is calculated by comparing the conversion for that variation with the conversion for the baseline or control variation.

Lift % (variation) = ((Conversion (variation) / Conversion (control)) − 1) × 100

If the Lift % > 0, it is an improvement. If the Lift % < 0, it is a degradation.

In the example, the variation Hello World is set as the control variation. The variation Hello Universe has a Lift % of -13.893%, whereas the variation We are the World has a Lift % of 15.445%. Hence the variation We are the World is an improvement over the control variation.

Confidence Level

The statistical likelihood or probability (p'%) that the improvement (or degradation) observed from the A/B test is real. If you were to repeat this test many times, you would observe the same improvement p'% of the time.

The šŒ2 test (chi-squared test) is used to calculate the confidence level.
pā€™ = 1 - p(šŒ2)
Statistical Significance

When the Confidence Level is higher than 90%, the results are statistically significant. However, you can use a higher or lower confidence level based on your A/B test objectives.

Confidence Interval

The range of values within which the conversion for the group will lie p'% (that is, the confidence level percentage) of the time.

Winner

The highest-performing variation is selected as the winner as long as the results are statistically significant.
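The calculations described above can be reproduced in a few lines of Python. This is an illustrative sketch using the counts from the example (Hello World as control, We are the World as the winning variation); the confidence calculation here is a standard two-proportion chi-squared test with one degree of freedom, and may differ in detail from Blueshift's exact implementation.

```python
import math

def conversion(completed, total):
    """Conversion = (Total Completed / Total Users) * 100."""
    return completed / total * 100

def lift(conv_variation, conv_control):
    """Lift % = ((Conversion_variation / Conversion_control) - 1) * 100."""
    return (conv_variation / conv_control - 1) * 100

def confidence(x1, n1, x2, n2):
    """Confidence level from a 2x2 chi-squared test (1 degree of freedom).

    x1/n1 are the control's completed/total counts, x2/n2 the variation's.
    """
    pooled = (x1 + x2) / (n1 + n2)  # conversion rate if both groups were the same
    chi2 = 0.0
    for successes, n in ((x1, n1), (x2, n2)):
        expected_success = n * pooled
        expected_failure = n - expected_success
        chi2 += (successes - expected_success) ** 2 / expected_success
        chi2 += ((n - successes) - expected_failure) ** 2 / expected_failure
    # For 1 degree of freedom, the chi-squared p-value is erfc(sqrt(chi2 / 2)).
    p_value = math.erfc(math.sqrt(chi2 / 2))
    return 1 - p_value

# Counts from the example above.
conv_control = conversion(155, 5699)          # Hello World, ~2.72%
conv_winner = conversion(11865, 377885)       # We are the World, ~3.14%
winner_lift = lift(conv_winner, conv_control)  # ~15.445%
level = confidence(155, 5699, 11865, 377885)   # above 0.90, so significant
```

Since the computed confidence level exceeds 90%, the result is statistically significant and We are the World is declared the winner, matching the report.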

 
