Whether you are A/B testing the campaign's flow, templates, or subject lines, you can access the detailed analysis under the Reports > A/B Test Results tab of the campaign.

Depending on the metric you are optimizing for (for example, clicks, impressions, or a custom goal), the report shows which variation performed best against the control and the confidence level of the results.

In the A/B test report, Blueshift indicates whether the results were conclusive, i.e., statistically significant at a confidence level of at least 90%, and whether there was a clear winner based on the results. The report also shows the exact confidence level in case your team uses different criteria for statistical significance.

If you are using automatic winner selection to automate your A/B tests, remember that the results are not guaranteed to be statistically significant.

To learn more about A/B testing, see the related A/B testing documentation.

  Tip

To compare the winning variation from a previous A/B test with new variations, first archive the winning variation and then clone it. Use the cloned version in the new A/B test to ensure it remains independent from the previous test, preventing past results from influencing the new experiment.

View the A/B test report

To view the A/B test report, go to the Reports > A/B Test Results tab of the campaign.

On the left side panel, select the options for the report:

  • A/B test type – Choose Journey to view the A/B split report for journey flow control, or Trigger to view the results for individual triggers.
  • Trigger – Choose the trigger for which you want to view the A/B test report.
    • Enable the Show Archived Triggers option to view A/B test results for archived triggers.
  • Control – Select the control variation to compare against other variations. The first variation is set as the control by default, and all other variations are measured against it.
  • Metric – Choose the metric to optimize. By default, the unique click rate is used to compare the performance of different variations.
  • Date range – Set the date range for the report. This allows you to analyze A/B test results for a specific time period, including the duration during which a winner was selected for automatic winner selection.
    • The default start time is the campaign start time. If the campaign started before October 1, 2019, then October 1, 2019 is set as the start time for any A/B test analysis.
    • The default end time depends on the campaign status:
      • For active campaigns, the end time is set to the current date.
      • For paused or completed campaigns, the end time is the campaign's end date plus 7 days.

The following example displays the A/B test results for the trigger Send an email. The trigger has four variations, of which Hello World is selected as the baseline or control variation. The metric chosen for the report is clicks.

The details for all four variations are displayed, and We Are the World is the winning variation.

[Image: A/B test report]

A/B test report details

The following details are available in the A/B test report.

Unique users

The total number of unique users to whom the particular variation was sent by the end of the selected time period.

For example, if the time window for the campaign is from June 1 to July 31, the count includes all unique users who were sent the particular variation from the start of the campaign until July 31.

In the example, Hello World was sent to 5699 unique users, whereas We Are the World was sent to 377885 unique users. 

Unique completed

The number of unique users to whom the particular variation was sent and who completed the selected metric during the time period.

For example, if the time window for the campaign is from June 1 to July 31, the count includes all unique users to whom the particular variation was sent and who completed the selected metric from June 1 to July 31.

In the example, 155 unique users who received the variation Hello World clicked on it, compared to 11865 unique users who received the variation We Are the World.

Conversion

The number of users that converted, i.e., completed the selected metric, as a percentage of the total number of users in the group.

Conversion % = (Unique Completed / Unique Users) × 100

For example, if the time window for the campaign is from June 1 to July 31, the Unique Completed count (x) includes all unique users to whom the particular variation was sent and who completed the selected metric during that window. Conversion for this time window is (x / Unique Users) × 100.

In the example, for the variation Hello World, of the 5699 unique users who received the variation, only 155 unique users clicked on it. Hence, Conversion = (155/5699) × 100 = 2.72%.

For the variation We Are the World, of the 377885 unique users who received this variation, 11865 unique users clicked on it. Hence, Conversion = (11865/377885) × 100 = 3.14%.

Lift %

The Lift % for a variation is calculated by comparing the conversion for that variation with the conversion for the baseline or control variation.

Lift %(variation) = ((Conversion(variation) / Conversion(control variation)) − 1) × 100

If the Lift % > 0, it is an improvement. If the Lift % < 0, it is a degradation.

In the example, Hello World is set as the control variation. The variation Hello Universe has a Lift % of -13.893%, whereas the variation We Are the World has a Lift % of 15.445%. Hence, the Lift % for We Are the World is an improvement over the control variation.

Confidence level

The statistical likelihood or probability (p′%) that the improvement (or degradation) observed in the A/B test is real. In other words, if you were to repeat this test many times, you would observe the same result p′% of the time.

The chi-squared (χ²) test is used to calculate the confidence level:

p′ = 1 − p(χ²)

where p(χ²) is the p-value of the chi-squared test. A worked sketch covering these calculations follows the table.
Statistical significance

When the confidence level is higher than 90%, the results are considered statistically significant. However, you can use a higher or lower threshold based on your A/B test objectives.

Confidence interval

The range of values within which the conversion for the group will lie p′% (i.e., confidence level percentage) of the time.

Winner

The highest-performing variation is selected as the winner if the results are statistically significant.
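
The formulas above can be illustrated with a short, hedged sketch. The Python code below reproduces the Conversion, Lift %, and confidence level calculations for the example variations using a standard two-sample chi-squared test from scipy; the exact test configuration Blueshift uses (for example, whether a continuity correction is applied) is an assumption here, so treat this as an approximation rather than the product's implementation.

```python
# Illustrative sketch only: reproduces the report's Conversion, Lift %, and
# confidence level math for the example data. Assumes a standard two-sample
# chi-squared test via scipy; Blueshift's exact test may differ.
from scipy.stats import chi2_contingency

# Example data from the report: unique users and unique completed per variation.
control = {"name": "Hello World", "users": 5699, "completed": 155}
variant = {"name": "We Are the World", "users": 377885, "completed": 11865}

def conversion_pct(v):
    """Conversion % = (Unique Completed / Unique Users) x 100."""
    return v["completed"] / v["users"] * 100

def lift_pct(v, ctrl):
    """Lift % = ((Conversion(variation) / Conversion(control)) - 1) x 100."""
    return (conversion_pct(v) / conversion_pct(ctrl) - 1) * 100

# 2x2 contingency table: [completed, did not complete] for each variation.
table = [
    [control["completed"], control["users"] - control["completed"]],
    [variant["completed"], variant["users"] - variant["completed"]],
]
chi2, p_value, dof, expected = chi2_contingency(table)

print(f"Control conversion: {conversion_pct(control):.2f}%")     # ~2.72%
print(f"Variant conversion: {conversion_pct(variant):.2f}%")     # ~3.14%
print(f"Lift %:             {lift_pct(variant, control):.3f}%")  # ~15.445%
print(f"Confidence level:   {(1 - p_value) * 100:.2f}%")         # p' = 1 - p
```

With samples this large, the p-value is effectively zero and the confidence level approaches 100%, which is why the report can declare We Are the World a statistically significant winner.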

A/B split journey reporting insights

  • A user is assigned to a path variant as soon as they qualify for a path.
  • A conversion is counted when the user takes the defined action (metric) at any point in the path. Since conversions are based on unique users, a user performing the action once or multiple times within a branch still counts as one conversion.
  • If a journey contains multiple A/B split nodes, the topmost node reports a superset of all conversions in the branches below it, including those from other nested A/B split nodes.

Example:

A journey splits into Path A (50%) and Path B (50%). If Path A further splits into Path A1 (50%) and Path A2 (50%), then:

  • Conversions for Path A include conversions from both Path A1 and A2.
  • Conversions for the topmost split (A vs. B) include all conversions from A1, A2, and B combined.

This structure ensures that the topmost node accounts for all conversions within its branch, helping marketers analyze overall journey performance.
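
To make the roll-up concrete, here is a minimal sketch (with hypothetical user IDs, not a Blueshift API) of how unique conversions from nested paths combine: each ancestor split reports the set union of converters in the branches below it.

```python
# Minimal sketch with hypothetical data: conversions roll up to ancestor
# A/B split nodes as a set union of unique converters. Because sets
# deduplicate, a user who performs the action multiple times within a
# branch still counts as one conversion.

# Unique converters observed in each leaf path of the example journey.
converters = {
    "A1": {"user_1", "user_2"},
    "A2": {"user_3"},
    "B":  {"user_4", "user_5"},
}

# Path A reports the union of its children A1 and A2.
path_a = converters["A1"] | converters["A2"]

# The topmost split (A vs. B) reports the superset of all branches below it.
topmost = path_a | converters["B"]

print(len(path_a))   # 3 unique conversions for Path A
print(len(topmost))  # 5 unique conversions at the topmost node
```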
