For one-time campaigns, you can enable automatic winner selection on messaging triggers that have multiple templates. With this feature, the campaign automatically selects the best-performing template based on the metric you want to optimize.

Automatic winner selection consists of the following actions:

  • During campaign execution, the users who qualify for the messaging trigger are split into a test group and a holdout group.
  • Instead of messaging all users, messages are sent only to users in the test group. Each test user is assigned a template in a round-robin fashion.
  • After the test group has been messaged, the campaign waits for the time specified in the End Test field so that user interactions can occur.
  • After the waiting period is up, the campaign selects the winning template based on the metric selected in the Optimize field and messages the holdout group with that template.
  • Winners are selected based solely on the holdout period you choose. If the holdout period is not long enough, there may not be enough data to select a winning template, and a template may effectively be chosen at random.
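The flow above can be sketched in Python. This is a hypothetical illustration of the logic, not the platform's actual implementation; `metric_rate` is an assumed helper standing in for the metric measured after the holdout period:

```python
import itertools
import random

def run_auto_winner_selection(users, templates, test_pct, metric_rate):
    """Sketch of automatic winner selection.

    users       -- list of qualifying user IDs
    templates   -- list of template names
    test_pct    -- test group size as a fraction (e.g. 0.2 for 20%)
    metric_rate -- function(template, assigned_users) -> rate per user,
                   evaluated after the holdout period has elapsed
    """
    # Split qualifying users into a test group and a holdout group.
    random.shuffle(users)
    split = int(len(users) * test_pct)
    test_group, holdout_group = users[:split], users[split:]

    # Assign templates to test users in a round-robin fashion.
    assignments = {t: [] for t in templates}
    for user, template in zip(test_group, itertools.cycle(templates)):
        assignments[template].append(user)

    # ... message the test group, then wait for the holdout period ...

    # Pick the winner by the optimizing metric; on a tie, the template
    # with more test users wins.
    winner = max(templates,
                 key=lambda t: (metric_rate(t, assignments[t]),
                                len(assignments[t])))
    return winner, holdout_group  # the holdout group is messaged with the winner
```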

Setting up automatic winner selection

To use automatic winner selection, you must have more than one messaging template.

  1. Use the A/B Test tab to add multiple templates to the trigger.
  2. Go to the Setup section of the A/B Test tab to configure the automatic winner selection parameters.

Automatic winner selection uses three parameters:

  1. Test group size: The number of users in the test group, expressed as a percentage of the users who qualify for the messaging trigger. Set this so that each template variation receives enough users. The right percentage depends on several factors, such as the number of template variations, the size of your segment, and whether the campaign has multiple messaging triggers.
  2. Holdout period: The amount of time, in hours, the campaign waits before selecting the winner. The minimum is 1 hour and the maximum is 24 hours.
  3. Optimizing metric: The metric used to select the winning template. Each metric is calculated as a rate per user.
  • The built-in metrics are "unique impressions rate", "unique click rate", "revenue per user", and "orders per user". For example, "unique impressions rate" is the number of impressions divided by the number of users.
  • You can also select any custom goal as your metric; custom goals are likewise calculated as a rate per user.
  • Push and SMS do not have impressions, so "unique impressions rate" is not available for these channels.
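The "rate per user" calculation behind each metric can be illustrated as follows. This is a minimal sketch with made-up event counts, not real campaign data:

```python
def rate_per_user(event_count, user_count):
    """A metric expressed as a rate per user, e.g. unique click rate
    = unique clicks / users messaged."""
    if user_count == 0:
        return 0.0
    return event_count / user_count

# Hypothetical test-group results for two templates:
template_a = rate_per_user(120, 500)  # 120 unique clicks across 500 users -> 0.24
template_b = rate_per_user(150, 500)  # 150 unique clicks across 500 users -> 0.30
# Template B wins on "unique click rate".
```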

After saving, you will see the automatic winner selection settings reflected in the UI. The percentages next to each template are hidden when automatic winner selection is enabled, because templates are assigned to test users in a round-robin fashion.



For automatic winner selection to work as intended, choose the inputs carefully. In particular, set the holdout period and metric so that enough conversions on the selected metric can occur within the test window. Reviewing your past campaigns can help you find values that make sense.

Holdout period

Automatic winner selection selects the winning variation at the end of the holdout time period. It does not check for statistical significance. Hence it is possible that the result of the automatic winner selection is not representative of the overall campaign population.

For this reason, it is important to choose the holdout period appropriately. The longer the holdout period, the higher the chances that automatic winner selection picks the true winner. We recommend checking the A/B test report for statistical significance before the holdout period ends so that you can intervene manually if the test results are inconclusive.


Selecting the right metric for automatic winner selection is equally important. Since iOS 15, email opens from iOS devices are not representative of true opens. Using open or impression rate as the success metric for such tests may therefore be misleading if a significant percentage of the recipients are iOS users.

Example: Suppose most orders are placed about 3 hours after users receive a message. If you set up automatic winner selection with the metric "orders per user" but use a holdout period of only 1 hour, most conversions will happen after the winner has already been selected, so the result is unlikely to reflect the true winner.

Viewing Stats

As the campaign is running, you can see how many users are in the holdout group on the campaign's Reporting tab.

  • The campaign will remain in a Launched status until all winners have been selected and all the users in holdout groups have been messaged.
  • Pausing or re-launching the campaign may cause some users not to be messaged.
  • When the campaign is complete, you will see an info box that identifies the winning template and how much lift was generated compared against the worst performing template.
  • If there is a tie, the template with more users is selected as the winner.
  • You will also see a Winner Selection section below the table. You can expand this section to get more detailed information on how the winner was selected for each messaging trigger.
  • The number of test users for each template may not always be equal. Templates are assigned in a round robin fashion, but it is possible a test user will not get messaged. One reason may be that the user has no products for the particular recommendation you are using. Therefore, if you are using a recommendation, you should make sure that most users in your segment will have products returned.
  • The number of test users for each template does not need to be the same in order for automatic winner selection to work well. However, you do need to ensure that each template receives enough users such that the difference in users between templates does not skew the results. That is why it is important to set an appropriate test group percentage.
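As a back-of-the-envelope check on the test group percentage, you can estimate how many users each template will receive under round-robin assignment. The segment size and values below are illustrative assumptions, not recommendations:

```python
def users_per_template(segment_size, test_pct_percent, num_templates):
    """Estimate how many test users each template receives under
    round-robin assignment (test group size given as a whole percentage)."""
    test_group = segment_size * test_pct_percent // 100
    return test_group // num_templates

# E.g. a 10,000-user segment with a 30% test group and 3 templates:
print(users_per_template(10_000, 30, 3))  # -> 1000 users per template
```

If this estimate looks too small for your metric's typical conversion rate, consider a larger test group percentage or fewer template variations.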