Once you launch a campaign in Blueshift, you can optimize it using the following types of tests & tools:
1. A/B Tests
Using A/B tests in Blueshift, you can see which variant of a template performs better. You can test several templates that may have different forms of personalization or entirely different content. For email templates, you can also conveniently test different subject lines for the same content.
Setting up email subject line tests
- During campaign setup, click on the "+ ADD SUBJECT LINE TEST" link after you select the template you want to test.
- This will add a field where you can enter a new subject line and the percentage of users to allocate to it. You can use the same markup to customize the subject as you would for the email template. You can add up to 20 subject lines to test. Note: The first subject line is your control (the default subject line of the email template) and cannot be edited.
Setting up content tests
- To A/B test non-email templates or content beyond just subject lines, you will need to create each template that you want to use. If your templates will be similar, create one template, then use the “clone” functionality and edit the cloned copy.
- During campaign setup, simply click "+ ADD ANOTHER CREATIVE" to add the templates you want to test and the percentage of users for each. You can test up to 20 templates per messaging trigger.
Measuring & Optimizing Campaigns
Once the campaign is live, you can see the campaign stats for all the templates under the campaign’s reports. For each template, you will see the number of users, impressions, and clicks, as well as orders/revenue and custom goals data. This helps you measure template changes and their effect on downstream conversions and custom goals.
With Blueshift, you can split any dynamic segment into randomized lists and target different campaign variants at different lists. For instance, you might want to create a holdout group of users who do not receive any message or you might want to test some aspect of the campaign unrelated to a template.
Setting up the test
- First, set up multiple user lists from a segment. Click "Create lists" from the segment action menu in the segments index.
- After clicking on it, you will be taken to a page where you can create your new lists from the segment.
- Select the number of lists you want to split this segment into, enter any description as appropriate, and allocate the percentage split. The total needs to add up to 100%. Click on “Create Lists” after you are done. Once the creation process starts, you will be taken to the page with all your lists.
Wait until the list shows “ready” in status to indicate it’s done processing:
Once your list is in “ready” state, you may use it in other segments. You may combine multiple lists in your segment to include/exclude users depending on their list membership.
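As a rough sketch, the percentage-based split described above amounts to shuffling the segment and partitioning it. This is an illustration only, not Blueshift's actual implementation; the user IDs and seed are hypothetical:

```python
import random

def split_segment(user_ids, percentages, seed=None):
    """Randomly partition user_ids into lists sized by the given
    percentages (which must add up to 100)."""
    if sum(percentages) != 100:
        raise ValueError("The total needs to add up to 100%")
    users = list(user_ids)
    random.Random(seed).shuffle(users)
    lists, start = [], 0
    for i, pct in enumerate(percentages):
        # The last list takes the remainder so no user is lost to rounding.
        end = len(users) if i == len(percentages) - 1 else start + round(len(users) * pct / 100)
        lists.append(users[start:end])
        start = end
    return lists

# Example: a 50/30/20 split of 10 users.
groups = split_segment(range(10), [50, 30, 20], seed=42)
print([len(g) for g in groups])  # [5, 3, 2]
```

Because membership is randomized, each resulting list is a representative sample of the segment, which is what makes them usable as test and holdout groups.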
For one-time campaigns, you can enable automatic winner selection on messaging triggers that have multiple templates. Using this feature, you can let the campaign automatically select the best template based on the metric you are trying to optimize for.
Automatic winner selection performs the following actions:
- During campaign execution, it splits qualifying users for the messaging trigger into a test and holdout group.
- Instead of messaging all users, it only messages the users in the test group. Each test user is assigned a template in a round robin fashion.
- After the test group has been messaged, the campaign waits for a specified time period so that interactions can occur.
- After the waiting period is up, the campaign selects the winning template based on a chosen metric and messages the holdout group with this template.
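The steps above can be sketched as follows. This is a simplified illustration, not Blueshift's actual code: the `metric` callable, template names, and tie-breaking by user count (described later in this article) are stand-ins, and the holdout wait is elided:

```python
import random

def run_winner_selection(users, templates, test_pct, metric):
    """Simplified sketch of the automatic winner selection flow."""
    users = list(users)
    random.shuffle(users)
    # Split qualifying users into a test group and a holdout group,
    # making sure every template gets at least one test user.
    n_test = max(len(templates), round(len(users) * test_pct / 100))
    test_group, holdout = users[:n_test], users[n_test:]

    # Message only the test group, assigning templates round robin.
    assignments = {u: templates[i % len(templates)] for i, u in enumerate(test_group)}

    # ... the campaign would now wait out the holdout period so that
    # interactions (clicks, orders, etc.) can occur ...

    # Select the winner: highest rate per user on the chosen metric,
    # with ties going to the template that messaged more users.
    def score(tpl):
        assigned = [u for u, t in assignments.items() if t == tpl]
        return sum(metric(u) for u in assigned) / len(assigned), len(assigned)
    winner = max(templates, key=score)

    # Finally, the holdout group is messaged with the winning template.
    return winner, holdout

# With a constant metric, "A" wins the tie (3 of 5 test users vs 2 for "B").
winner, holdout = run_winner_selection(range(10), ["A", "B"], 50, lambda u: 1)
```

Note how the round-robin assignment means the first template can end up with one more test user than the last when the test group size is not a multiple of the template count.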
Setting up automatic winner selection
This feature is toggled in the "Optimize allocation to creatives" section just below the template selection. It is disabled by default, as indicated by the "None, use custom percentages" description.
The "Optimize allocation to creatives" section will only appear if you have more than one template in the messaging trigger. If you still do not see the "Optimize allocation to creatives" section, contact Support to have the feature enabled in your account.
Clicking on "None, use custom percentages" will open a modal where you can enable automatic winner selection. Select "Use automatic winner selection" to expose the input parameters on the right.
Automatic winner selection uses three parameters:
- Test group size: This is the number of users to use for the test group, expressed as a percentage of the users who qualify for the messaging trigger. You should set this appropriately so that each template variation gets enough users. This percentage will depend on several factors, such as the number of template variations, the size of your segment, and whether or not the campaign has multiple messaging triggers.
- Holdout period: This is the amount of time, in hours, the campaign will wait before selecting the winner. The minimum time is 1 hour and the maximum time is 24 hours.
- Optimizing metric: This is the metric used to select the winning template. The metric is calculated as a rate per user. For example, "unique impressions rate" is the number of impressions divided by the number of users. The built-in metrics that can be selected are "unique impressions rate", "unique click rate", "revenue per user" and "orders per user". In addition, you can select any custom goals as your metric. These will also be calculated as a rate per user.
Push and SMS do not have impressions, so "unique impressions rate" is not available for these channels.
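As a sketch, every optimizing metric boils down to a per-template total divided by the number of users messaged with that template. The stats below are hypothetical numbers for illustration:

```python
def rate_per_user(total, num_users):
    """Each optimizing metric is a per-template total divided by the
    number of users messaged with that template."""
    return total / num_users if num_users else 0.0

# Hypothetical per-template stats after the holdout period.
stats = {
    "A": {"users": 500, "unique_clicks": 40, "orders": 12},
    "B": {"users": 500, "unique_clicks": 55, "orders": 9},
}

# Optimizing on "unique click rate" picks template B in this example.
winner = max(stats, key=lambda t: rate_per_user(stats[t]["unique_clicks"], stats[t]["users"]))
print(winner)  # B
```

Notice that optimizing on "orders per user" would pick template A instead, which is why choosing the right metric for your goal matters.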
In order for automatic winner selection to work as intended, you must choose the inputs carefully. In particular, make sure you set the holdout period and metric such that there will be enough conversions on the metric you selected. You will probably need to look at your past campaigns to get a sense of what values make sense. For example, let's say you notice that most orders are placed 3 hours after users receive a message. If you set up automatic winner selection with the metric "orders per user" but only use a holdout period of 1 hour, you are probably not going to get the desired result.
After hitting save, you will see the automatic winner selection settings reflected in the UI.
The percentages next to each template are hidden when using automatic winner selection because templates will be assigned to test users in a round robin fashion.
Note that send time optimization (in the "When" section) is unavailable when you are using automatic winner selection. Send time optimization introduces its own delays that do not work well with automatic winner selection, which is why it is disabled. For that same reason, automatic winner selection is disabled when a one-time campaign is using send time optimization.
As the campaign is executing, you can see how many users are in the holdout group on the campaign's detail page.
The campaign will remain in an "executing" status until all winners have been selected and all the users in holdout groups have been messaged. Pausing or re-launching the campaign may cause users to not get messaged.
When the campaign is complete, you will see an info box that identifies the winning template and how much lift it generated compared with the worst-performing template.
If there is a tie, the template with more users is selected as the winner.
You will also see a "Winner Selection" section appear below the table. You can expand this section to get more detailed information on how the winner was selected for each messaging trigger.
You may notice that the number of test users for each template is not always equal. Templates are assigned in a round robin fashion, but it is possible a test user will not get messaged. One reason may be that the user has no products for the particular recommendation you are using. Therefore, if you are using a recommendation, you should make sure that most users in your segment will have products returned.
The number of test users for each template does not need to be the same in order for automatic winner selection to work well. However, you do need to ensure that each template receives enough users such that the difference in users between templates does not skew the results. That is why it is important to set an appropriate test group percentage.
Blueshift’s advanced capability, Engage Time Optimization, enables marketers to optimize their campaigns by sending messages when their customers are more likely to engage with their brand. In today’s always-connected world, it is no longer sufficient to assume that historical open/click data is the best way to optimize send times. Always-on connectivity and easy access to apps and sites, on mobile devices in particular, leads to many short-lived sessions. Such sessions cannot be used as a full gauge of consumer interest. Marketers need to look deeper at the total amount of time spent by customers and prioritize times of the day when customers are more likely to click through, spend time, and browse or buy. So the true measure of optimal send time is one where the customer is likely to give you their full attention.
At Blueshift, we look at the sum total of time spent by each customer and rank each hour of the day based on time spent and how deep into the purchase funnel they went.
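One way to sketch this kind of hour ranking is below. The session data, stage names, and funnel weights are illustrative assumptions, not Blueshift's actual scoring model:

```python
from collections import defaultdict

# Hypothetical funnel weights: deeper stages count for more.
FUNNEL_WEIGHT = {"view": 1, "add_to_cart": 2, "purchase": 3}

def rank_hours(sessions):
    """Score each hour of the day (0-23) by total time spent, weighted
    by how deep into the purchase funnel the session went, and return
    the hours from highest to lowest score."""
    scores = defaultdict(float)
    for hour, seconds, stage in sessions:
        scores[hour] += seconds * FUNNEL_WEIGHT.get(stage, 1)
    return sorted(scores, key=scores.get, reverse=True)

# A long evening purchase session outranks two short morning sessions.
sessions = [(9, 120, "view"), (20, 300, "purchase"), (9, 60, "add_to_cart")]
print(rank_hours(sessions))  # [20, 9]
```

The point of weighting by funnel depth is that a short, shallow session at 9AM says less about true engagement than a long session at 8PM that ends in a purchase.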
You can use these computed scores to optimize the send time of your messaging. These capabilities can be accessed through the Send Timing modal:
Click "No optimization, ..." within the "When" subsection to access the send timing modal.
Within the Send Timing modal, you can enable Blueshift's Send Time Optimization by selecting "Predictive optimization":
The send timing modal provides all of the controls where you can programmatically change the message's send time.
When enabled, we will use our computed scores to send the message at the precise time in which the user is most likely to engage with your content. However, if your message is time-sensitive, you can set a limit on the amount of time the message will be queued.
Note that an upper limit is required; however, without dayparting windows, a limit of 24 hours guarantees that the user will be messaged at their optimal hour.
If this trigger has defined dayparting windows, those windows will take precedence over send time optimization. For example, if a user's optimum send time is at 11:00AM and dayparting windows only allow messages from 3:00PM to 5:00PM, we will send the message between 3:00PM and 5:00PM.
The dayparting window and send time optimization limit are always guaranteed to be applied. If you specify that the user should be messaged between 3:00PM and 6:00PM, the user will be messaged within that time. If you specified that the send time optimization limit is 1 hour, the user will be messaged within 1 hour of eligibility.
Send time optimization is applied only if the previous two checks are true, so there are cases where it will not be applied.
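The precedence rules above can be sketched as follows (hours 0-23; a same-day dayparting window is assumed for simplicity, and the function is illustrative rather than Blueshift's actual logic):

```python
def choose_send_hour(eligible_hour, optimal_hour, limit_hours, window=None):
    """Dayparting window and the send time optimization limit always
    apply; the predicted optimal hour is used only if it satisfies both."""
    # Hours eligible under the limit, starting from the eligibility hour.
    candidates = [(eligible_hour + h) % 24 for h in range(limit_hours + 1)]
    if window:
        start, end = window  # assumes a same-day window, e.g. (15, 17)
        in_window = [h for h in candidates if start <= h < end]
        if not in_window:
            return start  # no overlap: dayparting wins, send at window open
        candidates = in_window
    return optimal_hour if optimal_hour in candidates else candidates[0]

# Optimal hour is 11AM, but the window is 3PM-5PM: send at 3PM.
print(choose_send_hour(10, 11, 24, window=(15, 17)))  # 15
```

With no window and a 24-hour limit, the same function simply returns the optimal hour, matching the guarantee described above.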
Blueshift also supports custom hour affinities in lieu of Blueshift's computed hour affinities:
This behaves exactly the same as predictive optimization, except you are responsible for computing the scores.
Simply calculate an hour affinity for each individual user and update those users with a custom field. If the custom field is missing or malformed, send time optimization will simply be a no-op and the message will continue.
As a marketer, you can access “hour affinity” for each user through the segments panel under the “User Affinity” tab. Here is a sample screenshot:
You can use these “hour affinities” like any other attribute in the segment creation and tailor campaign send times to each specific audience. For example, you can create segments of users who prefer “morning” hours by picking 6am to 9am or those who prefer “evening” hours by picking 5pm to 8pm, as well as any other combination. The campaign scheduling is still in the hands of the marketer, allowing full control over the send times. We encourage marketers to experiment with different messaging based on different hours of the day and A/B test their campaigns to achieve optimal results. Do not hesitate to contact email@example.com with any questions or comments.
Dayparting is an advanced Blueshift feature that allows you to choose which times of the day to send your triggered campaigns (to avoid undesirable times). This can be set for the user or account timezone.
In order to set your desired send times, go to Advanced Settings under each trigger and select the clock icon to open up the send timing modal. Here, you can easily define when a campaign should run.
Click "Anytime is eligible" to open up the Send Timing window.
To select the hours when the messages will be delivered, simply click and drag to create your preferred dayparting windows. The check boxes next to the day of the week can be used to mark an entire day as eligible. Likewise, the times at the top of the table can be clicked to mark a time as eligible for all days.
In this example, all messages will be sent between 2AM and 10PM, Monday through Friday, in the user's timezone.
The dayparting window indicates when the campaign is active. We queue up sends for event-triggered campaigns, but not for segment-triggered campaigns.
Event Triggered Campaign
Event-triggered campaigns have additional dayparting options.
Notice the ability to choose what occurs if the send time falls outside dayparting windows.
By default, events/eligible users that fall outside of the dayparting window will be sent during the next available window.
Standard business hours
For example, if the send time fell on a Saturday in the above dayparting scheme, the message would be queued to send 9AM Monday morning. This can be used as an effective tool to ensure that your messages are sent at specific times of the week.
Alternatively, there is an option to skip the user if they fall outside dayparting times.
If "Skip message" is selected and the send time does not fall within your dayparting windows, the message will be discarded and not sent. This can be used to ensure that time-sensitive triggered emails are not sent outside of specific hours.
Examples: Here are some scenarios, assuming dayparting is set to 10AM-12PM for all days with no delays on the campaign trigger.
- Event comes in at 9:45AM on Monday - it gets queued until the next eligible window, i.e., Monday at 10AM
- Event comes in at 11AM on Tuesday - it gets sent immediately since it is within 10AM-12PM
- Event comes in at 4PM on Wednesday - it gets queued to be sent at 10AM on Thursday, but the campaign messaging limit for Thursday will be applied
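The event-triggered queueing behavior in these scenarios can be sketched as below. The 10AM-12PM window is hardcoded to match the example, and the function is an illustration only (it ignores messaging limits):

```python
from datetime import datetime, timedelta

def next_send_time(event_time, start_hour=10, end_hour=12):
    """Return when an event-triggered message goes out, given a daily
    dayparting window (defaulting to a 10AM-12PM window every day)."""
    if start_hour <= event_time.hour < end_hour:
        return event_time  # inside the window: send immediately
    # Otherwise queue the message until the next window opens.
    send = event_time.replace(hour=start_hour, minute=0, second=0, microsecond=0)
    if event_time.hour >= end_hour:
        send += timedelta(days=1)
    return send

# Event at 9:45AM Monday is queued until 10AM the same day.
print(next_send_time(datetime(2024, 1, 1, 9, 45)))  # 2024-01-01 10:00:00
```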
Segment Triggered Campaign
Unlike event-triggered campaigns, we do not queue up messaging. The combination of dayparting and segment definition controls when a user gets a message.
Example: Here are some scenarios, assuming dayparting is set to 10AM-12PM for all days with no delays on the campaign trigger. Also, assume the segment definition looks back for users with a joined_at date in the past 1 hour only.
- User joined_at is 9:45AM on Monday - the campaign won't execute until 10AM, but at 10AM it will look for users who joined between 9AM-10AM, so this user will get an email
- User joined_at is 10:45AM on Monday - within the dayparting window, so the user gets an email
- User joined_at is 1:45PM on Tuesday - outside the dayparting window, so the campaign won't execute until 10AM on Wednesday. At 10AM on Wednesday, the segment will only look at users from 9AM-10AM (past 1 hour), so this user and others like them will be skipped. In this scenario, you will need to increase your segment lookback and/or dayparting window to make sure users are not missed.
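These segment-triggered scenarios can be sketched the same way. The 10AM-12PM window and 1-hour lookback are hardcoded to match the example, and the function is illustrative only:

```python
from datetime import datetime, timedelta

def will_get_message(joined_at, lookback_hours=1, start_hour=10, end_hour=12):
    """Will a segment-triggered campaign with a 10AM-12PM dayparting
    window and a 1-hour joined_at lookback pick up this user?"""
    if start_hour <= joined_at.hour < end_hour:
        run = joined_at  # the campaign can execute right away
    else:
        run = joined_at.replace(hour=start_hour, minute=0, second=0, microsecond=0)
        if joined_at.hour >= end_hour:
            run += timedelta(days=1)  # wait for the next day's window
    # The segment only matches users who joined within the lookback.
    return run - joined_at <= timedelta(hours=lookback_hours)

# Joined at 1:45PM Tuesday: the 10AM Wednesday run no longer sees them.
print(will_get_message(datetime(2024, 1, 2, 13, 45)))  # False
```

The failing case makes the pitfall concrete: any user whose wait until the next window exceeds the segment lookback silently falls through the gap.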
A user attribute called bsft_control_bucket is assigned to every user upon account creation. This attribute is used to split segments into smaller groups or buckets.
Every user is randomly assigned a number between 1 and 100, so you can use this attribute to create random segments or to split up users for A/B testing.
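As a sketch, a segment filter on this attribute can implement a 50/50 split or a holdout group. The variant names and thresholds here are arbitrary examples:

```python
def variant_for(bucket):
    """Split a segment 50/50 on bsft_control_bucket: values 1-50 go to
    variant A, 51-100 to variant B."""
    if not 1 <= bucket <= 100:
        raise ValueError("bsft_control_bucket is between 1 and 100")
    return "A" if bucket <= 50 else "B"

def in_holdout(bucket):
    """A 10% holdout instead: only buckets 91-100 are held out."""
    return bucket > 90

print(variant_for(42))  # A
```

Because the bucket is assigned once per user, the same user always lands in the same group across campaigns, which keeps multi-campaign experiments consistent.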
Blueshift provides the ability to test event-triggered flows by publishing a test event from the Campaign Journey page. This capability makes it easy to experience and validate a journey right from the Campaign page.
Clicking on the Test campaign icon shows a modal to simulate an event with pre-populated values (from the most recent event received). After hitting publish, it generates the sample event and triggers the event flow for the user.
Example: a signed_up event is generated to trigger a Welcome email journey.
The test campaign capability is available for all launched event-triggered campaigns.
8. Test Mode
Campaigns have the ability to run in test mode. This feature is disabled by default - contact firstname.lastname@example.org to have it enabled.
If set to yes, the campaign will have the following characteristics for one-time, recurring, and segment-triggered campaigns:
- It does not apply messaging limits.
- It does not send a message. Instead, it will archive the message in your Blueshift S3 bucket in campaigns/debug/<campaign_uuid>/<execution_time>/. The files have a user attribute and trigger UUID in the file name.
- Since it does not send a message, it also does not add to the user’s messaging counts.
- Links in the message body get rewritten so that they do not generate click stats.
- No stats of any kind get created.
If the campaign is event- or API-triggered, it has the same characteristics as above, and also:
- It ignores the campaign journey concurrency setting.
- It archives the message in your Blueshift S3 bucket in campaigns/debug/<campaign_uuid>/. The files have a user attribute, trigger UUID and execution time in the file name.
- Any live users that get processed while the campaign is in test mode will exit the campaign. This ensures no messaging occurs while the campaign is in test mode, even if the live user had received a message from a previous trigger when the campaign was live.
- Any test users that get processed while the campaign is live will exit the campaign. This ensures test users don't receive any subsequent messages if the campaign gets switched from test mode to live.
This test mode feature should be used sparingly for campaigns. It should not be used to do a dry run of every campaign before making it live. Some possible use cases for this feature are:
- Debugging a specific message rendering issue across many users
- QA-ing sends across many users before making a campaign live
- Testing external fetch rate limiting against your own systems before making a campaign live
Your account is limited to 5 test mode campaigns. Contact email@example.com if you need more campaigns allocated for testing.