This guide shows you how to start tests in Campaign Optimizer Agent, either from system-suggested opportunities or directly from a campaign. It also explains how to review, approve, and launch a test variant.
Two ways to start campaign optimization
- Flow 1: Use Pending Actions (system-suggested opportunities).
- Flow 2: Set up from a launched campaign (manual).
Recommended starting point
If you’re new to Campaign Optimizer Agent, start with Pending Actions. It highlights high-potential tests and simplifies setup.
Flow 1: Use Pending Actions
Go to Customer AI > Agents > Campaign Optimizer and open the Pending Actions tab. This tab shows a list of test opportunities with their status, source content, and proposed variants.
You can either:
- Click Discover Opportunities to refresh the list with new suggestions.
- Act on existing rows that appear in the tab.
The Status column helps you identify where each opportunity came from:
- Approval Pending — Triggered manually by a user in the campaign. Content is ready for review.
- Opportunity — The test was suggested by the Campaign Optimizer Agent. The label next to it reflects the current state:
  - agent — Content generation is still in progress.
  - user — Content is ready for review and approval.
Each row summarizes the campaign, trigger, template, current content, and a proposed test variant. Click on any row to open the review screen where you can approve or reject the suggestion.
Review & approve in Flow 1
Click Configure to open the approval screen for this opportunity. You’ll see:
- Control on the left (your current content).
- Suggested test variants on the right:
  - A–C: the top-ranking variants from the tournament.
  - D (Custom): create your own test using one of:
    - Pick from Content Library — a collection of all variants generated by the tournament.
    - Copy Variant A/B/C — clone one suggestion and edit it.
    - Start Fresh (Blank) — write your own custom subject/preheader.
- Variant navigation (e.g., 1 of 20 samples), plus Show Reasoning and Show Raw Content (Liquid) for deeper inspection.
- Test Settings with the initial traffic split (computed by the Campaign Optimizer Agent, typically 15–50% based on send volume, and adjustable).
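The initial traffic split in Test Settings depends on send volume. The sketch below is hypothetical (the product does not publish its formula); it only illustrates the general shape of such a heuristic: a split that shrinks as the audience grows, clamped to the documented 15–50% range:

```python
def initial_traffic_split(send_volume: int) -> float:
    """Return the fraction of traffic routed to the test variant.

    Hypothetical heuristic, not the product's actual formula: aim for
    a fixed number of test sends, then clamp the resulting fraction
    to the 15-50% range the documentation mentions.
    """
    if send_volume <= 0:
        raise ValueError("send_volume must be positive")
    target_test_sends = 5_000  # assumed target; purely illustrative
    split = target_test_sends / send_volume
    return min(0.50, max(0.15, split))

print(initial_traffic_split(100_000))  # large send -> hits the 15% floor
print(initial_traffic_split(20_000))   # mid-size send -> 25%
print(initial_traffic_split(5_000))    # small send -> hits the 50% cap
```

Whatever the actual formula, the computed split is only a starting point: the approval screen lets you adjust it before the test begins.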
When you’re ready, choose a test and click Pre-approve variant. You’ll be taken to the campaign trigger with the Campaign Optimizer Agent pre-selected. Save the campaign for the test to begin.
If none of the suggestions work, you can click Reject and Generate New Variants. The system will regenerate a new set of variants and notify you by email once they’re ready for review.
Flow 2: Manual setup from a campaign
Open a draft or launched campaign, then select an eligible email trigger that isn’t already in an A/B test. In the trigger’s setup panel, select Campaign Optimizer Agent, choose what to optimize for (e.g., Unique Clicks Rate), optionally add custom instructions, and Save.
After saving, the Campaign Optimizer Agent generates test ideas (typically within ~20 minutes) and sends you an email. From that email—or by going to Actions & Opportunities or Optimizer Tests—open the approval screen.
You can also click Explore & Pre-approve in the trigger panel to view or generate content in advance. This step is optional and does not start a test.
The same test will also appear in the Pending Actions tab under Customer AI > Agents > Campaign Optimizer, where you can review and approve it alongside other pending items.
Review & approve in Flow 2
The approval screen layout is the same:
- Control on the left; A–C suggestions and D (Custom) on the right.
- For D (Custom), you can:
- Pick from Content Library (all other generated ideas from the tournament),
- Copy Variant A/B/C and edit, or
- Start Fresh (Blank).
- Variant navigation, Show Reasoning, Show Raw Content, and Test Settings (the initial split is computed by the Campaign Optimizer Agent, typically 15–50% based on send volume, adjustable).
When you’re ready, select a test variant and click Begin testing with variant to start the test.
Need a fresh set of ideas? Click Reject and Generate New Variants to regenerate.
Post-launch: view and manage tests
Use the Optimizer Tests tab to monitor test status, review results, and track history.
To stop a running test, select Stop Current Test. Traffic is then redirected 100% to the control variant, and a new test variant is generated for approval.
Using Campaign Optimizer Agent in one-time campaigns
In addition to triggered and recurring campaigns, you can also use the Campaign Optimizer Agent in one-time campaigns. The setup is similar, but the execution flow and statuses differ in important ways.
Follow these steps to set up the Campaign Optimizer Agent in one-time campaigns, preferably during the draft stage, before launch.
Setup tip
The Optimizer Agent option only appears after you save the campaign once with the trigger added. Then re-open the trigger to configure the Optimizer. This behavior will be updated in a future release.
- Add the email trigger to your one-time campaign.
- If this is the first time you are saving the campaign, save it once with the trigger added. The Optimizer Agent option only appears after that initial save.
- Re-open the trigger and select Optimizer Agent in the A/B Test option.
- Set Optimize for (for example, Unique click rate) and add any custom instructions.
- Enter a percentage in Test on (for example, 20%). This is the initial group that is split and A/B tested to determine a winner among the variants.
- Enter a duration in End test (for example, 2 hours). This is the evaluation window for the A/B test.
- Click Done, and then save or launch the campaign.
Variant approval
- The Campaign Optimizer Agent starts generating test variants once the one-time campaign is launched.
- Content generation takes about 20 minutes. You’ll receive an email when variants are ready, and the campaign’s Optimizer Tests tab will also show them as pending approval.
- Approve a test variant before the campaign’s start time. For example, if you launch at 8 AM with a 10 AM start time, you must approve before 10 AM so the A/B test can run when execution begins.
Post approval
- The system sends the Test on % audience first, split 50/50 across variants.
- The test runs for the End test duration you set (evaluation window).
- At the end of the window, results are compared on the chosen Optimize for metric, and a winner is selected.
- The remaining audience is automatically sent the winning variant.
- You’ll also receive an email confirming the winner. The test page continues to update with hourly monitoring for about one week.
Example of a one-time campaign
Audience: 10,000 • Test on: 20% • End test: 2 hours
- Evaluation group = 2,000 users → 1,000 get Variant A (control), 1,000 get Variant B (test).
- After 2 hours: control = 150/1000 conversions (15%), test = 160/1000 (16%).
- Winner = test → remaining 8,000 users receive Variant B (test).
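The arithmetic in the example above can be checked with a short script:

```python
audience = 10_000
test_on = 0.20                         # "Test on" percentage

eval_group = int(audience * test_on)   # 2,000 users enter the A/B test
per_variant = eval_group // 2          # 50/50 split: 1,000 per variant

control_rate = 150 / per_variant       # Variant A (control): 15.0%
test_rate = 160 / per_variant          # Variant B (test): 16.0%

winner = "test" if test_rate > control_rate else "control"
remainder = audience - eval_group      # 8,000 users receive the winner

print(eval_group, per_variant, winner, remainder)  # 2000 1000 test 8000
```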
Statuses in one-time campaign tests
After you configure the Campaign Optimizer Agent, the system waits for your approval. The status you see in the Optimizer Tests tab depends on what you do (or don’t do):
- Approval Pending — Variants are ready and waiting for approval. If not approved before the campaign’s start time, the status changes to Approval Expired.
- Approval Expired — No approval was given before the start time. The campaign runs as a regular one-time send without optimization, and the user-authored variant is sent to the entire audience.
- Test Running — You approved a variant. The test group (based on the Test on percentage) is split across your control and the approved test variant. At the end of the evaluation window, the system applies automatic winner selection and sends the winning variant to the remainder. After execution, the status remains Test Running for about one week to collect performance data.
- Success — After the monitoring period, the test is marked a success if the test variant (Optimizer Agent–authored) won and was applied to the remaining audience.
- Failure — After the monitoring period, the test is marked a failure if the control variant (user-authored) won and was applied to the remaining audience.
How winners are chosen
One-time campaigns use automatic winner selection logic (quick test → pick winner → send remainder). For more details, refer to the documentation.
Next up: analyze performance
You can continue with performance analysis to review metrics, compare test variants with the control, and see how the Campaign Optimizer Agent improves results.