Use A/B testing to compare different versions of your sign-up units. A/B tests can help you learn what resonates best with your audience to maximize subscriber growth.
Why A/B test sign-up units
- A/B testing lets you systematically compare multiple variants (e.g., control vs. one or more test variants) and determine which version drives higher opt-in rates — whether that means more SMS sign-ups, email sign-ups, or both.
- Because all test variants go live when you activate the test, and traffic is split among them, you get real-world performance data.
- As a best practice, we recommend starting tests with sign-up units that don’t have existing performance data: clone an existing unit (if needed), deactivate the original, then run the A/B test.
How to create an A/B test
- Go to Sign-up units.
- From the context menu (three dots) on the unit you want to test, click + Create A/B test.
- In the New A/B Test window:
- Enter a name for your A/B test.
- Select the sign-up unit type (mobile or desktop).
- In the Test groups section, set up your test variants. You can add up to four variants (total of five including control). Group A is the default control; all performance lifts are calculated against it.
- For each variant, select the sign-up unit and allocate a percentage of traffic; the total across all variants must equal 100% (see the traffic-split sketch after these steps). Optionally, add more variants if needed.
- Click Create.
Note: Activating this A/B test automatically activates all the sign-up units included in the test.
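The traffic split works like a weighted random draw: each variant receives a share of visitors proportional to its allocation. The sketch below is illustrative only — the `Variant` interface and `assignVariant` function are made up for this example and assume a simple cumulative-percentage draw, not the platform's actual assignment logic.

```typescript
// Illustrative only: a minimal sketch of a weighted traffic split,
// not the platform's actual assignment logic.
interface Variant {
  name: string;           // e.g., "Group A (control)", "Group B"
  trafficPercent: number; // share of visitors; all variants must sum to 100
}

function assignVariant(variants: Variant[]): Variant {
  const total = variants.reduce((sum, v) => sum + v.trafficPercent, 0);
  if (total !== 100) {
    throw new Error("Traffic allocation must total 100%");
  }
  // Pick a random point in [0, 100) and walk the cumulative ranges.
  const roll = Math.random() * 100;
  let cumulative = 0;
  for (const v of variants) {
    cumulative += v.trafficPercent;
    if (roll < cumulative) return v;
  }
  return variants[variants.length - 1]; // guard against floating-point edge cases
}

// Example: a 50/25/25 split across a control and two test variants.
const chosen = assignVariant([
  { name: "Group A (control)", trafficPercent: 50 },
  { name: "Group B", trafficPercent: 25 },
  { name: "Group C", trafficPercent: 25 },
]);
console.log(`Visitor sees: ${chosen.name}`);
```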
How to view A/B test results
After you’ve created and launched your A/B test, you can monitor its performance from a dedicated dashboard:
- Go to Sign-up units → A/B tests tab.
- For each test you’ll see:
- Test Name
- Status (Active or Completed). A “Winner” tag appears next to a variant once statistical significance is reached.
Note: Statistical significance is currently calculated only on text subscribers. If you’d like significance to be calculated on other metrics, let your CSM know and pass along the feedback.
- Start/End date: The date the test started and ended.
- Total number of new SMS and email subscribers per variant, and the performance lift compared to control (see the sketch after this list).
Note: A/B test data might not appear immediately after you launch the test, because results in the early phase of a test often fluctuate significantly due to small sample sizes. Once a sufficient sample size or test duration is reached, the data displays in the dashboard.
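To make the dashboard numbers concrete, here is a hedged sketch of how performance lift and a basic significance check on opt-in rates could be computed. The two-proportion z-test shown is a standard statistical approach, not necessarily the one the platform uses, and the visitor and subscriber counts are made-up example data.

```typescript
// Illustrative only: lift vs. control and a simple two-proportion z-test.
// The platform's actual significance calculation may differ.
interface VariantResult {
  name: string;
  visitors: number;       // visitors who saw this variant
  smsSubscribers: number; // new SMS subscribers attributed to it
}

function optInRate(r: VariantResult): number {
  return r.smsSubscribers / r.visitors;
}

// Lift of a variant relative to control, as a percentage.
function lift(control: VariantResult, variant: VariantResult): number {
  return ((optInRate(variant) - optInRate(control)) / optInRate(control)) * 100;
}

// Two-proportion z-test: |z| >= 1.96 is roughly significant at the 95% level.
function zScore(control: VariantResult, variant: VariantResult): number {
  const p1 = optInRate(control);
  const p2 = optInRate(variant);
  const pooled =
    (control.smsSubscribers + variant.smsSubscribers) /
    (control.visitors + variant.visitors);
  const se = Math.sqrt(
    pooled * (1 - pooled) * (1 / control.visitors + 1 / variant.visitors)
  );
  return (p2 - p1) / se;
}

// Example with made-up numbers.
const control = { name: "Group A (control)", visitors: 10000, smsSubscribers: 800 };
const variantB = { name: "Group B", visitors: 10000, smsSubscribers: 920 };
console.log(`Lift: ${lift(control, variantB).toFixed(1)}%`); // ≈ 15.0%
console.log(`z: ${zScore(control, variantB).toFixed(2)}`);   // ≈ 3.0, above 1.96, so significant at ~95%
```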
End an A/B test
You can end an A/B test at any time. After a test ends, the data for each variant remains available in the A/B tests dashboard:
- Go to Sign-up units and click the A/B tests tab.
- Click the three dots next to the A/B test you want to end and select End A/B test.
- A confirmation message appears. Click Confirm to end the test.
Note: We recommend you let a test run until it has reached statistical significance to ensure the results are reliable. If you end a test early, a warning will appear.
Ideas to test with A/B testing
Not sure where to start? Here are some common variables marketers test to optimize sign-up units.
- Sign-up unit format — e.g., full-screen vs. partial overlay
- Timing — display the sign-up unit immediately vs. after the user scrolls or spends time on the site
- Incentive — percentage discount, dollar-off, free gift, etc.
- Creative elements — imagery, button color/style, etc.
- Copy length & messaging tone — concise vs. descriptive; different CTA phrasing
- Behavioral triggers and display rules — exit intent, scroll delay, add-to-cart, etc. (especially useful across different site contexts)
Pro tip: Start with one variable at a time to clearly see what’s driving the difference in performance.
Looking for inspiration? Check out sign-up units in texts we love for real-world examples.
Scheduling A/B Tests (Open Beta)
To make running sign-up unit A/B tests more flexible and convenient, you can schedule A/B tests. This feature is currently in open beta. For access, please reach out to your CSM.
Why should you schedule A/B tests?
- Schedule when an A/B test starts: Instead of launching immediately, you can pick a specific start date and time for the test.
- Optionally deactivate existing sign-up units that might otherwise conflict with the test when it begins; this helps prevent overlapping or conflicting display rules. (Note: you can also manage this separately via Sign-up Unit Schedules; handling it during A/B test setup is just more convenient.)
- Schedule when an A/B test ends — two options (sketched in the example after this list):
- Automatically end when statistical significance is reached: Once our system detects a statistically significant winner, the test will end, the losing variant(s) will be deactivated, and the winning variant(s) will remain active.
- End at a specific date and time: At the designated end time, you can choose whether to:
- Automatically keep the better-performing variant(s) active, or
- Revert to the original control unit and keep it active.
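As a rough model of the two end-of-test options above, the sketch below shows how a scheduler could decide which unit stays active once a test ends. The types and the `resolveScheduledEnd` function are hypothetical and not the product's API.

```typescript
// Illustrative only: how a scheduled A/B test ending might be modeled.
// Names and types are hypothetical, not the product's API.
type EndRule =
  | { kind: "onSignificance" }                                   // end as soon as a winner is detected
  | { kind: "atDateTime"; endAt: Date; keep: "winner" | "control" };

interface TestState {
  now: Date;
  significantWinner?: string; // variant name, if significance has been reached
  bestPerformer: string;      // variant with the highest opt-in rate so far
  control: string;            // the original control unit (Group A)
}

// Returns the sign-up unit to keep active once the test ends,
// or null if the test should keep running.
function resolveScheduledEnd(rule: EndRule, state: TestState): string | null {
  if (rule.kind === "onSignificance") {
    // Option 1: end automatically when statistical significance is reached;
    // losing variants are deactivated and the winner stays live.
    return state.significantWinner ?? null;
  }
  // Option 2: end at a specific date and time, then either keep the
  // better-performing variant or revert to the original control unit.
  if (state.now < rule.endAt) return null;
  return rule.keep === "winner" ? state.bestPerformer : state.control;
}
```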
Recommended use cases
- Planning an A/B test to begin or end around a marketing milestone (e.g., a sale start, holiday, flash sale).
- Avoiding overlap or display conflicts with currently active sign-up units.
- Automating test wrap-up: letting the system end the test when a winner is clear, or at a scheduled time — without manual intervention.
- Coordinating A/B test cycles across multiple regions or site variations.
When scheduling A/B tests is especially helpful
- During high-traffic periods (e.g., holiday shopping, Black Friday / Cyber Monday), when timing and efficiency matter.
- When running multiple experiments successively and wanting a clean switch from one test to the next.
- When you expect overlapping sign-up units — scheduling ensures only the right units are active at any given time.