Lesson Goal
In Lesson 11 you set budgets, runway guards, and a monthly stoplight so you do not overspend or ignore risk.
Now you need to use that safety net to run real monetization experiments without chaos.
In this lesson you will:
- Prioritize a short list of post-launch experiments (offers, bundles, ad placements, retention tweaks).
- Define each experiment with a hypothesis, success metric, and exit condition.
- Tie experiments to your conservative scenario so you only run what your runway and budget allow.
By the end, you will have a small, ordered backlog of experiments you can run one at a time and measure.
Step 1 – List Candidate Experiments
Start with a simple list of things you could try after launch. Do not commit yet; just brainstorm.
Offers and pricing
- New bundle (e.g. “Starter Pack” at a fixed price).
- Limited-time discount on an existing IAP.
- New price point for an existing item (e.g. test $2.99 vs $4.99).
- Subscription trial length (e.g. 3 days vs 7 days).
Ads (if you use them)
- Ad placement (e.g. reward after level vs optional continue).
- Ad frequency (e.g. one rewarded per session vs two).
- Mediation order (which ad network is tried first).
Retention and funnels
- Onboarding step (e.g. one more tutorial screen vs skip).
- First-time offer timing (e.g. after level 3 vs after first death).
- Push or email timing (e.g. day 1 vs day 3 re-engagement).
Mini-task:
Write down 5–8 possible experiments. Include at least one from offers, one from ads (if applicable), and one from retention or funnels.
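If you keep this list in your monetization doc, plain records are enough. Below is a minimal sketch in Python, assuming you only need a title and a category to feed the scoring step; every name and value here is a placeholder, not a recommendation.

```python
# A minimal sketch of the brainstorm list as plain records.
# Titles and categories are placeholders, not recommendations.
candidates = [
    {"title": "Starter Pack at $4.99", "category": "offers"},
    {"title": "Second rewarded ad per session", "category": "ads"},
    {"title": "First-time offer after level 3", "category": "retention"},
]

for c in candidates:
    print(f"[{c['category']}] {c['title']}")
```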
Step 2 – Score by Impact, Confidence, and Cost
You cannot run everything. Score each idea on three axes (rough is fine):
- Impact – If it works, how much could it move revenue or retention? (e.g. low / medium / high.)
- Confidence – How sure are you that you can measure the result and that the change is safe? (e.g. low / medium / high.)
- Cost – How much time, engineering, or art does it take? (e.g. low / medium / high.)
Rule of thumb: prefer high impact, high confidence, low cost first; then high impact, medium confidence, low cost. Avoid starting with high-cost, low-confidence bets.
Mini-task:
Add three columns (Impact, Confidence, Cost) next to your list and score each. Sort so the best tradeoffs are at the top.
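To make the sort concrete, here is a rough sketch in Python, assuming low/medium/high map to 1–3 and priority is impact times confidence divided by cost; the weighting is illustrative, not a prescribed formula, and the example rows are placeholders.

```python
# A rough sketch of impact/confidence/cost scoring. The 1-3 mapping and
# the priority formula are illustrative; any consistent rule works.
LEVELS = {"low": 1, "medium": 2, "high": 3}

def priority(exp):
    # Higher impact and confidence raise priority; higher cost lowers it.
    return LEVELS[exp["impact"]] * LEVELS[exp["confidence"]] / LEVELS[exp["cost"]]

scored = [
    {"title": "Starter Pack at $4.99", "impact": "high", "confidence": "high", "cost": "low"},
    {"title": "Second rewarded ad per session", "impact": "medium", "confidence": "low", "cost": "low"},
    {"title": "New onboarding flow", "impact": "high", "confidence": "low", "cost": "high"},
]

for exp in sorted(scored, key=priority, reverse=True):
    print(f"{priority(exp):>4.1f}  {exp['title']}")
```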
Step 3 – Write a Hypothesis and Success Metric for Each
For the top 3–5 experiments, turn each into a one-sentence hypothesis and a single primary metric.
Good hypothesis:
“Adding a $4.99 Starter Pack will increase payers by at least 10% without hurting ARPU.”
Bad hypothesis:
“We will try a bundle and see if it makes more money.” (Too vague; no clear success.)
Good metric:
“Conversion to first purchase within 7 days” or “ARPU in the first 14 days.”
Bad metric:
“Revenue” without a time window or segment. (Too broad; hard to attribute.)
Mini-task:
For your top 3 experiments, write one hypothesis and one primary metric per experiment. Add a minimum sample size (e.g. “need at least 500 new users per variant”) so you do not decide too early.
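If you want a consistent shape for each experiment, a small record like the sketch below works; the field names and example values are assumptions, not a required format.

```python
# A minimal sketch of one experiment record, assuming a single primary
# metric and a per-variant sample floor. Field names are placeholders.
from dataclasses import dataclass

@dataclass
class Experiment:
    title: str
    hypothesis: str              # one sentence with a direction and a bar
    primary_metric: str          # one metric with a time window
    min_sample_per_variant: int  # do not decide before reaching this

starter_pack = Experiment(
    title="Starter Pack $4.99 test",
    hypothesis="A $4.99 Starter Pack increases payers by at least 10% without hurting ARPU.",
    primary_metric="Conversion to first purchase within 7 days",
    min_sample_per_variant=500,
)
print(starter_pack)
```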
Step 4 – Define Exit Conditions
Before you run an experiment, decide when you will stop or roll back.
- Win – Metric beats the bar you set; consider rolling out to 100%.
- Lose – Metric is clearly worse or unchanged after enough data; roll back and document.
- Inconclusive – Not enough data or too much noise; extend the test or pause and try something else.
Also set a time cap: “We will decide after 2 weeks no matter what, to avoid endless tests.”
Mini-task:
For each of your top 3 experiments, write:
- Win condition (e.g. “+10% conversion”).
- Lose condition (e.g. “no change or worse after 1,000 users per variant”).
- Time cap (e.g. “14 days”).
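Writing the decision rule down before the test starts keeps you honest. Here is a sketch in Python, assuming you compare a variant's rate against a baseline with a pre-set win bar, a per-variant sample floor, and a time cap; all thresholds are illustrative, not recommendations.

```python
# A sketch of the win / lose / inconclusive decision. The win bar, sample
# floor, and time cap are example values; use the ones you wrote down.
def decide(baseline, variant, n_per_variant, days_running,
           win_lift=0.10, min_n=1000, time_cap_days=14):
    if n_per_variant < min_n and days_running < time_cap_days:
        return "keep running"  # too early to call either way
    lift = (variant - baseline) / baseline
    if lift >= win_lift:
        return "win: consider rolling out to 100%"
    if lift <= 0:
        return "lose: roll back and document"
    return "inconclusive: extend once or move on"

# Example: 2.0% baseline conversion vs 2.3% in the variant after 10 days.
print(decide(baseline=0.020, variant=0.023, n_per_variant=1200, days_running=10))
```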
Step 5 – Check Against Your Runway and Budget
In Lesson 11 you set UA budget, content budget, and stoplight rules.
Use them here.
- Only run experiments you can afford. If an experiment needs paid UA to get enough users, stay within your UA cap.
- One at a time (or very few). Running five things at once makes it impossible to know what worked.
- If you are in Yellow or Red on the stoplight, pause new experiments and focus on retention and funnel clarity first.
Mini-task:
Look at your conservative revenue and your monthly UA/content budget.
Write one sentence: “We will run at most X experiments per month and spend no more than Y (time or money) on experiment setup.”
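These guards are easy to encode as a pre-flight check. A sketch, assuming a green/yellow/red stoplight label, a monthly UA cap, and the concurrency limit from the sentence you just wrote; all input values are examples.

```python
# A sketch of a pre-flight check against Lesson 11's guards.
# Stoplight labels, caps, and the concurrency limit are example inputs.
def can_start_experiment(stoplight, planned_ua_spend, ua_cap,
                         running_now, max_concurrent=1):
    if stoplight in ("yellow", "red"):
        return False  # pause new tests; fix retention and funnel first
    if planned_ua_spend > ua_cap:
        return False  # needs more paid UA than your monthly cap allows
    if running_now >= max_concurrent:
        return False  # one at a time keeps results attributable
    return True

print(can_start_experiment("green", planned_ua_spend=300, ua_cap=500, running_now=0))
```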
Step 6 – Put It in a Simple Backlog
Turn your prioritized, hypothesis-driven list into a backlog you can track on your own or share with your team.
For each experiment include:
- Title (e.g. “Starter Pack $4.99 test”).
- Hypothesis (one sentence).
- Primary metric and min sample size.
- Exit conditions (win / lose / time cap).
- Rough cost (time or money).
- Status (Not started / Running / Done – Win / Lose / Inconclusive).
Keep this in a doc or sheet and update it every time you start or finish an experiment.
Mini-task:
Create a small table (or section in your monetization doc) with the columns above.
Fill it for your top 3 experiments.
That is your first post-launch experiment backlog.
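If you prefer a sheet over a doc, the same backlog can be written as a CSV you can open anywhere. A minimal sketch with one placeholder row; the filename and column wording are assumptions.

```python
# A sketch of the backlog as a CSV. Columns mirror the list above;
# the row content is a placeholder, not real data.
import csv

COLUMNS = ["Title", "Hypothesis", "Primary metric + min sample",
           "Exit conditions", "Rough cost", "Status"]
rows = [[
    "Starter Pack $4.99 test",
    "A $4.99 Starter Pack increases payers by at least 10% without hurting ARPU.",
    "7-day first-purchase conversion; 500 users per variant",
    "Win: +10% conversion / Lose: flat or worse / Cap: 14 days",
    "2 dev-days",
    "Not started",
]]

with open("experiment_backlog.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(COLUMNS)
    writer.writerows(rows)
```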
Common Mistakes to Avoid
- Running too many experiments at once. You will not know what moved the needle.
- No clear hypothesis or metric. “Let’s try it” is not an experiment.
- Deciding too early. Stick to your minimum sample size and time cap.
- Ignoring the stoplight. If you are in Yellow or Red, fix retention and your funnel before adding more monetization tests.
- Forgetting to document. Write down what you ran, what you saw, and what you decided so you do not repeat failed ideas.
Quick Recap
In this lesson you:
- Listed candidate post-launch experiments (offers, ads, retention).
- Scored them by impact, confidence, and cost.
- Wrote a hypothesis and primary metric for the top few.
- Set exit conditions (win / lose / time cap) and checked them against your runway and budget.
- Built a small experiment backlog you can run one at a time.
In the next lesson, you will focus on store presence and positioning, so your monetization is supported by a clear, honest store page that matches how you want players to perceive your game.
Next Steps
- Revisit your budget and runway decisions if your experiment plan would exceed your guards.
- Check the A/B testing lesson for how to run tests safely and fairly.
- Bookmark this lesson and update your experiment backlog as you launch and learn.
Found this useful? Share it with your team or bookmark it for when you are ready to run post-launch experiments.