How the Optimizer Thinks
Every recommendation your Action Agent makes — pause this placement, lower this bid, shift this budget — is produced by a two-part brain: a statistics engine we call the Analyst, followed by an LLM that reviews and explains its findings.
This page walks through what the Analyst does, why it's there, and why the combination is more trustworthy than an LLM alone.
The problem with asking an LLM to optimize campaigns
Campaign data is a hypercube. For a single campaign you might have thousands of rows — every combination of placement, device, age, gender, country, creative, audience, and time of day. A few of those rows are quietly wasting spend. A few are outperforming. Most are noise.
LLMs are brilliant at language and judgment. They are not reliable at arithmetic over hundreds of numbers. Ask an LLM to read raw campaign data and propose optimizations, and three things go wrong:
- It invents numbers that aren't in the data.
- It misses real winners and losers because it can't hold the full table in working memory.
- It skips the statistical check: is this difference real, or just noise?
So we don't ask the LLM to do that job.
What the Analyst does
The Analyst is deterministic TypeScript that runs before the LLM. It performs the work a human analyst would do by hand, but across thousands of cells in a fraction of a second:
- Pulls dimensional data from Meta, Google Ads, DV360, or TikTok — broken out by placement, device, demographics, creative, audience, and more.
- Finds statistically significant cells — rows whose performance is genuinely different from the rest, not just noisy.
- Shrinks small samples toward the mean so a cell with 3 conversions isn't treated like a cell with 300.
- Applies multiple-testing correction so if 200 cells are examined, we don't flag 10 of them just by chance.
- Filters out unstable cells whose behavior changed recently — those need more data before acting.
- Estimates the dollar impact of fixing each finding, and ranks findings by expected impact — not just by p-value.
- Translates each finding into a concrete action — "exclude Audience Network", "cap bid on 55+ mobile", "pause creative 3 in line item X" — shaped by what the platform actually allows and what your operator policy permits.
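Two of the checks above can be sketched in a few lines. This is a minimal illustration, not the Analyst's actual implementation — the function names and the prior parameters are made up for the example:

```typescript
// Shrink a cell's conversion rate toward a campaign-level prior,
// weighted by sample size (empirical-Bayes style): a cell with 3
// conversions moves a lot, a cell with 300 barely moves.
function shrinkRate(
  conversions: number,
  impressions: number,
  priorRate: number,      // e.g. the campaign-wide conversion rate
  priorStrength: number,  // pseudo-impressions backing the prior
): number {
  return (conversions + priorRate * priorStrength) / (impressions + priorStrength);
}

// Benjamini-Hochberg multiple-testing correction: given the p-value of
// every cell examined, return the indices of cells that survive a
// false-discovery-rate threshold, so examining 200 cells doesn't flag
// 10 of them by chance.
function benjaminiHochberg(pValues: number[], fdr = 0.05): number[] {
  const order = pValues
    .map((p, i) => [p, i] as const)
    .sort((a, b) => a[0] - b[0]);
  const m = pValues.length;
  let cutoff = -1;
  order.forEach(([p], rank) => {
    // keep the largest rank k with p_(k) <= (k / m) * fdr
    if (p <= ((rank + 1) / m) * fdr) cutoff = rank;
  });
  return order.slice(0, cutoff + 1).map(([, i]) => i);
}
```

With a campaign-wide rate of 2% and a prior strength of 200 pseudo-impressions, a cell showing 3/100 (a raw 3%) shrinks to roughly 2.3% — noisy small cells get pulled toward the mean before they can be flagged.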
The output is a ranked list of ProposedAction objects, each carrying evidence and an expected impact.
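As a rough mental model, a ProposedAction carries the action itself, the evidence behind it, and the ranking key. The field names below are illustrative — the real schema may differ:

```typescript
// Hypothetical shape of a ProposedAction, for illustration only.
type ProposedAction = {
  kind: "exclude_placement" | "cap_bid" | "pause_creative" | "shift_budget";
  platform: "meta" | "google" | "dv360" | "tiktok";
  target: string;                  // e.g. "audience_network" or "creative:3"
  evidence: {
    cell: Record<string, string>;  // the dimension values that were measured
    metricDelta: number;           // deviation of the cell from baseline
    pValueAdjusted: number;        // after multiple-testing correction
  };
  expectedImpactUsd: number;       // ranking key: expected dollar impact
};

// Findings are ranked by expected impact, not by p-value.
const rankByImpact = (actions: ProposedAction[]): ProposedAction[] =>
  [...actions].sort((a, b) => b.expectedImpactUsd - a.expectedImpactUsd);
```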
Where the LLM comes in
The LLM receives the ranked, evidence-backed actions — not the raw data. Its job is the part LLMs are genuinely good at:
- Selection. Pick which of the ranked actions are worth surfacing to the operator in this run.
- Pruning. Drop duplicates or actions that don't fit the current context.
- Narration. Explain each recommendation in plain English, using the evidence the Analyst provided.
- Context checks. Apply brand-safety rules, seasonality knowledge, and agent memory.
The LLM never invents a number. Every figure in a recommendation traces back to a cell the Analyst measured.
How data flows
Platform APIs
↓
Dimensional metrics store
↓
Analyst — statistical primitives
↓
Per-platform translator
↓
Ranked ProposedAction[] + rationale
↓
LLM — select, prune, narrate
↓
Recommendations shown to the operator

Every step above the LLM is deterministic. Same inputs produce the same outputs. Every run writes an audit trace you can replay and inspect.
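The determinism property is straightforward to picture: every stage before the LLM is a pure function, so replaying the same input snapshot reproduces the same output, and each stage appends to a trace. A minimal sketch, with hypothetical stage names:

```typescript
// Each deterministic stage is a pure function: no I/O, no randomness.
type Stage = { name: string; fn: (input: unknown) => unknown };
type Trace = { step: string; output: unknown }[];

function runDeterministicStages(
  input: unknown,
  stages: Stage[],
): { result: unknown; trace: Trace } {
  const trace: Trace = [];
  let value = input;
  for (const { name, fn } of stages) {
    value = fn(value);                      // same input => same output
    trace.push({ step: name, output: value }); // replayable audit record
  }
  return { result: value, trace };
}
```

Because every stage is pure, running the pipeline twice on the same snapshot yields identical results and identical traces — which is what makes a run inspectable after the fact.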
Why this matters for you
- Trustworthy numbers. Every metric in a recommendation is traceable to real data. No hallucinations.
- Fewer false positives. The Analyst applies the statistical checks a real analyst would — shrinkage, false-discovery-rate (FDR) correction, regime detection — so you don't chase noise.
- Real dollar impact. Recommendations are ranked by expected impact on your objective, not by how "interesting" they sound.
- Auditable. Every run writes a full trace: which cells were examined, which were flagged, what the LLM chose to show, and what the operator approved.
- Platform-aware. The translator layer understands what Meta, Google, DV360, and TikTok each allow — so recommendations are always executable actions, not abstract advice.
Supported platforms
The Analyst currently ships with translators for:
- Meta Ads — placement exclusions, audience exclusions, age carve-outs, creative pauses, budget caps
- Google Ads — placement/topic/channel exclusions, device and geo bid modifiers, creative pauses
- Display & Video 360 — line-item placement exclusions, device bid modifiers, creative pauses, line-item budget adjustments
- TikTok Ads — ad and ad-group pauses
Additional platforms — including publisher-side monetization (Google Ad Manager, AdMob) — will share the same statistical core and be added through new translators.
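The translator layer is what keeps the statistical core platform-agnostic. As a sketch — interface and names are hypothetical, not the actual API — each platform implements the same contract and emits only actions it can execute:

```typescript
// Hypothetical translator contract, for illustration only.
interface Finding {
  dimension: string;   // e.g. "placement", "ad"
  value: string;       // e.g. "audience_network", "12345"
  direction: "underperforming" | "overperforming";
}

interface PlatformTranslator {
  platform: string;
  // Zero or more executable actions; unsupported findings map to none.
  translate(finding: Finding): string[];
}

// TikTok supports the narrowest action set in the list above, so its
// translator maps eligible findings to pauses and everything else to nothing.
const tiktokTranslator: PlatformTranslator = {
  platform: "tiktok",
  translate: (f) =>
    f.direction === "underperforming" && f.dimension === "ad"
      ? [`pause ad ${f.value}`]
      : [],
};
```

Adding a platform means writing a new translator against this contract; the shared statistical core never changes.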
Further reading
- Action Agents — how to configure an optimizer agent
- Intelligence & Learning — how memories and rules shape what the Analyst and LLM will propose
- Insights — where optimizer runs and their recommendations surface in the dashboard