An end-to-end operating system that takes a campaign brief and runs the full lifecycle — research, strategy, build, launch, monitor, optimise — with zero idle human hours between steps.
Every step that used to need a meeting, a deck, or a Slack thread is now an event in a Make.com scenario or a call to Claude. Humans only show up at the four moments where judgement actually changes the outcome.
One brand memory layer feeds four autonomous scenarios. One Google Sheet is the source of truth. Claude is the brain. The result is a campaign machine that compounds with every brief you put through it.
Voice, audience, geo, competitors, historical CPA/CTR/ROAS, negative-keyword seeds, preferred creative angles, conversion IDs — all in one Make data store. Every scenario reads from it before doing anything.
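For concreteness, here is a minimal sketch of what that record might look like, expressed as a Python dict. The field names and example values are illustrative assumptions, not the literal data-store schema.

```python
# Illustrative shape of the brand-memory record every scenario reads first.
# Field names and values are assumptions, not the literal data-store keys.
BRAND_MEMORY = {
    "brand_voice": "plain-spoken, confident, no jargon",
    "audience": {"ages": "25-44", "segments": ["new parents", "gift buyers"]},
    "geo": ["UK", "IE"],
    "competitors": ["CompetitorA", "CompetitorB"],
    "historical_kpis": {"cpa": 18.40, "ctr": 0.021, "roas": 3.2},
    "negative_keyword_seeds": ["free", "jobs", "diy"],
    "preferred_creative_angles": ["social proof", "speed of delivery"],
    "conversion_ids": {"google": "AW-0000000000", "meta": "pixel-0000000000"},
}
```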
Brief → 4 campaigns × 15 RSAs with keywords, negatives, geo targeting. Auto-seeds Benchmarks tab. Pings Slack when ready.
Brief → 5 creative concepts (2 static, 2 carousel, 1 video) with hook, copy, visual direction. Auto-routes to designer.
Three branches: build campaigns paused → 24h health diagnosis → 72h search-terms optimiser proposes negatives.
Three branches: build paused → 24h CPL/CTR/frequency diagnosis → 72h fatigue detection + new-angle proposals. All four cadences are gathered in the sketch below.
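The four scenarios run on different clocks. A minimal sketch of those cadences in one place, with intervals taken from the workflow description; in Make these live as scheduled triggers on each scenario, not as application code.

```python
# The cadences above, gathered as one config. Intervals come from the
# workflow description; the key names are assumptions for the sketch.
CADENCES = {
    "pre_launch_poll": "every 15 min",  # watch the Briefs tab for new rows
    "post_launch_router": "hourly",     # route approved rows to a branch
    "health_diagnosis": "every 24 h",   # KPI check against the Benchmarks tab
    "optimiser": "every 72 h",          # negatives (Google) / angles (Meta)
}
```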
A strategist drops a brief — platforms, budget, KPI, target value, landing URL — into the Briefs tab. Within the polling window, the pre-launch scenarios pick it up and walk the seven steps below for each platform in parallel.
It doesn't matter who drops the brief. The system isn't waiting on a particular strategist, a particular copywriter, or a particular account lead. Anyone with sheet access starts the chain.
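A hedged sketch of how a pre-launch scenario might validate a brief row before routing it. Column names mirror the fields named above but are assumptions, not the literal sheet headers.

```python
# Fail fast on a malformed brief so a bad row never starts the chain.
# Column names are assumptions mirroring the brief fields above.
REQUIRED = ("platforms", "budget", "kpi", "target_value", "landing_url")

def parse_brief(row: dict) -> dict:
    """Validate one Briefs-tab row before it is routed by platform."""
    missing = [field for field in REQUIRED if not row.get(field)]
    if missing:
        raise ValueError(f"brief row is missing: {', '.join(missing)}")
    if row["platforms"] not in ("Google", "Meta", "Both"):
        raise ValueError(f"unknown platform: {row['platforms']!r}")
    return {**row, "budget": float(row["budget"]),
            "target_value": float(row["target_value"])}

# Example brief row, with illustrative values:
brief = parse_brief({
    "platforms": "Both",
    "budget": "12000",
    "kpi": "CPA",
    "target_value": "20",
    "landing_url": "https://example.com/spring-sale",
})
```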
Polls the Briefs tab every 15 min. Routes by platform — Google, Meta, or Both.
Updates row status so the team can see at a glance what's in flight.
Pulls the brand memory record so Claude has full context before generating anything.
Returns structured JSON. Google: 4 campaigns × 15 RSAs. Meta: 5 ad concepts with hook + copy + visual brief. The shapes are sketched after these steps.
Each campaign/concept is written as a row in the Strategy tab. Benchmarks tab gets seeded with target KPIs.
Row status flips to Strategy_Ready. The strategist now has something to review.
"Strategy ready — review on Sheet." Human approves; scenario hands off to Phase 2.
The only human moment in Phase 1 — strategist confirms benchmarks and sets status to Approved.
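For step 4's output, here is an illustrative sketch of the shapes Claude is asked to return. Key names are assumptions; the counts (4 campaigns × 15 RSAs, 5 concepts) come straight from the workflow above.

```python
# Illustrative shapes for the structured JSON in step 4. Key names are
# assumptions; the counts come from the workflow above.
GOOGLE_SHAPE = {
    "campaigns": [            # exactly 4
        {
            "name": "...",
            "geo_targets": ["..."],
            "negatives": ["..."],
            "rsas": [         # 15 per campaign
                {"keywords": ["..."],
                 "headlines": ["..."],
                 "descriptions": ["..."]},
            ],
        },
    ],
}

META_SHAPE = {
    "concepts": [             # exactly 5: 2 static, 2 carousel, 1 video
        {"format": "static",
         "hook": "...",
         "copy": "...",
         "visual_brief": "..."},
    ],
}
```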
The gear shifts here. In Phase 1, the AI did the work and the human approved. In Phase 2, the AI watches and proposes, and the human decides. Every action visible, every action reversible, every action logged.
Once approved, the post-launch scenarios pick up the same row hourly and route it through one of three branches based on its current state. Build it. Diagnose it. Optimise it. Then keep cycling. The routing logic is sketched after the three branches below.
Reads strategy rows for the brief, creates campaigns paused in the Ads UI, marks the row Built, pings Slack. Human flips Live with a single click.
Pulls last-7-day performance, hands it to Claude for diagnosis against Benchmarks, logs the verdict to the OptimizationLog, alerts Slack only if something's outside tolerance.
Google: pulls last-7-day search terms → Claude proposes negative keywords. Meta: detects ad fatigue → Claude proposes 2-3 fresh angles. Human 👍 to apply.
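A minimal sketch of the hourly routing behind those three branches, plus the tolerance check that gates Slack alerts. Status values mirror the workflow above; the timestamp field and the 15% tolerance are assumptions for the sketch.

```python
from datetime import datetime, timedelta

def route(row: dict, now: datetime) -> str:
    """Pick a branch for one row. Status values mirror the workflow above;
    the went_live_at field is an assumption. A real scenario would also
    track last-run timestamps so branches repeat on their own cadence."""
    if row["status"] == "Approved":
        return "build"                  # create campaigns paused, mark Built
    if row["status"] == "Live":
        age = now - row["went_live_at"]
        if age >= timedelta(hours=72):
            return "optimise"           # negatives (Google) / angles (Meta)
        if age >= timedelta(hours=24):
            return "diagnose"           # last-7-day pull, Claude verdict
    return "wait"                       # e.g. Built but not yet flipped Live

def outside_tolerance(metric: float, target: float, tol: float = 0.15) -> bool:
    """Gate Slack alerts: ping only when a KPI drifts beyond tolerance of
    its Benchmarks target. The 15% default is an assumption."""
    return abs(metric - target) / target > tol
```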
Measured against a traditional agency workflow operating at parity on the same client and the same media budget. The system doesn't replace the strategist — it removes everything between the strategist's decisions.
A modeled CAC curve for the same e-commerce account run two ways. The traditional agency runs weekly reviews; the AI-native system runs an autonomous 72-hour optimisation loop. Both lines start at the same CAC. Both hit the same three fatigue events. Only one of them stops bleeding.
Traditional: ads keep running at the same hooks until next Monday's review. CAC climbs 80%.
AI-native: the 72h scenario detects CTR decay, Claude proposes 2-3 fresh angles, designer ships them. CAC bumps for ~3 days, then resumes decline.
Traditional: frequency spirals; the same audience sees the same ad 8+ times. CAC spikes 120%.
AI-native: frequency thresholds in Benchmarks tab trip the alert. Claude proposes new lookalike + interest combinations. CAC bumps once, then keeps falling.
Traditional: irrelevant search terms eat 30%+ of spend before the next manual search-term review. CAC peaks 175% above start.
AI-native: the 72h search-terms scenario hands Claude a list, Claude proposes negatives, human 👍 applies them. Wasted spend never compounds.
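To make the modeled curve reproducible, here is a toy version of the two lines. The spike peaks (80%, 120%, 175%) come from the narrative above; the baseline decay rate, the AI-native bump size, and the recovery windows are illustrative assumptions, not measured data.

```python
# Toy model of the two CAC lines. Spike peaks come from the narrative;
# everything else is an illustrative assumption.
def cac_curve(days, events, recovery_days):
    """CAC drifts down slowly; each fatigue event adds a spike that decays
    linearly over recovery_days."""
    base, spike, left, out = 100.0, 0.0, 0, []
    for day in range(days):
        if day in events:
            spike, left = events[day] * 100.0, recovery_days
        out.append(base + (spike * left / recovery_days if left else 0.0))
        left = max(left - 1, 0)
        base *= 0.995  # ongoing optimisation nudges CAC down ~0.5%/day
    return out

fatigue_days = (20, 45, 70)                  # the three fatigue events
traditional = cac_curve(
    90, dict(zip(fatigue_days, (0.80, 1.20, 1.75))),
    recovery_days=14)                        # waits for the Monday review
ai_native = cac_curve(
    90, {day: 0.10 for day in fatigue_days},
    recovery_days=3)                         # 72h loop catches each event
```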
Traditional agencies are bottlenecked by people. The senior strategist holds the brand context. One designer holds the creative voice. One media buyer holds the platform expertise. Lose any one of them — to a holiday, a flu, a notice period — and the pipeline stalls.
Brand voice, audience, geo, competitors, historical KPIs, negative-keyword seeds: all in one Make data store record. Not in someone's head. Not in someone's deck.
The "thinking" — how to translate a brief into 4 Google campaigns or 5 Meta concepts — is in versioned prompts. Not in a senior strategist's intuition.
Every brief, every status flip, every benchmark, every optimisation proposal is a row. Anyone can see exactly where every campaign sits at any moment.
Building campaigns, pulling insights, applying negatives — all scenario modules calling Google Ads + Meta APIs. Not "the one person who knows how to do it."
The result: any team member with access can pick up any brief at any step. Onboarding a new operator takes hours, not months. Holidays don't pause the pipeline. Resignations don't lose institutional knowledge. The system is the institutional knowledge.
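As a small example of "the system is the institutional knowledge": one way the versioned prompts could be stored and pinned. The template text and version scheme are illustrative assumptions, not the production prompts.

```python
# One way to pin the "thinking" to a version rather than a person.
# Template text and the version scheme are illustrative assumptions.
PROMPTS = {
    ("google_strategy", "v3"): (
        "You are the media strategist for {brand}. Using the brand memory "
        "below, turn the brief into exactly 4 campaigns, each with 15 RSAs, "
        "plus keywords, negatives, and geo targeting. Return structured "
        "JSON only.\n\nBrand memory:\n{memory}\n\nBrief:\n{brief}"
    ),
}

def render_prompt(name: str, version: str, **context: str) -> str:
    """Fetch a pinned prompt version and fill it in. Changing the strategy
    means shipping v4, not re-briefing a person."""
    return PROMPTS[(name, version)].format(**context)
```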
Every step is a real Make.com module wired to Google Sheets, a brand-memory data store, Claude, and Slack. Below is exactly what's running — these are the live scenarios, not mockups.