Performance Marketing · Reimagined

The agency
runs itself.

An end-to-end operating system that takes a campaign brief and runs the full lifecycle — research, strategy, build, launch, monitor, optimise — with zero idle human hours between steps.

The Shift

Traditional agencies move in days and weeks.
This one moves in minutes.

Every step that used to need a meeting, a deck, or a Slack thread is now an event in a Make.com scenario or a call to Claude. Humans only show up at the four moments where judgement actually changes the outcome.

Traditional agency

  • Brief → strategy: 3–7 days
  • Strategy → launch: 5–10 days
  • Weekly review cadence
  • 2.5 FTE per active client
  • 25–35% wasted spend per campaign
  • Anomalies caught 5–7 days late

AI-native system

  • Brief → strategy: under an hour
  • Strategy → launch: same day
  • 72-hour optimisation cadence
  • 0.5 FTE per active client
  • 8–12% wasted spend per campaign
  • Anomalies surfaced inside 24 hours

The Architecture

Six pieces.
One operating system.

One brand memory layer feeds four autonomous scenarios. One Google Sheet is the source of truth. Claude is the brain. The result is a campaign machine that compounds with every brief you put through it.

Memory

Brand Memory Layer

Voice, audience, geo, competitors, historical CPA/CTR/ROAS, negative-keyword seeds, preferred creative angles, conversion IDs — all in one Make data store. Every scenario reads from it before doing anything.
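As a rough sketch, the memory record can be thought of as one structured object that every scenario flattens into context before prompting. The field names below are illustrative assumptions, not the live schema:

```python
# Hypothetical shape of one brand-memory record, as a plain dict.
# All field names and values here are illustrative, not the live schema.
brand_memory = {
    "brand": "Acme Outdoor",
    "voice": "direct, practical, no hype",
    "audience": "35-55, urban, weekend hikers",
    "geo": ["GB", "IE"],
    "competitors": ["TrailCo", "PeakGear"],
    "historical_kpis": {"cpa": 24.50, "ctr": 0.031, "roas": 3.2},
    "negative_keyword_seeds": ["free", "jobs", "diy"],
    "preferred_angles": ["durability", "weather-proofing"],
    "conversion_ids": ["AW-123456789/AbCdEf"],
}

def build_claude_context(record: dict) -> str:
    """Flatten a memory record into the context block a prompt receives."""
    return "\n".join(f"{key}: {value}" for key, value in record.items())
```

Because every scenario reads the same record, a change to brand voice or negative-keyword seeds propagates to all four scenarios at once.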

Scenario G1

Google Pre-Launch

Brief → 4 campaigns × 15 RSAs with keywords, negatives, geo targeting. Auto-seeds Benchmarks tab. Pings Slack when ready.

Scenario M1

Meta Pre-Launch

Brief → 5 creative concepts (2 static, 2 carousel, 1 video) with hook, copy, visual direction. Auto-routes to designer.

Scenario G2

Google Post-Launch

Three branches: build (campaigns created paused) → 24h health diagnosis → 72h search-terms optimiser proposing negatives.

Scenario M2

Meta Post-Launch

Three branches: build (paused) → 24h CPL/CTR/frequency diagnosis → 72h fatigue detection + new-angle proposals.

Phase 1 · AI in the loop

Brief in. Strategy out.
Zero human time in between.

A strategist drops a brief — platforms, budget, KPI, target value, landing URL — into the Briefs tab. Within the polling window, the pre-launch scenarios pick it up and walk the seven steps below for each platform in parallel.

It doesn't matter who drops the brief. The system isn't waiting on a particular strategist, a particular copywriter, or a particular account lead. Anyone with sheet access starts the chain.

Trigger

Polls the Briefs tab every 15 min. Routes by platform — Google, Meta, or Both.

Mark Researching

Updates row status so the team can see at a glance what's in flight.

Read Memory

Pulls the brand memory record so Claude has full context before generating anything.

Claude Generates

Returns structured JSON — Google: 4 campaigns × 15 RSAs. Meta: 5 ad concepts with hook + copy + visual brief.

Write Strategy

Each campaign/concept is written as a row in the Strategy tab. Benchmarks tab gets seeded with target KPIs.

Mark Ready

Row status flips to Strategy_Ready. The strategist now has something to review.

Slack Ping

"Strategy ready — review on Sheet." Human approves; scenario hands off to Phase 2.

Human Gate

The only human moment in Phase 1 — strategist confirms benchmarks and sets status to Approved.
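The seven steps above can be sketched as a single pipeline function. This is a minimal mock, not the live Make.com scenario; the helper names (`call_claude`, `write_strategy_rows`, `ping_slack`) stand in for real modules:

```python
# A minimal sketch of the Phase 1 flow with stubbed-out integrations.
# call_claude, write_strategy_rows, and ping_slack are placeholders for
# the real Make.com modules; only the control flow mirrors the scenario.

def run_pre_launch(brief: dict, memory: dict) -> dict:
    """Walk one brief through the Phase 1 steps for a single platform."""
    brief["status"] = "Researching"                    # mark row in flight
    context = {**memory, **brief}                      # read brand memory first
    strategy = call_claude(context)                    # structured JSON out
    rows = write_strategy_rows(brief["id"], strategy)  # one row per campaign
    brief["status"] = "Strategy_Ready"                 # flip status for review
    ping_slack(f"Strategy ready for brief {brief['id']} — review on Sheet")
    return {"brief": brief, "rows": rows}

def call_claude(context: dict) -> list:
    # Placeholder: the live scenario expects structured JSON from Claude
    # (Google: 4 campaigns x 15 RSAs; Meta: 5 concepts).
    return [{"campaign": f"Campaign {i + 1}", "rsas": 15} for i in range(4)]

def write_strategy_rows(brief_id: str, strategy: list) -> list:
    return [{"brief_id": brief_id, **campaign} for campaign in strategy]

def ping_slack(message: str) -> None:
    print(message)
```

The human gate sits outside this function: nothing proceeds to Phase 2 until a strategist flips the row to Approved.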

Phase 2 · Human in the loop

Live. Watched. Optimised.
Forever.

The gear flips here. In Phase 1, the AI did the work and the human approved. In Phase 2, the AI watches and proposes — and the human decides. Every action visible, every action reversible, every action logged.

Once approved, the post-launch scenarios pick up the same row hourly and route it through one of three branches based on its current state. Build it. Diagnose it. Optimise it. Then keep cycling.

Branch A

Build

Reads strategy rows for the brief, creates campaigns paused in the Ads UI, marks the row Built, pings Slack. Human flips Live with a single click.

Branch B · 24h

Health Check

Pulls last-7-day performance, hands it to Claude for diagnosis against Benchmarks, logs the verdict to the OptimizationLog, alerts Slack only if something's outside tolerance.

Branch C · 72h

Optimise

Google: pulls last-7-day search terms → Claude proposes negative keywords. Meta: detects ad fatigue → Claude proposes 2-3 fresh angles. Human 👍 to apply.
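The hourly routing logic reduces to a small decision function over the row's status and age. A sketch, assuming the status names used above and simple hourly windows (the exact windowing in the live scenarios may differ):

```python
# A sketch of the Phase 2 hourly router. Status names ("Approved", "Live")
# follow the Sheet states described above; the window logic is an assumption.

def route(row: dict, hours_live: float) -> str:
    """Pick the post-launch branch for a row on each hourly pass."""
    if row["status"] == "Approved":
        return "build"              # Branch A: create campaigns paused
    if row["status"] == "Live":
        if hours_live % 72 < 1:     # Branch C window: every 72 hours
            return "optimise"
        if hours_live % 24 < 1:     # Branch B window: every 24 hours
            return "health_check"
    return "skip"                   # nothing due on this pass
```

Checking the 72-hour window before the 24-hour one matters: every third day both windows are open, and the optimiser run subsumes the health check.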

Outcomes

The numbers that matter.

Measured against a traditional agency workflow operating at parity on the same client and the same media budget. The system doesn't replace the strategist — it removes the idle time between the strategist's decisions.

  • ~95% faster brief → strategy (days → hours)
  • ~90% faster strategy → launch (days → hours)
  • <24h anomaly detection lag (vs 5–7 days with weekly review)
  • ~65% less wasted spend (25–35% → 8–12% per campaign)
  • 72h optimisation cadence (vs weekly)
  • ~80% lower FTE per client (2.5 FTE → 0.5 FTE)
  • 30 min to onboard a new brand (data store + scenario clones)
  • Scale across clients (one architecture, many brands)
  • 0 surprises (every action human-approved)

45 days, same client, same budget

The math the agency
doesn't put in the deck.

A modeled CAC curve for the same e-commerce account run two ways. The traditional agency runs weekly reviews; the AI-native system runs an autonomous 72-hour optimisation loop. Both lines start at the same CAC. Both hit the same three fatigue events. Only one of them stops bleeding.

[Chart: relative cost per acquisition over days since launch, Day 0 to Day 45. Both lines start at the same CAC and pass three fatigue events: creative (Day 14), audience (Day 21), keyword (Day 28). At Day 45 the traditional line sits at +205% CAC, the AI-native line at −40%.]
Traditional agency · weekly review cadence
AI-native system · 72-hour autonomous loop
Day 14

Creative fatigue

Traditional: ads keep running on the same hooks until next Monday's review. CAC climbs 80%.

AI-native: the 72h scenario detects CTR decay, Claude proposes 2-3 fresh angles, designer ships them. CAC bumps for ~3 days, then resumes decline.

Day 21

Audience fatigue

Traditional: frequency spirals; the same audience sees the same ad 8+ times. CAC spikes 120%.

AI-native: frequency thresholds in Benchmarks tab trip the alert. Claude proposes new lookalike + interest combinations. CAC bumps once, then keeps falling.

Day 28

Keyword fatigue

Traditional: irrelevant search terms eat 30%+ of spend before the next manual search-term review. CAC peaks 175% above start.

AI-native: the 72h search-terms scenario hands Claude a list, Claude proposes negatives, human 👍 applies them. Wasted spend never compounds.

No key-person risk

The system doesn't
care who's on holiday.

Traditional agencies are bottlenecked by people. The senior strategist holds the brand context. One designer holds the creative voice. One media buyer holds the platform expertise. Lose any one of them — to a holiday, a flu, a notice period — and the pipeline stalls.

01

Context lives in the Data Store

Brand voice, audience, geo, competitors, historical KPIs, negative-keyword seeds — all in one Make data store record. Not in someone's head. Not in someone's deck.

02

Strategy lives in Claude prompts

The "thinking" — how to translate a brief into 4 Google campaigns or 5 Meta concepts — is in versioned prompts. Not in a senior strategist's intuition.

03

State lives in the Sheet

Every brief, every status flip, every benchmark, every optimisation proposal is a row. Anyone can see exactly where every campaign sits at any moment.

04

Actions live in Make + the APIs

Building campaigns, pulling insights, applying negatives — all scenario modules calling Google Ads + Meta APIs. Not "the one person who knows how to do it."

The result: any team member with access can pick up any brief at any step. Onboarding a new operator takes hours, not months. Holidays don't pause the pipeline. Resignations don't lose institutional knowledge. The system is the institutional knowledge.

Inside the system

Four scenarios.
One loop that never stops.

Every step is a real Make.com module wired to Google Sheets, a brand-memory data store, Claude, and Slack. Below is exactly what's running — these are the live scenarios, not mockups.

make.com · scenarios · G1
Google Pre-Launch scenario
G1 · Google · Brief

Google Pre-Launch

Sheets trigger → brand memory → Claude → strategy rows in the Sheet → Slack ping. Linear, 12 modules.

make.com · scenarios · G2
Google Post-Launch scenario
G2 · Google · Automation

Google Post-Launch

Router with 3 branches: Build, 24h Health, 72h Optimise. Each branch hits Google Ads API + Claude + Slack.

make.com · scenarios · M1
Meta Pre-Launch scenario
M1 · Meta · Brief

Meta Pre-Launch

Same backbone as G1, tuned for Meta: 5 creative concepts per brief delivered as designer-ready briefs.

make.com · scenarios · M2
Meta Post-Launch scenario
M2 · Meta · Automation

Meta Post-Launch

Router with 3 branches over the Meta Graph API: Build, 24h CPL/CTR diagnosis, 72h fatigue + new angles.