AI for Growth

I Tried Running Facebook Ads with AI Instead of an Agency — Here's What Happened

No agency. No designer. No copywriter. I built an autonomous AI agent that researches competitors, designs creatives, deploys campaigns via the Meta API, and monitors performance every 12 hours — all without me touching Ads Manager. $1-2 leads on a $25/day budget. Here's exactly how it works.

February 22, 2026 · Espen · 14 min read

Most small businesses pay $1,500-2,500/month to an agency just to manage their Facebook ads — before a single dollar goes to ad spend.

I wanted to know: could an AI agent replace the agency entirely? Not "AI wrote my ads" — an autonomous system that researches, creates, deploys, monitors, and optimizes campaigns on its own. This is the honest story of what happened when I built one.

The Agency Problem Nobody Talks About

Let me paint a picture you might recognize.

You're a business owner. You know you need Facebook ads to grow. You've tried boosting posts yourself — it felt like throwing money into a bonfire. So you start looking at agencies.

The quotes come in: $1,500/month on the low end. $2,500/month if they're "good." Plus a setup fee. Plus a minimum 3-month commitment. That's $4,500-7,500 before they've run a single ad.

And here's the part that really stings: most agencies managing accounts under $5,000/month in ad spend aren't giving you their A-team. You're getting a junior media buyer following a template. They'll create some generic creatives, throw up a few audiences, and send you a monthly report with enough jargon to justify the invoice.

I've been in growth and marketing for years. I know how Facebook ads work. But I also know that the actual labor — competitor research, copywriting, creative production, campaign setup, performance monitoring, optimization — is tedious, repetitive work. Exactly the kind of work an AI agent should be able to handle autonomously.

So I didn't just ask AI to "write me some ad copy." I built an autonomous ad operations system. An AI agent that researches competitors, designs a creative strategy, builds the ads, deploys them via the Meta Ads API, monitors performance on a schedule, and improves itself based on what it learns.

Spoiler: it works. But the path there was messier than I expected.

The Autonomous Ad System

This isn't "I asked ChatGPT to write some headlines." This is an AI agent that runs my ad operations end-to-end. Here's the architecture:

The Agent Stack

I used Claude Code, Anthropic’s official CLI. Claude Code isn’t a chatbot — it’s an agentic runtime that connects to real APIs via MCP servers, spawns parallel sub-agents, runs scheduled jobs, and maintains persistent memory across sessions through CLAUDE.md.

The key difference between this and "using AI for ads" is autonomy. Here's what the agent actually does:

Phase 1: Competitive Research (10 Parallel Subagents)

The agent spawns 10 subagents simultaneously, each researching a different competitor's ad strategy. They analyze ad libraries, landing pages, creative styles, hook patterns, and offer structures. All 10 run in parallel — work that would take a human researcher days happens in minutes. The main agent then synthesizes the findings into a unified competitive intelligence brief.
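The fan-out/fan-in pattern behind this phase can be sketched in a few lines of Python. Everything here is illustrative: the competitor names and the research function are placeholders for what the real subagents do (browsing ad libraries and landing pages).

```python
from concurrent.futures import ThreadPoolExecutor

# Placeholder competitor list -- the real system targets actual businesses.
COMPETITORS = [f"competitor_{i}" for i in range(1, 11)]

def research_competitor(name: str) -> dict:
    """Stand-in for one subagent's research task.

    A real subagent would return hook patterns, creative styles, and
    offer structures pulled from the competitor's live ads and pages."""
    return {"competitor": name, "hooks": [], "styles": [], "offers": []}

def run_research(competitors=COMPETITORS) -> list:
    # All research tasks run in parallel, mirroring the 10 subagents.
    with ThreadPoolExecutor(max_workers=len(competitors)) as pool:
        return list(pool.map(research_competitor, competitors))

def synthesize(findings: list) -> dict:
    """Main-agent step: merge per-competitor findings into one brief."""
    return {
        "competitors_analyzed": len(findings),
        "all_hooks": [h for f in findings for h in f["hooks"]],
        "all_styles": [s for f in findings for s in f["styles"]],
    }

brief = synthesize(run_research())
```

The synthesis step is the part that matters: ten separate reports are useless until one agent reads them all and writes a single brief.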

Phase 2: Creative Strategy & Production

Based on the research synthesis, the agent designs a creative strategy — which visual styles to test, which hook angles to pursue, which offers to lead with. Then it builds the actual ad creatives: it writes HTML for each ad image, renders it in a browser, and screenshots it to PNG. No designer. No Canva. The agent produces production-ready ad creatives autonomously.
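To make the HTML-to-creative step concrete, here is a minimal sketch. The layout values and copy below are made up; the real agent derives them from its research brief, then loads the page in a headless browser and screenshots it to a 1080x1080 PNG (e.g. with Playwright's page.screenshot, omitted here).

```python
def render_ad_html(headline: str, body: str, cta: str,
                   bg: str = "#faf6ef", accent: str = "#1a1a1a") -> str:
    """Build a 1080x1080 ad as a standalone HTML page.

    Defaults approximate the cream/serif "editorial" look; the agent
    screenshots the rendered page to PNG in a headless browser."""
    return f"""<!DOCTYPE html>
<html><head><style>
  body {{ margin:0; width:1080px; height:1080px; background:{bg};
          font-family: Georgia, serif; color:{accent};
          display:flex; flex-direction:column; justify-content:center;
          padding:80px; box-sizing:border-box; }}
  h1 {{ font-size:64px; line-height:1.15; }}
  p  {{ font-size:30px; }}
  .cta {{ font-size:26px; text-decoration:underline; }}
</style></head>
<body><h1>{headline}</h1><p>{body}</p>
<div class="cta">{cta}</div></body></html>"""

# Hypothetical copy for illustration:
html = render_ad_html(
    headline="What if your ads ran themselves?",
    body="An AI agent researches, writes, and deploys on a $25/day budget.",
    cta="Get the free breakdown",
)
```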

Phase 3: Campaign Deployment via Meta Ads API

The agent connects directly to the Meta Ads API. It creates campaigns, ad sets, and ads programmatically. It uploads the creatives, sets the copy, configures targeting, and launches — all without me opening Ads Manager. The campaign structure follows a testing-then-scaling framework: ABO for testing ($10/day per ad set, one ad each), CBO for scaling winners ($15/day).

Phase 4: Autonomous Performance Monitoring (Every 12 Hours)

This is where it gets truly agentic. A cron job triggers the agent every 12 hours to pull fresh performance data from the Meta API. It analyzes CPL (cost per lead), CTR (click-through rate), and ROAS. It flags underperformers, recommends budget reallocation, and identifies which creatives and hooks are winning. It doesn't just report numbers — it interprets them and suggests next actions.
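The analysis step of each check can be sketched as a small function over Insights-style rows (field names simplified here); a crontab entry like `0 */12 * * *` would trigger the real agent run.

```python
def analyze(rows, target_cpl=2.0, kill_multiple=3):
    """One 12-hour check: compute CPL/CTR per ad and suggest an action.

    `rows` mimics the shape of Meta Insights results, with simplified
    field names (spend, clicks, impressions, leads)."""
    report = []
    for r in rows:
        cpl = r["spend"] / r["leads"] if r["leads"] else float("inf")
        ctr = 100.0 * r["clicks"] / r["impressions"]
        if r["leads"] == 0 and r["spend"] >= kill_multiple * target_cpl:
            action = "kill"    # burned 3x target CPL with zero leads
        elif cpl <= target_cpl:
            action = "scale"   # winner: move to the CBO campaign
        else:
            action = "watch"
        report.append({"ad": r["ad"], "cpl": round(cpl, 2),
                       "ctr": round(ctr, 2), "action": action})
    return report

# Illustrative numbers, not real campaign data:
report = analyze([
    {"ad": "cream_desire_1", "spend": 12.0, "clicks": 40,
     "impressions": 2500, "leads": 9},
    {"ad": "dark_instr_1", "spend": 12.0, "clicks": 55,
     "impressions": 2500, "leads": 0},
])
```

The real agent wraps output like this in a written interpretation; the decision rules are the part a script can own.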

Phase 5: The Self-Improving Loop

After each performance check, the agent updates a persistent learnings database. Every insight — which creative styles convert, which hooks fail, which audiences respond — gets recorded. Future creative decisions are informed by this accumulated knowledge. The agent doesn't repeat mistakes. It builds on what it's learned.
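A learnings database can be as simple as an append-only JSON file the agent reads before every creative decision. This sketch uses a temp file and insights echoing this article's findings.

```python
import datetime
import json
import pathlib
import tempfile

def record_learning(db_path, insight: str, evidence: str) -> list:
    """Append one insight to a persistent JSON file and return the full
    history (what the agent reads before each new creative decision)."""
    path = pathlib.Path(db_path)
    learnings = json.loads(path.read_text()) if path.exists() else []
    learnings.append({
        "date": datetime.date.today().isoformat(),
        "insight": insight,
        "evidence": evidence,
    })
    path.write_text(json.dumps(learnings, indent=2))
    return learnings

# Demo in a temp file so the sketch is self-contained:
db = pathlib.Path(tempfile.gettempdir()) / "ad_learnings_demo.json"
db.unlink(missing_ok=True)
record_learning(db, "Editorial cream style beats dark/aggressive",
                "3-4x lower CPL across creative styles")
history = record_learning(db, "Desire-based hooks beat instructional",
                          "slightly lower CTR, far better conversion")
```

Because the file persists across sessions, the next creative run starts from everything the last one learned.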

This is the difference between "AI wrote my ads" and an autonomous ad operations system. The system gets smarter with every cycle.

Total daily budget: $25. Total monthly budget: roughly $750. Compare that to $1,500-2,500/month for an agency — and the agent works 24/7, checks performance twice a day, and never takes a vacation.

What the Agent Tested

The agent didn't just generate random variations. It designed a systematic testing framework based on its competitive research, then executed it.

Creative Styles

Based on the competitor analysis, the agent identified three dominant visual approaches worth testing, including the calm editorial cream look and the dark, aggressive style discussed in the results below.

Each style was built entirely in HTML by the agent, screenshotted to PNG, and deployed via the API. Total production time per creative: zero human minutes. The agent handled it all.

Hook Types: The Callaway Method

This is where the agent's research capability really shone. I pointed it at a video transcript from a direct-response marketing method (the Callaway approach to desire-based advertising). The agent didn't just summarize the transcript — it extracted 5 distinct hook templates from the methodology, then autonomously generated ad variations using each template.

The research → synthesis → application pipeline looked like this:

Instructional Hooks (Templates 1-2)

These teach something. "Here's how to run Facebook ads." "5 steps to getting more leads." "The framework top marketers use." They lead with value and education.

Desire-Based Hooks (Templates 3-5)

Extracted from the Callaway method, these tap into what your audience wants. "What if you could get leads while you sleep?" "Imagine waking up to 10 new leads every morning." "Business owners are getting $2 leads with this." They lead with the outcome, not the process.

The agent generated 3-4 variations per hook template, matched with each creative style. That gave it a test matrix of about 15-20 unique ad combinations — researched, written, designed, and deployed in a single session.
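The style-by-template matrix can be sketched as a nested product. Only the editorial cream and dark/aggressive styles are named in this article, so the remaining style and the template wording below are placeholders.

```python
from itertools import product

# Placeholder names: "style_3" and the template phrasings are illustrative.
STYLES = ["editorial_cream", "dark_aggressive", "style_3"]
HOOK_TEMPLATES = {
    "instructional_1": "Here's how to get {outcome}.",
    "instructional_2": "5 steps to {outcome}.",
    "desire_1": "What if you could get {outcome}?",
    "desire_2": "Imagine waking up to {outcome}.",
    "desire_3": "Business owners are getting {outcome} with this.",
}

def build_test_matrix(outcome: str, variations_per_template: int = 1) -> list:
    """One ad spec per style x hook-template (x variation) combination."""
    ads = []
    for style, (tpl_name, tpl) in product(STYLES, HOOK_TEMPLATES.items()):
        for v in range(variations_per_template):
            ads.append({"style": style, "hook_template": tpl_name,
                        "headline": tpl.format(outcome=outcome),
                        "variation": v + 1})
    return ads

matrix = build_test_matrix("10 new leads a day")
```

With one variation per template that's 15 combinations; bump `variations_per_template` and the matrix grows to the 15-20+ range the agent actually deployed.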

An agency would charge you a month's fee to produce that volume of creative. The agent did it autonomously, from research to live campaigns.

Want to see the full AI growth system behind this? I put together a free breakdown showing how I used AI agents to go from zero to 100+ daily visitors in 14 days — ads, SEO, funnels, all of it. Get the free breakdown →

What Worked (The Agent Figured This Out, Not Me)

Here's the part that surprised me most: the agent didn't just execute my strategy — it discovered the winning strategy by analyzing its own performance data. I didn't tell it what was working. It told me.

1. Editorial Cream Style Creatives Crushed Everything

During one of its 12-hour performance checks, the agent flagged a clear pattern: the editorial cream creatives were outperforming everything else by a wide margin. Dark/aggressive creatives were burning budget with terrible conversion rates. The agent identified this, flagged the underperformers for killing, and recommended doubling down on the editorial style.

I didn't tell it cream was better. It analyzed the CPL data across creative styles and drew that conclusion itself.

Why does it work? The agent's analysis suggested the same theory I would have reached: they don't look like ads. In a feed full of screaming colors and "🔥 LIMITED TIME 🔥" energy, something that looks like a thoughtful article or newsletter snippet stands out precisely because it's calm. It pattern-interrupts through subtlety.

The agent nailed this aesthetic in its creative production. It wrote HTML that looked like it came from a design studio — cream background, elegant serif typography, clean layout. Screenshot, upload via API, done. No human touched the design process.

2. Desire-Based Hooks Beat Instructional Hooks

This was the single biggest insight the agent surfaced from its performance analysis.

The hooks extracted from the Callaway method transcript — the desire-based templates — consistently outperformed the instructional hooks. The agent's learnings database captured it precisely:

Instructional hooks ("Here's how to...") got decent click-through rates. People are curious. They'll click to learn. But they didn't convert into leads.

Desire-based hooks ("What if you could..." / "Business owners are getting...") had slightly lower CTR but dramatically better conversion rates. The people who clicked on desire-based hooks were pre-qualified — they weren't just curious, they actually wanted the outcome.

Key insight (from the agent's learnings database): Desire-based hooks attract buyers. Instructional hooks attract learners. If you're running lead generation ads, you want buyers. This finding was recorded and now informs all future creative decisions automatically.

This is what makes a self-improving system powerful. The agent didn't just discover this once — it encoded the learning so every future ad creative benefits from it. The next batch of ads will lead with desire-based hooks by default, because the system learned that's what works.

3. Conversion Objective from Day One

This is a technical point but it's critical: always run your tests on conversion campaigns, never traffic campaigns.

When you run a traffic campaign, Facebook optimizes for clicks. It finds people who like to click on things. These are not the same people who fill out forms, sign up for things, or buy stuff. They're professional clickers.

When you run a conversion campaign optimized for leads, Facebook finds people who actually convert. The algorithm is shockingly good at this — but only if you tell it what you actually want.

I'll explain more about why this matters in the "What Failed" section. Because I learned this lesson the expensive way — and the agent now has it permanently recorded in its learnings database so it never repeats the mistake.

What Failed (Badly)

I'm going to be honest about the failures because they're more useful than the wins. And the self-improving loop means every failure made the system permanently better.

Failure #1: The Dark Aggressive Creatives Were a Disaster

Remember those black-background, red-accent, "YOUR BUSINESS IS DYING" style creatives? Complete waste of money.

They got attention, sure. People clicked. But the conversion rate was abysmal. The vibe attracted the wrong audience — people looking for drama, not people looking for solutions. The entire aesthetic screamed "internet marketer scam" and repelled exactly the kind of business owners I wanted to reach.

Here's where the agentic system proved its value: during its automated 12-hour performance check, the agent identified the pattern, flagged the dark creatives as underperformers, and recommended killing them. It didn't need me to look at the data. It analyzed CPL by creative style, saw the dark variants were 3-4x more expensive per lead, and drew the conclusion on its own.

That finding went straight into the learnings database. Every future creative the agent produces will avoid the dark/aggressive style — not because I told it to, but because the data told it to.

Lesson (now encoded in the system): Aggressive, fear-based creatives might work for some audiences, but for professional business owners, they're a repellent. Match your creative energy to your ideal customer's taste, not to what you see working for internet gurus.

Failure #2: Testing on Traffic Campaigns

This was my most expensive mistake — and the one that best illustrates why the self-improving loop matters.

Early on, I tried to save money by testing creatives on traffic campaigns first. The logic seemed sound: traffic campaigns are cheaper per click, so I could test more variations for less money, then move winners to conversion campaigns.

It didn't work. At all.

Ads that had beautiful click-through rates on traffic campaigns — 2%, 3%, even 4% CTR — completely flopped when switched to conversion campaigns. The leads didn't come. The cost per lead was atrocious.

Meanwhile, some ads that had mediocre CTR on traffic turned out to be conversion machines when given the right objective.

CTR on traffic campaigns does not predict cost per lead on conversion campaigns. They're measuring completely different audiences with completely different behaviors. I burned about a week of budget learning this.

The agent recorded this as a hard rule in its learnings database: "Never test on traffic campaigns. CTR does not predict CPL. Always test on the conversion objective you care about." Every future campaign the system deploys follows this rule automatically.

Lesson: Test on the objective you actually care about. If you want leads, test on conversion campaigns from day one. Yes, it costs more per test. No, there's no shortcut.

Failure #3: Moving Winning Ads the Wrong Way

When a winning ad emerged from the testing campaign, I initially duplicated it into the scaling campaign. The duplicate flopped. Same copy, same creative, same audience — terrible results.

Turns out, when you duplicate an ad, it loses all its accumulated social proof (likes, comments, shares) and the algorithm treats it as a brand new ad. It has to re-learn from scratch.

The fix: use the post ID. Every Facebook ad creates a post with a unique ID. When you create a new ad in your scaling campaign, paste the post ID instead of building a new ad. This carries over all the social proof and the algorithm's learnings. Night and day difference.
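In API terms, the post ID method means setting the creative's object_story_id (the page ID and post ID joined by an underscore, per the Meta Marketing API docs) instead of uploading a new creative. A minimal payload sketch, with placeholder IDs:

```python
def ad_from_existing_post(page_id: str, post_id: str, ad_set_id: str) -> dict:
    """Create-ad payload that reuses an existing post rather than
    duplicating the creative. `object_story_id` is what carries over
    the likes, comments, and shares; all IDs here are placeholders."""
    return {
        "name": f"Scale | post {post_id}",
        "adset_id": ad_set_id,
        "creative": {"object_story_id": f"{page_id}_{post_id}"},
        "status": "PAUSED",
    }

payload = ad_from_existing_post(page_id="111", post_id="222", ad_set_id="333")
```

One string field, but it is the difference between an ad that restarts from zero and one that keeps its social proof.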

The Real Numbers

No vague claims. No "amazing results." Here are the actual numbers from the autonomous ad system:

Daily ad budget: $25/day ($10 testing + $15 scaling)
Monthly ad spend: ~$750
Cost per lead: $1-2
Click-through rate (winning ads): ~1.6%
Best performing creative: editorial cream style (agent-identified)
Best performing hook type: desire-based (Callaway method extraction)
Creative production cost: $0 (agent-built HTML → PNG)
Human time managing ads: ~0 min/day (agent runs autonomously)
Performance check frequency: every 12 hours (automated cron)
Agency cost saved: $1,500-2,500/month

The Real Comparison

With an agency, I'd be paying $1,500-2,500/month for management plus the ad spend. That's $2,250-3,250/month total. And the agency checks performance maybe once a week.

With the agent, I paid $750/month in ad spend. That's it. The agent researched competitors, built creatives, deployed campaigns, and monitors performance every 12 hours — automatically. It maintains a learnings database that makes every future campaign better.

Better results. More frequent optimization. $1,500-2,500/month less. Zero human hours in Ads Manager.

Are these world-class numbers? For a $25/day budget with zero creative production costs and zero human management time, $1-2 per lead is excellent. Most businesses I've seen pay $5-15 per lead on Facebook, and they're spending significantly more on creative production and agency fees.

But I want to be clear: these numbers are for lead generation (email signups for a free resource). If you're running ads for a high-ticket offer or e-commerce, your benchmarks will be different. The autonomous system, however, works the same way regardless of what you're selling.

How You Can Do This: From Chatbot to Agent

You can start with AI-assisted ads today and build toward a fully autonomous system. Here's the progression:

Level 1: AI-Assisted (Start Here)

Use any AI tool (ChatGPT, Claude) to generate ad copy and creative concepts. You still manage everything manually — but the AI handles the creative labor.

This alone saves you $1,500-2,500/month vs. an agency.

Level 2: AI-Powered Research

Before writing any ads, have the AI research your competitors. Feed it competitor landing pages, ad library screenshots, and relevant marketing methodology transcripts. Let it extract hook templates and creative patterns — then apply them to your ads.

This is the step most people skip, and it's the difference between "AI wrote some copy" and "AI designed a strategy based on competitive intelligence."

Level 3: Autonomous Operations

Connect your AI agent to the Meta Ads API. Set up automated performance monitoring on a schedule. Build a learnings database that persists across sessions. This is the full autonomous system — and it's what I run.

You’ll need an agent that supports tool use, scheduled jobs, and persistent memory. Claude Code is what I built this on — free to install, with Skills, sub-agents, hooks, and MCP servers for connecting to the Meta Ads API.

The Campaign Structure (All Levels)

Regardless of automation level, use this proven structure:

  1. Create a testing campaign with ABO (Ad Set Budget Optimization), $10/day per ad set
  2. One ad per ad set — each gets a fair shot at its own budget
  3. Kill rule: If an ad set spends 2-3x your target CPA with zero conversions, kill it
  4. Move winners to a scaling campaign with CBO (Campaign Budget Optimization) at $15/day, using the post ID method
  5. Always use the Leads or Conversions objective — never traffic
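The kill rule in step 3 is easy to encode. This sketch uses 2.5x as a middle ground in the 2-3x range above:

```python
def should_kill(spend: float, conversions: int, target_cpa: float,
                multiple: float = 2.5) -> bool:
    """Kill rule: an ad set that has spent `multiple` times the target
    CPA with zero conversions gets turned off."""
    return conversions == 0 and spend >= multiple * target_cpa

should_kill(spend=30.0, conversions=0, target_cpa=10.0)  # True: 3x CPA, no leads
should_kill(spend=15.0, conversions=0, target_cpa=10.0)  # False: not enough spend yet
should_kill(spend=50.0, conversions=3, target_cpa=10.0)  # False: it's converting
```

Codifying the rule matters more than the exact multiple: the point is that the decision is made by data, not by how attached you are to a creative.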

The Desire-Based Hook Formula

Whether you're using Level 1 or Level 3, this is the hook type that wins. Give your AI this prompt:

I'm running Facebook ads for [YOUR OFFER].
Target audience: [WHO THEY ARE].
They want: [DESIRED OUTCOME].

Write 5 Facebook ad variations using desire-based hooks.
For each, give me:
- A headline (under 40 characters)
- Primary text (under 125 words)
- A description line

Focus on the outcome they want, not the process.
Make it conversational, not salesy.
Lead with what they'll GET, not what they'll LEARN.

Pro tip: Once you find a winning static ad (message + creative that converts), film a video version. Static ads are great for finding winning messages cheaply. Video ads are great for scaling those messages to larger budgets. The agent can identify your winners automatically — you just need to show up with a camera.

The Bigger Picture: From "AI Wrote My Ads" to "AI Runs My Ads"

Most people talking about "AI for Facebook ads" mean they asked ChatGPT to write some headlines. That's useful. It's also 2024 thinking.

The shift happening right now is from AI as a tool (you prompt, it generates, you copy-paste) to AI as an agent (you set the goal, it researches, creates, deploys, monitors, learns, and improves).

Here's what my agent does that a chatbot can't:

The agent doesn't suggest. It acts. It takes real actions in the real world — creating ads that spend real money, monitoring real performance data, and making real optimization recommendations based on accumulated learning.

This is the difference between an assistant and an agent. An assistant helps you do your job. An agent does the job.

The $1-2 leads I'm getting aren't because AI is magic. They're because an autonomous system tested more variations, faster, than any human could — and it learned from every dollar spent. The agent identified that dark creatives fail and editorial cream wins. The agent discovered that desire-based hooks from the Callaway method outperform instructional hooks. The agent encoded these learnings so they compound over time.

The question isn't whether AI Facebook ads work. They do. The question is whether you're still thinking about AI as a copywriting tool — or whether you're ready to build an autonomous system that runs your ad operations while you focus on the business.

For most business owners, once they see what's possible, the answer is obvious.

Frequently Asked Questions

Q: Can AI really write Facebook ad copy that converts?

Yes. AI can generate high-quality ad copy — headlines, primary text, and variations — that performs on par with or better than agency-written copy. The key is giving it the right inputs: your offer, your audience's desires, and real examples of what's working. In my testing, AI-written desire-based hooks consistently outperformed instructional hooks, producing leads at $1-2 each. And when you layer in autonomous performance monitoring, the system identifies what's working and doubles down — without you touching it.

Q: How much do Facebook ad agencies charge?

Most Facebook ad management agencies charge $1,500-2,500 per month, often with a minimum 3-month commitment. That's $4,500-7,500 before you've spent a dollar on actual ad spend. For small businesses spending $500-1,000/month on ads, the agency fee can exceed the ad budget itself.

Q: What budget do I need to test AI Facebook ads?

You can start testing with $10/day using an ABO campaign structure. This lets you test one ad variation per ad set and quickly identify winners. Once you find winning ads, scale them in a separate CBO campaign at $15-25/day. Total testing budget: $300-750/month.

Q: Should I test Facebook ads with traffic or conversion campaigns?

Always test on conversion campaigns, never traffic. High CTR on traffic campaigns does not predict low cost per lead on conversion campaigns. Facebook optimizes for what you tell it — if you optimize for clicks, you'll get clickers, not converters. I learned this the hard way — and my agent now has this encoded as a permanent rule in its learnings database.

Free: The AI Growth Breakdown

See how one business went from 0 to 100+ daily visitors in 14 days using AI agents. The exact tools and results.

Get the Free Breakdown →