A single Claude Code slash command can audit your entire Meta ad account in under 5 minutes — checking spend waste, creative fatigue, audience overlap, Advantage+ misconfigurations, CPA spikes, and generating budget reallocation recommendations plus a client-ready report. No API keys required. You export a CSV from Ads Manager, type /meta-audit, and get a structured markdown report covering everything a senior media buyer would check in a 3-hour manual review. This guide gives you the complete skill file, installation steps, and a walkthrough of every audit module. If you're looking for a repeatable way to run a Claude Code Meta ads audit without developer overhead, this is the exact setup.
What Does This Skill Actually Do?
The /meta-audit skill runs seven sequential audit modules against your exported Meta Ads Manager CSV. Each module targets a specific failure mode that costs advertisers money when left unchecked.
- Spend Bleed Detection — finds ad sets burning budget with zero conversions or a cost per result more than 3x your account average. Calculates total waste and monthly projected waste so you know the exact dollar figure to act on.
- Creative Fatigue Scoring — assigns each ad a fatigue score from 0–10 based on frequency, CTR decay, CPM spikes, and ROAS. Industry benchmarks put the fatigue threshold at frequency 3.0+ for cold audiences — the skill flags these automatically.
- Audience Overlap Audit — maps ad sets competing in the same auction for the same people. Audience overlap above 30% causes meaningful self-competition, inflating CPMs silently across your whole account.
- Advantage+ Setup Check — catches the most common ASC misconfigurations: wrong conversion event, missing existing-customer exclusions, too few creative variations, and manual placement restrictions that limit Meta's optimization.
- CPA Spike Diagnosis — splits your 30-day dataset into two 15-day periods and identifies the root cause of cost increases — whether the problem is audience exhaustion, creative fatigue, landing page drop-off, or auction competition.
- Budget Reallocation — ranks all active ad sets by efficiency and models where to shift spend. Identifies the top 25% to scale, the bottom 25% to reduce, and shows the math on projected savings.
- Weekly Client Report — aggregates all findings into a plain-English executive summary ready to send to a client or stakeholder. No platform jargon. Dollar amounts for every recommendation.
No MCP servers. No API keys. No coding. Drop in a CSV, run the command, get a structured audit.
How Do You Install Claude Code?
Claude Code is Anthropic's agentic coding tool that runs in your terminal. It supports slash commands — markdown files that become reusable prompts you can call from anywhere in a session.
System Requirements
macOS 13+, Linux (Ubuntu 20.04+ / Debian 10+), or Windows 10+ with WSL2. Node.js 22+ (LTS).
Step 1: Install Node.js
If you don't have Node.js 22+ installed, use nvm (the commands below assume zsh; if you use bash, source ~/.bashrc instead):
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.0/install.sh | bash
source ~/.zshrc
nvm install 22
Verify with node --version — you should see v22.x.x or higher.
Step 2: Install Claude Code
Recommended method (native installer):
curl -fsSL https://claude.ai/install.sh | bash
Alternative via npm:
npm install -g @anthropic-ai/claude-code
Step 3: Authenticate
Navigate to your project folder and launch Claude Code:
cd ~/your-project
claude
A browser window opens for authentication. You need one of: Claude Pro ($20/month), Max, Teams, or Enterprise subscription — or Anthropic Console API credits. Follow the on-screen prompt to complete auth.
Step 4: Verify the Installation
claude --version
You should see a version number. Inside a Claude Code session, run /doctor to verify your configuration is healthy.
How Do You Create the /meta-audit Slash Command?
Slash commands in Claude Code are markdown files stored in .claude/commands/ in your project root. Every .md file in that directory becomes a callable command in any Claude Code session you start from that project.
Step 1: Create the Commands Directory
mkdir -p .claude/commands
Run this from your project root. The .claude/ directory and commands/ subdirectory will be created if they don't exist.
Step 2: Create the Skill File
touch .claude/commands/meta-audit.md
Open the file in your editor. Paste the complete skill file content below.
Step 3: The Complete Skill File
Copy all content below and paste it into .claude/commands/meta-audit.md:
# /meta-audit — Full Meta Ads Account Audit
Run a complete Meta Ads account audit from a single CSV export. No API keys, no MCP servers, no developer setup. Export your last 30 days from Ads Manager, drop the file, and get a structured audit covering spend waste, creative fatigue, audience overlap, Advantage+ misconfigurations, CPA spikes, budget reallocation, and a client-ready summary.
## How to Use
/meta-audit $ARGUMENTS
Where $ARGUMENTS is the path to your exported CSV file from Meta Ads Manager. Example: /meta-audit ~/Downloads/ads_report_march.csv
If no file path is provided, ask the user to provide the path to their Meta Ads Manager CSV export.
## What to Export from Meta Ads Manager
Tell the user to export at the ad set level with the following columns enabled.
Required columns:
- Campaign name
- Ad set name
- Ad name
- Delivery (status)
- Campaign objective
- Bid strategy
- Budget type (daily/lifetime)
- Amount spent
- Results
- Cost per result
- Impressions
- Reach
- Frequency
- CPM
- Clicks (all)
- CTR (all)
- CPC (all)
- Link clicks
- CTR (link click-through rate)
- Purchases
- Cost per purchase
- Purchase ROAS
- Add to cart
- Cost per add to cart
- Landing page views
- Video plays (if running video)
- ThruPlay (if running video)
- Reporting starts
- Reporting ends
Recommended additional columns: Audience (targeting summary), Placement, Age, Gender.
Tell the user: In Ads Manager, click "Columns" → "Customize columns", add the above metrics, then click "Export" → "Export Table Data (.csv)". Select "Ad set" level and last 30 days.
## Audit Execution Plan
Read the CSV file. Parse all rows. Then run each of the 7 audit modules below sequentially. Compile the results into a single structured report at the end.
### Module 1: Spend Bleed Detection
Purpose: Find ad sets burning budget with zero or near-zero conversions.
Logic: Filter ad sets where: Status is "Active" or was active during the reporting period; Amount spent > $50 in the reporting period; Purchases = 0 OR Cost per purchase > 3x the account average CPA. For each flagged ad set, calculate: Total spend during the period; Daily average spend (total spend ÷ days in period); Projected monthly waste (daily average × 30); What the spend bought (impressions, clicks, add-to-carts if any). Sort by total spend descending.
Output format: ## 🚨 Spend Bleed Detection | {N} ad sets are burning budget with zero or near-zero conversions. | Total waste identified: ${total} over the last 30 days (${monthly_projected}/month projected) | Table: Ad Set | Campaign | Spend | Purchases | CPA | Daily Avg | Monthly Waste | Issue | Recommendation: Pause these ad sets immediately. Reallocate ${total} to top-performing ad sets identified in Module 6.
### Module 2: Creative Fatigue Scoring
Purpose: Identify creatives that are past their peak and dragging down performance.
Logic: For each ad (or ad set if ad-level data isn't available): Check frequency — flag if frequency > 3.0 for cold/prospecting campaigns, > 5.0 for retargeting. Check CTR — if CTR is below 1.0% for feed placements or below 0.5% overall, flag as fatigued. Check CPM — if CPM is more than 30% above the account average CPM, flag. Check ROAS — if purchase ROAS < 1.0, the creative is actively losing money.
Fatigue score (0-10): Frequency > 3.0: +2 points (> 4.0: +4 points, > 5.0: +6 points; apply only the highest tier that matches). CTR below account average: +2 points. CPM 30%+ above account average: +1 point. ROAS < 1.0: +3 points. The raw points can sum past 10, so cap the final score at 10. Score 7+: Critical fatigue. Score 4-6: Warning. Score 0-3: Healthy. Sort by fatigue score descending.
Output format: ## 🔄 Creative Fatigue Scoring | {N} creatives showing fatigue. {critical_count} critical, {warning_count} warning. | Table: Ad/Ad Set | Frequency | CTR | CPM | ROAS | Fatigue Score | Status | Critical (score 7+) — pause or replace immediately. Warning (score 4-6) — refresh within this week. Recommendation: Replace critical creatives with new angles. For warning-level creatives, test new hooks while keeping the current versions live but monitored.
### Module 3: Audience Overlap Audit
Purpose: Identify ad sets whose audiences overlap, causing you to bid against yourself.
Logic: Group ad sets by campaign objective (purchases, leads, traffic, etc.). Within each objective group, compare every ad set's targeting: Extract audience/targeting text from the "Audience" column or ad set name patterns. Look for identical or near-identical targeting descriptions. Look for lookalike audiences with the same source (e.g., "1% Lookalike — Purchasers" vs "2% Lookalike — Purchasers"). Look for broad/Advantage+ audiences running in parallel across multiple ad sets.
Overlap severity estimates: Same targeting description across ad sets = HIGH overlap (likely 60%+). Nested lookalikes (1% vs 2% vs 3% of same source) = MEDIUM overlap (40-60%). Same interest categories across ad sets = MEDIUM overlap. Different targeting but same campaign objective = LOW (possible auction competition). For overlapping pairs, calculate combined spend, which ad set has better CPA/ROAS (the winner), and estimated wasted spend (spend of the losing ad set × estimated overlap %).
Output format: ## 🎯 Audience Overlap Audit | {N} ad set pairs have significant audience overlap. Estimated overlap waste: ${total}/month. | High Overlap (likely 60%+) and Medium Overlap (30-60%) tables. Recommendation: Consolidate overlapping ad sets. Keep the winner, pause the loser. For nested lookalikes, use exclusions or merge into a single broader audience.
### Module 4: Advantage+ Setup Check
Purpose: Catch misconfigurations in Advantage+ Shopping Campaigns (ASC).
Logic: Identify Advantage+ campaigns (look for "Advantage+ Shopping" or "ASC" in campaign name or campaign type). For each ASC campaign check: (a) Conversion event — is it optimizing for "Purchase"? If optimizing for "Add to Cart" or "Landing Page View" in a shopping campaign, flag. (b) Existing customer exclusions — if there's no indication of customer exclusion or budget cap, flag: "No existing customer budget cap detected — ASC may be spending on existing customers instead of prospecting". (c) Creative volume — if < 10 ads, flag: "ASC works best with 10+ creative variations. You have {N}." (d) Placements — if specific placements are selected (not Advantage+ placements), flag. (e) Budget — if daily budget is < $50, flag.
Also check non-ASC campaigns for missed opportunities: If running 5+ ad sets with broad targeting and purchase optimization, suggest consolidating into a single ASC campaign.
Output format: ## ⚙️ Advantage+ Setup Check | {N} issues found across {campaign_count} Advantage+ campaigns. | For each issue: ⚠️ {Issue type}: {description} — Fix: {specific action}. Missed Opportunities section. Recommendation: Fix critical misconfigurations first (wrong conversion event, missing exclusions).
### Module 5: CPA Spike Diagnosis
Purpose: Isolate the root cause of CPA increases by comparing current vs. previous period performance.
Logic: If the CSV contains 30 days of data, split into two 15-day periods (recent vs. prior). If the CSV has date columns, split by date; if not, ask the user to provide a second CSV for the comparison period. Calculate CPA, CTR, CPM, frequency, conversion rate for both periods and the % change for each metric.
Root cause diagnosis: CPM up + CTR stable = audience fatigue or increased competition. CTR down + CPM stable = creative fatigue. CTR stable + conversion rate down = landing page issue or offer fatigue. Frequency up + everything else down = audience exhaustion. CPM up + CTR down = double hit — both audience and creative fatigued simultaneously. For each cause, calculate the dollar impact if the metric returned to prior period levels.
Output format: ## 📊 CPA Spike Diagnosis | Account-level CPA changed by {change}% ({direction}) — from ${old_cpa} to ${new_cpa}. Root Cause Breakdown table ranked by $ impact. Detailed Diagnosis for the top cause with specific action items.
### Module 6: Budget Reallocation
Purpose: Model optimal budget distribution based on actual performance data.
Logic: For each active ad set, calculate CPA, ROAS, current daily budget or daily spend average, and marginal efficiency. Rank ad sets by efficiency (lowest CPA first, or highest ROAS first). Create reallocation recommendations: Scale (top 25% by efficiency) — increase budget 20-30%. Maintain (middle 50%) — keep current budget. Reduce (bottom 25% by efficiency) — reduce budget 30-50% or pause. Never recommend more than 20-30% budget increase in a single move to avoid exiting the learning phase. Show the math: "${X} moved from {loser} (CPA: ${Y}) to {winner} (CPA: ${Z}) = estimated ${savings} in savings over 30 days".
Output format: ## 💰 Budget Reallocation | Recommended budget shifts to reduce account CPA by an estimated {X}%. Scale Up table, Maintain table, Reduce or Pause table. Net projected savings: ${total_savings}/month.
### Module 7: Weekly Client Report
Purpose: Generate an executive summary in plain English, ready to send to a client or stakeholder.
Logic: Aggregate account-level metrics: total spend, total purchases (or total results), account CPA, account ROAS, average CPM, average CTR, average frequency. Identify the top 3 performing and top 3 underperforming campaigns/ad sets. Pull key findings from modules 1-6. Write in plain English: no platform jargon (say "cost to acquire one customer" not "CPA"), no abbreviations without explanation, clear actionable recommendations, dollar amounts for impact wherever possible.
Output format: ## 📋 Weekly Report — Executive Summary | Period: {start_date} to {end_date}. Total ad spend, customers acquired, cost per customer, return on ad spend. What's Working section (2-3 sentences on top performers with specific numbers). What Needs Attention section (2-3 sentences on problems). Recommended Actions This Week (numbered list with specific plain-English actions and expected $ impact). Account Health: 🟢 Healthy / 🟡 Needs Attention / 🔴 Urgent Action Required.
## Final Report Assembly
After running all 7 modules, compile the full audit into a single markdown report with this structure: # Meta Ads Audit — {date}. Module 7 Executive Summary placed FIRST as the overview. Then Module 1 through Module 6 in sequence, each separated by a horizontal rule. Then a Total Waste Identified table: Spend bleed (Module 1) | ${bleed_waste}. Audience overlap (Module 3) | ${overlap_waste}. Suboptimal budget allocation (Module 6) | ${reallocation_savings}. Total | ${total_waste}/month. Then Priority Actions (Do This Week) — numbered list of the 5 highest-impact actions from the audit.
Also save the full report to ./reports/meta-audit-{YYYY-MM-DD}.md.
## Important Notes
- All analysis is based on the exported CSV data. Accuracy depends on the columns and date range included in the export.
- For audience overlap: without the Meta Audience Overlap tool (which requires API access), overlap estimates are based on targeting descriptions and ad set naming patterns. These are directional, not precise.
- For Advantage+ checks: some configuration details (like existing customer definitions and budget caps) may not appear in the CSV export. Flag these as "unable to verify from export — check in Ads Manager."
- Always caveat recommendations with "based on the data provided" and note any limitations.
- Round all dollar amounts to whole dollars. Round percentages to one decimal place.
Step 4: Verify the Command Appears
Exit Claude Code and relaunch from the same project directory, then type / in the prompt. You should see meta-audit listed alongside any other commands in your .claude/commands/ directory. If it doesn't appear, check that the file is saved as .claude/commands/meta-audit.md with no extra characters in the filename.
How Do You Export the Right Data from Meta Ads Manager?
The skill expects an ad-set-level CSV export from the last 30 days. Getting the column configuration right the first time saves you from re-running the export.
- Open Meta Ads Manager (business.facebook.com/adsmanager) and log in to the correct ad account.
- Set the date range to Last 30 days using the date picker in the top-right corner.
- Set the view level to Ad sets — click the row-level selector and choose "Ad sets". The skill is built around ad-set-level data. Running it against campaign-level data will reduce accuracy.
- Click Columns → Customize Columns.
- Add the following columns in the customization panel:
Performance: Campaign name, Ad set name, Ad name, Delivery status, Amount spent, Results, Cost per result.
Delivery: Impressions, Reach, Frequency, CPM.
Engagement: Clicks (all), CTR (all), CPC (all), Link clicks, CTR (link click-through rate).
Conversions: Purchases, Cost per purchase, Purchase ROAS, Add to cart, Cost per add to cart, Landing page views.
- Click Apply to save the column set to your current view.
- Click the Export icon (download arrow) → Export Table Data (.csv).
- Save the file somewhere easy to reference — for example, ~/Downloads/meta_ads_march_2026.csv.
Pro tip: Before clicking Apply, click Save as preset and name it something like "Meta Audit Export". This saves the entire column configuration so your next export takes 30 seconds instead of 3 minutes.
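Before running the audit, you can sanity-check that the export actually contains the columns the skill needs. Here is a quick pre-flight sketch using only the Python standard library; the header labels are assumptions and vary by Ads Manager locale and export settings, so adjust the strings to match your file:

```python
import csv

# Columns the /meta-audit skill relies on most heavily. Exact header
# labels vary by Ads Manager locale and export configuration, so these
# strings are illustrative, not authoritative.
REQUIRED = [
    "Campaign name", "Ad set name", "Amount spent", "Results",
    "Cost per result", "Impressions", "Frequency", "CPM",
    "CTR (all)", "Purchases", "Purchase ROAS",
]

def missing_columns(csv_path):
    """Return the required columns absent from the export's header row."""
    # utf-8-sig strips the byte-order mark that Ads Manager exports often include
    with open(csv_path, newline="", encoding="utf-8-sig") as f:
        header = next(csv.reader(f))
    return [col for col in REQUIRED if col not in header]
```

If the returned list is non-empty, re-export with the missing columns added rather than running a partial audit.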
How Do You Run the Audit?
Once the skill file is installed and your CSV is exported, the entire workflow is three commands:
cd ~/my-ads-project
claude
/meta-audit ~/Downloads/meta_ads_march_2026.csv
Claude Code reads the file path from $ARGUMENTS, loads the CSV, and begins processing each module sequentially. You'll see output for each module as it completes.
What to expect: Processing time is 3–5 minutes depending on account size and number of active ad sets. Output is a formatted markdown report displayed in the terminal and saved to ./reports/meta-audit-{YYYY-MM-DD}.md. Each module header appears as Claude works through the analysis, so you can follow along in real time.
Sample Output — Spend Bleed Detection
🚨 Spend Bleed Detection
3 ad sets are burning budget with zero or near-zero conversions. Total waste identified: $1,847 over the last 30 days ($1,847/month projected). Example rows:
- Broad — 25-44 US (TOF Prospecting): $892 spend, 0 purchases (zero conversions)
- LAL 3% — Visitors (Retargeting): $614 spend, 0 purchases (zero conversions)
- Interest — Fitness (TOF Cold): $341 spend, 2 purchases, $170 CPA (4.8x average)
Recommendation: Pause these ad sets immediately. Reallocate $1,847 to top-performing ad sets identified in Module 6.
Sample Output — Creative Fatigue Scoring
🔄 Creative Fatigue Scoring
7 creatives showing fatigue. 2 critical, 5 warning. Example rows:
- UGC — Testimonial 1: frequency 5.2, CTR 0.6%, CPM $28.40, ROAS 0.8x, score 9/10, Critical 🔴
- Static — Product A: frequency 4.1, CTR 0.9%, CPM $24.10, ROAS 1.1x, score 7/10, Critical 🔴
- Video — Hook B: frequency 3.3, CTR 1.2%, CPM $19.80, ROAS 1.9x, score 5/10, Warning 🟡
Agencies that automate audit workflows like this have reported up to a 90% reduction in manual operations time (Advolve agency case study). The skill brings that same efficiency to a single operator.
What Does Each Audit Module Check?
Module 1: Spend Bleed Detection
Filters for active ad sets spending more than $50 with zero purchases, or a CPA more than 3x your account average. Every day these run undetected, the waste compounds. An ad set spending $30/day with zero conversions burns $900/month — the module finds it and projects the monthly cost so the number is concrete, not abstract.
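If you want to sanity-check the module's math yourself, the filter reduces to a few lines of Python. This is a simplified sketch with illustrative field names, not the skill's actual implementation:

```python
def spend_bleed(ad_sets, account_avg_cpa, min_spend=50.0):
    """Flag ad sets wasting budget: real spend, zero purchases or CPA > 3x average.

    Each ad set is a dict with 'spend', 'purchases', and 'days' keys
    (field names are illustrative, not from the skill file).
    """
    flagged = []
    for a in ad_sets:
        cpa = a["spend"] / a["purchases"] if a["purchases"] else float("inf")
        if a["spend"] > min_spend and (a["purchases"] == 0 or cpa > 3 * account_avg_cpa):
            daily = a["spend"] / a["days"]
            flagged.append({**a, "cpa": cpa, "monthly_waste": round(daily * 30)})
    # Biggest spenders first, so the largest waste leads the report
    return sorted(flagged, key=lambda a: a["spend"], reverse=True)
```

Summing the `monthly_waste` values of the flagged sets gives the projected figure the report leads with.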
Module 2: Creative Fatigue Scoring
Creative fatigue triggers at frequency 3.0+ for cold audiences — an industry-standard benchmark — and 5.0+ for retargeting audiences. The skill scores each creative on a 0–10 scale combining frequency, CTR relative to account average, CPM elevation, and ROAS. Score 7+ means the creative is actively degrading performance and should be paused immediately.
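The scoring rubric from the skill file can be sketched as a small Python function. Field names are illustrative, and since the raw points can sum past 10, the sketch caps the score at the top of the scale:

```python
def fatigue_score(ad, avg_ctr, avg_cpm):
    """Score a creative 0-10 using Module 2's thresholds (a sketch, not the skill)."""
    score = 0
    # Frequency tiers are mutually exclusive: take the highest that applies
    if ad["frequency"] > 5.0:
        score += 6
    elif ad["frequency"] > 4.0:
        score += 4
    elif ad["frequency"] > 3.0:
        score += 2
    if ad["ctr"] < avg_ctr:
        score += 2
    if ad["cpm"] >= 1.3 * avg_cpm:  # 30%+ above account average
        score += 1
    if ad["roas"] < 1.0:            # actively losing money
        score += 3
    return min(score, 10)  # raw points can exceed 10, so cap at the scale's top

def status(score):
    return "Critical" if score >= 7 else "Warning" if score >= 4 else "Healthy"
```

A creative at frequency 4.5 with below-average CTR, elevated CPM, and sub-1.0 ROAS hits the cap and lands in the Critical tier.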
Once the audit identifies which creatives need replacing, tools like HeyOz can generate replacement ad creative from a product URL in minutes — 11+ formats including video, static, UGC, and carousel ads, starting at $44.99/month. The audit tells you what to fix. HeyOz handles the creative production.
For a deeper look at the creative replacement process, see how to generate ad variations and the full roundup of best AI ad generators.
Module 3: Audience Overlap Audit
Overlap above 30% causes meaningful auction self-competition, per Meta's own Audience Overlap tool documentation. The skill estimates overlap severity by comparing targeting descriptions, lookalike source audiences, and objective groupings. High-overlap pairs are flagged with combined spend and an estimated waste calculation — the losing ad set's spend multiplied by the estimated overlap percentage.
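The naming-pattern heuristic can be approximated like this. It is a rough, directional sketch: the regex and the naming conventions it matches are assumptions, and real accounts name ad sets differently:

```python
import re

def overlap_severity(name_a, name_b):
    """Rough overlap estimate from ad set naming patterns (directional only).

    Mirrors Module 3's heuristics: identical targeting text means HIGH,
    nested lookalikes of the same source mean MEDIUM.
    """
    if name_a.strip().lower() == name_b.strip().lower():
        return "HIGH"
    # e.g. "1% Lookalike — Purchasers" vs "2% Lookalike — Purchasers"
    lal = re.compile(r"(\d+)%\s*lookalike\s*[—-]\s*(.+)", re.IGNORECASE)
    m_a, m_b = lal.match(name_a.strip()), lal.match(name_b.strip())
    if m_a and m_b and m_a.group(2).lower() == m_b.group(2).lower():
        return "MEDIUM"  # nested lookalikes share the same source audience
    return "LOW"
```

Anything this heuristic flags as HIGH is worth verifying in Ads Manager's Audience Overlap tool before consolidating.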
Module 4: Advantage+ Setup Check
Advantage+ Shopping Campaigns deliver an average 22% ROAS improvement according to Meta's internal studies — but the gains depend on correct configuration. The most damaging misconfigurations are the wrong conversion event (optimizing for Add to Cart in a purchase campaign) and missing existing-customer budget caps, which allow ASC to spend heavily on people who already bought. The skill checks both and flags the fix.
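The checklist translates to a simple rule set. The dictionary keys here are hypothetical, and as the skill itself notes, several of these settings may not be visible in a CSV export at all:

```python
def asc_issues(campaign):
    """Flag common Advantage+ misconfigurations per Module 4's checklist.

    `campaign` uses illustrative keys; a real check would parse these from
    the CSV and from Ads Manager settings where the export can't see them.
    """
    issues = []
    if campaign.get("conversion_event") != "Purchase":
        issues.append("Wrong conversion event: optimize ASC for Purchase")
    if not campaign.get("existing_customer_cap"):
        issues.append("No existing-customer budget cap detected")
    if campaign.get("ad_count", 0) < 10:
        issues.append("Fewer than 10 ads: ASC works best with 10+ variations")
    if campaign.get("manual_placements"):
        issues.append("Manual placements restrict Advantage+ optimization")
    if campaign.get("daily_budget", 0) < 50:
        issues.append("Daily budget under $50")
    return issues
```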
Module 5: CPA Spike Diagnosis
The module splits your 30-day CSV into two 15-day windows and compares CPA, CTR, CPM, frequency, and conversion rate across both periods. It then matches the pattern of metric shifts to a root cause: CPM up with CTR stable points to audience-level competition; CTR down with stable CPM points to creative fatigue; CTR stable with conversion rate down points to a landing page or offer problem. Each cause is ranked by projected dollar impact.
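The pattern table can be expressed as a small decision function. This is a simplified sketch: the 5% stability band is an assumption for illustration, not a Meta benchmark:

```python
def diagnose(pct_change, stable=5.0):
    """Map period-over-period metric shifts to a likely root cause.

    `pct_change` holds % changes for 'cpm', 'ctr', and 'cvr' (recent vs.
    prior period); moves within +/- `stable` percent count as stable.
    """
    cpm_up = pct_change["cpm"] > stable
    ctr_down = pct_change["ctr"] < -stable
    ctr_stable = abs(pct_change["ctr"]) <= stable
    cvr_down = pct_change["cvr"] < -stable
    if cpm_up and ctr_down:
        return "double hit: audience and creative fatigued simultaneously"
    if cpm_up and ctr_stable:
        return "audience fatigue or increased auction competition"
    if ctr_down:
        return "creative fatigue"
    if ctr_stable and cvr_down:
        return "landing page issue or offer fatigue"
    return "no single dominant cause in these metrics"
```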
Module 6: Budget Reallocation
The top 25% of ad sets by efficiency get a recommended 20–30% budget increase. The bottom 25% get a reduction or pause recommendation. The module never recommends more than a 30% single-move increase, which preserves learning-phase stability. The math is shown explicitly: dollars moved from underperformer to top performer, with projected monthly savings. For more on the time and dollar impact of systematic optimization, see time and budget savings with AI ads.
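The quartile split is straightforward to sketch. Field names are illustrative, and a real ranking would also weigh ROAS and spend volume rather than CPA alone:

```python
def reallocation_tiers(ad_sets):
    """Split active ad sets into scale / maintain / reduce tiers by CPA."""
    ranked = sorted(ad_sets, key=lambda a: a["cpa"])  # most efficient first
    q = max(1, len(ranked) // 4)                      # quartile size
    return {
        "scale": ranked[:q],     # raise budget 20-30%, never more in one move
        "maintain": ranked[q:-q],
        "reduce": ranked[-q:],   # cut 30-50% or pause
    }
```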
Module 7: Weekly Client Report
Aggregates all findings into a plain-English executive summary. Dollar amounts for every recommendation. No platform jargon. The section is structured as: What's Working → What Needs Attention → Recommended Actions This Week → Account Health rating. The output is ready to copy into an email or Slack message without editing.
Frequently Asked Questions
Do I need to pay for Claude Code to use this?
Yes. Claude Code requires an active Claude subscription to run. The minimum is Claude Pro at $20/month. Max, Teams, and Enterprise plans also work. Alternatively, if you have Anthropic Console API credits, you can authenticate through the API instead of a subscription. There is no free tier for Claude Code.
Can I use this without technical skills?
Yes. The hardest part is the CSV export from Ads Manager — and this guide covers that step-by-step. Installing Claude Code requires running three terminal commands. Creating the skill file is copy-paste. Once it's set up, running the audit is a single command. If you've ever used Terminal or Command Prompt for anything, you can do this.
How accurate is the audience overlap detection?
Directional, not precise. The skill estimates overlap based on targeting descriptions in your CSV export and ad set naming patterns — not the actual audience intersection data from Meta's backend. For exact overlap percentages, use the Audience Overlap tool in Ads Manager (under the Audiences section). The skill's overlap detection is useful for identifying obvious cases of self-competition; treat the numbers as estimates.
Does this work for lead gen campaigns, not just e-commerce?
Yes. The skill adapts to whatever result type your campaigns optimize for. If your conversion event is Lead or Contact Form Submit rather than Purchase, the audit uses Cost per Result as the efficiency metric throughout. The spend bleed, fatigue scoring, overlap audit, and budget reallocation modules all work on any campaign objective.
Can I customize the thresholds?
Yes. Open .claude/commands/meta-audit.md in any text editor and change the numbers directly. The frequency threshold for cold audiences (3.0), the CPA spike multiplier (3x), the overlap warning threshold (30%), and the budget increase cap (30%) are all plaintext values you can edit. Save the file and the next /meta-audit run uses your updated thresholds.
Can I schedule this to run automatically?
Not with the CSV approach. This workflow is manual: export CSV, run command, review report. For automated scheduling — pulling data daily without a manual export — you would need to add Meta Marketing API access and connect it via MCP. That's a more involved setup outside the scope of this guide. The CSV approach has the advantage of zero ongoing infrastructure to maintain.
Is this the same as hiring an agency to do an audit?
It covers the same checklist a senior media buyer would run — spend waste, creative fatigue, audience overlap, campaign configuration, CPA diagnosis, and budget optimization. The difference: an agency brings cross-account context from managing hundreds of clients, pattern recognition from seeing the same problems across industries, and judgment calls the skill can't make without that history. What the skill brings is consistency, speed, and zero scheduling overhead. It's the right tool for weekly account hygiene. For a full strategic overhaul or a new account setup, a human audit adds dimension the skill can't replicate.
About the author
Ahad Shams
Ahad Shams is the Founder of HeyOz, an all-in-one ads and content platform built for founders and small teams. He has worked across consumer goods and technology, with experience spanning Fortune 100 companies such as Reckitt Benckiser and Apple. Ahad is a third-time founder; his previous ventures include a WebXR game engine and Moemate, a consumer AI startup that scaled to over 6 million users. HeyOz was born from firsthand experience scaling consumer products and the need for a unified, execution-focused marketing platform.

