How to Build a Creative Fatigue Predictor in Claude Code

Written By
Ahad Shams

Most advertisers discover creative fatigue after ROAS has already collapsed. The signals — CTR erosion, CPC inflation, frequency spirals — appear 3–5 days before returns break down. A creative fatigue predictor built as a Claude Code skill can catch those signals automatically by scanning your Meta Ads Manager CSV export and scoring each creative across five fatigue dimensions. This post walks through every signal, the scoring logic, and the complete /fatigue-scan skill file you can install in under two minutes. By the end, you will have a working tool that tells you which creatives to pause, which to replace, and how many days each one has left before ROAS hits breakeven.

What Is a Creative Fatigue Predictor and Why Does It Matter?

A creative fatigue predictor is a diagnostic tool that monitors early-warning signals in your ad performance data and flags creatives that are losing effectiveness — before the damage shows up in your ROAS column.

Creative fatigue follows a predictable pattern. According to research tracking 847 Meta ads from launch to death, a consistent daily CTR decay of 0.05% or more for three consecutive days preceded fatigue in 81% of cases. CTR drops 35–55% after 10–14 days of running the same creative, per Adligator’s fatigue detection framework. Most ads fatigue in 7–10 days, not 30.

The problem is that Meta Ads Manager gives you no composite fatigue score. You get individual metrics — CTR, frequency, CPC — spread across columns and time ranges. By the time a human spots the pattern, the creative has already burned through budget at declining efficiency. AdAmigo.ai’s 2025 benchmarks show 56% of campaign outcomes are driven by creative quality, making fatigue detection the single highest-leverage monitoring activity in your ad account.

A Claude Code skill solves this by encoding the diagnostic logic into a repeatable command. Export your CSV, type /fatigue-scan, and get a per-creative diagnosis with a traffic-light classification and estimated days until ROAS breakdown. No API keys, no MCP server, no developer setup. If you already use Claude Code skills for Meta ads workflows, this adds a critical monitoring layer to your toolkit.

What Are the 5 Signals That Predict Creative Death?

Creative fatigue is not a single metric — it is a convergence of five distinct signals. Each one captures a different dimension of declining creative effectiveness. The /fatigue-scan skill weighs them based on their predictive reliability.

Signal 1 — CTR Trajectory (Weight: 30%)

CTR is the earliest and most reliable fatigue signal. It drops before CPC spikes, before conversions dry up, before ROAS craters. The 847-ads study on Reddit confirmed that CTR is the canary in the coal mine — daily decay of 0.05% or more for three consecutive days preceded fatigue in 81% of cases.

The skill calculates peak CTR from the first seven days of data, compares it to the average of the last three days, and measures the decline percentage. It also fits a linear regression to the last seven days to calculate the daily decay rate and counts consecutive decline days. A 20% or greater CTR drop from peak over a three-day rolling window is a strong fatigue signal, per Adligator’s detection framework.

Scoring runs from 0 to 10: a 5–10% decline from peak scores 2 points, 10–20% scores 4, 20–35% scores 6, and 35%+ scores 8. Bonus points add for three or more consecutive decline days (+1), five or more consecutive days (+2), and a decay rate steeper than −0.05% per day (+1). The score caps at 10.
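To make the tiers concrete, here is a minimal Python sketch of this scoring logic. The function name and inputs are illustrative only; the skill itself expresses these rules in prose for Claude to execute.

```python
def score_ctr_trajectory(decline_pct, consecutive_decline_days, daily_decay_rate):
    """Score CTR fatigue 0-10 from decline-from-peak, streak length, and decay slope."""
    if decline_pct >= 35:
        score = 8
    elif decline_pct >= 20:
        score = 6
    elif decline_pct >= 10:
        score = 4
    elif decline_pct >= 5:
        score = 2
    else:
        score = 0
    if consecutive_decline_days >= 5:
        score += 2
    elif consecutive_decline_days >= 3:
        score += 1
    if daily_decay_rate < -0.05:  # steeper than -0.05 CTR points per day
        score += 1
    return min(score, 10)  # cap at 10

print(score_ctr_trajectory(38.0, 5, -0.07))  # -> 10 (8 + 2 + 1, capped)
```

Note how the bonus points for streaks and decay rate can push a moderate decline into Critical territory: sustained direction matters as much as magnitude.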

Signal 2 — Frequency at Creative Level, Cross-Ad-Set (Weight: 20%)

Most frequency monitoring happens at the ad set level, which misses the real problem. If the same creative runs in three ad sets targeting overlapping audiences, a user might see it nine times while each ad set reports a frequency of three. CPC increases 20–40% once frequency crosses 2.5–3.0, so accurate frequency measurement matters.

The skill groups all rows by ad name — not ad set name — to catch the same creative running across multiple ad sets. For ads in multiple ad sets, it estimates combined frequency using a 0.6 overlap factor (Meta audiences overlap 40–60% by default). It also tracks frequency acceleration: if frequency increased by more than 0.5 per day over the last three days, the creative is in a frequency spiral.

Scoring: creative-level frequency of 2.0–3.0 scores 1 point, 3.0–4.0 scores 3, 4.0–5.0 scores 5, 5.0–6.0 scores 7, and 6.0+ scores 9. Running across two or more ad sets adds +1, and frequency accelerating faster than 0.5/day adds +1. Cap at 10.

Signal 3 — CPC Inflation (Weight: 20%)

When engagement declines, Meta compensates by showing your ad to less optimal audience segments — which cost more. CPC inflation is the algorithm’s confession that your creative is losing relevance. The skill compares current CPC (average of last three days) against baseline CPC (average of the first five days) to measure inflation percentage. It also fits a linear regression for trend direction and flags any creative with CPC 30% or more above the account-wide average.

Scoring: 10–20% CPC inflation scores 2, 20–35% scores 4, 35–50% scores 6, and 50%+ scores 8. An upward CPC trend adds +1, and CPC 30%+ above account average adds +1. Cap at 10.
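As with Signal 1, the tiers translate directly into a small scoring function. This is an illustrative sketch, not the skill's implementation:

```python
def score_cpc_inflation(inflation_pct, trending_up, above_account_avg_30pct):
    """Score CPC fatigue 0-10 from inflation vs. baseline plus two bonus flags."""
    if inflation_pct >= 50:
        score = 8
    elif inflation_pct >= 35:
        score = 6
    elif inflation_pct >= 20:
        score = 4
    elif inflation_pct >= 10:
        score = 2
    else:
        score = 0
    if trending_up:            # positive regression slope over last 7 days
        score += 1
    if above_account_avg_30pct:  # CPC 30%+ above account-wide average
        score += 1
    return min(score, 10)

print(score_cpc_inflation(42.0, True, False))  # -> 7 (tier 6 plus upward trend)
```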

Signal 4 — Hook Rate Decay (Weight: 15%)

Hook rate — 3-second video views divided by impressions — is the most undermonitored metric in video-heavy accounts. The three-second mark is the moment of decision: viewers either watch or scroll. A healthy hook rate benchmarks at 25%+ for feed placements and 30%+ for Reels, per Motion’s creative performance metrics guide. Below 15% is critical regardless of trend.

The skill applies to video ads only. It calculates peak hook rate from the first seven days, compares it to the average of the last three days, and measures the decline. For static image ads, this signal is skipped and its weight redistributed across the remaining signals.

Scoring: 5–15% hook rate decline scores 2, 15–25% scores 4, 25–40% scores 6, and 40%+ scores 8. Current hook rate below the 15% absolute floor adds +2, and three or more consecutive decline days adds +1. Cap at 10.
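The hook-rate tiers work the same way, with the 15% absolute floor carrying extra weight. An illustrative sketch:

```python
def score_hook_rate(decline_pct, current_hook_rate, consecutive_decline_days):
    """Score hook-rate fatigue 0-10; the 15% absolute floor overrides trend."""
    if decline_pct >= 40:
        score = 8
    elif decline_pct >= 25:
        score = 6
    elif decline_pct >= 15:
        score = 4
    elif decline_pct >= 5:
        score = 2
    else:
        score = 0
    if current_hook_rate < 15:  # below the absolute floor, regardless of trend
        score += 2
    if consecutive_decline_days >= 3:
        score += 1
    return min(score, 10)

print(score_hook_rate(30.0, 14.2, 4))  # -> 9 (6 + 2 floor penalty + 1 streak)
```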

Signal 5 — Engagement Cliff (Weight: 15%)

An engagement cliff is a sudden drop in reactions, comments, shares, and saves. Unlike CTR, which measures click intent, engagement cliff captures the broader signal that people have collectively tuned out the creative. The fatigue sequence typically runs: CTR drops, then CPM rises, then frequency accelerates, then CPA creeps up, then negative feedback increases.

The skill calculates a weighted engagement score per 1,000 impressions: reactions + (comments × 3) + (shares × 5) + (saves × 4), divided by impressions, multiplied by 1,000. Comments and shares carry higher weights because they indicate deeper engagement. If engagement columns are unavailable, it falls back to a proxy: (clicks − link clicks) / impressions × 1,000.

Scoring: 10–25% engagement decline scores 2, 25–40% scores 4, 40–60% scores 6, and 60%+ scores 8. A cliff pattern (30%+ single-day drop in the last seven days) adds +2, and engagement below 1.0 per 1,000 impressions adds +1. Cap at 10.
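The weighted engagement formula and the cliff check can be sketched as follows. Function names and the sample numbers are illustrative:

```python
def engagement_per_1k(reactions, comments, shares, saves, impressions):
    """Weighted engagement per 1,000 impressions; deeper actions weigh more."""
    return (reactions + comments * 3 + shares * 5 + saves * 4) / impressions * 1000

def cliff_detected(daily_scores):
    """True if engagement dropped 30%+ between any two consecutive recent days."""
    recent = daily_scores[-7:]
    return any(prev > 0 and (prev - cur) / prev >= 0.30
               for prev, cur in zip(recent, recent[1:]))

print(round(engagement_per_1k(120, 15, 8, 10, 50_000), 2))  # -> 4.9
print(cliff_detected([5.0, 4.9, 4.8, 3.0]))                 # -> True (37.5% one-day drop)
```

The cliff check distinguishes a sudden audience tune-out from the gradual decline the percentage tiers already capture.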

How Does the Scoring and Classification System Work?

The five signal scores combine into a single weighted composite score per creative:

For video ads: CTR Trajectory × 0.30 + Frequency × 0.20 + CPC Inflation × 0.20 + Hook Rate Decay × 0.15 + Engagement Cliff × 0.15

For static ads (no hook rate data): CTR Trajectory × 0.35 + Frequency × 0.25 + CPC Inflation × 0.20 + Engagement Cliff × 0.20

For ads missing engagement data: CTR Trajectory × 0.35 + Frequency × 0.25 + CPC Inflation × 0.25 + Hook Rate Decay × 0.15

The composite score maps to a traffic-light classification. A score of 0.0–2.5 is Healthy (green) — continue running, monitor weekly. A score of 2.6–5.0 is Warning (yellow) — prepare replacement creative and replace within 48 hours. A score of 5.1–10.0 is Critical (red) — pause now, every hour this runs wastes budget.
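A minimal sketch of the weighting and classification, assuming the five signal scores have already been computed (dictionary keys and function names are illustrative):

```python
def composite_score(signals, is_video=True, has_engagement=True):
    """Weighted composite; weights redistribute when hook rate or engagement is missing."""
    s1, s2, s3, s4, s5 = (signals.get(k) for k in ("ctr", "freq", "cpc", "hook", "eng"))
    if is_video and has_engagement:
        return s1*0.30 + s2*0.20 + s3*0.20 + s4*0.15 + s5*0.15
    if not is_video:  # static ad: no hook rate signal
        return s1*0.35 + s2*0.25 + s3*0.20 + s5*0.20
    return s1*0.35 + s2*0.25 + s3*0.25 + s4*0.15  # engagement data missing

def classify(score):
    if score <= 2.5:
        return "Healthy"
    if score <= 5.0:
        return "Warning"
    return "Critical"

score = composite_score({"ctr": 8, "freq": 5, "cpc": 4, "hook": 6, "eng": 4})
print(round(score, 2), classify(score))  # -> 5.7 Critical
```

Note that in every variant the weights sum to 1.0, so composite scores stay comparable across video ads, static ads, and ads with missing engagement columns.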

This three-tier system eliminates ambiguity. You know immediately which creatives need action and what that action is.

How Do You Estimate Days Until ROAS Breakdown?

For every creative scored as Warning or Critical, the skill projects how many days remain until ROAS drops below 1.0 (breakeven). It fits a linear regression to the last seven days of ROAS data. If the slope is negative, it calculates: days to breakeven = (current ROAS − 1.0) / absolute daily decline rate.

If ROAS is already below 1.0, the output is immediate: “ROAS already below breakeven — pause immediately.” If the ROAS trend is flat or positive, the skill notes that fatigue has not yet impacted returns but will if signals worsen. When ROAS data is unavailable, the skill falls back to CPA trend projection instead.

Each estimate includes a confidence qualifier based on data quality. Seven or more data points with R² above 0.7 gets “High confidence.” Five to seven data points or R² between 0.4 and 0.7 gets “Medium confidence.” Fewer data points or R² below 0.4 gets “Low confidence — directional estimate only.” Estimates cap at 30 days because too many variables change beyond that horizon.
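The projection itself reduces to a few lines. A sketch under the assumptions above (a linear ROAS trend and a 30-day cap; names are illustrative):

```python
def days_to_breakeven(current_roas, daily_slope):
    """Project days until ROAS hits 1.0 given a daily trend; cap the estimate at 30."""
    if current_roas < 1.0:
        return 0      # already below breakeven: pause immediately
    if daily_slope >= 0:
        return None   # trend flat or positive: no projected breakdown
    return min(round((current_roas - 1.0) / abs(daily_slope)), 30)

print(days_to_breakeven(1.8, -0.2))  # -> 4 (0.8 of headroom, losing 0.2x per day)
```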

This projection gives you a deadline. Instead of “this creative might be fatiguing,” you get “this creative has approximately 4 days before ROAS hits breakeven at current trajectory.” That specificity drives faster decisions. Combined with a full Meta ads audit skill, you can diagnose fatigue and audit account structure in the same session.

How Do You Install the /fatigue-scan Skill in Claude Code?

Installation takes under two minutes. Four steps.

Step 1 — Create the Skills Directory

Open your terminal in any project directory and run: mkdir -p .claude/skills

This creates the directory where Claude Code looks for skill files.

Step 2 — Save the Skill File

Copy the entire skill file from the section below and save it as fatigue-scan.md inside the .claude/skills/ directory. The file path should be: .claude/skills/fatigue-scan.md

Step 3 — Export Your CSV from Meta Ads Manager

In Ads Manager, select the Ads tab. Click Breakdown, then By Time, then Day. Click Columns, then Customize columns, and add these metrics: Campaign name, Ad set name, Ad name, Day, Impressions, Reach, Frequency, Clicks (all), CTR (all), CPC (all), Link clicks, CTR (link click-through rate), Amount spent, Purchases, Cost per purchase, Purchase ROAS. For video ads, also add: Video plays, 3-second video plays, and ThruPlay. Optionally add: Post reactions, Post comments, Post shares, Post saves.

Click Export, then Export Table Data (.csv). Select the last 14–30 days. The daily breakdown is what makes trend analysis possible — without it, the skill can only compare against benchmarks, not detect trajectory changes.

Step 4 — Run the Scan

Open Claude Code in your project directory and type: /fatigue-scan ~/Downloads/your_export.csv

Claude reads the CSV, runs all five signal detectors on every creative, calculates composite scores, estimates days until ROAS breakdown for flagged creatives, and compiles a full report with per-creative diagnosis and recommended actions.

What Does the Output Look Like?

The final report follows a structured format. It starts with a summary table showing how many creatives are Healthy, Warning, and Critical, along with estimated budget at risk per day and projected waste over the next seven days.

Critical creatives appear first with full signal breakdowns: CTR trajectory (peak to current with decline percentage), frequency (cross-ad-set combined estimate), CPC inflation (baseline to current with percentage increase), hook rate (for video ads), engagement score, and the composite score. Each critical creative includes the estimated days until ROAS breakdown with confidence level, current daily spend, and a specific recommended action.

Warning creatives follow the same format with guidance to reduce budget by 30–50% while replacement creative is in production. Healthy creatives appear in a summary table.

The report closes with three tiers of recommended actions: immediate (pause all critical creatives today), this week (produce replacements for warning creatives), and ongoing (run /fatigue-scan weekly). When you need replacement creatives fast, HeyOz generates ad creatives from a product URL — useful for producing fresh variations to replace fatigued ads without waiting on a designer or editor. You can also explore strategies for generating ad variations at scale to maintain a steady pipeline of creatives ahead of fatigue cycles.

The Complete /fatigue-scan Skill File

Copy the entire content below and save it as fatigue-scan.md in your .claude/skills/ directory.

/fatigue-scan — Creative Fatigue Predictor

Scan your Meta Ads creative performance data for early fatigue signals before ROAS collapses. No API, no MCP server, no developer setup. Export your ad-level CSV from Ads Manager, drop the file, and get a per-creative fatigue diagnosis with estimated days until ROAS breakdown.

How to Use

/fatigue-scan $ARGUMENTS

Where $ARGUMENTS is the path to your exported CSV file from Meta Ads Manager.

Example: /fatigue-scan ~/Downloads/ads_report_march.csv

If no file path is provided, ask the user to provide the path to their Meta Ads Manager CSV export.

What to Export from Meta Ads Manager

Tell the user to export at the ad level with daily breakdown and these columns enabled:

Required columns: Campaign name, Ad set name, Ad name, Day (reporting date — this is critical for trend analysis), Impressions, Reach, Frequency, Clicks (all), CTR (all), CPC (all), Link clicks, CTR (link click-through rate), Amount spent, Purchases (or Results), Cost per purchase (or Cost per result), Purchase ROAS (or ROAS).

Required for video ads (hook rate analysis): Video plays, 3-second video plays (ThruPlays at 3s), ThruPlay.

Recommended additional columns: Post reactions, Post comments, Post shares, Post saves, Reporting starts, Reporting ends.

Tell the user: In Ads Manager, select the Ads tab. Click “Breakdown” then “By Time” then “Day”. Click “Columns” then “Customize columns”, add the above metrics. Then click “Export” then “Export Table Data (.csv)”. Select last 14–30 days.

Important: The daily breakdown is what makes trend analysis possible. Without it, we only get aggregate numbers and cannot detect trajectory changes.

Fatigue Scan Execution Plan

Read the CSV file. Parse all rows. Group rows by ad name (each ad will have multiple rows — one per day). Then run the 5 signal detectors below on each creative, calculate the composite fatigue score, estimate days until ROAS breakdown, and compile the final report.

Signal 1: CTR Trajectory (Weight: 30%)

What it detects: Click-through rate declining day-over-day, the earliest and most reliable fatigue signal. CTR drops before CPC spikes, before ROAS craters. This is the canary in the coal mine.

Logic: (1) For each ad, sort daily rows by date ascending. (2) Extract the daily CTR values to form a time series. (3) Calculate the ad’s peak CTR (highest CTR in the first 7 days of data, or overall peak if less than 7 days available). (4) Calculate the current CTR (average of last 3 days). (5) Calculate CTR decline percentage: (peak_CTR - current_CTR) / peak_CTR × 100. (6) Calculate consecutive decline days: count how many of the most recent days show CTR lower than the previous day (allow one flat day in the streak without breaking it). (7) Calculate daily decay rate: fit a simple linear regression to the last 7 days of CTR values. The slope (CTR change per day) is the decay rate.
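Optional helper sketch for step (7): a pure-Python least-squares slope over an evenly spaced daily series. Names are illustrative; any equivalent regression method works.

```python
def daily_slope(values):
    """Least-squares slope of a metric over consecutive days (change per day)."""
    n = len(values)
    if n < 2:
        return 0.0
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den if den else 0.0

print(round(daily_slope([2.0, 1.9, 1.8, 1.7, 1.6, 1.5, 1.4]), 3))  # -> -0.1
```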

Scoring (0–10): CTR declined 5–10% from peak = 2 points. CTR declined 10–20% from peak = 4 points. CTR declined 20–35% from peak = 6 points. CTR declined 35%+ from peak = 8 points. 3+ consecutive decline days = +1. 5+ consecutive decline days = +2. Daily decay rate steeper than −0.05% per day = +1. Cap at 10.

Output per ad: CTR Trajectory: {peak_ctr}% to {current_ctr}% ({decline}% drop over {days} days). Decay rate: {rate}%/day. Consecutive decline: {n} days. Signal score: {score}/10.

Signal 2: Frequency at Creative Level — Cross-Ad-Set (Weight: 20%)

What it detects: The same creative being shown too many times to the same people, aggregated across all ad sets running that creative. Most tools check frequency at the ad set level, which misses the real problem: if the same creative runs in 3 ad sets targeting overlapping audiences, the user might see it 9 times while each ad set reports frequency of 3.

Logic: (1) Group all rows by ad name (not ad set name). This catches the same creative running across multiple ad sets. (2) For each unique ad name, collect all ad sets it appears in. (3) Calculate creative-level frequency: If the ad appears in only 1 ad set, use the reported frequency directly. If the ad appears in N ad sets with frequencies f1, f2, ..., fN, estimate combined frequency as sum(f1, f2, ..., fN) × overlap_factor where overlap_factor = 0.6 (conservative estimate — Meta audiences overlap ~40-60% by default). Use the most recent day’s frequency values for the calculation. (4) Calculate frequency acceleration: compare frequency from 3 days ago to today. If frequency increased by more than 0.5 per day, the creative is in a frequency spiral.

Scoring (0–10): Creative-level frequency 2.0–3.0 = 1 point. 3.0–4.0 = 3 points. 4.0–5.0 = 5 points. 5.0–6.0 = 7 points. 6.0+ = 9 points. Ad runs across 2+ ad sets (overlap risk) = +1. Frequency accelerating > 0.5/day = +1. Cap at 10.

Output per ad: Frequency: {frequency} (across {n} ad sets). Est. creative-level: {combined_freq}. Acceleration: {accel}/day over last 3 days. Overlap ad sets: {list of ad set names}. Signal score: {score}/10.

Signal 3: CPC Inflation (Weight: 20%)

What it detects: Cost per click rising as Meta’s algorithm compensates for declining engagement by showing the ad to less optimal (more expensive) audience segments. CPC inflation is the algorithm’s confession that your creative is losing relevance.

Logic: (1) For each ad, extract daily CPC values. (2) Calculate the ad’s baseline CPC (average CPC during the first 5 days of data, or first 3 days if less data available). (3) Calculate the current CPC (average of last 3 days). (4) Calculate CPC inflation percentage: (current_CPC - baseline_CPC) / baseline_CPC × 100. (5) Calculate CPC trend: fit a linear regression to the last 7 days. Positive slope = inflation. (6) Compare to account average: calculate the account-wide average CPC across all ads. Flag if this ad’s current CPC is 30%+ above account average.

Scoring (0–10): CPC inflated 10–20% from baseline = 2 points. 20–35% = 4 points. 35–50% = 6 points. 50%+ = 8 points. CPC trending upward (positive slope last 7 days) = +1. CPC 30%+ above account average = +1. Cap at 10.

Output per ad: CPC: ${baseline_cpc} to ${current_cpc} ({inflation}% increase). Trend: {direction} at ${slope}/day. vs. Account avg: ${account_avg} ({comparison}). Signal score: {score}/10.

Signal 4: Hook Rate Decay — First 3 Seconds (Weight: 15%)

What it detects: The percentage of impressions that result in a 3-second video view declining over time. Hook rate is the first engagement metric to degrade — people decide in 3 seconds whether to watch or scroll. A declining hook rate means your opening is no longer stopping the scroll.

Applies to: Video ads only. Skip this signal for static image ads and set the score to "N/A — static ad".

Logic: (1) For each video ad, calculate daily hook rate: 3-second video plays / impressions × 100. (2) If "3-second video plays" column is not available, try "ThruPlay" as a proxy (less accurate but directional). (3) Calculate the ad’s peak hook rate (highest hook rate in the first 7 days). (4) Calculate the current hook rate (average of last 3 days). (5) Calculate hook rate decline: (peak - current) / peak × 100. (6) Calculate consecutive decline days for hook rate (same logic as CTR trajectory). (7) Benchmark: A healthy hook rate is 25%+ for feed placements, 30%+ for Reels. Below 15% is critical regardless of trend.

Scoring (0–10): Hook rate declined 5–15% from peak = 2 points. 15–25% = 4 points. 25–40% = 6 points. 40%+ = 8 points. Current hook rate below 15% (absolute floor) = +2. 3+ consecutive decline days = +1. Cap at 10.

Output per ad: Hook Rate: {peak}% to {current}% ({decline}% drop). Absolute level: {current}% ({above/below} 25% benchmark). Consecutive decline: {n} days. Signal score: {score}/10. For static ads: Hook Rate: N/A (static ad — signal skipped). Signal score: N/A.

Signal 5: Engagement Cliff (Weight: 15%)

What it detects: A sudden drop in overall engagement (reactions, comments, shares, saves) that indicates the audience has collectively tuned out the creative. Unlike CTR which measures click intent, engagement cliff captures the broader "people don’t even react to this anymore" signal.

Logic: (1) For each ad, calculate a daily engagement score: If engagement columns are available (reactions, comments, shares, saves): engagement_score = (reactions + comments × 3 + shares × 5 + saves × 4) / impressions × 1000 (weighted engagement per 1000 impressions — comments and shares are higher-intent signals). If engagement columns are NOT available: fall back to engagement_proxy = (clicks - link_clicks) / impressions × 1000 (non-link clicks indicate reactions/comments/shares). If neither is available: skip this signal, set score to "N/A — insufficient data". (2) Calculate peak engagement (highest engagement score in first 7 days). (3) Calculate current engagement (average of last 3 days). (4) Calculate engagement decline: (peak - current) / peak × 100. (5) Detect cliff pattern: if engagement dropped 30%+ between any two consecutive days in the last 7 days, flag as a cliff (sudden drop vs. gradual decline).

Scoring (0–10): Engagement declined 10–25% from peak = 2 points. 25–40% = 4 points. 40–60% = 6 points. 60%+ = 8 points. Cliff detected (30%+ single-day drop) = +2. Engagement below 1.0 per 1000 impressions = +1. Cap at 10.

Output per ad: Engagement: {peak}/1K to {current}/1K ({decline}% drop). Cliff detected: {yes/no} ({details if yes}). Signal score: {score}/10.

Composite Fatigue Score

For each ad, calculate the weighted composite score:

For video ads: composite = (signal_1_score × 0.30) + (signal_2_score × 0.20) + (signal_3_score × 0.20) + (signal_4_score × 0.15) + (signal_5_score × 0.15)

For static ads (no hook rate data): composite = (signal_1_score × 0.35) + (signal_2_score × 0.25) + (signal_3_score × 0.20) + (signal_5_score × 0.20)

For ads missing engagement data: composite = (signal_1_score × 0.35) + (signal_2_score × 0.25) + (signal_3_score × 0.25) + (signal_4_score × 0.15)

Status Classification: Composite 0.0–2.5 = Healthy (green). Creative performing well. Continue running, monitor weekly. Composite 2.6–5.0 = Warning (yellow). Early fatigue signals detected. Prepare replacement creative. Replace within 48 hours. Composite 5.1–10.0 = Critical (red). Creative is actively fatiguing. Pause now. Every hour this runs wastes budget.

Estimated Days Until ROAS Breakdown

For each ad scored Warning or Critical, estimate how many days until ROAS drops below 1.0 (breakeven) based on current trajectory.

Logic: (1) Extract daily ROAS values (or calculate from spend and purchase revenue). (2) If ROAS is already below 1.0: output "ROAS already below breakeven — pause immediately". (3) If ROAS data is available and there are 5+ data points: Fit a linear regression to the last 7 days of ROAS values. If the slope is negative (ROAS declining): calculate days_to_breakeven = (current_ROAS - 1.0) / abs(daily_decline_rate). If the slope is flat or positive: output "ROAS stable — fatigue not yet impacting returns (but will if signals worsen)". (4) If ROAS data is not available: estimate from CPA trend instead. Calculate current CPA and CPA trend (slope). If the user’s target CPA is known, estimate days until CPA exceeds it. If target CPA is not known, estimate days until CPA doubles from baseline: days_to_double = baseline_CPA / daily_CPA_increase. (5) Cap the estimate at 30 days (beyond that, too many variables change). (6) Add a confidence qualifier: 7+ data points and R-squared > 0.7 = "High confidence". 5–7 data points or R-squared 0.4–0.7 = "Medium confidence". Fewer data points or R-squared < 0.4 = "Low confidence — directional estimate only".

Output per ad: Est. days until ROAS breakdown: {days} days ({confidence} confidence). Current ROAS: {roas}x. Trend: {slope}x/day. At current rate: ROAS hits 1.0x by {projected_date}.

Final Report Format

After scanning all creatives, compile the report in this structure:

Creative Fatigue Scan — {date}. Scanned: {total_ads} creatives across {campaigns} campaigns. Period: {start_date} to {end_date} ({days} days of data).

Summary table: Status, Count, Action Required. Critical = Pause immediately. Warning = Replace within 48 hours. Healthy = Continue running.

Estimated budget at risk: ${amount}/day across all Critical and Warning creatives. Projected waste if unchanged: ${amount} over next 7 days.

Critical — Pause Now section: For each critical ad, show the ad name, composite score, a table of all 5 signal scores with details, status, estimated days until ROAS breakdown, daily spend, and recommended action: "Pause this creative immediately. Replace with a new hook on the same offer or test a completely new angle."

Warning — Replace Within 48 Hours section: Same format as critical. Action: "Start producing replacement creative now. This ad has 2–5 days of usable life remaining. Reduce budget by 30–50% while replacement is in production."

Healthy section: Summary table with ad name, score, CTR, frequency, CPC, ROAS, and status.

Recommended Actions: (1) Immediate (today): Pause all critical creatives. Reallocate their budget to healthy creatives. (2) This week: Produce replacement creatives for all warning ads. Focus on new hooks — keep the same offer/product but change the first 3 seconds. (3) Ongoing: Run /fatigue-scan weekly to catch fatigue before it impacts ROAS. The signals detected here typically precede ROAS decline by 3–5 days.

Also save the full report as a markdown file to the current working directory.

Column Name Flexibility

CSV exports from Meta Ads Manager may have slightly different column names depending on the user’s language, account settings, or export version. Apply fuzzy matching:

CTR (all) — also accept: CTR, Click-through rate, CTR (All). CPC (all) — also accept: CPC, Cost per click, CPC (All). Amount spent — also accept: Spend, Total spend, Amount Spent. Purchases — also accept: Purchase, Total purchases, Results. Cost per purchase — also accept: CPA, Cost per result, Cost Per Purchase. Purchase ROAS — also accept: ROAS, Return on ad spend, Purchase ROAS (total). 3-second video plays — also accept: 3s video plays, 3-Second Video Plays, Video plays at 3s. Frequency — also accept: Avg. frequency, Average frequency. Day — also accept: Date, Reporting date. Post reactions — also accept: Reactions. Post comments — also accept: Comments. Post shares — also accept: Shares. Post saves — also accept: Saves.
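One way to sketch the alias lookup (illustrative only; any case-insensitive matching approach is acceptable):

```python
def resolve_column(headers, aliases):
    """Return the actual CSV header matching any accepted alias, case-insensitively."""
    lowered = {h.strip().lower(): h for h in headers}
    for alias in aliases:
        if alias.lower() in lowered:
            return lowered[alias.lower()]
    return None  # required column missing: note it in the report header

headers = ["Ad name", "Day", "CTR (All)", "Spend"]
print(resolve_column(headers, ["CTR (all)", "CTR", "Click-through rate"]))  # -> CTR (All)
```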

If a required column is missing, note it in the report header and explain which signals were affected.

Important Notes

All analysis is based on the exported CSV data. Accuracy depends on the columns, date range, and breakdown level included in the export. The daily breakdown ("By Day") is required for trend analysis. If the CSV only has aggregate data (no day column), warn the user and provide aggregate-only analysis (current values vs. benchmarks, no trend detection). Hook rate analysis requires video metrics columns. If missing, that signal is skipped and weights are redistributed automatically. The ROAS breakdown estimate is a linear projection based on recent trend. It does not account for seasonal changes, algorithm updates, or budget changes. Treat it as directional guidance. Cross-ad-set frequency estimation uses a 0.6 overlap factor. Actual overlap varies. If the user knows their exact audience overlap %, they can provide it and you should use that instead. Round all dollar amounts to two decimal places. Round percentages to one decimal place. Round days estimates to whole numbers.

Frequently Asked Questions

How many days of data does /fatigue-scan need?

A minimum of 7 days produces useful results. The skill needs daily breakdowns to calculate trajectories and decay rates. With fewer than 5 days, trend detection is unreliable and the skill defaults to benchmark comparisons only. For the most accurate ROAS breakdown estimates, 14–30 days of data is ideal.

Does this work for static image ads or only video?

It works for both. Signal 4 (hook rate decay) applies only to video ads. For static image ads, the skill automatically skips that signal and redistributes its weight across the other four signals. The remaining four signals — CTR trajectory, frequency, CPC inflation, and engagement cliff — apply equally to static and video creatives.

Can the skill detect fatigue in Advantage+ campaigns?

Yes. The skill analyzes at the ad level regardless of campaign type. As long as your CSV export includes ad-level data with daily breakdowns, the signals work the same way. Advantage+ campaigns sometimes mask fatigue at the campaign level because Meta shifts budget between creatives automatically, which makes ad-level analysis even more important.

What if my CSV is missing some columns?

The skill handles missing columns gracefully. Required columns (CTR, CPC, Frequency, Ad name, Day) are needed for core analysis. If video columns are missing, Signal 4 is skipped. If engagement columns are missing, Signal 5 falls back to a proxy calculation using (clicks − link clicks). If ROAS data is missing, the breakdown estimate uses CPA trend instead. The report header notes which signals were affected by missing data.

How often should I run /fatigue-scan?

Weekly for accounts spending under $10K/month. Twice weekly for accounts spending $10K–$50K/month. Daily during peak spend periods (product launches, Black Friday, holiday pushes). Since most ads fatigue in 7–10 days and fatigue signals appear 3–5 days before ROAS collapse, weekly scans catch problems with enough lead time to produce replacement creatives.

Why does the skill use a 0.6 overlap factor for cross-ad-set frequency?

Meta audiences overlap approximately 40–60% by default, especially when targeting similar interests or lookalike audiences from the same source. The 0.6 factor is a conservative estimate. If you know your actual audience overlap percentage from Audience Overlap reports in Ads Manager, you can provide it and the skill will use your custom value instead of the default.

What is the difference between /fatigue-scan and the /meta-audit skill?

The /meta-audit skill performs a broad account-level audit covering campaign structure, audience architecture, budget allocation, and overall performance. The /fatigue-scan skill focuses specifically on per-creative fatigue detection with five signal scoring, composite classification, and ROAS breakdown estimation. They complement each other: /meta-audit for structural diagnosis, /fatigue-scan for creative lifecycle monitoring.

About the author

Ahad Shams

Ahad Shams is the Founder of HeyOz, an all-in-one ads and content platform built for founders and small teams. He has worked across consumer goods and technology, with experience spanning Fortune 100 companies such as Reckitt Benckiser and Apple. Ahad is a third-time founder; his previous ventures include a WebXR game engine and Moemate, a consumer AI startup that scaled to over 6 million users. HeyOz was born from firsthand experience scaling consumer products and the need for a unified, execution-focused marketing platform.