5 Claude Scheduled Tasks That Automate My Entire Meta Ads Creative Pipeline

Written By
Ahad Shams

Key Takeaways

  • 5 Claude scheduled tasks can automate your entire Meta Ads creative pipeline: fatigue scanning, competitor research, winner analysis, brief generation, and pre-launch creative scoring.
  • The system creates a closed feedback loop — winning patterns from this week automatically shape next week's creative briefs, replacing the reactive guess-and-check approach.
  • Tasks 1-4 run in the cloud (your computer can be off) with an MCP connector, or on your machine with CSV exports. Task 5 (creative scorer) always runs locally because it needs to read image files.
  • Total cost starts at $20/month (Claude Pro). The manual version of this workflow takes 5-8 hours per week — the automation pays for itself in the first week.
  • No code or API keys required. Setup uses plain English prompts, a folder on your computer, and a markdown file with your brand guidelines.

Introduction

My Meta ad creative pipeline runs itself now. I do not open the Ad Library on Mondays. I do not manually check which creatives are dying. I do not write creative briefs from scratch. Claude does all of it on a schedule.

Most marketers treat creative production as a linear, manual process: make ads, launch them, wait a week, check numbers, maybe spot a winner, maybe remember to replicate what worked next time. This system closes the loop automatically. Insights from this week automatically shape next week's creative output.

This guide gives you the exact setup, the exact prompts, and the full workflow to build the same system. No code. No API keys. Just Claude and a folder on your computer.

What Does This System Actually Do?

Five scheduled tasks form a continuous creative feedback loop:

  • Creative Fatigue Scanner (daily, 8 AM) — tags every active ad as healthy, warning, or critical based on CTR decay, frequency creep, and CPC inflation
  • Competitor Creative Scrape (Monday, 7 AM) — researches what your competitors launched this week on Meta, surfaces new hooks, formats, and angles
  • Winning Creative Analyzer (Tuesday, 9 AM) — documents exactly why your top 5 ads from last week worked, identifying patterns in hooks, copy structure, format, and audience
  • Creative Brief Generator (Wednesday, 10 AM) — writes 5 ready-to-produce briefs using winning patterns and competitive intelligence from the previous tasks
  • Pre-Launch Creative Scorer (daily, 4 PM) — reviews new ad creatives you drop into a folder, scores them against your brand guidelines and Meta specs, flags issues before launch

The old workflow was reactive. This one is a continuous loop where winners get analyzed, competitor moves get surfaced, briefs get written, and new creatives get quality-checked — all automatically.

What Do You Need Before Starting?

Required:

  • Claude Pro ($20/month) or Claude Max ($100-200/month) — the free plan does not support scheduled tasks
  • A Meta Business Manager account with ad accounts you want to monitor
  • Claude Desktop app installed on your computer (download from claude.ai/download) — required for Task 5

For live data access (recommended):

An MCP connector for Meta Ads data. Top options: Adspirer (easiest, 2-minute setup, free tier), Pipeboard (most tools, 30+ features, $49/month), or Adzviser (multi-platform analytics, free tier with 15 calls/month). Setup: go to claude.ai, then Settings, then Integrations, then Add Integration, paste the connector URL, and authenticate with Meta.

For the no-API route:

A folder on your computer where you save CSV exports from Meta Ads Manager, and a separate folder where you drop new ad creatives for review. No third-party tools needed.

Brand assets to prepare:

Your brand guidelines saved as a markdown or text file. Include specific hex codes for colors, font names, tone of voice rules with examples, logo placement rules, and mandatory elements. The more specific this file is, the better Claude scores your creatives.

How Do You Set Up Your First Scheduled Task?

Three options. Pick whichever feels easiest.

Option 1: Web Interface

Go to claude.ai/code/scheduled. Click New scheduled task. Fill in the task name, prompt (we give you the exact prompts below), schedule, and connectors. Click Create, then Run Now to test immediately.

Option 2: Claude Desktop App

Open Claude Desktop. Click Schedule in the sidebar. Click New task. Choose New remote task (runs in cloud, computer can be off) or New local task (runs on your machine, can read local files and images). Fill in details and save.

Option 3: /schedule Command

Type /schedule in any Claude conversation. Claude walks you through it interactively — what to automate, how often, and at what time.

Which type for which task:

Tasks 1 through 4 work best as cloud scheduled tasks if you have an MCP connector, or desktop local tasks if you use CSV exports. Task 5 (pre-launch creative scorer) must always be a desktop local task because Claude needs to read image files from a folder on your machine.

What Are the 5 Tasks and Their Exact Prompts?

Copy these prompts directly into the prompt field when creating each scheduled task.

Task 1: Creative Fatigue Scanner — Daily at 8:00 AM

Tags every active ad as healthy, warning, or critical based on fatigue signals. Prompt: Analyze all my active Meta ads for creative fatigue. For each ad, evaluate frequency creep (flag if above 2.5), CTR decay (flag if declined more than 10% week over week), CPC inflation (flag if increased more than 15%), hook rate for video ads (flag if declining), and days running without refresh. Tag each ad as Healthy (frequency below 2.5, CTR stable or rising, CPC stable), Warning (frequency 2.5-4 or CTR declined 10-20% or CPC up 15-25% or running 14+ days with flat performance), or Critical (frequency above 4 or CTR declined more than 20% or CPC up more than 25% or running 21+ days with declining metrics). Output a table sorted by severity: Ad Name, Campaign, Status, Frequency, CTR Trend, CPC Trend, Days Active, Recommended Action. For each Warning and Critical ad, recommend whether to pause immediately, refresh creative, narrow audience, or reallocate budget. End with a summary of how many ads are in each category and total daily budget at risk.
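The thresholds in that prompt amount to a simple set of rules. As a sanity check on the logic (not something you need to run for the system to work), here is a minimal Python sketch of the same tagging rules; the function name and inputs are illustrative, not part of the setup:

```python
def tag_ad(freq, ctr_change_pct, cpc_change_pct, days_active, trend):
    """Tag an ad Healthy / Warning / Critical per the prompt's thresholds.

    ctr_change_pct and cpc_change_pct are week-over-week percentage changes
    (negative means CTR declined; positive means CPC rose).
    trend is "rising", "flat", or "declining".
    """
    # Critical: frequency above 4, CTR down >20%, CPC up >25%,
    # or 21+ days running with declining metrics
    if (freq > 4 or ctr_change_pct < -20 or cpc_change_pct > 25
            or (days_active >= 21 and trend == "declining")):
        return "Critical"
    # Warning: frequency 2.5-4, CTR down 10-20%, CPC up 15-25%,
    # or 14+ days running with flat performance
    if (freq >= 2.5 or ctr_change_pct < -10 or cpc_change_pct > 15
            or (days_active >= 14 and trend == "flat")):
        return "Warning"
    return "Healthy"
```

If Claude's tags ever look off during testing, comparing a few ads against rules like these makes it easy to spot whether the prompt or the data is the problem.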

Task 2: Competitor Creative Scrape — Weekly, Monday at 7:00 AM

Researches what your competitors are currently running on Meta. Prompt: Research the current Meta advertising activity for these competitors: [REPLACE WITH YOUR COMPETITOR NAMES, WEBSITES, AND META PAGE NAMES]. For each competitor, find active ad volume, new creative themes this week, ad format breakdown (video vs static vs carousel), hook analysis (categorize as question, bold claim, social proof, problem agitation, curiosity gap, direct offer), CTA patterns, landing page patterns, and offer analysis. Format as a competitor briefing with sections per competitor, then a Competitive Intelligence Summary answering: What new angles are competitors testing that we are not? What formats are they doubling down on? What gaps exist? Three specific creative ideas inspired by competitor moves, adapted for our brand.

Note: Claude uses web search to find competitor ad activity including Ad Library listings, social media posts, and ad intelligence sites. The Meta Ad Library API only covers political/issue ads and ads shown in the EU; it does not provide programmatic access to general commercial ads. For deeper tracking, consider pairing with a tool like Foreplay or MagicBrief.

Task 3: Winning Creative Analyzer — Weekly, Tuesday at 9:00 AM

Documents what made your top performers work. Prompt: Pull performance data for all ads active in the last 7 days. Identify the top 5 by ROAS. For each winner, analyze: Performance snapshot (ROAS, CPA, CTR, CPC, spend, conversions vs account averages). Hook analysis (opening line, hook category, why it resonates). Copy structure (length, formula used — problem-agitate-solve, benefit-proof-CTA, story-lesson-offer, etc., emotional triggers present). Format analysis (image vs video vs carousel, aspect ratio, placement performance). CTA analysis (button type and closing line). Audience context (which ad set and what that tells us). After all 5, create a Winning Patterns Summary: the 3 most consistent patterns across winners, the dominant format, the best-performing copy length, and the most responsive audience segments. Title it Winning Creative Patterns — Week of [date].

Task 4: Creative Brief Generator — Weekly, Wednesday at 10:00 AM

Writes next week's briefs using data from Tasks 2 and 3. Prompt: Using the Winning Creative Patterns analysis from Tuesday and the Competitive Intelligence Summary from Monday, generate 5 creative briefs for next week's Meta ad batch. Each brief targeting a different angle. For each brief include: Brief title, strategic rationale (which pattern or gap it exploits), target audience, 3 hook/headline variations, 2 primary text versions (short under 50 words, medium 50-100 words), detailed visual direction (format, aspect ratio, scene description, color palette, text overlay needs), CTA, placement priority, success metric, and A/B test plan. Ensure the 5 briefs include at least 1 building on a winning pattern, 1 testing a competitor-inspired angle, 1 in a different format than current rotation, and 1 targeting an underinvested audience segment. Format each brief as a standalone document ready to hand to a designer.

Task 5: Pre-Launch Creative Scorer — Daily at 4:00 PM (Desktop Local Task)

Before creating this task, set up two folders: a review inbox at ~/Documents/ad-creatives/to-review/ where you drop new creatives, and a reviewed archive at ~/Documents/ad-creatives/reviewed/. Also save your brand guidelines as ~/Documents/ad-creatives/brand-guidelines.md with your colors (hex codes), fonts, logo rules, tone of voice with examples, and mandatory elements.
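If you prefer the terminal, the folder setup above is three commands (paths match the ones used in the prompt):

```shell
# Create the review inbox and the reviewed archive
mkdir -p ~/Documents/ad-creatives/to-review ~/Documents/ad-creatives/reviewed
# Create an empty brand guidelines file to fill in (see the section below on what to put in it)
touch ~/Documents/ad-creatives/brand-guidelines.md
```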

Prompt: Check ~/Documents/ad-creatives/to-review/ for new files. If none, respond No new creatives to review and stop. If files exist, score each creative on 4 criteria (25 points each, 100 total): Brand Compliance (colors match hex codes in brand-guidelines.md, logo present, correct fonts, mandatory elements included), Hook Strength (clear compelling hook, matches winning patterns, would stop a thumb-scroller, value proposition clear in 2 seconds), Meta Placement Compliance (correct aspect ratios — 1:1 or 4:5 for feed, 9:16 for stories/reels, text under 125 characters, headline under 40 characters, key text outside 250px top/bottom safe zones), Performance Potential (proven copy structure, clear CTA, emotional triggers, creative diversity from current rotation). Grade: 80-100 ready to launch, 60-79 needs minor fixes, 40-59 needs significant revision, below 40 reject. Output per creative: File name, Score, Grade, Top 3 Issues, Specific Fixes. Move reviewed files to ~/Documents/ad-creatives/reviewed/ with date appended. End with summary.
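To make the rubric concrete: four criteria at 25 points each sum to a 0-100 score, which maps onto the four grades. A small illustrative Python sketch of that arithmetic (the function is hypothetical, just restating the prompt's bands):

```python
def grade_creative(brand, hook, placement, performance):
    """Combine four 0-25 criterion scores into a total and a launch grade."""
    for c in (brand, hook, placement, performance):
        if not 0 <= c <= 25:
            raise ValueError("each criterion is scored 0-25")
    total = brand + hook + placement + performance
    if total >= 80:
        band = "ready to launch"
    elif total >= 60:
        band = "needs minor fixes"
    elif total >= 40:
        band = "needs significant revision"
    else:
        band = "reject"
    return total, band
```

A creative scoring 25 on brand compliance but 10 on hook strength still lands in "needs minor fixes" territory, which is the point of the four-way split: one weak dimension should not sink an otherwise launch-ready ad, but it should surface as a named fix.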

How Do You Test Each Task Before Going Live?

Do not create all 5 tasks at once. Set them up one at a time and validate each one.

For Tasks 1 and 3 (fatigue scan, winning analyzer): Run manually with Run Now. Cross-check 2-3 ads against Ads Manager to verify the data and severity tags are accurate.

For Task 2 (competitor scrape): Run once with your actual competitor names. Open the Ad Library yourself and compare what Claude found. Refine competitor names and add website URLs if results are too generic.

For Task 4 (brief generator): Run Tasks 2 and 3 first so the brief generator has input data. Then evaluate: are the briefs actionable enough to hand directly to a designer? If not, refine the prompt.

For Task 5 (creative scorer): Drop 3-4 existing creatives into the review folder, including one you know is off-brand. Run the task. Check that it correctly identified the bad creative and that scores are reasonable against your brand guidelines.

What Does the Weekly Rhythm Look Like?

Monday 7 AM (automatic): Competitor creative scrape runs. By the time you open your laptop, you have a full competitor briefing with new hooks, formats, and angles they are testing.

Monday through Friday 8 AM (automatic): Creative fatigue scan runs. Every morning you know which ads are healthy, which are warning, and which need to be pulled. Takes 5 minutes to review.

Tuesday 9 AM (automatic): Winning creative analyzer runs. Your top performers from last week are documented with detailed pattern analysis — hook types, copy formulas, format breakdowns, audience insights.

Wednesday 10 AM (automatic): Creative brief generator runs. Five ready-to-produce briefs appear, each tied to a proven winning pattern or competitive opportunity. You review, approve, and hand to your designer or production team.

Wednesday through Friday (you): New creatives get produced based on the briefs.

Daily 4 PM (automatic): Pre-launch creative scorer reviews any new creatives you dropped into the folder. Scores, fixes, and green/yellow/red status before anything goes live.

Friday (you): Launch the new batch. The cycle begins again Monday. Total hands-on time: reviewing reports (10 minutes per day) plus creative production itself. The analysis, pattern recognition, brief writing, and QA all run on autopilot.

Should You Use MCP Connectors or CSV Exports?

Use an MCP connector if:

  • You want Tasks 1, 3, and 4 to pull live performance data automatically without manual exports
  • You are comfortable authorizing a third-party tool on your Meta account (all use official Meta Marketing API)
  • Top options: Adspirer (easiest, free tier at adspirer.com), Pipeboard (most tools at pipeboard.co, $49/month), Adzviser (multi-platform at adzviser.com, free tier)

Use CSV exports if:

  • You want zero third-party access to your ad account
  • You are okay exporting a CSV a few times per week from Ads Manager
  • Add this line to prompts: Read the most recent CSV file from ~/Documents/meta-ads-data/ and use it as the data source
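"Most recent CSV" just means the newest file in that folder, which Claude resolves by modification time. If you ever want to verify which file it should be picking up, this small Python sketch does the same lookup (the folder path matches the prompt line above; nothing in the workflow requires running it):

```python
import glob
import os

def latest_csv(folder):
    """Return the path of the most recently modified .csv in folder, or None."""
    files = glob.glob(os.path.join(folder, "*.csv"))
    return max(files, key=os.path.getmtime) if files else None
```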

Task 5 (creative scorer) always runs as a desktop local task regardless of your choice, because it reads image files from your computer.

How Do You Prepare Your Brand Guidelines File?

Task 5 depends heavily on a good brand guidelines file. Save it as ~/Documents/ad-creatives/brand-guidelines.md and include these sections:

  • Brand Colors: list each color role with its exact hex code (primary, secondary, accent/CTA, background, text)
  • Typography: font names, weights, and size ranges for headlines, body text, and CTA buttons
  • Logo Usage: required position, minimum clear space, acceptable color variants, and contrast requirements
  • Tone of Voice: 3-5 adjectives, one on-brand and one off-brand example sentence, plus words you always use and words you never use
  • Mandatory Elements: logo on every creative, CTA button required, any legal disclaimers
  • Meta Placement Rules: feed ads 1:1 or 4:5 only, Story/Reel ads 9:16 only, critical text outside the top and bottom 250px safe zones, primary text under 125 characters, headline under 40 characters

The more specific you make this file, the better Claude scores your creatives. Vague guidelines like "be professional" produce vague scores. Specific guidelines like "headlines must be 8 words or fewer, sentence case, never use exclamation marks" produce actionable feedback.
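As a concrete starting point, here is a skeleton for the file. Every value below is a placeholder to swap for your own brand's details:

```markdown
# Brand Guidelines

## Brand Colors
- Primary: #1A2B3C
- Accent / CTA: #FF6B35
- Background: #FFFFFF
- Text: #222222

## Typography
- Headlines: Inter Bold, 28-40px, 8 words or fewer, sentence case
- Body: Inter Regular, 14-18px

## Logo Usage
- Top-left corner, minimum 24px clear space, white or dark variant only

## Tone of Voice
- Adjectives: direct, warm, confident
- On-brand: "Ship your first campaign today."
- Off-brand: "Unlock synergistic growth solutions!!!"
- Never use: exclamation marks, "synergy", "revolutionary"

## Mandatory Elements
- Logo on every creative
- One clear CTA button

## Meta Placement Rules
- Feed: 1:1 or 4:5; Story/Reel: 9:16
- Key text outside top/bottom 250px safe zones
- Primary text under 125 characters; headline under 40 characters
```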

What Does This Cost?

  • Claude Pro subscription: $20/month
  • MCP connector: $0-49/month (free tiers available)
  • Meta Ad Library access: free
  • Total: $20/month with a free connector tier, $69/month with a paid connector

The manual version of this workflow — checking fatigue signals daily, browsing the Ad Library weekly, analyzing winners, writing briefs, QA-ing every creative — takes 5-8 hours per week. At any reasonable hourly rate, the automation pays for itself in the first week.

Frequently Asked Questions

Do I need to know how to code?

No. Everything here uses plain English prompts and point-and-click setup. The most technical step is creating folders on your computer and saving a text file.

Can Claude actually see and analyze images?

Yes. Claude has vision capabilities and can analyze PNG, JPG, and WebP files. It can evaluate composition, check text content, identify colors, and assess layout. It is not pixel-perfect on exact hex code matching from screenshots, but it catches off-brand elements, missing CTAs, and layout issues effectively.

Does my computer need to be on?

For Tasks 1-4 using cloud scheduled tasks with an MCP connector, no. They run on Anthropic's servers 24/7. For Task 5 and any tasks using CSV exports, yes — your computer and Claude Desktop must be on and awake when the task fires.

Will the competitor scrape get me blocked?

No. Claude uses web search to research competitor activity, not browser automation or scraping. It is equivalent to you Googling your competitors and browsing their public ads. No Terms of Service are violated.

Can I use this for TikTok or Google Ads too?

Yes. The creative scoring task works for any format — update the placement specs in the prompt. The fatigue scanner and winning analyzer work if your MCP connector supports those platforms (Adzviser and Pipeboard support Google Ads, Adspirer supports TikTok).

What if I do not have a designer?

The briefs from Task 4 are detailed enough to use with AI image generation tools, Canva templates, or freelance designers. You can also use tools like HeyOz to generate ad creatives directly from a product URL, then run them through Task 5 for scoring before launch.

How long before I see results?

You will see operational results immediately — less time on manual analysis, faster creative QA. The performance impact from faster creative cycling and more data-driven angles typically shows within 2-3 weeks.

About the author

Ahad Shams

Ahad Shams is the Founder of HeyOz, an all-in-one ads and content platform built for founders and small teams. He has worked across consumer goods and technology, with experience spanning Fortune 100 companies such as Reckitt Benckiser and Apple. Ahad is a third-time founder; his previous ventures include a WebXR game engine and Moemate, a consumer AI startup that scaled to over 6 million users. HeyOz was born from firsthand experience scaling consumer products and the need for a unified, execution-focused marketing platform.