
The 90-Second Task That Was Quietly Stealing My Attention (And How I Killed It Forever)

My AI told me this couldn't be automated. One weekend later, every new Substack subscriber lands in Kit while I sleep. Here's the system.

Tags: Automation, Python, Claude Code, Building in Public, Workflow Optimization, API
TL;DR — Quick Summary
Substack has no API. Every guide online tells you to download your subscribers as a CSV and upload them to your new email tool manually. I refused. I built a 3-station pipeline that handles it for me every Sunday morning while I sleep. This post walks through the architecture in plain English and ends with 7 extensions that are now possible because the foundation exists.


Published: April 20, 2026 • 10 min read

Last Sunday I asked my AI strategic coach how to migrate my Substack subscribers to Kit. He researched it, came back, and told me to keep doing it manually. "It takes ninety seconds."

Ninety seconds, every single time a new person subscribes, forever. That is not a plan. That is a leak I would spend the rest of my life patching.

So I didn't take that as the final answer. I got creative. A little too creative, probably. In this post I'll walk through how I automated the Substack-to-Kit sync end to end, without any external automation tools like Make or Zapier. If you're non-technical, keep reading; I'll make sure it all makes sense.

The Problem No One Online Will Solve for You

Every guide you find online says the same thing: log into Substack, go to your settings, download the subscriber CSV, upload it to Kit, done. This is the official advice. It is also the advice my AI coach gave me.

The issue is not the ninety seconds. The issue is the shape of the task. A recurring manual step that has no trigger, no reminder, and no deadline is not a ninety-second task. It is a slow tax on attention. Every time I published a Substack post and someone new subscribed, I would either remember to sync them that week or I wouldn't. Over a month, that is a handful of people who never got my Kit welcome sequence, never got my Thursday newsletter, never existed in the system where my actual marketing lives.

So I reframed the problem. I didn't need an API. I needed the outcome an API would have given me. A new subscriber shows up on Substack, and a few minutes later, they are in Kit. That's it. That's the whole requirement.

Once I reframed it that way, I knew what I had to do.

The System, in Plain English

The easiest way to picture this is a factory line with three stations, all sitting inside a single shared workspace. Every Sunday at 7 AM Eastern Time, the line turns on. A CSV of new subscribers enters on one end. A clean set of tagged Kit subscribers comes out the other end. Then the line shuts itself off until the next scheduled run.

Before I describe the stations themselves, I need to give you a quick tour of the workspace they all share. It looks boring on the surface, but it is actually the backbone of the whole system.

The Shared Workspace: The Folder Lifecycle

This is not a station. It is the workspace every station shares. Think of it like sorting physical mail on your kitchen counter: a stack that just came in, a pile you've already handled, and a box in the closet for the old stuff you don't need on hand but don't want to throw out. That's the whole idea. The system lives inside a single folder with five numbered drawers that each hold one stage of that flow:

  • The Playbook Drawer (00-Context/) — where the instructions live, the rules Claude-in-Chrome reads before every run
  • The Inbox (01-Queue/) — new subscriber lists waiting to be processed
  • The Done Pile (02-Completed/) — lists that already synced cleanly, named with the completion date (this is also what the upstream filter reads from, more on that in a second)
  • The Logbook (03-Logs/) — a record of every run, split into a "success" drawer (03-Logs/sync-logs/) and a "problems" drawer (03-Logs/error-logs/). Two drawers, not one, so I can glance at the problems drawer and immediately know whether anything needs my attention
  • The Attic (04-Archive/) — long-term backup for anything old enough that the Done Pile would otherwise get cluttered

Two other folders sit alongside these — one holds the migration script, the other holds the background watcher. The stations below explain what those do.

The numbers match the order work happens. 00 first, 04 last. If something ever breaks, I just open the folders and see exactly where things got stuck.

That is enough orientation to follow the three stations below.
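If you'd like to see the lifecycle as code, here is a minimal Python sketch. The drawer names match the list above; the root folder location is yours to choose, and this is an illustration rather than my exact setup.

```python
from pathlib import Path

# Lifecycle drawers, in the order work flows through them.
DRAWERS = [
    "00-Context",          # playbook: rules read before every run
    "01-Queue",            # inbox: new subscriber CSVs waiting to be processed
    "02-Completed",        # done pile: synced CSVs, named with completion date
    "03-Logs/sync-logs",   # logbook, success drawer
    "03-Logs/error-logs",  # logbook, problems drawer
    "04-Archive",          # attic: long-term backup
]

def create_workspace(root: Path) -> None:
    """Create the five numbered drawers under one shared root folder."""
    for drawer in DRAWERS:
        (root / drawer).mkdir(parents=True, exist_ok=True)
```

Running `create_workspace(Path.home() / "substack-kit-sync")` once sets up the whole workspace; every station after that just reads from and writes to these folders.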

Station 1: The Scheduled Browser Task

This is the part that replaces "me, logging into Substack." It uses two Anthropic products that naturally work together: Claude Cowork and Claude-in-Chrome.

Claude Cowork is a feature inside the Claude desktop app that lets you create scheduled agent tasks. You give it a task description and a cadence ("every Sunday at 7 AM, do this"), and Cowork is the thing that actually remembers to fire the task on time. It is the scheduler. It holds the instructions and the calendar entry. It does not click buttons or navigate web pages. Its only job is to decide when the work should happen and to launch the thing that actually does the work.

Claude-in-Chrome is a Chrome browser extension that puts Claude inside Chrome itself. Once installed, Claude can read the page you are on and drive the browser the same way a human would: click buttons, fill forms, navigate pages, download files. It is the executor. It sits inside the browser like a patient assistant, doing what a human would do, except it does not get distracted and it does not forget.

When the scheduled time hits every Sunday at 7 AM ET, Cowork fires the task and hands the instructions to a Claude-in-Chrome session. Claude-in-Chrome then logs into Substack, navigates to the subscriber dashboard, applies a date filter, clicks through the export flow, and downloads the CSV into a specific folder on my computer.

Cowork is the alarm clock with the instructions taped to it. Claude-in-Chrome is the hands.

The pairing is natural, not something I engineered. These are two separate products from Anthropic that happen to be designed to hand off to each other. Cowork knows when. Chrome knows how. I just had to plug them together and write the task description.

The Early-Stage Filter Technique

This is a subtle but important part. Before Chrome exports the CSV, it checks one thing first. It opens my 02-Completed/ folder, finds the most recent completed migration filename, and extracts the date from that filename. Then it applies that date as a "Start date is on or after [date]" filter inside Substack's own UI before hitting export.

What this means in practice is that I never download the full subscriber list. I only ever download the people who signed up since the last successful sync. If the last sync ran a week ago, I download the week of new people. Clean. Small. Nothing duplicated.

Most people would write a step later in the pipeline to handle catching duplicate subscribers. I pushed the filter all the way upstream so the script almost never has to catch duplicates. The work is prevented, not cleaned up.

Station 2: The Watcher

Once Chrome drops the CSV into my Downloads folder, a second program is watching. It is a small background program (PowerShell on Windows, LaunchAgent on Mac) whose only job is to check the Downloads folder every five seconds and notice when a Substack CSV lands.

When the watcher sees the file, it moves it out of Downloads and into a folder called 01-Queue/. That is the launch pad for the next station.
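My actual watcher is PowerShell on Windows and a LaunchAgent on Mac, but the logic is simple enough to sketch in Python. The filename pattern and folder paths here are assumptions for illustration.

```python
import shutil
import time
from pathlib import Path

def scan_once(downloads: Path, queue: Path) -> list[Path]:
    """Move any Substack export CSVs from Downloads into the queue folder."""
    queue.mkdir(parents=True, exist_ok=True)
    moved = []
    for csv_file in downloads.glob("*.csv"):
        if "subscriber" in csv_file.name.lower():  # assumed export-name pattern
            dest = queue / csv_file.name
            shutil.move(str(csv_file), str(dest))
            moved.append(dest)
    return moved

def watch(downloads: Path, queue: Path) -> None:
    """Poll Downloads every five seconds, forever."""
    while True:
        scan_once(downloads, queue)
        time.sleep(5)
```

Whatever language the watcher is written in, the shape is the same: one cheap check on a timer, one move operation, nothing else.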

Station 3: The Migration Script

This one is a bit more technical, but bear with me. The migration script lives inside a migration-script/ subfolder and watches the 01-Queue/ folder. Its only job is to read each new CSV and hand every new subscriber over to Kit. When a CSV shows up, it wakes up, reads the file, and does the actual work.

For each row in the CSV, it sends the subscriber to Kit through the Kit API (an API is just a way for one program to politely ask another program to do something; in this case, "please add this subscriber"). It passes the email address, the name, and a tag that marks the subscriber as Substack-migrated, so I can see at a glance who came in through this pipeline.

Here is the clever piece for catching duplicate subscribers. If Kit tells the script "I already have this person," the script doesn't panic and skip. It re-applies the tag anyway and moves on. This is the one case where I want the script to act twice on the same data, because tagging the same person twice does no harm, and I would rather over-tag than miss a tag. There is also a small bookkeeping file the script updates after every successful run, which is my safety net in case a filename ever gets mangled.

Once every row is processed, the CSV moves from 01-Queue/ to 02-Completed/. If anything fails partway through, the CSV stays in 01-Queue/ and tries again on the next scheduled run. No silent data loss. Worst case, a migration takes an extra week.
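Here is the core loop as a Python sketch. The actual Kit HTTP call is abstracted behind an injected function so the duplicate-handling logic is visible on its own; the CSV column names are assumptions based on a typical Substack export.

```python
import csv
from pathlib import Path
from typing import Callable

def migrate_csv(csv_path: Path, add_to_kit: Callable[[str, str], str]) -> dict:
    """Process one export file, calling add_to_kit(email, name) per row.

    add_to_kit wraps whatever Kit API call you use (e.g. a tag-subscribe
    request) and returns "created" for a new subscriber or "exists" when
    Kit already has the person. Re-tagging an existing subscriber is
    harmless, so both outcomes count as success.
    """
    counts = {"created": 0, "exists": 0}
    with csv_path.open(newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            email = row.get("email", "").strip()   # column names assumed
            if not email:
                continue                           # skip blank rows quietly
            result = add_to_kit(email, row.get("name", "").strip())
            counts[result] += 1
    return counts
```

Notice there is no "skip duplicates" branch at all: the upstream date filter prevents most duplicates, and the few that slip through just get their tag re-applied.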

The Four Things That Can Actually Go Wrong

I am not going to pretend this is bulletproof. Every pipeline has failure modes. The question is never "will this break someday?" The question is: when it breaks, does it break loudly and safely, or quietly and destructively? Mine breaks loudly and safely.

  1. Substack redesigns their login flow or export page. A major redesign breaks the Claude-in-Chrome run, no CSV is downloaded, and nothing incorrect ever reaches Kit. The gap is obvious because that Sunday's file never lands in the completed drawer.
  2. My computer is asleep at 7 AM on Sunday. That week's run just doesn't happen. I open Claude Cowork and hit the "Run now" button by hand, and the rest of the pipeline runs exactly the same way.
  3. The Kit API returns an unexpected error. The script stops rather than pushing through half-correct data, and the CSV stays in 01-Queue/ so the next run retries it. This is retry-safe design.
  4. A malformed CSV lands in the queue. The script checks the file first and exits without touching Kit. My Kit list never gets corrupted by a bad upstream file.

The pattern is the same across all four: when something goes wrong, the system stops, writes down exactly what happened, and refuses to corrupt the Kit list in the meantime. Confidence doesn't come from believing nothing will ever break. It comes from knowing that when things break, they break safely, in a drawer I can find, with enough detail to fix them on a weekday morning with a cup of tea.
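Failure mode 4 deserves a concrete sketch, because it is the cheapest guard in the whole system. Before touching Kit, the script can validate the file with a few lines like these (the required column is an assumption based on a typical Substack export):

```python
import csv
from pathlib import Path

REQUIRED_COLUMNS = {"email"}  # assumed minimum for a usable export

def csv_looks_valid(path: Path) -> bool:
    """Refuse to proceed unless the file parses and has the columns we need."""
    try:
        with path.open(newline="", encoding="utf-8") as f:
            header = set(csv.DictReader(f).fieldnames or [])
    except (OSError, UnicodeDecodeError, csv.Error):
        return False  # unreadable or malformed: stop before touching Kit
    return REQUIRED_COLUMNS <= header
```

If this returns False, the script logs to the problems drawer and exits; the bad file stays in the queue where I can inspect it.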

That is the whole architecture. Three stations, one shared folder lifecycle, one clever upstream filter, and a lot of "just retry next Sunday" safety nets.

Why the Ninety-Second Answer Was Technically Right and Still Wrong

My AI coach's research was accurate. There is no official way for outside tools to plug into Substack. If I had stopped at that conclusion, I would still be downloading CSVs every Sunday for the rest of my life.

Substack has no API, but Substack has a dashboard. Dashboards are just web pages. Web pages can be read by browsers. Browsers can be automated. Suddenly, I have an API.

"Substack has no API" is true. "Every new Substack subscriber must land in Kit automatically, and I will never touch this again in my life" is also true. Both things are allowed to be true at the same time.

What This Unlocks Now That the Foundation Exists

A Substack to Kit sync is not a feature. It is a foundation. Once you have a system that can read a new list, check it against an old list, and act on what's new, you have a tiny factory you can point at almost anything.

Here are seven extensions the foundation now makes possible:

  1. Sync notifications. One extra line at the end of the script and I get a Slack, Discord, or SMS ping every Sunday: "Seven new subscribers migrated. Zero errors."
  2. A different welcome sequence for Substack migrants. They already know me, so a tag that marks these subscribers as Substack-migrated can trigger a "thanks for coming along" sequence instead of a "welcome, stranger" one.
  3. Google Sheets live mirror. Every row the script processes also appends to a Sheet. My mom could read it.
  4. Subscriber intelligence dashboard. The logs are already a data set. Pipe them into a dashboard and I'm watching list health in real time.
  5. Auto-tag by email domain. @gmail.com one tag, @company.com a "possible B2B" tag. Free segmentation for four extra lines of code.
  6. One-way cancellation sync. Same pipeline, but on unsubscribes. Substack to Kit only, no reverse.
  7. Platform portability. The day I leave Kit for Beehiiv or whatever comes next, I swap one function inside the script. Everything else keeps working.
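To give a flavor of how small these extensions are, here is a sketch of number 5. The tag names and the personal-domain list are placeholders, not anything Kit defines.

```python
def domain_tag(email: str) -> str:
    """Pick a tag from the email domain; a sketch of extension #5."""
    domain = email.rsplit("@", 1)[-1].lower()
    personal = {"gmail.com", "yahoo.com", "outlook.com", "hotmail.com"}
    return "personal-email" if domain in personal else "possible-b2b"
```

Drop a call to this into the per-row loop and every subscriber gets free segmentation on the way in.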

The Payoff

The coach in my computer said ninety seconds. I said forever. He was right about the ninety seconds. I was right about the forever.

Every Sunday at 7 AM, the factory line turns on without me. I find out about it from the logs.

Mission accomplished.

Want to Build This Yourself?

I'm thinking about creating a detailed tutorial with everything you need to set this up on your own list. If enough people want it, I'll make it. Leave your email and I'll send it when it's ready.



