90 Days CC'ing
Everything I've built with Claude Code in 90 days - systems, skills, and the workflows that changed how I work.
Good morning and happy Wednesday. This time from Austin, Texas. This month has been a travel month, so not as much writing as I would have liked to get out - but a lot of thinking. I had a race on Sunday - which I'll talk about more today and next episode. The cold months of January and February are finally breaking. I'm ready for the spring.
I've been using Claude Code to automate my work and improve my life every day for nearly 90 days. It started with a suggestion from a friend. Now it's completely altered the way I think about work and the world.
If you've been following along on the journey so far, we've had an interesting winter. We went deep on Attribution, and we are wrapping the series with Attribution 3 in the next few weeks. We've also started to lay the groundwork for AI Marketing & Systems - I wrote about Taurus and Gary a few weeks ago.
The vast majority of the people I know are still using AI as an assistant to help perform discrete tasks one at a time from the UI. They haven't bought into the primitives I talked about here, like skills, and they haven't gotten the courage yet to open up the terminal and try.
Well, the good news is that things are getting a little less scary every day. My friend and colleague Jonathan Martinez published a free course called Claude Marketers, and in it he covers a lot of great basics.
The better news? He and I are teaming up to bring a deeper and even more technical version of this program to help growth marketers and technical marketers use Claude Code to automate their work and improve their efficiency.
I talk about this quite a bit, but the outcome of AI is trending in two directions: 1) you work harder and more because AI enables it, or 2) you work less and are more free because AI enables it.
Right now is probably the best time if you're optimizing for #2. There isn't widespread knowledge or adoption yet. Learning how to use Claude Code to get work done more effectively can free you to be superhuman. Equally, I think salaries and total comp for individuals who know how to wield these tools are going to go up. There's never been a better time to learn, tinker, and grow.
As a primer for what's to come, I wanted to open up my brain a bit and share exactly what I've been building with Claude Code - along with some tips and speculation.
Next week (or perhaps this weekend) I want to talk about vendor procurement, and then we'll wrap up the Attribution series.
90 Days CC'ing
Before I get into the specific skills, I want to talk about systems. Skills are great, but they're useless without the scaffolding underneath them. Over the last 90 days I've completed 146 Linear tickets across 3 workspaces, built 45+ skills, and pushed hundreds of commits. None of that would have been possible without getting the foundation right first.
Linear as the Source of Truth
The first thing I did was make Linear my single source of truth for all work - not just client work, but personal projects, infrastructure, content, everything. I built org-aware routing so one command can target any of my three workspaces (my own, Replit's, AtoB's). Then I built skills on top of it.
Why Linear? The problem is that LLMs are very good at acting on small, discrete amounts of context, but they are very bad at "remembering things". They are also bad at remembering to log data to memory. Even if you write a skill or memory telling them to, they won't do it 100% of the time. A Linear board and tracking system forces work into bite-sized chunks, which is perfect for the limited context window brain of current LLMs. Here are some of the skills I built...

/linear-status: Pulls a read-only status board of all open tickets. I run this every day when I need to know what to work on. This skill is also baked into other skills. It's the "what's on my plate" command. Taurus is set up with a Linear seat and its own API token, so I can talk to it both in CC and directly in tickets, @-ing it and giving it comments that it "knows" are from me.
/linear-feed-me: Feeds me one actionable ticket at a time - tickets I need to QA or that I'm blocking. Tells me exactly what to do. Type "next" for the next one. This changed how I context-switch. Instead of scanning a board and deciding what to work on, the system decides for me. I also find that in CC it's hard to scroll and parse data. Feed-me solves that problem by focusing on just one thing at a time (and forcefully telling Claude to assign a subagent based on my response).
/linear-triage: Pulls tickets in Triage status, evaluates each one, groups them by theme, suggests parent issues, and provides prescriptive recommendations. I run this a few times a week to keep the backlog clean. There was a really funny Substack note last week that said: before CC I had 2 unfinished projects, and now I have 137. The reality is, this happens when you're not tracking projects and continuously pruning them.
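The "feed me one ticket" idea is simple enough to sketch in a few lines. Everything below - the field names, the statuses, the priority rule - is a hypothetical illustration of the selection step, not the actual skill, which reads live Linear data:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of a /linear-feed-me style selection step. Field
# names, statuses, and the priority rule are invented for illustration.
@dataclass
class Ticket:
    id: str
    status: str             # e.g. "In Progress", "Needs QA", "Blocked"
    blocking_someone: bool  # someone is waiting on me to act
    priority: int           # lower number = more urgent

def next_actionable(tickets) -> Optional[Ticket]:
    """Return the single ticket to act on next, or None if the plate is clear."""
    actionable = [t for t in tickets
                  if t.blocking_someone or t.status == "Needs QA"]
    return min(actionable, key=lambda t: t.priority, default=None)

board = [
    Ticket("GSM-12", "In Progress", False, 2),
    Ticket("GSM-31", "Needs QA", False, 3),
    Ticket("GSM-07", "Blocked", True, 1),
]
print(next_actionable(board).id)  # the one ticket to work on now: GSM-07
```

The point isn't the logic - it's that the decision of "what next" is made by the system, so I never scan a board.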
A Codified Memory
Claude Code has a memory system - persistent context that carries across sessions. But out of the box it's just a flat file. I built a structured memory layer on top of it: user profile, project context, feedback loops, reference pointers. There's a memory for my Slack writing voice. There's a memory for how I like Ramp memos written. There's a memory for which Linear workspace maps to which MCP route.
The structure looks like this: at the top, there's an index file (MEMORY.md) that acts as a table of contents. Each entry points to a dedicated topic file - user_slack_voice.md, ramp_memo_style_guide.md, taurus_finances_domain.md, etc. The files have typed frontmatter (user, feedback, project, reference) so Claude knows what kind of information it's looking at and when to apply it.
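A minimal sketch of what that index-plus-topic-files layout can look like. The folder name, dates, and the contents of the topic file are illustrative assumptions; only the file names mentioned above are from my actual setup:

```markdown
<!-- MEMORY.md - the table of contents -->
# Memory Index
- [user] memories/user_slack_voice.md - how I write in Slack
- [feedback] memories/ramp_memo_style_guide.md - memo formatting rules
- [project] memories/taurus_finances_domain.md - Taurus finance context
- [reference] memories/linear_workspace_routes.md - workspace -> MCP route map

<!-- memories/user_slack_voice.md - one topic file, with typed frontmatter -->
---
type: user
updated: 2025-03-01
---
Terse, lowercase, minimal emoji. Bullet points over paragraphs.
```

The index tells Claude where to look; each topic file stays small enough to load only when relevant.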

Here's why this matters: without structured memory, every Claude Code session starts cold. You're re-explaining context, re-establishing preferences, re-teaching the same lessons. With it, I open a terminal and Claude already knows that Replit's Linear workspace routes through Taurus MCP, that I want terse responses, that my Ramp memos need full travel context with names and trip purposes. It remembers feedback I gave three weeks ago about how to format tickets. It picks up where we left off.
The big callout, though, is that memory isn't just a single file. It's a collection of files and folders, structured with context. What's ironic is that for the last 10 years we've gotten so accustomed to being lazy with organization. Search makes it easy to find anything you want. In the 90s it was the opposite: directories with specific names reigned because search sucked. Now we are moving back to directories - not because search is bad, but because LLMs need small, bite-sized chunks of memory to work with. They need a directory map of where to go and what to look for when solving problems.
The investment is front-loaded - you spend time building the memory system once, and every future session gets compounding returns.
Skills as Primitives
If you take one thing from this article, let it be this: a skill is a reusable prompt + instructions that Claude Code executes like a slash command. You type /start-day and it calls Linear, reads your calendar, checks Slack, and builds a prioritized action plan. You type /done and it summarizes your work, pushes code, syncs to Linear, and cleans up.
Skills compose. They chain. They call each other. You're not chatting with AI - you're programming it. The difference between someone who uses Claude Code casually and someone who builds a system is skills.
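To make that concrete: a custom slash command in Claude Code is roughly just a markdown prompt file in your project. The path follows Claude Code's custom-command convention; the frontmatter and contents below are a simplified, invented sketch of what a /start-day file could hold, not my actual skill:

```markdown
<!-- .claude/commands/start-day.md -->
---
description: Morning execution engine - build today's prioritized plan
---
Pull my open Linear tickets, today's calendar, and unread Slack threads.
Focus on the client passed as: $ARGUMENTS
Output a numbered action plan, most urgent first, with one next step each.
```

Typing /start-day replit injects this prompt with "replit" substituted for $ARGUMENTS - which is what makes skills composable and chainable.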
Client Work - Rolling Your Own RETL Syncs
Let's talk about this with real application. I'm currently working as a fractional CMO and technical marketer. I split my time across multiple companies. This is what "using AI at work" actually looks like for me.
First, Replit: at Ramp, wiring up a new Reverse ETL sync to an ad platform took months. Paloma, Akash, Ian, Eric, and I spent 4+ months building a sophisticated backend system to handle ad optimization properly across multiple tools. We needed a data engineer to build the model in dbt, a growth engineer to configure the telemetry data on web, a growth marketer to give us creds to different ad accounts, a QA pass, a deploy, and cross-functional coordination across three teams. It was a multi-sprint affair.
With Claude Code, I built skills that can roll a RETL ad optimization pipeline in < 1 day. Additionally, I built skills around the way I work with the team at Replit to minimize time lost "understanding" what work to do and maximize time "executing."
The flow now?
Sprint standup Monday -> raw conversation into tickets
Every day: /start-day -> execute -> /end-day

/start-day {{ args }}: Morning execution engine. Ingests meeting data from overnight, yesterday, and the weekend, plus Slack, Linear tickets, and GitHub PRs, and builds a prioritized action plan. I have different variants for different clients - /start-day:replit and /start-day:atob each pull from different data sources and prioritize differently.
/replit-retl-new-channel {{ args }}: The big one. This is an end-to-end playbook for setting up a new RETL channel from scratch. It checks prerequisites, orchestrates model creation, builds the mapping, and validates the pipeline. I used this to stand up Meta CAPI and TikTok CAPI syncs via Segment - each in a single session.
/replit-retl-create-model: Creates a Segment RETL model from a dbt table via API. It shows me the curl command for approval before executing. I like this pattern - the AI proposes, I approve, it executes.
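That propose-then-approve pattern is worth a sketch. Nothing below is Segment's real API - the endpoint, payload shape, and field names are stand-in assumptions. The point is that the tool constructs the call and a human signs off before anything executes:

```python
import json

# Hypothetical sketch of "the AI proposes, I approve, it executes".
# The endpoint and payload are illustrative, not Segment's actual API.
API_URL = "https://api.example.com/reverse-etl-models"  # stand-in endpoint

def propose_create_model(table: str, token: str) -> str:
    """Build (but do not run) the curl command that would create a RETL model."""
    payload = {
        "name": f"retl_{table}",
        "query": f"SELECT * FROM {table}",  # the dbt table backing the model
    }
    return (
        f"curl -X POST {API_URL} "
        f"-H 'Authorization: Bearer {token}' "
        f"-d '{json.dumps(payload)}'"
    )

# The command is printed for human review; actual execution (for example
# via subprocess.run) happens only after explicit approval.
cmd = propose_create_model("analytics.conversions", "REDACTED")
print(cmd)
```

Separating "construct the request" from "fire the request" is what makes the approval gate cheap to keep.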
/replit-retl-create-mapping: Creates a Segment RETL mapping (subscription). Constructs the correct payload shape based on channel and event type. The payload shapes are different for Meta vs Google vs TikTok - the skill knows which one to use.
/replit-retl-validate: Validates the whole pipeline end-to-end. Compares BigQuery source data with Segment sync status and ad platform results. This is the QA step - did the data actually make it from warehouse to ad platform?
/done: The punctuation mark at the end of a task batch. Summarizes what was done, pushes code, cleans worktrees, runs QA checks, and syncs everything to Linear. It's the difference between "I think I'm done" and "I know I'm done."
The problem I was facing is that I'd have these long-running sessions where context was compacting, and I'd move from problem to problem. Instead, I now work in bite-sized chunks on specific tickets. When the ticket is done, it's /done'd. I keep working to improve this command so that every subagent working on a parent or child ticket has the right level of context to perform the job, while minimizing token consumption that could lead to poorer performance.
/end-day {{ args }}: End-of-day ops engine. Captures memories from the session, syncs completed work to Linear, checks deploy state, preps context for tomorrow, and cleans up the workspace. Most people forget this. They end their day by closing the session or wandering off to do something else. Systems aren't currently good at self-management. Instead, build this into your workflow. (PS: I actually have this run automatically at midnight in all open Warp sessions, so that no matter what, I wake up to a fresh context window.) You can automate skills with cron jobs - the cron job fires on schedule and runs the skill you want.
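For reference, the scheduled version is just a cron entry that invokes Claude Code non-interactively. The paths and log file below are assumptions; the sketch assumes the CLI's -p (print / non-interactive) mode:

```
# Illustrative crontab entry: run /end-day at midnight in a given project.
# Assumes `claude` is on PATH and ~/work/client is the project directory.
0 0 * * * cd ~/work/client && claude -p "/end-day" >> ~/logs/end-day.log 2>&1
```

Anything a skill can do interactively, cron can do while you sleep.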
Content & Publishing
I also have improved the skills I use to write. The writing is always my own - I don't use AI to generate the words. But everything around the writing? That's automated. Kyle Poyar wrote a great piece on becoming an AI-native operator that resonated with me, and this is my version of that for content.
I built a small suite of skills that handle the publishing pipeline:
/newsletter-seo: Generates SEO titles, meta descriptions, email subject line A/B options, keyword clusters, and audits internal links to prior GSM articles. I run this on every draft before publishing. It's the kind of thing I used to spend 20 minutes doing manually - picking subject lines, writing meta descriptions, making sure I'm linking back to the Attribution series where relevant.
/newsletter-editor: A 3-pass copy editing skill that checks for consistency, voice, and formatting without touching the substance. It catches things like stray em dashes that would make people think I'm an AI barfer, inconsistent heading levels, and sections that run too long. It edits in my voice, not its own.
/substack-draft: The last mile. Takes a markdown file from my drafts folder and loads it into Substack via Playwright. Handles formatting cleanup, strips image slot annotations, and gets the post ready to publish. What used to be 15 minutes of copy-pasting and reformatting is now one command.
Personal Workflows - The Life Stuff
Look, I know what you're thinking. "Austin, this is a B2B newsletter. Why are you telling me about Craigslist?"
Because this is the part that nobody talks about. Everyone's writing about AI agents for sales pipelines and AI copilots for code review. Nobody's writing about the fact that I automated posting my old IKEA bookshelf to Craigslist from my phone while working out at the gym. And that's kind of the point.
The unlock with Claude Code isn't just about work. It's about reclaiming all the small, annoying tasks that eat your day. There used to be 10-15 of these micro-chores every month that I'd procrastinate on, batch into a painful Saturday morning, and still not finish. Now they're skills. Some of them are silly. Some of them save me real hours.
/craigslist-post: I'm in that stage of my life where I am a chronic FB Marketplace and Craigslist poster. I hate to see old household items go to the landfill, but taking the time to post stuff, even for free, is a hassle. This skill posts a for-sale-by-owner listing on Craigslist. I specify the item, price, and photos - it launches Playwright, fills the form for Arlington VA or Palmyra VA, uploads images, and submits. I've sold furniture, electronics, and random stuff without ever opening a browser. Best of all, it's wired into Taurus as a single shot -> photo + text -> uploads via Gary.
/fidelity-get-statements: Downloads monthly Fidelity bank statements and CSVs from my Mac Mini, then uploads them to Google Drive. Tax season in one command. I run this once a month and never think about it. It's one of those micro-chores I've been on a rampage trying to skill-ify to save me time.
/icloud-file-organize: Scans files dumped into iCloud (photos, PDFs, documents) and organizes them into structured folders. The digital equivalent of cleaning your desk. I get like 2-3 pieces of physical mail every month, and I like to keep and organize things. But scanning, renaming, and finding the right place to store files? No thank you. Automated. An extension to this, which I haven't built yet, would be a skill that cleans up all the EXISTING files in iCloud and organizes them better than I can.
/github-cleanup: Audits GitHub repos across my org and personal accounts. Recommends what to keep, archive, or delete. Clones everything locally before deleting anything. I ran this once and cleaned up 30+ dead repos I'd forgotten about.
/analyze-memos: Pulls investor memos from all my investments, scores them on a framework, and provides strategic recommendations. I use this when evaluating new opportunities or prepping for board-level conversations.
/hbe-sow: Generates a full Statement of Work for a new client. Researches prior calls and emails via Taurus, clones the SOW template from Google Drive, fills in the details, stores the finished doc, and drafts a Gmail to send. What used to take me 2 hours of context-gathering now takes 5 minutes.
/consulting-request-triage: I'm on Guidepoint, AlphaSights, and Capvision as an expert. This skill auto-triages incoming consultation requests - evaluates topic fit against my expertise, checks the rate, and determines scheduling availability. The ones that are a clear fit get accepted automatically, and it pulls from Martech Demigod to prep answers for the call. Literally free money.
/ramp-triage: Triages Ramp corporate card transactions. Categorizes expenses, writes detailed memos (with travel context pulled from my calendar), and flags anything that needs attention. My bookkeeper loves me now.
/icloud-sync: Syncs iCloud data across devices and into my system. Still building this one as we speak. I've been on a bender trying to clean, sort, and organize 20,000 personal contact records and figure out how to separate iCloud (personal contacts) from HBE/Personal Google, while still having the data unified and everywhere.
Health & Wholeness - The Fuller Picture
Last, I started integrating health data into Taurus because I thought it would be useful for training. What I got was something bigger - a fuller picture of my life, not just my work.
I unified data from Garmin, Strava, and TrainingPeaks into one store. 2,300+ daily health snapshots going back to 2011. Every workout, every race, every sleep score.
Race History + PR Tracker: I've been running since college, but I only started taking races seriously in the last year. I've been competing with myself, continuously raising the bar on 5Ks, 10Ks, half marathons, ultras, Hyrox - you name it. The data was scattered across Garmin, Strava, and my own memory. Now it's all in one place, with best efforts and PR progression over time. I can see exactly how my half marathon pace has trended over the last five years. That's not life-changing, but it's the kind of thing that would have taken me an entire afternoon to compile manually.
Race Readiness Dashboard: This one has been especially relevant lately. I had a half marathon recently. I sat down with Claude (the chat app, not Code - but pulling data from Taurus via MCP) and asked it to assess my race readiness. It pulled my training logs, recent sleep and HRV data, body battery trends, and my travel schedule for the week. It generated a GPS-verified pacing strategy based on my actual splits from training runs on similar courses. It compared against my prior races at the same distance. It flagged that I'd flown cross-country two days before and adjusted the recommendation accordingly. It merged calendar data, with life data extracted from email, with stress ... and it helped me realize how much I've been redlining.

This is why I think CC has so much potential. You could ask the same question to a bare chat window in Claude, but without this system underneath you're going to get terrible results. The ability to collate diverse data streams over extremely long periods of time is what makes it magical. That's an AI with genuine context about my life making a recommendation I'd trust. It's the kind of analysis a coach would do, except it's instant, it's built on real data, and it cost me nothing but the time to ask.
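For flavor, the collation itself isn't exotic - here's a toy readiness heuristic over the kind of merged inputs described above. Every field name, weight, and threshold is invented for illustration; the real value is in having years of unified data to feed it, not in any fixed formula:

```python
# Toy sketch of a readiness heuristic over merged health data. Field names,
# weights, and thresholds are all invented for illustration -- the real
# analysis was Claude reasoning over years of unified snapshots.
def readiness(sleep_score: float, hrv_ms: float, training_load: float,
              travel_penalty: float) -> float:
    """Naive 0-100 readiness score (weights are assumptions)."""
    base = (0.4 * sleep_score               # recent sleep quality
            + 0.4 * min(hrv_ms, 100)        # cap HRV's contribution
            + 0.2 * (100 - training_load))  # fatigue from training load
    return round(max(0.0, base - travel_penalty), 1)

# A cross-country flight two days out knocks the score down, mirroring
# the adjustment described above.
print(readiness(sleep_score=82, hrv_ms=65, training_load=70, travel_penalty=10))
```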
The Tally
146 tickets. 45 skills. 90 days. And I'm just getting started.
If any of this resonates - if you're a marketer, a growth person, a technical operator who wants to build systems like this - Jonathan and I are putting together something for you. More details soon.
Next week: vendor procurement. Then we wrap up the Attribution series.
Until next time 🫡
If you've made it this far, maybe you are my sister's dog? Cutie.
I get emails every week from people giving thanks and sharing their stories about how these articles have helped them. It's the number one reason I keep writing. So if you enjoyed today's post, share it with friends, like it, comment, or DM me. I'd love to hear from you.
Recommended Reads
AI-Native Org Report Part One - Kyle Poyar on how AI-native organizations are structured differently. Essential reading if you're thinking about team design.
How Stripe Built "Minions" - AI Coding Agents That Ship 1,300 PRs Weekly from Slack Reactions - Steve Kaliski on Stripe's internal AI agent infrastructure. The closest thing I've seen to how I think about skills at scale.
Arm Launches Own CPU - Ben Thompson on Arm's move into its own silicon. The systems thinking here is relevant to anyone building infrastructure.
Please Listen to My Podcast - Thompson draws parallels between Jensen Huang and Steve Jobs. AI's inflection point is now defined by visionary execution, not raw innovation.
How to Make Better Decisions - The Atlantic on decision frameworks. Relevant to anyone building systems that make decisions for you (like, say, /linear-feed-me).