We Gave AI Our Growth Playbook. Here's What Happened.

San Francisco · August 19, 2025

The experiment starts on a Monday, the way most questionable decisions do. I'm sitting in my apartment at 6 a.m. — I work from home on Mondays — and I have a thought that's either brilliant or stupid: what if I fed our entire growth playbook into an AI and asked it to run our experiments?

Our growth playbook is a 47-page Google Doc that I've been building for three years. It contains every experiment we've run, every channel we've tested, every conversion rate we've measured. It has our ICP definitions, our messaging frameworks, our channel-specific strategies, our onboarding sequences. It's the accumulated knowledge of a growth team that's been in the trenches since the company was twelve people in a co-working space.

I copy-paste the entire thing into Claude and type: "You are a growth lead at a B2B SaaS company. Based on this playbook, what experiments would you run next quarter, and why?"
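
For what it's worth, you can make this repeatable instead of a one-off paste. A minimal sketch with the Anthropic Python SDK, assuming the doc is exported to a text file and substituting whatever model ID is current:

```python
# Minimal sketch: pip install anthropic, set ANTHROPIC_API_KEY.
# The model ID below is an assumption; use whatever is current.
import anthropic

client = anthropic.Anthropic()
playbook = open("growth_playbook.txt").read()

response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=4000,
    messages=[{
        "role": "user",
        "content": "You are a growth lead at a B2B SaaS company. "
                   "Based on this playbook, what experiments would you "
                   "run next quarter, and why?\n\n" + playbook,
    }],
)
print(response.content[0].text)
```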

Week One, Monday

The AI's response is twenty-three paragraphs long. I read it twice. Then I read it a third time, because parts of it are uncomfortably good.

It identifies seven experiment ideas. Three of them are variations of things we've already tried — the AI basically recombined existing strategies in new ways, which is useful but not groundbreaking. Two are ideas that we'd discussed internally and deprioritized — the AI independently arrived at the same conclusions, which is either validation or confirmation bias. And two are genuinely novel.

The first novel idea: segment our onboarding flow by company size. We currently run the same onboarding for a five-person startup and a 500-person enterprise. The AI argues, with supporting logic from our own data, that companies with fewer than 20 employees need a speed-focused onboarding (get to value in under 10 minutes), while companies with 20+ employees need a comprehensiveness-focused onboarding (set up integrations, invite team members, configure permissions). Our data shows that small companies activate 40% faster than large companies but churn at twice the rate after six months. The AI connects these dots in a way we hadn't.
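
Sketched as code, the routing rule itself is almost embarrassingly simple. Every name below is hypothetical, not from our codebase; the real work is in building the two flows:

```python
from dataclasses import dataclass

# Hypothetical names for illustration only, not our actual codebase.
@dataclass
class OnboardingTrack:
    name: str
    target_minutes_to_value: int
    steps: list[str]

SPEED = OnboardingTrack(
    "speed", 10,  # get to value in under 10 minutes
    ["create_project", "import_sample_data", "see_first_report"],
)

COMPREHENSIVE = OnboardingTrack(
    "comprehensive", 60,
    ["connect_integrations", "invite_team", "configure_permissions",
     "create_project", "see_first_report"],
)

def pick_track(company_size: int) -> OnboardingTrack:
    """Under 20 employees: optimize for speed to value. 20+: full setup."""
    return SPEED if company_size < 20 else COMPREHENSIVE
```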

The second novel idea: build a "growth calculator" — an interactive tool on our website that lets prospects input their current metrics (MRR, churn rate, CAC) and see a projection of how our product would impact their numbers. The AI suggests this based on a pattern in our conversion data: prospects who engage with our case studies (which contain specific numbers) convert at 3.2x the rate of prospects who don't. The calculator would give every prospect a personalized case study.
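
To make the calculator concrete, here's roughly the projection math it would wrap. This is a sketch, not our production model: the function is invented for this post, and the single improvement factor is exactly the assumption my data analyst pushes back on below.

```python
def project_impact(mrr: float, monthly_churn: float,
                   improvement: float = 0.18) -> dict:
    """Sketch of a churn-based projection for the calculator.

    Assumes the product's whole effect shows up as a relative
    reduction in monthly churn; 0.18 is our observed average lift,
    which hides a 3%-47% range in our data. The CAC input is
    omitted here to keep the sketch short.
    """
    improved_churn = monthly_churn * (1 - improvement)
    return {
        "mrr_retained_12mo_now": round(mrr * (1 - monthly_churn) ** 12),
        "mrr_retained_12mo_projected": round(mrr * (1 - improved_churn) ** 12),
        # LTV per revenue dollar is ~1/churn, so with CAC held fixed,
        # LTV:CAC improves by the same ratio as LTV.
        "ltv_to_cac_lift_pct": round(100 * (monthly_churn / improved_churn - 1), 1),
    }

# A prospect with $50k MRR and 3% monthly churn:
print(project_impact(50_000, 0.03))
```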

Week One, Wednesday

I bring the experiment ideas to the growth team meeting. There are four of us: me, two growth marketers, and a data analyst. I don't tell them the ideas came from an AI. I present them as "some experiments I've been thinking about" and see how they react.

The segmented onboarding gets immediate buy-in. "We've been talking about this for a year," says one of the growth marketers. She's right. We have. But we never prioritized it because it requires coordination between product, engineering, and marketing, and cross-functional initiatives die slow deaths in our company.

The growth calculator gets pushback. "It'll take engineering two weeks to build," the data analyst says. "And the assumptions will be wrong for half our customers." He's also right. A calculator is only as good as its model, and our model for projecting customer outcomes is, frankly, a guess. We know our average customer sees 18% improvement in key metrics. But "average" hides a range from 3% to 47%, and the calculator would overpromise for some prospects and undersell for others, depending on their inputs.

We decide to run both. The segmented onboarding as a proper A/B test. The growth calculator as an MVP — a simple spreadsheet-based tool embedded on a landing page, not a full engineering build.

Week Two

I'm deeper into the AI experiment now. I've moved past "generate experiment ideas" and into "analyze our data." I export our last twelve months of marketing performance data — channel-level spend, impressions, clicks, signups, activations, conversions — and ask the AI to find patterns.

It finds three things I didn't know:

First, our LinkedIn ad campaigns perform 2.7x better on Tuesdays and Wednesdays than on other days. Our ad spend is evenly distributed across the week. We're wasting money on low-performing days. This is the kind of insight that a human analyst would find eventually, but it's buried in 15,000 rows of data and nobody had looked.

Second, there's a correlation between blog post length and conversion rate that I wouldn't have predicted. Posts between 1,800 and 2,400 words convert at 2.1%. Posts under 1,000 words convert at 0.4%. Posts over 3,000 words convert at 0.7%. There's a sweet spot, and we've been mostly writing outside it.

Third, and this is the big one: customers who come through our integration partners (we have twelve integrations listed on our website) have 23% higher twelve-month retention than customers from any other channel. But we spend exactly 0% of our marketing budget on the integration channel. Zero. We built the integrations and forgot about them.
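
None of these three findings needed clever modeling; each is one groupby away once someone actually looks. A sketch in pandas, with hypothetical file and column names standing in for our real exports:

```python
import pandas as pd

# Hypothetical file and column names standing in for our real exports.
df = pd.read_csv("channel_performance.csv", parse_dates=["date"])

# 1. LinkedIn efficiency by day of week (signups per dollar of spend).
li = df[df["channel"] == "linkedin_ads"].copy()
li["weekday"] = li["date"].dt.day_name()
by_day = li.groupby("weekday")[["signups", "spend"]].sum()
print((by_day["signups"] / by_day["spend"]).sort_values(ascending=False))

# 2. Blog conversion rate by word-count bucket.
posts = pd.read_csv("blog_posts.csv")  # word_count, visitors, conversions
posts["bucket"] = pd.cut(
    posts["word_count"],
    bins=[0, 1000, 1800, 2400, 3000, float("inf")],
    labels=["<1k", "1k-1.8k", "1.8k-2.4k", "2.4k-3k", "3k+"],
)
by_len = posts.groupby("bucket", observed=True)[["conversions", "visitors"]].sum()
print(by_len["conversions"] / by_len["visitors"])

# 3. Twelve-month retention by acquisition channel.
customers = pd.read_csv("customers.csv")  # channel, retained_12mo (0/1)
print(
    customers.groupby("channel")["retained_12mo"]
             .mean()
             .sort_values(ascending=False)
)
```

The point isn't the pandas. The point is that for a year, nobody ran it.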

Week Three

I start using the AI for copywriting. Not for final drafts — I don't trust it for that — but for first drafts and variations.

I give it our top-performing email subject lines and ask it to generate twenty variations. Of the twenty, I'd use maybe four. The rest are too generic, too clever, or too salesy. But those four are good enough to A/B test, and I would have spent an hour coming up with them myself.

I give it our landing page copy and ask it to rewrite it for three different ICPs: startup founders, enterprise heads of growth, and agency teams. The startup founder version is good — conversational, urgent, focused on speed. The enterprise version is wooden. The agency version is surprisingly strong, with a value prop I hadn't considered: "Stop building custom dashboards for every client."

One of the growth marketers on my team, who's been watching me do this, has a question. "If AI can do 60% of my job, what happens to the other 40%?"

"The other 40% is the part that matters," I tell her. "Strategy, judgment, relationships, the stuff you can't learn from a playbook."

She doesn't look reassured.

"If AI can do 60% of my job, what happens to the other 40%?" she asks. "The other 40% is the part that matters," I tell her. She doesn't look reassured.
Week Four: Results

Here's what happened with the experiments:

The segmented onboarding test ran for three weeks with 1,400 new signups split 50/50. Small-company onboarding (under 20 employees): activation rate 52%, up from 38%. Large-company onboarding (20+ employees): activation rate 41%, up from 29%. Both segments improved significantly. The segmentation works.
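
I did sanity-check "significantly" rather than eyeball it. A sketch of the check with a two-proportion z-test; the per-segment counts below are assumptions (700 signups per arm, skewed toward small companies), only the rates are real:

```python
from statsmodels.stats.proportion import proportions_ztest

def check(label, n_per_arm, p_control, p_variant):
    """Two-proportion z-test on activation rates, variant vs. control."""
    counts = [round(n_per_arm * p_variant), round(n_per_arm * p_control)]
    nobs = [n_per_arm, n_per_arm]
    z, p = proportions_ztest(counts, nobs)
    print(f"{label}: z={z:.2f}, p={p:.4f}")

# Assumed segment sizes: ~60/40 small/large split of 700 signups per arm.
check("small (<20 employees)", 420, 0.38, 0.52)
check("large (20+ employees)", 280, 0.29, 0.41)
```

Under those assumed counts, both lifts clear p < 0.01 with room to spare.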

The growth calculator MVP — a glorified Google Sheet embedded on a landing page — got 340 unique visitors in three weeks. Of those, 87 completed the calculator (input their data and got a projection). Of those, 23 signed up for a trial. That's a 26.4% calculator-to-signup rate. For context, our homepage converts at 3.8%. The calculator page converts at 6.8%. It's our best-performing landing page by a mile.

The LinkedIn ad day-of-week optimization saved us approximately $2,100/month by shifting spend from low-performing to high-performing days. Not life-changing, but free money.

The integration partner marketing campaign — co-branded content and referral incentives with our top three integration partners — generated 89 signups in the first month. Retention data is too early to call, but if the 23% retention advantage holds, these are our most valuable new customers.

What I Actually Learned

The AI didn't replace the growth team. It replaced the least interesting parts of the growth team's work: the data digging, the pattern finding, the first-draft writing. The parts that require human judgment — which experiments to actually run, how to interpret ambiguous results, when to kill a test that isn't working — those are still human. Maybe they'll always be human. Maybe not.

What surprised me most was the integration insight. That was hiding in our data for at least a year. Nobody found it because nobody looked. The AI found it in forty seconds. Not because it's smarter, but because it has no preconceptions about what's important and no organizational politics about which channels "belong" to which team.

The growth marketer who asked about her job — she's fine. She's running the integration partnerships now, a role that didn't exist a month ago. She's good at it because it requires empathy and relationship skills that no AI has. At least not yet.

I'm going to keep running this experiment. The playbook is getting updated. The AI gets the updates too. It's a strange partnership — me and a machine, iterating on a strategy together. I don't know where it goes. But the numbers are better than they were a month ago, and in growth, that's the only thing that matters.