The Experiment Log: Q3

San Francisco · December 9, 2025

Every quarter, I publish an internal experiment log. It's a document that lists every growth experiment we ran, what we expected to happen, what actually happened, and what we learned. The leadership team reads it. The board gets a summary. Nobody else cares, which is fine — it's not written for applause. It's written so we don't repeat mistakes.

This is the Q3 2025 version, edited for anonymity. Seventeen experiments. Five wins. Eight losses. Four inconclusive. A batting average of .294, which would make you an All-Star in baseball and is merely the cost of doing business in growth.

Experiment 1: Onboarding Redesign (Win)

Hypothesis: Reducing the onboarding flow from seven steps to four would increase activation rate (defined as completing two key actions in the first 72 hours).

What we did: Stripped the onboarding to essentials — connect data source, create first dashboard, invite a team member, complete guided tutorial. Removed account customization, profile setup, and the "explore features" tour.

Result: Activation rate went from 31% to 44%. Thirteen-point improvement. The effect was largest for SMB customers (22-point improvement) and smallest for enterprise (4-point improvement), which makes sense — enterprise customers have dedicated onboarding calls anyway.

Time to first value — the number of minutes between signup and completing the first meaningful action — dropped from 47 minutes to 18 minutes.

What I learned: We were confusing thoroughness with helpfulness. The old onboarding tried to show everything; the new one focused on one thing. Less really is more, until it isn't.

Experiment 2: Annual Pricing Discount (Win)

Hypothesis: Offering a 20% discount for annual billing would increase the percentage of customers on annual plans from 23% to 35%.

What we did: Added a toggle on the pricing page showing monthly vs. annual pricing, with the annual discount highlighted. Also added a pop-up during the monthly checkout flow: "Save 20% with annual billing."

Result: Annual plan adoption went from 23% to 41% over the quarter. The pop-up alone accounted for 60% of the conversions. Annual customers churn at an effective 2.1% per month compared to 3.8% per month for monthly customers, so this has massive downstream retention effects.

Revenue impact: The discount costs us approximately $14,000/month in potential revenue, but the churn reduction saves approximately $31,000/month. Net positive by $17,000/month.
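The arithmetic above can be sanity-checked in a few lines. The dollar figures are the ones reported in the log; the annualized-churn conversion is my addition, to show why a seemingly small monthly churn gap matters so much.

```python
# Sanity check on the annual-pricing math (figures from the log above).
discount_cost = 14_000   # $/month given up to the 20% annual discount
churn_savings = 31_000   # $/month of revenue retained via lower churn
net = churn_savings - discount_cost
print(f"Net impact: ${net:,}/month")  # → Net impact: $17,000/month

# Monthly churn compounds: 3.8%/month is far worse than it sounds.
monthly_churn = 0.038
annualized = 1 - (1 - monthly_churn) ** 12
print(f"3.8%/month annualizes to roughly {annualized:.0%}")  # → 37%
```

The compounding step is the part worth internalizing: a 1.7-point monthly churn gap is the difference between keeping most of a cohort after a year and losing over a third of it.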

What I learned: Sometimes the obvious thing works. We spent a week debating whether 20% was too aggressive. We should have tested this a year ago.

Experiment 3: Referral Program (Loss)

Hypothesis: A "give $50, get $50" referral program would generate 200+ referral signups per month.

What we did: Built a referral system into the product. Each customer got a unique referral link. For every new paying customer they referred, both the referrer and referee received a $50 account credit.

Result: 23 referrals in three months. Total. Not per month — total. Of those, 17 converted to paying customers. Cost: $1,700 in credits. CAC per referred customer: $100, which is actually our cheapest channel, but the volume is negligible.

What I learned: B2B referral programs don't work like B2C ones. People don't recommend business software to their friends for $50. They recommend it because it solved a problem, and when they do, they don't need an incentive. The referral program didn't create new behavior; it just gave a small reward for behavior that was already happening.

Experiment 4: LinkedIn Content (Win)

Hypothesis: Publishing three LinkedIn posts per week from the CEO's account would generate 50+ inbound leads per month.

What we did: I ghostwrote three posts per week for the CEO. Topics: industry insights, product philosophy, behind-the-scenes company stories. No hard sells. No product screenshots. No "I'm humbled to announce" posts. Just genuine takes on the market.

Result: After eight weeks of consistent posting, the CEO's LinkedIn following grew from 2,100 to 8,400. Inbound leads attributed to LinkedIn (via UTM tracking and "how did you hear about us" surveys): 78 per month by the end of Q3. Close rate on those leads: 12%, versus 7% for our average inbound lead.

What I learned: Founder-led content works, but only if it's authentic. We tried having a junior marketer write the posts first. They were technically correct and completely lifeless. When the CEO started providing raw voice memos that I edited into posts, the engagement tripled.

Experiment 5: Exit-Intent Popup (Loss)

Hypothesis: An exit-intent popup offering a 10% discount would recover 15% of abandoning visitors on the pricing page.

What we did: Deployed an exit-intent popup that triggered when cursor movement indicated the user was about to leave the pricing page. The popup offered a 10% discount code valid for 48 hours.

Result: Popup triggered 3,400 times. Discount code used: 41 times. Conversion rate: 1.2%. Of those 41, 28 would have likely converted anyway based on their prior engagement score. Net incremental conversions: approximately 13. Revenue impact: approximately $800/month in new MRR, minus $400/month in discount costs for the 28 who didn't need the discount.

Net impact: $400/month. Barely worth measuring.
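The incrementality accounting above is worth spelling out, because it's the step most popup experiments skip: subtracting the converts who would have paid full price anyway. Figures are from the log; the engagement-score adjustment is as reported.

```python
# Incrementality accounting for the exit-intent popup (figures from the log).
triggered = 3_400
codes_used = 41
would_convert_anyway = 28          # per prior engagement scores
incremental = codes_used - would_convert_anyway
new_mrr = 800                      # $/month from the incremental converts
discount_leakage = 400             # $/month discounting the 28 non-incremental
net = new_mrr - discount_leakage
print(f"Raw conversion rate: {codes_used / triggered:.1%}")  # → 1.2%
print(f"Net incremental conversions: {incremental}")         # → 13
print(f"Net impact: ${net}/month")                           # → $400/month
```

Without the `would_convert_anyway` adjustment, the popup looks roughly twice as effective as it actually is.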

What I learned: Discounts attract discount-seekers, not customers. If someone is leaving your pricing page, the issue is value perception, not price. A 10% discount doesn't fix a value problem.

Experiment 6: In-App Upsell Prompts (Win)

Hypothesis: Showing contextual upgrade prompts when users hit feature limits would increase plan upgrades by 25%.

What we did: Instead of showing a generic "upgrade your plan" message when users hit limits, we showed specific messages tied to the feature they were trying to use. "You've used 3 of 3 dashboards. Teams on the Professional plan create an average of 8 dashboards and track 2.4x more KPIs."

Result: Plan upgrades increased 37%. Average time from hitting a limit to upgrading: 4.2 days, down from 11.7 days. The most effective prompt was the one for API integrations — 14% of users who saw it upgraded within 48 hours.

What I learned: Specificity sells. "Upgrade for more features" is noise. "Teams like yours use X to achieve Y" is a story. People buy stories.

Experiment 7: Cold Outbound Email Campaign (Loss)

Hypothesis: A targeted cold email campaign to 5,000 ICP-matched companies would generate 100 demo requests.

What we did: Purchased a list. Wrote a four-email sequence. Personalized the first line based on company details. Used a dedicated sending domain.

Result: 5,000 emails sent. Open rate: 34% (decent). Reply rate: 1.8% (low). Positive replies: 22 (0.44%). Demo requests: 9. Closed deals: 1.

CAC for that one deal: $4,200 (list cost + tools + time). Our target CAC: $312. Not even close.
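The funnel math above, laid end to end. All figures are from the log; the point is how quickly a 5,000-email top of funnel collapses to a single deal.

```python
# Cold-outbound funnel (figures from the log above).
sent = 5_000
opens = round(sent * 0.34)      # 1,700 opens
replies = round(sent * 0.018)   # 90 replies
positive_replies = 22
demos = 9
deals = 1
total_cost = 4_200              # list + tools + time, as reported
cac = total_cost / deals
target_cac = 312
print(f"Funnel: {sent} → {opens} → {replies} → {positive_replies} → {demos} → {deals}")
print(f"CAC: ${cac:.0f} vs target ${target_cac} ({cac / target_cac:.1f}x over)")
```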

What I learned: Cold outbound in 2025 is brutal. Everyone's inbox is flooded. Personalization at scale is an oxymoron. We might try again with a warmer approach — conference attendee lists, LinkedIn engagement first — but the spray-and-pray era is over.

Experiments 8-17: The Quick Hits

8. Homepage headline test (Inconclusive). Tested "The platform for growing teams" vs. "Growth analytics for SaaS." Neither moved signup rate outside the margin of error after 10,000 visitors. Headlines matter less than I thought.
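"Outside the margin of error" can be made concrete with a two-proportion z-test, which is a standard way to call a headline test. The 50/50 split of the 10,000 visitors and the signup rates below are hypothetical; the log only reports the total traffic.

```python
import math

def two_prop_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-proportion z-test: is the gap in signup rates significant?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)                 # pooled rate
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))   # standard error
    return (p_a - p_b) / se

# Hypothetical 50/50 split of the 10,000 visitors; rates are illustrative.
z = two_prop_z(conv_a=210, n_a=5_000, conv_b=225, n_b=5_000)
print(f"z = {z:.2f}")  # |z| < 1.96 → not significant at the 95% level
```

With rates this close at this sample size, |z| stays well under 1.96, which is exactly the "inside the margin of error" result the test produced.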

9. Webinar series (Win). Monthly webinars on growth topics. Average attendance: 147. Lead generation: 34 MQLs per webinar. But only 40% were ICP-matched. Quality needs work.

10. Product Hunt relaunch (Loss). Finished #4 on launch day. 612 signups. 30-day retention: 11%. Product Hunt traffic is curious, not committed.

11. Case study landing pages (Inconclusive). Built three detailed customer case studies. Pageviews: respectable. Attribution to signups: impossible to isolate. The classic content marketing measurement problem.

12. Slack community (Loss). Launched a customer Slack community. 340 members joined. Active after 60 days: 23. The world doesn't need another Slack community.

13. Partner integration marketplace (Inconclusive). Built a page showcasing 12 integrations. Early signs of SEO traffic but too soon to measure conversion impact.

14. Email nurture sequence rewrite (Loss). Rewrote the entire 8-email onboarding sequence. Open rates improved 11%. Activation rate: unchanged. People don't activate because of emails. They activate because the product solves their problem in the first session or it doesn't.

15. Pricing page social proof (Inconclusive). Added customer logos and "3,200+ companies" badge. No measurable impact on conversion rate.

16. Free migration service (Loss). Offered free data migration from competitors. 7 requests in 6 weeks. Too small to matter.

17. Usage-based email alerts (Loss). Sent emails when customer usage dropped below their 30-day average. Unsubscribe rate: 8%. Turns out people don't want to be reminded they're not using something.

The Summary

Five wins, eight losses, four inconclusive. That's the quarter. The wins — onboarding, annual pricing, LinkedIn content, in-app upsells, and webinars — generated an estimated $67,000 in incremental MRR. The losses cost us about $42,000 in direct spend plus the opportunity cost of the team's time, which I don't even want to calculate.

Net impact: positive, but barely. And that's the thing about growth. You run seventeen experiments and five work. You do the math and it pencils out. But it doesn't feel like winning. It feels like surviving.

Q4 starts tomorrow. I have twelve experiments queued. I expect four of them to work. I have no idea which four.

That's the job.