Dispatch: The AI Pricing Experiment

San Francisco · May 23, 2025

The idea comes from the CFO, which should have been my first warning sign. CFOs are not typically the source of growth ideas. But this one reads too many newsletters and has recently discovered the concept of "dynamic pricing," and now he wants to apply it to our SaaS product.

"Airlines do it," he says, leaning against the doorframe of my office. "Hotels do it. Uber does it. Why can't we?"

"Because we're a B2B SaaS company and our customers expect price stability," I say.

"That's what the hotels said in 2008," he says, and walks away.

I spend the next three hours thinking about it. The CFO is wrong about the implementation — you can't change a SaaS customer's price based on time of day — but he might be right about the principle. What if we used AI to optimize not when we charge, but what we charge? What if the price a customer sees on the pricing page was personalized based on their profile, their behavior, their likelihood to convert at different price points?

The Hypothesis

Here's what I know from our data. Our pricing page gets 8,400 unique visitors per month. Of those, 3.8% start a trial (319 trials). Of the trial starters, 11% convert to paid (35 customers). The average starting plan is $149/month.
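The funnel numbers above compose by simple multiplication. A quick sketch of the arithmetic, using only the figures quoted:

```python
# Pricing-page funnel, using the figures quoted above.
visitors = 8_400      # unique pricing-page visitors per month
trial_rate = 0.038    # visitor -> trial start
paid_rate = 0.11      # trial -> paid conversion
avg_plan = 149        # average starting plan, $/month

trials = int(visitors * trial_rate)    # 319 trials
customers = int(trials * paid_rate)    # 35 paying customers
new_mrr = customers * avg_plan         # $5,215 in new MRR from this cohort

print(trials, customers, new_mrr)
```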

But "average" masks significant variation. Startup founders with fewer than 10 employees almost always choose the $49 Starter plan. Mid-market companies with 50-200 employees split between the $149 and $399 plans. Enterprise companies rarely buy from the pricing page at all — they go through sales.

The hypothesis: if we show different pricing page variants to different segments — not different prices, but different emphasis, different default selections, different packaging descriptions — we can increase conversion rate and average deal size simultaneously.

I want to be precise about what this is and what it isn't. This is not changing the actual price based on who's looking. The prices stay the same. What changes is the presentation — which plan is highlighted as "most popular," which features are emphasized, which social proof is displayed. It's the AI equivalent of a good salesperson reading the room.

The Build

We build a segmentation model. It uses five signals available at the time a visitor hits the pricing page:

One: company size, inferred from their email domain (we use a firmographic enrichment API that costs $0.03 per lookup).

Two: the referral source — did they come from a blog post about enterprise analytics or a tweet about startup tools?

Three: their on-site behavior — how many pages they visited before the pricing page, which features they looked at.

Four: geographic location (companies in San Francisco have different willingness-to-pay than companies in Omaha, a fact that's uncomfortable but real).

Five: time on site — visitors who've spent more than eight minutes are in research mode and respond to detailed comparisons; visitors under three minutes want simplicity.

Based on these signals, the AI assigns each visitor to one of four segments:

Segment A: "Early-stage explorer." Small company, came from top-of-funnel content, low time on site. Pricing page emphasizes: the Starter plan, free trial, ease of setup, "get started in 5 minutes." The $149 and $399 plans are shown but de-emphasized.

Segment B: "Growing team evaluator." Mid-market company, came from comparison or feature pages, moderate time on site. Pricing page emphasizes: the Professional plan (highlighted as "Most Popular"), team features, integration capabilities, case study from a similar-sized company.

Segment C: "Enterprise researcher." Large company, came from enterprise-specific content or G2 reviews, high time on site. Pricing page emphasizes: the Enterprise plan, security features, compliance certifications, custom implementation, "Talk to Sales" CTA prominently placed.

Segment D: "Unknown/default." Insufficient signal to classify. Shows the standard pricing page — the control.

San Francisco · Week Three

We run the experiment for four weeks. Total pricing page visitors during the test: 33,800 (higher than monthly average because we launched a content campaign simultaneously). Segment distribution: A: 41%, B: 28%, C: 11%, D: 20%.

Results by segment:

Segment A (early-stage): Trial start rate: 5.1% (vs. 3.8% control). But average starting plan value: $52/month (vs. $89 control). We increased conversion but decreased ARPU, and the arithmetic nets out negative: roughly 22% less revenue per visitor. By highlighting the Starter plan, we made it too easy to choose the cheap option.

Segment B (growing team): Trial start rate: 6.3% (vs. 3.8% control). Average starting plan value: $167/month (vs. $149 control). This is the winner. By showing the Professional plan as "Most Popular" with relevant social proof, we both increased conversion and nudged people toward a higher-value plan. Combined revenue impact: +86% per visitor compared to control.

Segment C (enterprise): Trial start rate: 2.1% (vs. 3.8% control). But "Talk to Sales" click rate: 14.2% (vs. 4.7% control). We shifted enterprise visitors from self-serve to sales-assisted, which is where they should be. Three of the resulting conversations turned into proposals with an average deal size of $18,000 ARR.

Segment D (default): Baseline. 3.8% trial start rate. No change by definition.

The Debate

The results are good. Overall pricing page conversion increases from 3.8% to 4.9% — a 29% improvement. Estimated incremental MRR from the first month: $11,400. Annualized, that's $136,800 in additional revenue from a project that cost us roughly $8,000 in engineering time and $1,200 in API costs.
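The headline conversion number is just the segment rates weighted by the segment distribution from the test, and the annual figure is the first month times twelve. A sanity check on both (the 4.9% and 29% in the text are the rounded versions of this weighted average):

```python
# Weighted overall conversion across segments, plus the annualized figure.
shares = {"A": 0.41, "B": 0.28, "C": 0.11, "D": 0.20}  # test distribution
rates  = {"A": 0.051, "B": 0.063, "C": 0.021, "D": 0.038}

overall = sum(shares[s] * rates[s] for s in shares)  # ~0.0485, reported as 4.9%
lift = overall / 0.038 - 1                           # ~+28%, reported as 29% after rounding
annualized = 11_400 * 12                             # $136,800

print(f"{overall:.2%} conversion, {lift:+.0%} lift, ${annualized:,}/yr")
```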

But the CEO has concerns. "Are we being honest with customers?" she asks during the review meeting.

"The prices are the same for everyone," I say. "We're just showing different emphasis."

"But we're using data about them — their company size, their location — to influence what they see. Without telling them."

"Every e-commerce site does this. Amazon shows different product recommendations to different people. Netflix personalizes the thumbnails. This is the same principle."

"We're not Amazon," she says. "Our customers trust us because we're transparent. If someone finds out their pricing page looked different from their friend's pricing page, what does that do to trust?"

It's a fair question. I don't have a clean answer. The marketing brain in me says "this is optimization, not deception." The human brain says "it feels a little manipulative, and if it feels that way to me, it might feel that way to customers."

The Compromise

We keep the segmentation but make two changes. First, we add a line at the bottom of the pricing page: "Showing recommendations based on your company profile. View all plans." Click that link, you see the standard pricing page with all plans equally presented. Of the 33,800 visitors, 2.3% clicked the link. Most people either didn't notice it or didn't care.

Second, we kill the geographic pricing signal. Showing different emphasis based on location felt like a line we didn't want to cross, even though the data supported it. Company size, referral source, and behavior are proxies for intent. Location is a proxy for willingness-to-pay, and that's a different thing entirely.

The modified experiment, with the transparency link and without geographic data, performs at 4.6% conversion — slightly lower than the 4.9% with all signals, but still 21% above baseline. We make it permanent.

San Francisco · May 23, 2025

The AI pricing experiment is live. It runs quietly in the background, classifying visitors and adjusting the page in milliseconds. Nobody on the team thinks about it anymore, which is both the best and worst outcome for any growth initiative. Best because it means it's working without intervention. Worst because it means we've stopped questioning it.

The CFO, by the way, took credit for the idea in the last board meeting. "I suggested we explore dynamic pricing," he told the board, "and the growth team ran with it." I didn't correct him. In my experience, letting people take credit for good ideas is one of the cheapest and most effective growth hacks there is.

It's not dynamic pricing. It's personalized presentation. But I'll let the CFO have his moment. The numbers speak for themselves.