The AI Churn Prediction That Was Right (And That We Ignored)
San Francisco · July 15, 2025
On June 3rd, our AI churn prediction model flagged 47 accounts as "high risk of cancellation within 30 days." The model had been running for six weeks, and in those six weeks it had flagged a total of 203 accounts. Of those 203, 61 had actually churned. That's a 30% hit rate, which sounds bad until you consider that our baseline churn rate for any given 30-day period is about 4%. The model was identifying accounts that were 7.5x more likely to churn than average.
Of the 47 accounts flagged on June 3rd, one caught my attention. It was one of our largest customers — I'll call them Meridian Corp — paying $2,400/month. They'd been with us for fourteen months. They had 38 seats. They were, by every traditional measure, a healthy account.
The AI disagreed.
Our churn prediction model was Priya's creation. She'd spent four months building it, training it on three years of customer data — 12,000 accounts, 847 features, every click and login and support ticket and billing event we'd ever recorded. The model used a gradient-boosted decision tree (Priya's choice; she'd tested four architectures and this one had the best precision-recall tradeoff).
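For readers who want a concrete picture, here's roughly what that kind of setup looks like. This is not Priya's actual pipeline; the file path, column names, and hyperparameters below are illustrative assumptions. But it shows the shape of the thing: a gradient-boosted classifier trained on per-account behavioral features, judged on precision-recall rather than accuracy, because churn is a rare class and a model that never flags anyone looks deceptively accurate.

```python
# Minimal sketch of a churn model like the one described above.
# Not the real pipeline: the CSV, column names, and parameters are assumptions.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import average_precision_score, precision_recall_curve

# One row per account: behavioral features plus a label for
# "did this account cancel within the next 30 days?"
df = pd.read_csv("account_features.csv")          # hypothetical export
X = df.drop(columns=["account_id", "churned_30d"])
y = df["churned_30d"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = GradientBoostingClassifier(
    n_estimators=300, max_depth=3, learning_rate=0.05
)
model.fit(X_train, y_train)

# Compare candidate architectures on precision-recall, since the base
# churn rate is only ~4% and plain accuracy would hide a useless model.
scores = model.predict_proba(X_test)[:, 1]
print("average precision:", average_precision_score(y_test, scores))
precision, recall, thresholds = precision_recall_curve(y_test, scores)
```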
The top five predictive features, in order of importance:
One, the ratio of weekly active users to total seats. A declining ratio — meaning fewer people in the account were using the product — was the strongest single predictor.
Two, the frequency of "admin settings" page visits. Customers who visited account settings frequently were often preparing to downgrade or cancel. The settings page was, effectively, the exit ramp.
Three, support ticket sentiment. Priya had run sentiment analysis on every support ticket we'd received. Accounts with increasingly negative sentiment scores were more likely to churn, even when the tickets were resolved.
Four, login time-of-day shift. This was the weird one. The model found that customers who shifted their login patterns — say, from morning logins to late-night logins — were at higher risk. Priya theorized that this indicated a change in the person's role or workflow, which often preceded an account review.
Five, days since last feature adoption. Customers who hadn't adopted a new feature in the last 90 days were stagnating, and stagnation precedes churn.
Stagnation precedes churn. Customers who stop growing with your product are already leaving — they just haven't told you yet.
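To make those features less abstract, here's a rough sketch of how two of them, the active-users-to-seats ratio and the days-since-last-feature-adoption count, could be computed from raw event logs. The table layouts and column names are assumptions for illustration, not our actual schema.

```python
# Illustrative feature computation from raw event logs.
# "events.csv" and "seats.csv" are hypothetical exports, not our real tables.
import pandas as pd

events = pd.read_csv("events.csv", parse_dates=["timestamp"])
now = events["timestamp"].max()

# Feature 1: weekly active users divided by total seats, per account.
last_week = events[events["timestamp"] > now - pd.Timedelta(days=7)]
wau = last_week.groupby("account_id")["user_id"].nunique()
seats = pd.read_csv("seats.csv").set_index("account_id")["total_seats"]
active_ratio = (wau / seats).fillna(0.0)

# Feature 5: days since the account last adopted (first used) a new feature.
first_use = events.groupby(["account_id", "feature_name"])["timestamp"].min()
last_adoption = first_use.groupby(level="account_id").max()
days_since_adoption = (now - last_adoption).dt.days
```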
Meridian Corp triggered the model on three of the five factors. Their active user ratio had dropped from 84% (32 of 38 users) to 47% (18 of 38) over the past six weeks. Their admin had visited the account settings page four times in the last week. And they hadn't adopted a new feature in 127 days — the longest drought of any account their size.
I showed the data to the head of customer success. She looked at it and said, "Meridian is fine. I talked to their ops director last month. They love the product."
"The model says otherwise."
"The model doesn't know that we just closed a contract extension with them for another twelve months."
She was technically right. Meridian had signed an extension. But a signed contract doesn't mean a happy customer. It means a customer who made a decision three months ago based on information that may no longer be current. The model was looking at behavior, not contracts. And the behavior said something was wrong.
I suggested we reach out. A wellness check — casual, non-threatening. "Hey, we noticed some changes in your usage patterns. Anything we can help with?" The head of CS said she'd handle it. I moved on to other things.
She didn't handle it.
Twenty-four days later, I get an email from Meridian's ops director. The subject line: "Need to discuss our account." In my experience, that subject line never leads anywhere good.
The ops director — I'll call her Sarah — explains the situation in four sentences. Their company had a round of layoffs in mid-May. The team that used our product most heavily was reduced from 28 people to 14. Their new VP of Operations (hired after the layoffs) is evaluating all software spend and has asked for a cost-benefit analysis of every tool. They need to downgrade or cancel.
I read the email three times. Then I pull up the churn model's flag from June 3rd and look at the timeline. The layoffs happened in mid-May. The active user ratio started dropping in late May. The model flagged the account on June 3rd — about two weeks after the layoffs. It saw the behavior change before we heard about the organizational change.
If we'd reached out on June 3rd — or better, on June 5th or 6th, after I'd flagged it — we would have had three weeks of advance notice. Three weeks to talk to Sarah, understand the situation, offer a plan adjustment, demonstrate value to the new VP. Instead, we're getting the call after the decision is already half-made.
I call Sarah the same day. It's an honest conversation, which I appreciate. She tells me their new VP is "not a tool person" — he wants fewer software subscriptions, not more. Their budget for our category has been cut 60%. They can afford $960/month, down from $2,400.
I offer a custom plan. Fourteen seats instead of 38, reduced reporting features, same core functionality. $1,100/month. It's a 54% revenue reduction but it keeps them on the platform. Sarah says she'll present it to the VP.
A week later, she comes back. The VP wants to try our competitor — the one with the free tier. He wants to run a 30-day evaluation. If the competitor works, they switch. If not, they'll take our custom plan.
"What if we match the free tier for 30 days?" I ask. It's desperate and I know it.
"I'll ask," Sarah says.
She doesn't call back. Two weeks later, Meridian's account goes inactive. The cancellation comes through on July 14th. $2,400/month in MRR, gone. $28,800 annualized.
I call a meeting. The attendees: me, Priya, the head of CS, and the CEO. I put three things on the whiteboard.
One: The model worked. It flagged the risk 24 days before we got the email. We had a window, and we missed it.
Two: The process failed. Flagging risk isn't enough. You need a system that turns a flag into an action — specific, time-bound, with clear ownership. We had the signal. We didn't have the workflow.
Three: This will happen again. The model will flag accounts. If we don't act on the flags, the model is a very expensive way to feel bad after the fact.
The head of CS is defensive, which I understand. Her team is stretched thin — three CS managers covering 700 accounts. She can't investigate every flag. "The model flags 40-50 accounts a month," she says. "We can't do proactive outreach on all of them."
"We don't need to," Priya says. "We can tier it. Top 10 by revenue get a phone call. Next 20 get a personalized email. The rest get an automated check-in."
Flagging risk isn't enough. You need a system that turns a flag into an action — specific, time-bound, with clear ownership.
We build the system. It takes two weeks. Priya adds a revenue-weighted risk score to the model so high-value accounts surface first. The CS team gets a weekly report: "These are your top 10 at-risk accounts. Here's why. Here's a suggested action." The automated tier gets an in-app health check survey that triggers when usage drops below the account's 30-day average.
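If you're curious what the tiering logic amounts to, it isn't much code. The sketch below is illustrative rather than the exact rules we shipped; the weighting scheme, file name, and tier cutoffs are assumptions. The idea is simple: multiply the model's churn probability by the account's MRR, sort, and assign an outreach action by rank.

```python
# Illustrative revenue-weighted tiering for weekly model flags.
# "weekly_flags.csv" is a hypothetical export with account_id,
# churn_probability, and mrr columns.
import pandas as pd

flagged = pd.read_csv("weekly_flags.csv")

# Weight risk by revenue so a $2,400/month account at 60% risk
# outranks a $99/month account at 90% risk.
flagged["risk_dollars"] = flagged["churn_probability"] * flagged["mrr"]
flagged = flagged.sort_values("risk_dollars", ascending=False).reset_index(drop=True)

def tier(rank: int) -> str:
    # Top 10 get a phone call, next 20 a personalized email,
    # everyone else an automated check-in.
    if rank < 10:
        return "call"
    if rank < 30:
        return "email"
    return "automated_checkin"

flagged["action"] = [tier(i) for i in range(len(flagged))]
print(flagged[["account_id", "mrr", "churn_probability", "action"]].head(15))
```

The design choice that mattered most was sorting by dollars at risk rather than raw probability: it keeps the largest accounts from getting buried under a pile of small ones that are slightly more likely to leave.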
The system has been running for three weeks. In that time, the CS team has proactively reached out to 18 high-risk accounts. Of those, four were experiencing genuine issues that would have led to churn — budget cuts, team changes, competitive evaluation. In three of those four cases, the early outreach let us intervene before the customer started shopping alternatives. The fourth was too far gone.
Three saves. At an average MRR of $1,600 per account, that's $4,800/month in retained revenue, or $57,600 annualized. Against the cost of Priya's time and the engineering hours to build the workflow, the ROI is clear.
But I keep thinking about Meridian. $28,800 in annual revenue that we didn't have to lose. The model told us. We didn't listen. Not because the technology failed, but because the organization wasn't built to act on what the technology was saying.
That's the real lesson of AI in growth. The models are getting very good at seeing what humans miss. The hard part isn't the model. It's building the human system around it — the processes, the incentives, the culture — that turns prediction into action.
We're getting better at it. Slowly. The way you get better at anything: by making mistakes you can't afford to repeat.
