ARR has been bastardized by AI startups to the point where it often becomes a vanity metric. Lovable is a perfect case here:
* They hid the attrition issues
* They made far-fetched assumptions about both the "annual" and the "recurring" parts of ARR (extrapolating monthly subscriptions)
* They ignored the cost side, building the impression that revenues will translate into future margins
In reality, their stellar run is likely hiding some serious challenges with making the business sustainable: https://pawelbrodzinski.substack.com/p/lovables-arr-is-vanity-metric-20
That's precisely what vanity metrics are all about. They look good, but tell a convenient lie about the actual state of the realm.
That was excellent, I'm saving this post.
So glad to hear it!
Underlying a lot of this is the sheer number of AI solutions being created every day. In GTM, for example, you can buy any number of sales and marketing tools, each effective on its own, but together they can quickly flood a team. People get lost and confused and stop using them, or need things to be better integrated. I think there is going to be a big shake-out as many GTM solutions drift away, and we get left with a smaller number of core tools. I very much hope this is not going to drive out the creative newcomers in favor of the big elephants with their lumbering AI.
Fantastic data breakdown here. The $250/month threshold as a dividing line between "serious" and "experimental" spend is really smart because it separates paying tourists from actually committed buyers. But the friction remains that many AI products push conversion before proving workflow value, which basically guarantees that 60% churn rate long-term.
Thanks for sharing this post. These benchmarks, even just directionally, are super helpful for 2026 planning. I am also hearing from AI-native startup founders that, with the magic power of AI, there is less of an acquisition problem: tons of top-of-funnel users show up to explore. The hard part is translating that into durable value and real retention.
Curious if you are seeing teams build a value system to systematically map and score where users actually get value, so they can double down on the few value levers that really move retention instead of just shipping more AI features. And if so, what does the minimum viable version of that system look like for startups in 2026?
This analysis is the smoking gun for a concept Alex Raymond and I have been discussing for years: "Recurring Revenue" is a fallacy.
As your data shows, if 60% of an install base walks out the door, that revenue was never "recurring"; it was just a temporary rental by an "AI Tourist." The low GRR in AI isn't just a product issue; it's a symptom of what we call the maintenance mindset.
We are seeing that the only cure for the Gross Retention Apocalypse, as Cassie called it, is shifting to a posture of active retention, where the post-sale team stops maintaining accounts and starts re-earning the revenue through measurable value delivery (not just satisfaction).
Alex Raymond and I are facilitating an NRR Growth Mastermind in January focused specifically on this shift, moving CS to a true Growth Department. This data is going to be the perfect backdrop for that conversation. Fantastic breakdown, Kyle.
Excellent post, Kyle. So much to unpack.
1. Did you use each company's 2024 year-end ARR as the starting cohort size and then look at retention each month from Jan to Sep 2025?
2. "AI-native products that sell for >$250 per month see 70% GRR and 85% NRR. This is essentially the same as B2B SaaS." --> How are GRR and NRR defined here? Over what period of time?
3. Very surprised to see that ~60% of companies with NRR of 80-99% have anywhere from negative to only 24% y/y growth.
On the data side, we used the last 3-month average and then annualized that value. This smoothed out some of the month-on-month differences while keeping the numbers close to real time.
On the ARR side, the cohorts were based on the most recent data. For a look at data based on historical/prior ARR, you can find that in this report: https://www.growthunhinged.com/p/the-compounding-startup
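The annualization step described above can be sketched with simple arithmetic. The figures below are hypothetical; the actual revenue data comes from ChartMogul, and only the 3-month smoothing window is stated in the reply:

```python
def annualized_run_rate(monthly_revenues: list[float]) -> float:
    """Annualize the trailing 3-month average of monthly revenue.

    Averaging the last three months smooths month-on-month noise
    while keeping the figure close to real time; multiplying by 12
    converts the smoothed monthly value to an annual run rate.
    """
    last_three = monthly_revenues[-3:]
    return sum(last_three) / len(last_three) * 12

# Hypothetical monthly revenue figures ($k): the trailing average is
# (100 + 110 + 120) / 3 = 110, which annualizes to 1,320 ($1.32M ARR).
print(annualized_run_rate([90, 95, 100, 110, 120]))  # 1320.0
```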
Thanks, Kyle. I think an example with simple math to spell it out will encourage more folks to share the article, use it as benchmarks in a board deck, etc. Because inevitably the question "what period of time are we looking at" or "how is GRR defined" will be asked.
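To illustrate with made-up numbers, here is a minimal sketch using the standard GRR/NRR definitions over a measurement period (the post itself doesn't pin these definitions down, which is exactly the question above; the dollar amounts are hypothetical and chosen to land on the 70%/85% benchmarks quoted earlier):

```python
def grr(start_arr: float, churn: float, contraction: float) -> float:
    """Gross revenue retention: starting ARR minus revenue lost to churn
    and contraction, divided by starting ARR. Expansion is deliberately
    excluded, so GRR can never exceed 100%."""
    return (start_arr - churn - contraction) / start_arr

def nrr(start_arr: float, churn: float, contraction: float,
        expansion: float) -> float:
    """Net revenue retention: same as GRR, but expansion revenue from
    the existing cohort is added back, so NRR can exceed 100%."""
    return (start_arr - churn - contraction + expansion) / start_arr

# Hypothetical cohort: $100k starting ARR, $25k churned, $5k contraction,
# $15k expansion over the measurement period.
print(f"GRR: {grr(100_000, 25_000, 5_000):.0%}")          # GRR: 70%
print(f"NRR: {nrr(100_000, 25_000, 5_000, 15_000):.0%}")  # NRR: 85%
```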
Where did you get the initial numbers for revenue for each company? You wrote:
“Then we compared gross revenue retention (GRR) and net revenue retention (NRR) rates”
Where did this data come from?
The data is all from ChartMogul. It's a SaaS metrics and growth platform that has real-time data across these metrics.
Thanks for the info. Interesting; naturally I'm wondering where they get the data. I assume it's sourced from publicly available sources?