Growth Unhinged is proudly supported by beehiiv

Growth Unhinged is, quite literally, powered by beehiiv. I made the switch at the beginning of the year because beehiiv is serious about powering the internet’s best newsletters. Their all-in-one platform brings together newsletters, websites, and every tool you need to grow and earn.

And they keep making it better. beehiiv already had the most powerful Ad Network in the market. Now they’re announcing On-Demand Ads, the next evolution of the beehiiv Ad Network. No more waiting for new opportunities. Just premium sponsorships, available when you want them. See beehiiv for yourself and get 30% off for three months with code KYLE30.

👋 Hi, it’s Kyle Poyar and welcome to Growth Unhinged, my weekly newsletter exploring the hidden playbooks behind the fastest-growing startups. Today’s newsletter is a bonus edition.

A quick update: Unhinged Perks just got even better. I’m adding five new offers exclusively for premium subscribers, bringing the total to 18 perks and $40,000+ in potential savings* from the likes of AirOps, beehiiv, Metronome, Retool, Rox, and much more. The new offers include:

  • Granola: My personal favorite AI notepad. Get 90 days free of the Business plan with up to 5 users.

  • Make: AI automation you can visually build and orchestrate in real time. Get 90 days free of the Pro plan with up to 40k credits.

  • Turnstile: Complete quote-to-cash for sales-led startups. Get 90 days free (up to $250k of billing volume).

  • Amplemarket: Unifies prospecting, engagement & orchestration into one AI platform — with agentic & assisted workflows that live wherever you go, from your CRM to ChatGPT to Claude. Get a 14-day free trial and 20% off an annual plan.

  • Freckle: Freckle sits on top of your CRM, auto-enriching every record coming in from any source. Enrich your first 5,000 signups for free.

*Terms and conditions apply and we may run out of codes at any time. More information here.

Rethinking SaaS metrics for AI

We’ve grown accustomed to the traditional set of SaaS metrics as just part of how to operate a B2B business. Here’s the thing: the traditional SaaS metrics playbook can be extremely misleading when it comes to building an AI-native business.

ARR, for example, was the lifeblood of SaaS. It’s becoming untrustworthy, if not meaningless, for AI-native companies.

It isn’t just ARR that no longer feels as powerful as it once did. Let’s look at a few other examples:

  • Classic product usage metrics were built for seat-based models where we expected people to be the main users. If AI is taking on digital labor, or if our products are used inside other products (ex: via Model Context Protocol or MCP), people shouldn’t need to be logging in every day. Do we still need to obsess over DAUs, MAUs, or DAU:MAU?

  • LTV:CAC was always fun-with-numbers, and now we need to make it official. I don’t trust the LTV of any AI product right now, especially given AI experimentation budgets plus the epic shipping velocity of Anthropic and OpenAI. How much should we spend on customer acquisition when LTV is so unpredictable?

  • We’re seeing an explosion of mixed monetization models, lower margin profiles, and re-occurring revenue. Margin variability is the new normal. What happens if companies split revenue streams into platform (high margin) and tokens (low margin)? How should we treat Service-as-a-Software revenue versus pure software?

I’ve been interviewing more than a dozen top AI founders for an upcoming report about how AI-native companies grow (stay tuned 👀). What struck me was how the metrics these founders obsess over have started to shift, although there was almost no consistency in metrics from company to company.

The reality is that the old SaaS metrics still have a place, especially for companies selling subscriptions with high retention and healthy gross margins. But we need to start looking for next-era metrics that better define success.

Today’s post unpacks seven alternative metrics that feel more relevant and urgent for this next era of AI-native companies. These include both operational KPIs (which help you identify where to focus) and investor KPIs (which help you communicate the health of your overall business model).

1. Token consumption

SaaS analog: Monthly active users

Perhaps the most unifying thing about AI companies is that they consume an immense number of tokens. Tokens are a metric that’s hard to hide from. They show whether people are deeply using your AI product or whether being AI-first is mere marketing jargon.

I was surprised when people began posting publicly about their Anthropic bills or their OpenAI “Token of Appreciation” plaques. There’s something a bit off about being proud of burning so much money on AI tokens.

Some investors, like Jamin Ball from Altimeter Capital, are pointing out that “the fastest-growing AI companies today are the ones sitting directly in the token path." Revenue is tightly linked with token consumption, and these companies are positioned to benefit as token use continues to explode (provided they have a product that makes these tokens more valuable).

I recently wrote about the emerging shift to platform + tokens pricing at Clay and PostHog. In a sign of the growing popularity of token monetization, Stripe is now in private preview with a new LLM token billing feature. It can automatically pass through raw LLM token costs, synced to the latest model prices, with a consistent markup.
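To make the pass-through-with-markup mechanic concrete, here’s a minimal sketch. This is purely illustrative (it is not Stripe’s API), and the model names and per-token prices are hypothetical placeholders:

```python
# Illustrative only: generic token pass-through billing with a fixed markup.
# Model names and USD prices per 1M tokens are hypothetical placeholders.
MODEL_PRICE_PER_M_TOKENS = {"model-a": 3.00, "model-b": 15.00}

def billed_amount(model: str, tokens: int, markup: float = 0.20) -> float:
    """Pass the raw LLM token cost through to the customer with a consistent markup."""
    raw_cost = MODEL_PRICE_PER_M_TOKENS[model] * tokens / 1_000_000
    return round(raw_cost * (1 + markup), 4)

# 2.5M tokens on a $15/M model with a 20% markup:
print(billed_amount("model-b", 2_500_000))  # 45.0
```

The customer always pays infrastructure cost plus a predictable spread, so revenue scales directly with token consumption.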

2. Gross profit per million tokens

SaaS analog: ARR

Token consumption could just be passing through your infrastructure costs with very little additional value. Or it could be delivering real value above and beyond the LLM costs. The counterpart to token consumption becomes the gross profit per million tokens (inclusive of all revenue streams).

Tokens are essentially fuel consumption. But gross profit shows how many miles you’ve driven with that fuel. As Tomasz Tunguz recently wrote, the gross profit per million tokens seems to correlate pretty closely with the multiples that AI companies are trading for.
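The metric itself is simple arithmetic. A minimal sketch, with made-up figures for illustration:

```python
def gross_profit_per_m_tokens(revenue: float, cogs: float, tokens_consumed: int) -> float:
    """Gross profit (all revenue streams minus cost of revenue) per 1M tokens consumed."""
    return (revenue - cogs) / (tokens_consumed / 1_000_000)

# Hypothetical quarter: $500k revenue, $200k cost of revenue, 400M tokens consumed.
print(gross_profit_per_m_tokens(500_000, 200_000, 400_000_000))  # 750.0 ($/1M tokens)
```

Two companies can consume the same tokens and post wildly different numbers here, which is exactly the point: it separates value capture from raw fuel burn.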

This can start to become an interesting metric for sales teams, too. Historically reps have been compensated based on the ARR they’re bringing in. That makes sense in an 80% gross margin world. It doesn’t work as well if margins are 20-60% and could swing anywhere from -50% to +80% across the customer base.

If sales reps are allowed to discount your product, or if they can choose which monetization model is right for a given customer, you might wind up paying fat commissions for unprofitable deals. Adjusting compensation to align with gross margin dollars becomes a better representation of the rep’s actual contribution. A simplified version would be an earnings multiplier connected to gross profit, e.g. 1.5x for 80%+ margin, 1.25x for 65-80%, 1x for 50-60%, 0.75x for 35-50%, and 0.5x for 20-35%.
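The multiplier scheme above can be sketched in a few lines. The tier boundaries and 10% base commission rate are assumptions for illustration (the post’s tiers leave small gaps, e.g. 60-65%, which this sketch fills by using tier floors):

```python
def margin_multiplier(gross_margin: float) -> float:
    """Earnings multiplier by gross-margin tier (boundaries assumed where the tiers leave gaps)."""
    tiers = [(0.80, 1.5), (0.65, 1.25), (0.50, 1.0), (0.35, 0.75), (0.20, 0.5)]
    for floor, mult in tiers:
        if gross_margin >= floor:
            return mult
    return 0.0  # assumption: no multiplier credit below 20% margin

def commission(deal_arr: float, gross_margin: float, base_rate: float = 0.10) -> float:
    """Commission on a deal, scaled by how profitable the deal actually is."""
    return deal_arr * base_rate * margin_multiplier(gross_margin)

print(commission(100_000, 0.85))  # 85% margin deal: 100k * 10% * 1.5 = 15000.0
print(commission(100_000, 0.30))  # 30% margin deal: 100k * 10% * 0.5 = 5000.0
```

Same ARR, three times the payout difference: the rep who protects margin earns it back in commission.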

3. AI quality

SaaS analog: Product activation

Everyone can produce AI outputs for customers. That doesn’t mean these outcomes are any good, or that they’re improving over time. There might be hallucinations, latency issues, generic slop, or inconsistencies.

I’ve talked to a number of companies that don’t have a good way of measuring the effectiveness of their AI. They might use certain benchmarks or evals prior to shipping improvements, yet those evals in testing don’t necessarily line up with the real world experience of users. Nor do these evals inherently translate into downstream benefits like higher conversion, more usage, or better retention.

I’m starting to see AI quality metrics become the AI-native analog of what the SaaS world considered product activation. My product activation framework looked like this:

Each company will need to create their own definition of activation that fits with their specific product. In my experience it should be something that (1) is easily achievable by somewhat committed users, (2) can be completed within the first week from sign-up, (3) is predictive of future conversion/retention, and (4) is correlated to business performance. 20-40% activation rates are common.

Applying a similar lens to AI quality, potentially useful metrics could include explicit user feedback (ex: thumbs up) or implicit feedback based on actions taken by the user (ex: sharing AI outputs, downloading or copying the outputs, using the AI output as-is versus editing it). I recommend starting with a long-list of the plausible AI quality metrics that you can capture, then scoring these metrics based on the following factors:

  • Aligns with the product promise: How well does this metric represent what users are trying to accomplish?

  • Covers a large share of users: What percentage of users perform this action today and how frequently do they do it?

  • Correlates with business performance: When we’ve improved this metric previously, did downstream KPIs (conversion, retention, expansion) improve, too?
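One way to run the long-list exercise above is to score each candidate metric on the three factors and rank by total. The metric names and scores below are invented for illustration; in practice the scores would come from your own data and past experiments:

```python
# Hypothetical scoring of candidate AI quality metrics on the three factors (1-5 each).
candidates = {
    "thumbs-up rate":       {"promise": 3, "coverage": 2, "business_corr": 3},
    "output used as-is":    {"promise": 5, "coverage": 4, "business_corr": 4},
    "output copied/shared": {"promise": 4, "coverage": 3, "business_corr": 4},
}

ranked = sorted(candidates.items(), key=lambda kv: sum(kv[1].values()), reverse=True)
for name, scores in ranked:
    print(f"{name}: total {sum(scores.values())}")
```

Implicit signals like “output used as-is” often win this exercise because they cover more users than explicit feedback and track the product promise more directly.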

This metric isn’t only for product teams. It can become part of your marketing and competitive differentiation, helping educate prospects about why your AI is better than what else is on the market.

Subscribe to Kyle Poyar's Growth Unhinged to read the rest.

Become a paying subscriber of Growth Unhinged to get access to this post and other subscriber-only content.

A paid subscription gets you:

  • Full archive
  • Subscriber-only bonus posts
  • Full Growth Unhinged resources library
