Growth Unhinged is proudly supported by AirOps

The way people get information has changed more in the past year than in the previous twenty. That’s the 🔥 headline from the new 2026 AI Search Playbook from AirOps, and I’m seeing it firsthand.

The report pulls research from 15M+ AI queries to reveal four plays that drive visibility and pipeline from ChatGPT, Gemini, and Perplexity. They found that 85% of brand mentions in AI search come from third-party sources, not your own. Freshness is critical: 70% of AI-cited pages were updated within the past year.

Get the framework, the data, and the 90-day action plan here.

Over the next two months, I’m going to share an in-depth playbook for launching and growing an AI-native business.

This series is the result of more than a dozen 1:1 interviews with founders from breakout AI-native companies. Some have already scaled to $100M ARR and beyond – including Clay, Gamma, HeyGen, Intercom (Fin.ai), and Fireworks AI. Others are fast-growing AI-native startups that are well-positioned to join them, including 7AI, AirOps, bolt.new, Fyxer, GC AI, GrowthX, HappyRobot, and incident.io.

Here’s an overview of the three-part series (subscribe to get these yourself):

  • Part 1: A guide for reaching $1 million ARR (TODAY)

  • Part 2: A guide for scaling to $10 million ARR and beyond

  • Part 3: A guide for building an AI-native org

The conversations woke me up to just how different AI-native businesses are from traditional B2B software – and to what hasn’t changed. Here’s a preview of some of the more interesting takeaways:

  1. PMF isn’t gradual in AI.

  2. AI-native companies reach $1 million ARR twice as fast as B2B SaaS.

  3. Price sensitivity drops dramatically when AI does real work.

  4. Product success hinges on AI performance crossing a reliability threshold.

  5. Explicit user feedback (thumbs up/down) can be surprisingly insightful on its own.

  6. AI companies tend to delay hiring their first AE until about $2-5M ARR.

  7. The winning GTM motions combine bottom-up discovery with top-down sales.

  8. The best GTM teams look more like consulting firms than SaaS sellers.

A huge thank you to: Alexander Berger (COO of bolt.new), Archie Hollingsworth (co-founder of Fyxer), Cecilia Ziniti (co-founder of GC AI), Des Traynor (co-founder of Intercom, Fin.ai), Jon Noronha (co-founder of Gamma), Joshua Xu (co-founder of HeyGen), Lin Qiao (co-founder of Fireworks AI), Lior Div (co-founder of 7AI), Marcel Santilli (Founder of GrowthX), Matt Hammel (co-founder of AirOps), Pablo Palafox (co-founder of HappyRobot), Stephen Whitworth (co-founder of incident.io) and Varun Anand (co-founder of Clay).

Typical times to get to $1M ARR

Lenny Rachitsky found that it took successful SaaS companies roughly two years to feel product-market fit (PMF) and reach their first $1 million in ARR. Some notable exceptions – including Figma, Airtable, and Slack – took four or more years to feel PMF. 

Much of this time was spent gradually iterating on the product with alpha and beta testers. Finding PMF was an artisanal endeavor; it was usually careful, steady, gradual, and incremental. In fact, it was so gradual that many SaaS founders never fully felt they achieved PMF.

The biggest change for AI-native companies is the speed. Everything has compressed. Timelines are twice as fast for top AI-native startups compared to the best SaaS companies. 

The median AI-native startup grew to $1 million ARR just 12 months after it started building. bolt.new exploded to $4M ARR in only four weeks post-launch. GrowthX hit $1M ARR only three months after founding. The market was ready to buy, even if the product was still a work in progress.

“The core principles of building a legendary company haven’t changed. But it’s all way faster now. Feedback loops are faster. People can build software more quickly so there’s more competition. You need to execute faster.”

- Varun Anand, co-founder of Clay

What PMF feels like for AI-native products

One surprise from my interviews was how binary PMF tends to be for AI-native products.

PMF was often so obvious that it was inarguable. Founders either felt extreme market pull or they didn’t. The harder part was keeping up with demand and productizing as much of the experience as possible.

“We first felt PMF on day one. Immediately it was obviously the best product we ever launched. Everything else we built would have a day one pop, then drop off. This just kept accelerating.”

- Alexander Berger, COO of bolt.new

“In 2024 we had a new user channel in Slack. We had to turn it off because there were too many new users. We had high confidence that if the quality was there, people would pay for Fyxer. It was a technical question of whether the quality was there.”

- Archie Hollingsworth, co-founder of Fyxer

“We had $2 million in revenue and hadn’t even announced the company yet. Growth came from workshops where we would teach people how to compete against us with a DIY approach. I was doing all the sales myself, closing $150k contracts with a 60% win rate with a sales cycle of less than 30 days… We had insane PMF because we already had a service.”

- Marcel Santilli, founder of GrowthX

“We initially had a 23-24% resolution rate and we looked at the times when Fin didn’t reach a resolution. We saw a time when Fin went back and forth with a customer seven times. We saw another customer batch together five questions at once; Fin answered all of them correctly. Even when Fin didn’t resolve a ticket, it saved a support rep an hour. We knew the thing was valuable.”

- Des Traynor, co-founder of Intercom (Fin.ai)

A clear PMF tell: customers are ready to buy and they aren’t particularly price sensitive.

GC AI charges more than 20x the cost of a Claude or ChatGPT subscription. Still, people purchase it quickly and without a complex sales motion – they’re hooked within the first five prompts, once they experience the UI and see the better legal answers.

“At one point 23% of people who took our classes [GC AI teaches classes on its product] paid for the product. There's never been a webinar in the world where 23% of people took the next step, let alone bought. Our customers (lawyers) are smart and can tell right away when the product is worth paying more for.”

- Cecilia Ziniti, co-founder of GC AI

Gamma A/B tested pricing and found that people were willing to pay double the price of a conventional productivity app. The highest price they tested ($20 per month) performed best.

“Willingness to pay looks different for AI products. It used to be really hard to get someone to pay $10 per month for a productivity app. It’s much easier to get someone to pay $20 for AI.”

- Jon Noronha, co-founder of Gamma

It’s worth mentioning that some companies, including HeyGen, Gamma, and Clay, still iterated for two or more years before they felt PMF. Clay even took six years.

These outliers were usually founded pre-ChatGPT and tried out multiple product concepts. Prior launches quickly fizzled out. Then they struck gold, and things were obviously different.

“We relied on the signal of how much market pull there was from customers. When you have PMF, you just feel it. It's different.”

- Joshua Xu, co-founder of HeyGen

“Our previous launches weren’t self-sustaining in terms of signups. Then we saw growth without any marketing. We started spending the whole weekend responding to support tickets, and a lot of the tickets were from people asking how to pay us.”

- Jon Noronha, co-founder of Gamma

How to tell whether your AI product works

Product-market fit was usually fairly obvious. The harder problem was figuring out whether the AI product worked consistently and whether it was substantially better than alternatives like Claude or ChatGPT. That would be the key to durable revenue and avoiding the ‘thin wrapper’ problem of AI apps.

HeyGen has a dedicated AI success team responsible for internal evals, data for the evals, and evaluating the HeyGen videos that users share externally.

“If the AI quality is there, people will keep using it. This is the most important metric and a leading indicator for us. But AI quality is very hard to measure and it’s subjective.”

- Joshua Xu, co-founder of HeyGen

HappyRobot builds tools for customers to evaluate their own AI agents in a way that’s customized to the agent’s actual work. Customers create behavioral North Stars that guide the correct behavior of the agent in production. Agents aren’t only doing the work; they’re also QA-ing the work that gets done.

Founders measured AI performance in four ways: explicit user signals, implicit user signals, adoption signals, and business impact signals.

Explicit user signals – where a user gives a thumbs up or thumbs down rating on an AI output – worked fairly well as a starting point. Some of this feedback comes through a binary rating; other times it’s qualitative, chat-based feedback.

“We collect explicit user feedback with a thumbs up/down option next to the outputs. The explicit feedback is a great signal on its own, and it correlates strongly with downstream conversion.”

- Jon Noronha, co-founder of Gamma

“Explicit feedback like thumbs up/down is fine, but it’s a bit too coarse. Since our product is built into chat platforms, users can just directly tell us if they think the AI sucks, providing richer feedback than binary ratings.”

- Stephen Whitworth, co-founder of incident.io

Implicit signals monitor what the user does after getting an AI output, which can provide a stronger signal about whether the AI output was actually useful. Some examples: how long users spend modifying an AI output, how much ‘edit intensity’ there is on this output, and whether users copy, share, or export AI-generated content.

“We have a copy button in our app and users can click-to-copy. At one point 20% of AI responses were copied. This shows utility; lawyers are actually using the outputs. We want to have 10x or 20x the utility of generalist tools like Claude or ChatGPT.”

- Cecilia Ziniti, co-founder of GC AI

“We look at what percentage of AI-generated email drafts users actually send. And then internally we have a hypothesis of a threshold. Users should only need to type at their computer when it’s absolutely necessary.”

- Archie Hollingsworth, co-founder of Fyxer
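As an illustration of how these implicit signals might be computed, here’s a minimal Python sketch over a hypothetical event log. The event names (“generated”, “copied”, “sent”) and the log shape are my assumptions for illustration, not any company’s actual schema:

```python
from collections import Counter

def implicit_signal_rates(events):
    """events: list of (output_id, action) tuples, where action is one of
    "generated", "copied", or "sent" (hypothetical event names).
    Returns the share of AI outputs that users copied or sent."""
    counts = Counter(action for _, action in events)
    generated = counts["generated"] or 1  # avoid division by zero
    return {
        "copy_rate": counts["copied"] / generated,
        "send_rate": counts["sent"] / generated,
    }
```

In this shape, GC AI’s “20% of AI responses were copied” would show up as a `copy_rate` of 0.20, and Fyxer’s sent-drafts metric as the `send_rate`.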

Adoption signals aren’t unique to AI-native products. They remain important nonetheless.

Copilot-based products look closely at user-centric metrics like days spent editing per month or the percentage of monthly active users who use the product on a given day (DAU:MAU). Agentic products focus more on consumption metrics (e.g., number of messages sent per day).

“For the legacy product we focused on daily active users (DAUs) as the leading indicator of revenue. Now usage is the leading indicator of revenue. When people build more complicated things, our average revenue per user expands. Today we look at both DAUs and the number of messages sent per DAU.”

- Alexander Berger, COO of bolt.new
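To make the DAU:MAU ratio concrete, here’s a small sketch under an assumed data shape (one set of active user ids per day in the period – my assumption, not a standard API):

```python
def stickiness(daily_active_users):
    """daily_active_users: one set of active user ids per day in the
    period (hypothetical shape). Returns DAU:MAU -- the share of the
    period's active users who show up on an average day."""
    mau = len(set().union(*daily_active_users))
    avg_dau = sum(len(day) for day in daily_active_users) / len(daily_active_users)
    return avg_dau / mau
```

A DAU:MAU of 0.5 would mean the average monthly active user opens the product every other day – the kind of habitual usage copilot products aim for.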

AI products are ultimately meant to complete work for customers. Mature AI-native companies know exactly how much work their products deliver, as well as the business impact of this work. A proxy metric for copilot-based products is a simpler time savings metric: how much time is the product saving users compared to manual work?

“We look at resolution rate, which is our highest level metric and shows the quality of the Fin product. Then we look at CSAT – are the answers great and are we keeping customers happy? We also measure the total automation rate, which is how much of the total support volume we are getting multiplied by what percentage of that volume are we resolving.”

- Des Traynor, co-founder of Intercom (Fin.ai)
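The total automation rate Des describes is simple arithmetic – coverage times resolution – which a quick sketch makes concrete (parameter names and numbers are mine, for illustration):

```python
def total_automation_rate(volume_seen_by_ai, total_volume, resolution_rate):
    """Coverage (share of total support volume the AI handles) multiplied
    by the resolution rate on that volume, per the description above."""
    coverage = volume_seen_by_ai / total_volume
    return coverage * resolution_rate
```

For example, if the AI sees 800 of 1,000 tickets and resolves half of those, total automation is 0.8 × 0.5 = 40% of all support volume.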

“Right now we focus on selling ‘digital capacity’ to our customers. We measure how many AI agents are deployed, how many FTEs we’ve augmented or replaced, and the volume of work the agents are completing.”

- Pablo Palafox, co-founder of HappyRobot

“We’re solving for owning an outcome for you – organic growth via your website – that you deeply care about and we know is important. We measure pages created or modified as an output, and then there’s the perceived and reported quality of that output. Then there’s relationship quality and confidence in the overall strategy. Finally is the performance. Are we driving visibility growth, traffic, and conversion?”

- Marcel Santilli, Founder of GrowthX

Selling an AI-native product

Half the AI-native products started as primarily self-serve. Anyone could try and buy these products without speaking to a salesperson.

The median AI-native company didn’t hire their first account executive (AE) until they were between $2-5 million ARR. They could rely 100% on self-serve revenue and/or founder-led sales. (In my experience, SaaS companies usually hired their first AEs as they approached $1 million ARR.)

HeyGen and Gamma, which have strong self-serve motions, waited until $10 million+ ARR before hiring sales. GrowthX muscled its way to nearly $8 million ARR without self-serve; it was all founder-led sales in the early days.

The decision to start fully self-serve often came down to capacity – founders literally would not have been able to hire sales teams fast enough to match demand.

“SaaS was steady and you had time to build up your go-to-market (GTM) force. Now there’s a big mismatch on the pace. Our market is exploding and we couldn’t launch a sales-led GTM fast enough. Even if we increased our sales team by 5x, we’d still be capacity constrained.”

- Lin Qiao, co-founder of Fireworks AI

But the early GTM motion isn’t necessarily what scales. Only one AI-native company I interviewed is still primarily focused on self-serve today. Three in four companies combine both self-serve and sales-assisted paths.

bolt.new, for example, initially saw explosive self-serve growth, reaching $40M ARR in only five months. Beneath the breakout success, the vast majority of accounts came from personal email signups, and retention and gross margin suffered. As of Q4 2025, the company has mandated a focus on B2B, upmarket deals and a sales-assisted GTM motion.

AI-native companies have a secret weapon when it comes to selling: both end-user excitement and top-down AI mandates. Many are trying to thread the needle by winning both personas, usually starting with bottom-up interest and then flipping to a top-down sale.

“You have to attach it both to the end user level and the leadership level if you want to be ingrained. If you just go top-down entirely, it's really hard to drive adoption. If you just go end user focused, it's hard to get around institutional roadblocks. You need to do both. Just don’t sell to AI committees; you’ll get stuck in imaginary budget land.”

- Matt Hammel, co-founder of AirOps

“AI in legal is a movement and a new way of working. To see the big gains from AI, it has to be individually adopted. You can do some automations or take the lawyer out, but it really depends on the operator, and you need a product the operator loves.”

- Cecilia Ziniti, co-founder of GC AI

“There’s a paradox right now in selling our particular AI tool in the enterprise. It’s a bottom-up discovery by early adopters in the org who build prototypes, then bring these to their boss. The paradox is that to get maximum value out of the tool, we actually need to engage in more of a top-down or traditional enterprise sale… Large organizations won’t settle for vibecoded AI slop prototypes. The real value is hooking into their enterprise design systems and code bases, and early adopters need their organization’s support to enable these advanced capabilities.”

- Alexander Berger, COO of bolt.new

Proceed with caution: what wins over the end user can look quite different from what wins over the executive. As one founder confided, the turkey doesn’t get a vote about what to eat for Thanksgiving.

Many of the founders I spoke with designed their early sales processes to feel more like a consulting or professional services experience. Their sales teams aren’t selling features; they’re selling a business transformation powered by AI.

Buyers need to be confident that the AI will work, especially when it’s doing mission-critical work like investigating security alerts.

“Because we're using AI agents in production security environments, we can't just sell a piece of software and say ‘good luck’... If something happens at 2:00 AM, a CISO isn't going to submit a ticket, and they likely don't have in-house AI expertise in the SOC. So there's an even bigger trust component and that requires an elite team of experts that will pick up the phone. We call them AI Security Engineers and they are assigned to every account. They work directly with the sales team to ensure that the AI agents are customized to the customer's unique environment and workflows, and they are able to actually commit code to do it.”

- Lior Div, co-founder of 7AI

“It’s very consultative in that you really have to partner with the customer to understand their existing workflows and reimagine the work for the future. You can’t just blindly automate what’s been done in the past. This is why you see so many AI companies embrace forward-deployed engineers.”

- Pablo Palafox, co-founder of HappyRobot

“We realized the actual path to the best outcome for Fin is a radical revolution of AI. We talk to customers about what it would mean to their support org if 70% of conversations were handled by AI with zero human contact. They need someone to own AI, they need to think differently about what type of support they can offer for their best customers. The reason people don’t do this is because it’s a harder sell. It might require you to go up a level in the organization. Get out of the mindset of AI as a tool and focus on AI as a transformation.”

- Des Traynor, co-founder of Intercom (Fin.ai)


Early patterns in breakout AI-native startups

A few common themes came up repeatedly in these conversations.

The best AI-native companies are reaching early revenue milestones twice as fast as previous generations of SaaS companies. Product-market fit happens faster and more visibly. Instead of gradual adoption, founders describe a clear moment where demand accelerates.

AI quality is the most important early variable. Teams spend significant time evaluating outputs and measuring whether the product is actually completing useful work. They rely on a mix of explicit feedback, usage signals, and outcome-based metrics to evaluate performance.

Finally, go-to-market might start with self-serve adoption and founder-led sales, before layering in sales-assisted motions as companies move upmarket. Use end-user love to get to the executive, then sell a broader AI transformation story.

Next time I’ll go deeper into how the best AI-native companies built GTM motions that scaled to $10 million ARR and beyond. It’ll include surprising learnings about how to sell AI-native products and the new ‘must-have’ GTM roles that AI companies can’t hire quickly enough. See you then.
