The most immediate path to growth often comes down to finding (and fixing) hidden revenue leaks.
Often-overlooked factors like localization, checkout UX, and regional payment methods can make the difference between completed purchases and abandoned carts. In fact, new research from Cleverbridge finds that four in five software companies report double-digit cart abandonment in their online sales channel.
Cleverbridge — which manages payments, taxes, and compliance in 240+ markets — surveyed over 1,700 software buyers and sellers for their inaugural Friction Report. Get the full insights here (the report is ungated).
Welcome to part two of my playbook for building an AI-native company. Here’s an overview of the three-part series:
Part 2: A guide for scaling to $10 million ARR and beyond (TODAY)
Part 3: A guide for building an AI-native org
In part one, we covered how product-market fit (PMF) was often binary for AI-native companies. Founders either felt extreme market pull or they didn’t. AI-native companies grew to $1 million ARR only six months after they had a usable live product. The harder part was keeping up with demand and productizing as much as possible.
What happens after the first $1M ARR is even more interesting. It used to take IPO-caliber SaaS companies more than 24 months to go from $1M to $10M ARR. AI companies like Clay, Gamma, HeyGen, and Fyxer are doing it in 12 months or less.
This playbook shows you how. It’s backed by more than a dozen 1:1 interviews with founders from breakout AI-native companies.

A huge thank you to: Alexander Berger (COO of bolt.new), Archie Hollingsworth (co-founder of Fyxer), Cecilia Ziniti (co-founder of GC AI), Des Traynor (co-founder of Intercom, Fin.ai), Jon Noronha (co-founder of Gamma), Joshua Xu (co-founder of HeyGen), Lin Qiao (co-founder of Fireworks AI), Lior Div (co-founder of 7AI), Marcel Santilli (Founder of GrowthX), Matt Hammel (co-founder of AirOps), Pablo Palafox (co-founder of HappyRobot), Stephen Whitworth (co-founder of incident.io) and Varun Anand (co-founder of Clay).
Plan for a (way) higher bar
Breakout companies grew from $1M to $10M ARR within a year
Breakout SaaS companies aspired to the T2D3 path (triple, triple, then double three times). After reaching $1M ARR, they’d hope to hit $3M ARR in 12 months and then $9M ARR in 24 months.
The typical AI-native company I interviewed grew from $1M to $10M ARR in just 9 months! Nearly all did it in under 12 months. When you factor in the time to build a usable product, AI companies hit $10M ARR before SaaS companies were even at their first $1M.
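To make the comparison concrete, here’s a back-of-the-envelope sketch of the T2D3 trajectory. The figures are illustrative arithmetic, not data from any one company in the interviews.

```python
def t2d3_path(start_arr=1.0):
    """ARR milestones (in $M) on the T2D3 path: triple, triple, then double three times."""
    arr, milestones = start_arr, []
    for multiple in (3, 3, 2, 2, 2):
        arr *= multiple
        milestones.append(arr)
    return milestones

print(t2d3_path())  # [3.0, 9.0, 18.0, 36.0, 72.0]
# T2D3: $1M -> $9M ARR takes roughly 24 months (two annual triples).
# The AI-native median in these interviews: $1M -> $10M ARR in ~9 months.
```

In other words, the typical AI-native company covered in one year what the T2D3 framework treats as two years of best-in-class SaaS growth.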
In some cases, every aspect of company building was accelerated. HappyRobot started building in January 2024. They crossed $10M ARR in 2025, 10 months after reaching $1M ARR. GrowthX started building in October 2024 and announced the company in December 2024. They had achieved $12M ARR a year later.

You can still win even if you have a slower start
At Clay, Gamma, and HeyGen it took 2 or more years to launch a usable live product. StackBlitz had been around for 7 years before focusing on its vibe-coding app, Bolt. Intercom was founded in 2011 and spent its first 12 years as a SaaS application; everything changed after it launched its AI agent, Fin.
The slower start didn’t matter. Once these companies had PMF for an AI-native app, the business exploded. They went from $1M to $10M ARR in 12 months (Clay), 6 months (Gamma), 5 months (HeyGen), and <2 months (Bolt).
Wait to hire your first AE until $2M ARR
The AI-native companies I spoke with usually waited to hire their first account executive (AE) until they were between $2M and $5M ARR. They instead set up a repeatable GTM motion through either self-serve or founder-led sales before hiring. (In my experience, SaaS companies hired their first AEs as they approached $1M ARR.)
A big benefit of waiting: AI companies didn’t blindly follow the old SaaS sales playbooks. They discovered GTM motions that were consultative, more technical than commercial, blended end-user excitement with top-down AI mandates, and moved fast from first conversation to enterprise deal.
Measure the right leading indicators
Sales and proof of concept (POC) velocity metrics set the tone
If you reverse engineer going from $1M to $10M ARR in 12 months, everything in GTM needs to move faster. Hiring sellers, ramping them, generating pipeline, setting up proof of concepts (POCs), and navigating enterprise procurement all need to be done in weeks or even days.
Some, like GC AI, just didn’t let prior experience slow them down.
“We didn’t know you’re supposed to be slow or that enterprise deals should take six months. And we were selling to legal teams who are not used to buying AI yet. But we just moved fast. The naivete and first principles approach have been huge accelerants.”
Others, like 7AI, found ways to balance speed with trust. 7AI sells an agentic security solution to CISOs at large enterprises, a category that definitionally doesn’t act quickly and where trust is critical in the buying process. Many of 7AI’s deals require a proof of concept, so 7AI closely monitors POC velocity along with the time to close a full production deployment.
“We’re looking at how quickly we’re able to show value and then move to an initial land. Once we show value and cover a customer’s use cases, we’re able to close quickly. With DXC, which has 120,000 employees, we went from initial conversation to full production deployment in 8 weeks.”
As their market gets more comfortable with AI in security operations, 7AI pays close attention to the percentage of opportunities that don’t require a POC. This shows how the product category is maturing and whether they can further accelerate deal velocity.
Measure how much work AI takes on (and how much is left)
Fin, the AI agent for customer service, charges $0.99 per outcome. They only make money when the product works. Resolution rate is Fin’s first-order KPI and it’s measured both internally and with customers. (A resolution happens when, for example, the customer confirms that Fin resolved the issue.)
Resolution rate sounds fairly straightforward. It’s not, and it’s only part of the equation for buyers. The customer might have additional needs even after the first issue is resolved. Latency or lag time could hurt the customer experience. AI answers could consume more or fewer tokens. The AI could hallucinate. And then there’s the matter of how the AI’s answers look. I’d venture to guess that the easiest way to game resolution rates would be to give the AI more latitude to “guess” an answer (aka hallucinate).
Co-founder Des Traynor told me Fin pairs resolution rates with customer satisfaction (CSAT), which indicates whether the answers are great and Fin is keeping customers happy. They’ve now introduced their own vertical model, Fin Apex, which shows fewer hallucinations, lower latency, and higher resolution rates compared to general-purpose LLMs.
Des monitors Fin’s total automation rate as well. This measures how much work AI completes relative to the total addressable work in an account. You can think of it as an AI-native share of wallet metric.
“Customers might not expose Fin to certain topics, cases, or support channels like voice or WhatsApp. We measure how much of the support volume we’re getting and what percentage of that work we’re resolving. This shows the total amount of work done for the customer.”
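Des’s description suggests total automation rate decomposes into two factors: how much of the support volume the AI sees, and how much of that it resolves. The exact formula is my assumption, but a minimal sketch might look like this:

```python
def total_automation_rate(total_tickets, tickets_seen_by_ai, tickets_resolved_by_ai):
    # Coverage: share of all support volume the customer routes to the AI
    coverage = tickets_seen_by_ai / total_tickets
    # Resolution rate: share of exposed volume the AI actually finishes
    resolution_rate = tickets_resolved_by_ai / tickets_seen_by_ai
    # Total automation rate: the AI's share of all addressable work
    return coverage * resolution_rate

# Example: the AI sees 6,000 of 10,000 monthly tickets and resolves 3,900 of them.
rate = total_automation_rate(10_000, 6_000, 3_900)
print(f"{rate:.0%}")  # 39% of the account's total support work is automated
```

The useful property of this framing is that a vendor can grow the metric two ways: by resolving more of what it sees, or by earning access to more channels and topics.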
Growth metrics like free-to-paid conversion still apply
AI-native companies still care about classic growth metrics. The most frequently mentioned: paid customer retention on a cohort basis, free-to-paid conversion, product adoption, usage frequency, demo requests, and direct traffic to the website.
“We care a lot about the frequency of use, specifically the number of days editing per month. It’s a strong leading indicator of retention.”
“The free-to-paid conversion ratio gives us a sense of whether people are getting enough of an aha moment. New users get anywhere from one to five messages before they hit a paywall. Are they able to get to the point where this solves enough of their problem to enter a credit card?”
Deploy rather than sell
Get customers to self-onboard (if you can)
The fastest way to sell AI is to prove it works, not talk about it. Half of the AI-native products started as primarily self-serve. Three in four still offer some sort of self-serve path for getting started.
But the early GTM motion isn’t necessarily what scales to $10M ARR and beyond. Only one AI-native company I interviewed is still primarily focused on self-serve today.
Clay found a middle ground with reverse demos, which flipped the script of a conventional B2B sales demo. Ahead of sales meetings, Clay emailed potential customers asking them to come prepared with a specific GTM workflow or data enrichment problem. Customers signed up for Clay during the demo and were guided, click by click, to solve their problem live on the call.
Staff up a forward-deployed team
Nearly everyone I spoke with has technical team members working with prospects. Many of these team members are called forward-deployed engineers; others have more creative names like AI security engineer (7AI), solutions attorney (GC AI), or forward-deployed designer (Gamma).
This is consistent with the latest hiring data: 39% of the top 200 AI companies are actively hiring forward-deployed roles including forward-deployed engineers (FDEs), product managers, data scientists, and deployment strategists.
A typical SaaS company had a 1:3 ratio of solution engineers to AEs. AI-native companies are shifting toward 1:1, with one FDE for every AE.
“The whole entire engineering team is working with customers from the beginning. They’re all forward-deployed. Our product team is also very much customer-facing.”
HappyRobot has more than 25 FDEs out of a total team size of 110 people. FDEs at HappyRobot work hands-on with customers from onboarding to ongoing usage.
7AI calls these hires AI security engineers, and they’re assigned to every account. They work directly with the sales team to ensure that AI agents are customized to the customer’s unique environment and workflows, and they can actually commit code to do it. AI security engineers are part of the engineering team and don’t carry a quota.
Pair FDEs with sellers who span the full lifecycle
Several AI-native companies have rethought the role of sellers. AI can handle activities like prospecting, qualification, research, follow-ups, and CRM hygiene, which leaves more time for direct customer interactions. Sellers own more pipeline per AE and more of the customer lifecycle.
With this shift toward full cycle ownership, AEs become consultative rather than transactional. The seller’s role is to help customers reimagine their workflows to be AI-native.
“If I did it again, I’d prioritize hiring an all-around, end-to-end GTM person rather than over-emphasize the AE itself. Selling HeyGen is not that hard. In the enterprise you want to help customers figure out how to deploy the tool as a system with context and in a way that fits into their organization.”
“We call sellers go-to-market (GTM) engineers and they do the end-to-end sales process. In the early days this included BDR, AE, and SE work, although now we have a dedicated ClayDR team. Only 20% of GTM engineers have traditional sales backgrounds. We look for ops or growth people who look like our customers and are passionate about the product.”
Turn growth into a system
Add more net-new ARR than you burn
I won’t be the first to observe that many AI-native companies have lean teams. Gamma crossed $100M ARR with 50 employees, although co-founder Jon Noronha admitted they were “probably too lean” at the time. HeyGen passed $100M ARR in October 2025 and still has only 130 people.
A great ARR per employee no longer guarantees efficiency, however. It could hide low gross margins, exorbitant compensation, and seven-figure AI token bills.
Two better efficiency metrics for AI-native companies are ARR per dollar spent on headcount and lifetime burn multiple. Several founders told me they’re laser-focused on keeping lifetime burn below ARR, which equates to a burn multiple below 1.
“If I could only look at one number in addition to ARR, it would be burn multiple. This dials me into the efficiency of the business and tells me if I’ve messed up. I try to run that to 0.5x.”
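The two metrics above are easy to operationalize. The definitions in this sketch are the standard ones (net burn divided by net-new ARR for a period; cumulative burn divided by current ARR for the lifetime version), which I’m assuming match how these founders use the terms.

```python
def burn_multiple(net_burn, net_new_arr):
    """Dollars burned per dollar of net-new ARR in a given period."""
    return net_burn / net_new_arr

def lifetime_burn_multiple(total_lifetime_burn, current_arr):
    """Cumulative burn relative to today's ARR; below 1.0 means ARR exceeds lifetime burn."""
    return total_lifetime_burn / current_arr

# Example: burning $4M while adding $8M of net-new ARR hits the 0.5x target above.
print(burn_multiple(4_000_000, 8_000_000))            # 0.5
# Example: $7M burned since founding against $10M of current ARR.
print(lifetime_burn_multiple(7_000_000, 10_000_000))  # 0.7 -> lifetime burn below ARR
```

Unlike ARR per employee, neither metric can be flattered by outsourcing, inflated compensation, or hidden token costs, since all of that spend shows up in burn.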
Remove GTM obstacles with revenue systems teams
Historically, growth in B2B has been tightly correlated with GTM hiring. Companies need AEs who might each carry a quota of $500k to $1M. Every 10 AE hires would typically coincide with 10 other GTM hires: 3-5 BDRs, 3 solution engineers (SEs), and 2-5 marketers.
Product-led growth started to decouple growth from hiring. AI-native companies are going much further with revenue systems teams and 10x builders. Quality matters more than quantity: revenue systems teams are usually quite small relative to their influence.
Clay has been championing the GTM engineer (GTME) title for this role, and to their credit the title is showing up in more and more open roles. Clay’s GTM engineers “ship GTM plays the way engineering teams ship products.”
“We have a dedicated GTM engineering team of seven people that build revenue systems at Clay. They’re basically trying to remove all the obstacles to growth, and they ship GTM plays the way engineering teams ship products. Some examples are pre-meeting notes, bespoke Clay tables for personalized demos, and personalized follow-up emails.”
Gamma’s GTM engineer role focuses on building AI-native GTM systems and infrastructure to turn product usage signals into enterprise sales opportunities. AirOps’ GTM engineers do “a bit of everything” including tech infrastructure, prioritizing target accounts, and automated plays.
“GTM engineering at AirOps does a bit of everything: pipeline building from automating value-add outbound, managing the target account list with Clay, email setup, outbound infrastructure, and helping implement our inbound automation tool.”
Let GTM teams cook with autonomy and AI budgets
GTM systems teams own both strategy (acting like a product manager) and execution (using a mix of AI, data, and traditional GTM tooling). They can’t be roadblocked by arcane hierarchy, approval processes, or internal politics.
Fyxer’s growth engineers ran 90 experiments each (!) last year. They have full ownership over an end outcome, with a big assist from AI tools like Claude Code.
incident.io has an applied AI team internally that’s tasked with rebuilding the company foundations with AI. The company offers an unlimited AI budget for tools that team members want to adopt.
“The goal is for you to experiment. We now spend almost $1 million on Claude. Claude Code has ripped through the team…. Philosophically, we’re comfortable adding multiple digit percentage points of spend on AI to replace work.”
The focus is on finding and retaining 10x talent across the business – not just in engineering. This talent needs to be able to cook with autonomy and AI budgets.
“The people we hire have high ownership and agency to work on different things. They are 10x superhumans using both their skills and AI. Once we get these people in, we need to build the environment for them to thrive.”
Thanks for reading Growth Unhinged! To receive new posts and support my work, consider subscribing.

