👋 Hi, I’m Kyle from OpenView and welcome to my newsletter, Growth Unhinged. Every week I explore the playbooks behind the fastest growing startups. Join 18,000+ founders, investors and practitioners for an unorthodox take on how to scale faster.
Welcome back to the fifth edition of The Gist by Growth Unhinged, real-world growth tactics you can actually use. (You can catch up on the first four editions here.)
Today I’ll cover how to evaluate a business case for a potential growth experiment, pulling from a real-life experiment run by Leah Tharin during her time as Product Lead at Smallpdf. This experiment led to an 89% uplift in free-to-trial conversion, yet almost wasn’t prioritized due to the high degree of risk.
Leah is Head of Product at Jua.ai and writes the ProducTea Newsletter, sharing hot takes on PLG and startup/scaleup advice from her two decades of experience in the industry.
Keep reading 👀 for hot-off-the-presses insights into the state of product-led sales, courtesy of my friends at Pocus.
The gist
Prepare a well thought-out business case for product growth experiments, and come back to those business cases before, during and after your experiments.
Why you should care
Product growth experiments face a prioritization paradox. They have the potential to unlock significant impact, but they’re experimental bets with a high failure rate and they lack the perceived urgency of building net-new products. It’s almost always challenging to secure the engineers and resources you need to capture that impact, and it’s nearly impossible to prioritize high-risk, high-reward bets.
That’s where business cases come into play.
A well thought-out business case ensures you’re tackling the opportunities that can deliver tangible impact on the KPIs that matter, and that you’ve mitigated the risks involved.
Tell me more
Leah Tharin, PLG leader and self-described product tea spiller, walked through the business case for one of Smallpdf’s top-performing growth experiments. Smallpdf is a bootstrapped document management platform with “dozens of millions of monthly active users” according to Leah.
The PLG company specializes in the processing, signing and altering of PDF documents and features ~20 different PDF tools. Among Smallpdf’s product suite was an existing ‘Edit PDF’ tool, which supported structured editing of a PDF (e.g. reorganizing pages, rotating, annotations). The tool’s editing capabilities had previously been limited by technical gaps; in particular, editing the text of a PDF wasn’t yet on offer.
Leah’s team hypothesized that product growth experiments on this tool could unlock incremental business value. I’ll turn it over to Leah to walk through her team’s hypothesis, how they evaluated the business case, what they tried, and the result.
Hypothesis
Allowing users to make minimal changes to the text of PDFs will drive considerable uplift in our tool ‘Edit PDF’ because it broadens the value proposition of ‘Edit PDF’ and we’re now in a position to offer this to customers.
Evaluating the business case
The business case surfaced four major uplift areas as a hypothesis:
SEO: What is the long-term benefit for top-of-funnel traffic on the keyword ‘Edit PDF’? Does this lead to an increase in our ranking for this keyword, with ancillary effects like a reduced bounce rate? We expected a good (but uncertain) uplift over the long term.
Existing user impact: There is already an existing user base for our ‘Edit PDF’ tool. How does this affect their free-to-paid conversion rate? We expected a massive impact.
Existing customer impact: Will this reduce future churn rates? We expected a moderate impact on existing customers.
Cost to serve: Due to the need to license this feature, there is a non-trivial cost involved. How do we balance this cost with potential impact? We anticipated sizable risk to the overall case.
When conducting experiments in SaaS environments, the cost to serve is often not significant. Licensing technology can complicate the issue, and it influenced this experiment in two major ways:
Recurring cost is high: If the cost is substantial, upfront licensing introduces a risk the business isn’t willing to take until the impact is validated with high certainty. Either way, we knew the CAC would be substantial even if the experiment was a success.
Type of cost: We had to license based on usage. There’s a paradox in product-led growth with usage-based licensing: it discourages the platform from offering a feature for free (or with limited use), because every free use drives up the licensing cost.
The experiment
Free to try vs. gated?
Due to licensing being usage-based and how we served our users, we had no choice. The new feature had to be a paid-trial or customer-only feature.
The math is simple: with a 1% free-to-trial conversion rate, offering the feature for free means roughly 100 users incur the licensing cost for every one who converts to a paid trial, driving up the cost to serve by 100x. Not an option.
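To make that ratio concrete, here’s a minimal back-of-the-envelope sketch. The user count and per-use licensing fee are made-up assumptions; only the 1% conversion rate comes from the example above.

```python
# Back-of-the-envelope cost-to-serve math for a usage-licensed feature.
# Every number here is an illustrative assumption, not Smallpdf's actual data.

monthly_free_users = 1_000_000   # hypothetical free users who would use text editing
free_to_trial_rate = 0.01        # ~1% of free users start a trial
license_fee_per_use = 0.05       # hypothetical per-use SDK licensing fee (USD)

# If the feature is free for everyone, every free user's edit incurs the fee.
cost_if_free = monthly_free_users * license_fee_per_use

# If the feature is gated behind a trial, only trial users incur the fee.
cost_if_gated = monthly_free_users * free_to_trial_rate * license_fee_per_use

print(f"Cost to serve if free:  ${cost_if_free:,.0f}/month")
print(f"Cost to serve if gated: ${cost_if_gated:,.0f}/month")
print(f"Ratio: {cost_if_free / cost_if_gated:.0f}:1")  # 100:1 at a 1% conversion rate
```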
This split the experiment funnel into two areas with potential insight:
Free to trial: How many people will see this use case as important enough? Since they have to sign up for a trial before seeing the feature, we can confirm the value proposition with a high degree of accuracy.
Trial to customer: How good is the actual tool? The percentage of trial users that convert into paying customers validates the value realization.
Risk mitigation of high licensing cost
We negotiated a proof-of-concept (POC) period with favorable terms from the SDK provider. This allowed us to run the experiment and see whether our impact predictions were on point. You need to be careful not to get locked into year-long commitments for cases that rest on numerous assumptions.
In case of failure, we reserved the right to end the contract at the end of the POC.
In case of success, we reserved the right to continue the contract with predefined terms for many years after the POC ended.
The result
The uplift in free-to-trial conversion was 89% in an already well-converting tool, clearly validating that this is a relevant user need. We now had proof within our acquired users that editing the text of a PDF for small adjustments is a sizeable, monetizable need.
We expected a decrease in the trial-to-customer rate, though, which is normal for gated features: you start pulling in trial users who are a bad fit, since they couldn’t try the feature out before buying. This is exactly what happened in the end, but the net uplift from free-to-trial still made up for it (and then some).
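To see why the overall free-to-paid number can still improve when trial-to-customer conversion drops, here’s a quick worked example. The baseline rates and the size of the trial-to-customer dip are assumptions; only the 89% free-to-trial uplift comes from the experiment.

```python
# Illustrative funnel math. Baseline rates and the post-gating trial-to-paid rate
# are assumptions; only the +89% free-to-trial uplift is from the experiment.

baseline_free_to_trial = 0.010   # assumed: 1% of free users start a trial
baseline_trial_to_paid = 0.30    # assumed: 30% of trials convert to paid

new_free_to_trial = baseline_free_to_trial * 1.89   # +89% uplift (reported result)
new_trial_to_paid = 0.25                            # assumed drop from gating the feature

baseline_free_to_paid = baseline_free_to_trial * baseline_trial_to_paid
new_free_to_paid = new_free_to_trial * new_trial_to_paid

net_uplift = new_free_to_paid / baseline_free_to_paid - 1
print(f"Baseline free-to-paid: {baseline_free_to_paid:.3%}")
print(f"New free-to-paid:      {new_free_to_paid:.3%}")
print(f"Net uplift:            {net_uplift:+.1%}")  # still a sizeable net gain
```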
In the end, we were able to validate the two questions that mattered most to us: the need and the solution fit. We were already quite sure about the first, which is why we opted for this setup rather than an intermediate step like a painted-door test.
Because the technology was new, we weren’t sure whether the providers’ solution was good enough yet. That was the main risk in this business case.
The company is now in a position to further optimize the tool itself. The established baseline was a resounding success.
Bonus: Fresh product-led sales insights 🔮
I teamed up with my friends at Pocus to collect data on the state of product-led sales. The survey was conducted from September to November of 2022 and included responses from 200+ companies, most of which self-identify as having a PLG strategy.
There’s a TON of data in the full report 👉. Here’s the TL;DR — three insights into the state of product-led sales.
Insight #1: Self-service is the starting point, but not where the 💰 comes from
Let’s officially kill the myth that PLG is anti-sales.
Across the companies in the dataset, the self-service motion generated a median of 25% of revenue. Very rarely did self-service account for the majority of revenue.
Interestingly, a median of 20% of revenue came from a product-led sales motion, where prospects begin their journey via self-service acquisition and then either buy or expand their purchase via a sales-assisted experience. Meanwhile, a median of 59% came from a largely sales-led experience.
This data raises the question: where should we draw the line between self-service, product-led sales, and a sales-led experience for our prospects?
Insight #2: We’re still learning the most efficient sales motions for PLG companies
When asked which option most accurately describes your sales motion, respondents were all over the map:
69% have sales conducting outreach to inbound leads (aka hand-raisers who explicitly contact sales or request a demo).
52% are conducting ‘outbound’ to PQLs, up from last year. This is where sales reps reach out to users and accounts that indicate purchase intent based on their product activity and other signals. It typically means working the entire account, not just the lead/user. (You can read up on how ClickUp does that here.)
52% are conducting outbound to MQLs.
44% still have sales teams doing cold outbound.
In other words, there’s now a high degree of complexity in sales motions, which puts a substantial burden on RevOps and sales enablement teams to get reps focused on the right motion, onboarded, and effective. I’d urge you to ask yourself: are we trying to do too much at once?
Insight #3: Let’s normalize Product Qualified Accounts (PQAs)
Based on the learnings above, it’s no surprise that Marketing Qualified Leads (MQLs) and Sales Qualified Leads (SQLs) are still the top two metrics that GTM teams track.
It’s great to see Product Qualified Leads (PQLs) become part of the GTM metric mix, now tracked by 52% of companies. But we still have (lots of) work to do.
In my opinion, more companies should be paying close attention to Product Qualified Accounts (PQAs), which are accounts (not just users or sign-ups) that fit your ideal customer profile and have 1+ user with product signals. These are only tracked by 17% of companies.
Three reasons why I’m digging PQAs:
The potential value of a customer isn’t just about their product usage — it’s about the bigger picture. Thinking in terms of PQAs rather than PQLs leads folks to factor in important information like firmographics (company size, industry, geo, etc), personas/use cases, and product usage. Your sales-assist motion should adapt accordingly. For example, you may find that accounts with >1,000 employees struggle to self-serve and need assistance ahead of product usage.
Your user is rarely your economic buyer, especially in this economic climate. Sure, you’ll want to understand your power users and document how they see value from the product. You might even lean on those users for intel, introductions, or help navigating the account. But you ultimately need to get to the economic buyer and have a value-based conversation about why they should buy. PQAs help you navigate across the entire account.
You may have several different sub-accounts and PQLs within the same organization. Part of your sales strategy may be to consolidate accounts in order to drive more business value, better insights, and greater security/compliance. PQAs give preference to the domain, not the user or sub-account, so you don’t miss out on the true opportunity at hand (see the rough sketch below for how this account-level rollup might look).
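To make the PQA idea a bit more concrete, here’s a minimal sketch of an account-level rollup. Every field name, threshold and ICP rule below is hypothetical; the point is simply that qualification keys off the account (domain), while the signals come from individual users.

```python
# Minimal sketch: rolling user-level product signals up to Product Qualified
# Accounts (PQAs). All field names, thresholds, and ICP rules are hypothetical.
from collections import defaultdict

def fits_icp(account):
    """Assumed ICP check: company size, industry, and geography."""
    return (
        account["employees"] >= 100
        and account["industry"] in {"software", "fintech"}
        and account["region"] in {"NA", "EMEA"}
    )

def find_pqas(accounts, users):
    """A PQA = an account that fits the ICP and has 1+ user showing a product
    signal (here: hitting a usage limit or inviting teammates)."""
    signal_users_by_domain = defaultdict(list)
    for user in users:
        if user["hit_usage_limit"] or user["invited_teammates"]:
            signal_users_by_domain[user["email_domain"]].append(user)

    return [
        {"domain": acct["domain"], "signal_users": signal_users_by_domain[acct["domain"]]}
        for acct in accounts
        if fits_icp(acct) and signal_users_by_domain[acct["domain"]]
    ]

# Toy example: one ICP-fit account with a single high-intent user qualifies.
accounts = [{"domain": "acme.com", "employees": 1200, "industry": "software", "region": "NA"}]
users = [{"email_domain": "acme.com", "hit_usage_limit": True, "invited_teammates": False}]
print(find_pqas(accounts, users))
```

In practice this logic would live in your CDP, warehouse or product-led sales tool rather than in application code, but the account-level grouping is the key difference from a flat PQL list.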