Your next job will require AI skills
Leaders from Clay, Vercel, Wiz and Zapier on how to hire AI-native GTM talent (and how to get an AI-native job)
👋 Hi, it’s Kyle Poyar and welcome to Growth Unhinged, my weekly newsletter exploring the hidden playbooks behind the fastest-growing startups.
AI fluency is the skill of 2025. It’s becoming a part of performance reviews. It’s being evaluated in the hiring process. Meta is even letting candidates use AI during coding tests.
But being AI fluent (should we pronounce it like affluent?) is highly context-dependent. While an engineer might need to be familiar with AI coding assistants (and their limitations), a recruiter might need to find ways to accelerate resume screens (while minimizing bias), and a support team member might need to use AI to audit support docs for agents and LLMs.
Looking at tech GTM roles, the number of job posts requiring AI skills has exploded in recent months. It increased from a mere 65 in July 2023 to nearly 1,000 in July 2025. The data comes from Sumble and you can explore all the aggregated job posts for yourself here.
I was shocked at how varied the specific job titles and requisite AI skills actually were. These jobs do include next-gen roles like growth/GTM engineers, but also social media producers, paid search specialists, BDRs, marketing analysts, content marketing managers, product marketers and CMOs. And the AI skills ranged from beginner (ex: “working proficiency of AI tools like ChatGPT”) to highly sophisticated (ex: “skills in using AI copilots and agents to personalize messaging, automate outreach, and surface insights from CRM data”).
Clearly, AI skills are in demand. But how do you figure out who has AI skills? And, if you’re thinking about changing jobs, how can you prepare yourself to demonstrate AI skills in an interview?
To help, I’ve put together the six-step FLUENT framework for hiring AI-native talent. This comes from polling readers (thank you!) along with some of my favorite GTM leaders who’ve been actively hiring AI-native folks at places like Clay, ClickUp, Wiz, Vercel and Zapier.
Step 1: Figure out which AI skills you really need for the role
Some jobs require systems-level thinkers who can build complex, multi-step workflows with AI agents and first-party data. Most don’t.
Your first step is to figure out which AI skills you really need for the specific job and which are simply nice-to-have.
Zapier has a helpful framework for how they think about evaluating AI skills across roles, including specific examples of what separates a capable hire from a transformative one. I’ve adapted their framework slightly (below) for GTM roles.
Unacceptable: Resistant to AI tools and skeptical of their value.
These are candidates who make little or no use of AI tools beyond ChatGPT. When pressed, they default to older playbooks or manual ways of doing things.
Capable: Using the most popular tools, with likely under three months of hands-on experience.
These folks use AI on perhaps a weekly basis for work related to their role. They’re curious about AI tools and eager to experiment with advanced functions when given the opportunity or training. They understand some of the limits of AI and have started to think through how to adapt to those (ex: human-in-the-loop review process).
Adoptive: Embedding AI in personal workflows, tuning prompts, chaining models, and automating tasks to boost efficiency.
Their AI use should begin to have a meaningful productivity impact for their specific role. They’ve brought AI into at least one complex, multi-step workflow like an AI agent or a custom GPT. And they can readily identify processes or even roles that could be fully automated with AI.
Transformative: Using AI not just as a tool, but to rethink strategy and deliver user-facing value that wasn't possible a couple of years ago.
Their AI use should have a meaningful business impact across their team. At this level folks are using AI to build teammates and run multiple workflows in parallel. They have demonstrated an ability to solve the complex, last-mile problems of getting AI to work for important processes. And they champion AI adoption beyond their specific role, bringing it to other teams or cross-functional workflows.
While in all cases you want to avoid anyone who’s unacceptable, for junior or entry-level roles you can be OK with someone who’s capable and eager to experiment. For more experienced ICs or Ops-focused roles, you’ll likely want to raise the bar to either adoptive or transformative depending on the context.
Step 2: Learn about their AI interest level and usage
“People think you need a big grand vision for how to adopt AI. The reality is that you want as many people playing with this on a personal productivity level as possible,” Phil Lakin told me.
Phil leads enterprise innovation at Zapier, where he helps enterprises adopt AI and automation. When hiring at Zapier, which says that 89% of its employees are now ‘AI-fluent’, Phil looks for candidates who are innately curious about AI. “Tools are making anything possible so we’re looking for people who are curious, interested, scrappy builders who want to try new stuff,” he said.
For Phil this means looking beyond the obvious “have you used ChatGPT?” question. His top three interview questions are instead:
What’s something you rebuilt from scratch after AI changed how you’d approach it? I want to see if they’re rethinking systems, not just adding a chatbot to their workflow.
Tell me about a moment when you realized AI made a workflow or role obsolete. Shows if they’ve zoomed out and connected real dots.
If I gave you a full-time AI engineer tomorrow, what would you have them build? This one is my favorite. It tells me if the candidate is playing offense or just watching the show.
Meg Scheding, a fractional product marketing and GTM leader and former product marketing lead at Brex, uses a similar line of questioning. She asks:
What is the most frustrating use case for AI that you’ve come across? It shows me how much they’re tinkering or playing with the tools. Curiosity (and not fear) about AI is one of the biggest indicators of successful adoption in my opinion.
Yash Tekriwal, the first GTM engineer at Clay, explores AI usage by probing for the nuances that folks only learn over time if they’re serious about AI adoption:
What is the structure of a great prompt? Does it change at all based on the model you're selecting or the task to be completed?
What's an example of something that AI is very good at doing, and something that it's not very good at doing? Bonus points for real examples from work experience.
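As a concrete reference point for that first question: one common answer (a sketch, not Yash’s answer key) breaks a prompt into role, context, task, constraints, and output format. Here’s what that skeleton might look like as a Python template; the field names and example values are illustrative assumptions, not a standard:

```python
# A minimal, hypothetical prompt skeleton -- one common structure,
# not an official template from Clay or anyone quoted here.
PROMPT_TEMPLATE = """\
Role: {role}
Context: {context}
Task: {task}
Constraints: {constraints}
Output format: {output_format}
"""

prompt = PROMPT_TEMPLATE.format(
    role="You are a B2B sales researcher.",
    context="Target account: a 200-person logistics SaaS company.",
    task="Summarize three credible reasons they might need our product.",
    constraints="Use only information from the provided account notes.",
    output_format="Three bullets, one sentence each.",
)
print(prompt)
```

Part of what the question probes is that the right structure shifts with the model and the task: reasoning-heavy models often need less step-by-step scaffolding, while smaller or older models tend to benefit from explicit examples and tighter output constraints.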
Step 3: Unpack a real-life example of what they’ve built with AI
From there, spend time with the candidate on a real-life example. You’re looking to assess the business impact, how they identified the problem, and whether the candidate really spearheaded the AI project or it was mostly driven by someone else.
This is the favorite question of Sam Kuehnle, VP of marketing at Loxo, a talent intelligence and recruiting platform, and writer of the Marketing Meditations newsletter. He asks candidates:
Tell me about a recent project or deliverable that you leveraged AI to help with and are proud of. What was it and what made it special?
It's a basic question, but it's very open-ended so that I can follow up with questions that go 2-3 layers deeper depending on what they say. Their answer to this initial question is mostly to gauge how they think and use AI in general, but what I'm really using to understand their AI fluency is how they answer the follow-up questions I ask that are tailored to their initial response. This helps me quickly assess if they’re driving the AI or if they’re taking the output at face value.
Paul Stansik takes a similar approach in his work on value creation initiatives within ParkerGale Capital’s portfolio of profitable technology companies with $10-30M in revenue. (Paul also writes the Hello Operator newsletter on Substack, which has a great “AI 101” page covering everything he’s teaching his portfolio teams about AI.) Here’s how he frames the question to candidates:
One of the things we think is important in this role is actively looking for opportunities to experiment with AI. Can you tell us about a time you proactively noticed an opportunity where you might be able to apply AI to a part of your job, and the steps you took to make it happen, refine it, and then share it with others on your team?
Good answer = They noticed the opportunity, they went through a couple rounds of "making it work", and then they took it upon themselves to teach at least one other person how to take advantage of what they figured out.
Bad answer = A company-wide AI initiative that impacted them but that they didn't own, or really basic "that's it?" examples (e.g., using ChatGPT to write a blog post).
Tom Orbach, director of growth marketing at Wiz and writer of the Marketing Ideas newsletter, has a slight twist on asking for a real-life example. He’s actually hiring right now and asks every candidate this question: “Tell me about a time you helped someone else use AI.”
This question is great because it's not about specific tools and more about mindset. I try to see if they have that "bug" where they are obsessed with finding cool shortcuts with AI, and they enjoy it so much they're doing it for someone else. It also shows if they're team players. Do they step outside their own role to make others better? Or do they just keep their head down?
Answers I love usually look like this:
"Our designer was stuck trying to come up with an infographic idea for a blog post, so I showed them napkin.ai. Two minutes later they had a draft."
"I wrote a super detailed prompt to help our SDRs write better cold intros."
But the best answers are about running workshops for their teams.
Step 4: Examine further to gauge their creativity and problem solving
Many of these real-life examples sound great on paper: a fascinating use case, cool new technology, and (probably) a quantifiable impact on the metrics.
There’s just one problem: some of them don’t stand up to scrutiny.
That’s why hiring managers drill into the examples in much greater depth, testing a candidate’s creativity and problem-solving abilities. Here are a few specific follow-ups recommended by Sam Kuehnle:
How did you iterate to get to the final output?
Where did you get stuck along the way and what did you do from there?
Where did you find the AI wasn't "getting it right"?
Where did you override the AI output and why?
Why did/didn't you ask AI to run a SWOT on the output before finalizing?
Step 5: Navigate hard problems together in a case setting
The first 15 minutes of building an AI workflow can be mind-blowing. It’s incredible how quickly you can vibecode an ROI calculator or even build your own marketing AI agent. That first 15 minutes of magic can be followed by 15 hours of debugging.
Forward-thinking hiring managers spend significant time asking how candidates navigate the hard problems of designing and implementing production-ready AI workflows for actual customers. These questions tend to take the form of a ‘case’ tied to a specific workflow that the team has been tackling. Case-style interview questions – which give me flashbacks to the terrifying case interviews from strategy consulting – test for problem-solving and the ability to work collaboratively through a scenario relevant to the job.
Everett Berry, head of GTM engineering at Clay, likes to ask about a case involving how to automate outbound with AI. Here are the specifics:
I would say to candidates: consider two types of systems. One is a fully automated AI SDR which, given an account and some information about the ICP, chooses contacts and messaging, sends emails, handles replies, and evolves messaging based on campaign analytics. The second is a human-in-the-loop system where a human chooses the account and contact criteria, researches good-fit accounts and contacts with AI, uses an AI to write messaging, and then relies on a human to change the prompts based on what's working.
Discuss the pros and cons of each system. Which one would you choose to roll out today? What evidence would you need to see to change your answer?
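To spell out the contrast in the case, here’s a minimal sketch of the two loops in Python. Every function is a hypothetical stub standing in for a real data provider or LLM call (this is not Clay’s actual stack); the interesting difference is simply where the human sits in the loop:

```python
# Hypothetical stubs -- placeholders for real data providers and LLM calls.
def ai_choose_contacts(account, icp):
    return [f"{account}-contact-{i}" for i in range(2)]

def ai_draft_message(contact, context):
    return f"Hi {contact}, a pitch tailored to: {context}"

def send_and_get_reply(contact, message):
    return f"(reply from {contact})"

def fully_automated_sdr(account, icp):
    """System 1: AI owns targeting, messaging, replies, and iteration."""
    for contact in ai_choose_contacts(account, icp):
        message = ai_draft_message(contact, icp)
        reply = send_and_get_reply(contact, message)
        print(f"AI handles reply on its own: {reply}")
    # In production, this system would also evolve its own messaging from
    # campaign analytics -- the step that's hardest to trust unsupervised.

def human_in_the_loop_sdr(account, icp, review, tune_prompts):
    """System 2: AI accelerates research and drafting; a human steers."""
    for contact in ai_choose_contacts(account, icp):
        draft = ai_draft_message(contact, icp)
        message = review(draft)  # a human approves or edits every send
        send_and_get_reply(contact, message)
    tune_prompts()  # a human updates the prompts based on what's working

fully_automated_sdr("acme", "mid-market B2B SaaS")
human_in_the_loop_sdr(
    "acme", "mid-market B2B SaaS",
    review=lambda draft: draft + " (edited by a rep)",
    tune_prompts=lambda: print("Rep tweaks the prompt from campaign data"),
)
```

Most of the pros-and-cons discussion lives in the two commented steps: who handles replies, and who closes the feedback loop from campaign analytics back to the messaging.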
In a good case interview question, there’s not necessarily a single right answer. Hiring managers instead look for critical thinking, clear communication, and the candidate’s ability to navigate situations they haven’t encountered before.
Step 6: Tailor a data assignment to the role to confirm what you heard
Regardless of how compelling a candidate is when talking about AI, there can be a major gap between knowledge and execution. Many hiring managers include a data assignment to verify that candidates are comfortable building for themselves.
It can be tricky to find the right assignment – one that’s challenging enough to differentiate between candidates without being so time-consuming or taxing that candidates drop out.
Some, including Zeb Hermann, general manager of v0 at Vercel, explicitly ask candidates to build AI prototypes:
The main thing I do is require anyone interviewing for a product or growth role to ship actual prototypes of changes they'd make in v0. The PRD to prototype transition was massive, and the #1 way I test for it is to see if they are actually "prototype pilled". Really excellent candidates just proactively do this anyways – they share their ideas in the form of v0 links and/or pull up their screen during an interview and we start vibe prototyping together.
I don’t care at all if someone has a great narrative about how they’ve adopted AI or how it’s changing the world. I just want to see the actual stuff you produce with it.
Many hiring managers don’t make AI an explicit part of the assignment, which lets them see whether candidates are actually thinking in an AI-first way or defaulting to more manual methods. This is the preferred approach of ClickUp chief operating officer Gaurav Agarwal, who told me:
We have a data assignment as a part of our hiring process. The data assignment isn't very hard, but it is tailored to the role. It mostly tries to assess if the person can think through a given problem and figure out what's happening at a deeper level.
In that evaluation, I ask if they used AI to analyze their data, and have a detailed conversation there. The goal is to talk about AI hallucinations in data analysis.
A second question is about how they would use AI to automate or 10x a certain part of the system. What will your stack be? Have you built something around this?
The goal is to use the case as an anchor point and try to bucket the person within four categories: (1) does not use AI, (2) uses AI at a basic level, (3) can build workflows using AI, or (4) uses AI to build teammates and run in parallel.
Wrap up
For candidates out there: even if an interview isn’t explicitly about AI, you should be prepared to talk about it. Stay up to date on how AI tools are evolving. Be aware of their limitations and how to work through them. Get your hands dirty and build with AI tools yourself – many offer generous free versions.
Here’s the Growth Unhinged mini-MBA to master AI for GTM:
101 Level: Take ChatGPT beyond the basics. Learn how to do it.
201 Level: Learn vibecoding for rapid prototyping. Learn how to do it.
301 Level: Apply Deep Research to research-heavy projects. Learn how to do it.
401 Level: Build your own personal AI agent. Learn how to do it.
Elective A: Try an AI avatar. Learn how to do it.
Elective B: Build a team of AI agents. Learn how to do it.
And, hopefully, have fun with it. If you’re only learning AI skills to ace the interview, you might find that you’d be more fulfilled in a different role.
A special thank you to the following GTM leaders for sharing their insights: Everett Berry (Clay), Gaurav Agarwal (ClickUp), Meg Scheding (Clover and Crimson), Paul Stansik (ParkerGale Capital), Phil Lakin (Zapier), Sam Kuehnle (Loxo), Tom Orbach (Wiz, Marketing Ideas), Yash Tekriwal (Clay), and Zeb Hermann (Vercel).