8 Comments
Will Howard:

The insights about testing AI-centric messaging are super interesting, will be very curious to see how this plays out!

Jenni B:

Super interesting! I love the breakdown of plain vs. designed/product backgrounds for signup pages. So often we're fed the generic advice to just make everything minimalist. Simple. Less is better. But that isn't some universal truth. We aren't all trying to copy Apple. In reality, it's often clarity and proper expectation setting that wins.

Vishal Kataria:

This is awesome. Real data to show what works and what doesn't.

Will be interesting to see whether senior leaders will overcome their bias toward following "best practices" in light of this data... or whether they'll stick to what is safe because "everyone is doing it."

Curious: what has your experience been while interacting with decision-makers, Casey?

Casey Hill:

The key is to understand what each stakeholder values and align before you run the test. If you know the VP of product marketing is keen to include verbiage about the new AI feature, you can have the convo: "Right now we are seeing less impact from AI positioning in the headline, but we see it really working for conversions in XYZ other spot. Here are some examples of top brands that ran tests around this, and the results were X."

Vishal Kataria:

Great point...

Andres Glusman:

It blows my mind that people just copy what they see competitors do. In fact, we saw a competitor of Monday.com actually copy the losing side of an experiment Monday.com was running. Experimentation is one of the most powerful yet misused forces in business. There is so much room for innovation in how to approach testing.

Casey Hill:

I saw a few weeks ago that Zendesk switched from saying "AI-powered support" on their pricing page to "AI agents, available 24/7 to solve customer issues." A real value claim tied to a key customer pain point. This is what will actually win in AI messaging, and I hope it's the direction we continue to move in.

Koen AKA Digital Dali!:

Having run 1000+ experiments, I see Casey Hill's article as a vital gut-check. He's spot on: A/B testing, as done by most, is fundamentally broken.

Why?

Because too many tests start with garbage inputs, leading to useless data. Despite the effort, win rates are abysmal. Because teams are blindly copying "best practices" that have actually lost in tests. And worst of all, winning results are constantly ignored or blocked internally.

This isn't just waste; it's bleeding growth potential.

True testing power? It demands rigorous research, upfront alignment, and non-negotiable implementation based on the data.
