Using AI to Accelerate the Path to PMF
How we're using AI to reach PMF faster, and what we learned along the way.
Most teams spend weeks validating an idea that could be tested in three days. Not because they lack resources. Because they lack the nerve to put something real in front of people before it feels "ready." And that word — ready — has killed more good products than any bad idea ever could.
The PMF loop has always been build, validate, test. Seven to ten days per cycle, when things go well. With AI, we do it in three. But the number isn't the point. What matters is what you do with the time you get back. If you use it to stall more, you've learned nothing.
Components that actually work
One of our first experiments was a single element for adding and removing members from a project. Sounds simple, but there was a lot happening: email autocomplete, removal, a counter that kicks in past three people. In Figma, that becomes six frames and a meeting about edge cases. With vibe-coding, we had a functional version in hours, starting from a visual reference.
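To make the behavior concrete, here's a minimal sketch of the overflow rule described above: show up to three members, collapse the rest into a "+N" counter. The function name, types, and threshold are illustrative assumptions, not the actual component we built.

```typescript
// Hypothetical sketch: which member emails to render, and how many
// to fold into a "+N" counter once the list passes three people.
type MemberDisplay = {
  visible: string[]; // emails shown as individual chips
  overflow: number;  // count behind the "+N" badge (0 = no badge)
};

function membersToDisplay(emails: string[], maxVisible = 3): MemberDisplay {
  const visible = emails.slice(0, maxVisible);
  const overflow = Math.max(0, emails.length - maxVisible);
  return { visible, overflow };
}
```

With four members, this yields three visible chips and an overflow of 1, i.e. a "+1" badge. The point of prototyping it in code rather than frames is that exactly this kind of rule is what people argue about in the abstract and accept instantly once they can click it.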
When the client got their hands on it, they stopped asking about edge cases. Because they saw. And seeing resolves more than any explanation ever will.
Data that tells stories
A static chart is a photograph. An interactive chart is a conversation. The same data presented differently leads different people to different conclusions — and that's not a design problem, that's just how humans work. With AI, we can explore those narratives in real time, without waiting for a dev slot in the next sprint.
If you have some coding fluency, the ceiling disappears. If you don't, you can still get somewhere Figma couldn't reach on its best day.
When the idea doesn't fit a static screen
Some products can't be explained. You try, people nod, and they leave the room having understood nothing. Because the experience is the idea. The interaction logic is the differentiator.
For those, we started building applications with real complexity: navigation, responsive charts, basic data logic. The code isn't pretty. A real developer would look at it and quietly weep. But in front of an investor or a founder deciding whether to bet on an idea, functional beats perfect every single time.
What surprised us most was how much fidelity changes the quality of listening. When someone uses something that works, they stop imagining and start feeling. Feedback stops being opinion and becomes reaction. And reaction is what you actually need to make good decisions.
There was also a side effect we didn't see coming: meetings got shorter. When everyone is looking at the same thing and touching it, alignment happens faster. Less "I pictured it differently," more "this works" or "this doesn't."
These are just a few of the experiments we've been running. With each cycle, we learn a little more about where AI accelerates and where it still needs a human hand. What we can say for certain is that high-fidelity prototyping is now a permanent part of how we work.
If you're still validating with static, try it once. Watch people's faces when they touch something that feels real. That moment is worth more than any slide deck you'll ever build.