How Fyxer used AI coding and GrowthBook to run 541 experiments in 1 year
Something remarkable is happening at Fyxer. The AI email assistant grew from $1M to $35M in annual recurring revenue last year. This year, they’re targeting $100M to $150M. Behind that trajectory is a company-wide culture of experimentation that produced 541 experiments in twelve months, more than two per working day. The growth engineering team alone, just four people led by Kameron Tanseli, accounted for 360 of those.
The story of how they did it comes down to two things: the right mindset and an AI-first approach to experimentation. The mindset meant treating every product change as a hypothesis to validate, not a feature to ship. The AI-first approach meant using tools like Cursor, Claude, and GrowthBook to compress the entire experimentation loop, from research to development to analysis, so a small team could operate at a scale that would have been impossible even two years ago.
Kameron joined Fyxer when the company had $1M in ARR. He brought a discipline he’d honed across B2C health tech, B2B SaaS, and now prosumer AI: measure everything, share everything, and learn as fast as possible. One of his first moves was creating a public Slack channel where every experiment result, win, and loss was visible to the entire company. The founders loved it. It became the company’s central nervous system for understanding what was working and what wasn’t.
Kameron recently joined The Experimentation Edge podcast to share the full story. Below are the key takeaways, but the real unlock came when they combined that learning culture with AI-powered development. That combination is what made 541 experiments possible across the company, and it’s what turned a high volume of losses into the wins that turbo-charged Fyxer’s trajectory.
The Growth Engineering Mindset: Why Learning Speed Beats Intuition
Here’s something Kameron will tell you openly: he’s bad at his job for the first few months every time he starts somewhere new. And it’s not just him. It’s everyone in growth.
When Kameron joined Fyxer, his instincts were calibrated to B2C health tech, his previous role. He defaulted to discount-heavy messaging, pricing-focused copy, and the kind of urgency-driven language that works for consumer subscription boxes. At a B2B SaaS company selling an AI productivity tool to professionals, none of it landed. The only way to close that gap was to get experiments in front of real users and let the data teach him what his intuition couldn’t.
This is the core of the growth engineering mindset at Fyxer: A/B testing isn’t just an optimization tool. It’s a learning tool. And when you’re new to a product, a market, or a customer base, it’s the fastest way to develop the intuition you don’t have yet.
The numbers back this up. Fyxer’s win rate in GrowthBook is 25%, meaning 75% of their experiment ideas fail. If they had shipped every idea to 100% of users without testing, the cumulative damage would have been severe. A 50/50 test, even with imperfect sample sizes, beats shipping blind every time.
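The arithmetic behind that claim is worth making explicit. Here is a back-of-the-envelope sketch; the 10% average win and 5% average loss are invented, illustrative numbers, not Fyxer’s data:

```python
# With a 25% win rate, what is the expected impact per idea if you ship
# everything blind versus gating each idea behind a 50/50 test?
WIN_RATE = 0.25          # share of ideas that actually help (from the article)
AVG_WIN_LIFT = 0.10      # winners improve the metric by 10% (assumed)
AVG_LOSS_DROP = -0.05    # losers hurt it by 5% (assumed)

# Shipping blind: every idea, winner or loser, hits 100% of users.
blind = WIN_RATE * AVG_WIN_LIFT + (1 - WIN_RATE) * AVG_LOSS_DROP

# Testing: losers are caught and killed, so only winners ship.
# (This ignores the small cost of exposing half of traffic during the test.)
tested = WIN_RATE * AVG_WIN_LIFT

print(f"Expected lift per idea, shipped blind: {blind:+.2%}")
print(f"Expected lift per idea, A/B tested:   {tested:+.2%}")
```

Under these assumptions, shipping blind has a *negative* expected value per idea, while testing first is positive, even though the ideas themselves are identical.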
Kameron pushes back hard on the common startup objection that “we’re not big enough to A/B test yet.” His view: you may not be able to detect 5% lifts, but you can detect 20% or 30% effects, and at a startup, those are exactly the kinds of changes you should be testing. Pricing models, usage limits, core product flows. The risk of getting those wrong without testing is far greater than the cost of running an imperfect experiment.
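A standard two-proportion sample-size calculation shows why this advice holds. The sketch below uses only the Python standard library and an assumed 5% baseline conversion rate (illustrative, not Fyxer’s actual number):

```python
# Approximate users needed per arm for a two-proportion z-test
# at 80% power and alpha = 0.05, for different relative lifts.
from statistics import NormalDist

def sample_size_per_arm(p1: float, relative_lift: float,
                        alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate per-arm sample size to detect a relative lift over baseline p1."""
    p2 = p1 * (1 + relative_lift)
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for alpha
    z_b = NormalDist().inv_cdf(power)           # critical value for power
    p_bar = (p1 + p2) / 2
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(num / (p2 - p1) ** 2) + 1

for lift in (0.05, 0.20, 0.30):
    n = sample_size_per_arm(0.05, lift)
    print(f"{lift:.0%} lift on a 5% baseline: ~{n:,} users per arm")
```

Detecting a 5% lift on that baseline takes over 100,000 users per arm, while a 30% lift needs only a few thousand, which is exactly the regime an early-stage startup lives in.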
A key element of Fyxer’s approach is how they think about iteration. Rather than waiting for a fully polished feature, they ship the core experience and then immediately run experiments to improve adoption and engagement. As Kameron puts it, almost nobody uses your new feature on day one. The real work starts after launch, when you test messaging, onboarding flows, and nudges to find what actually drives usage. This iterative approach was central to their PLG breakthroughs later in the year.
Kameron uses a simple framework to evaluate which features could drive viral growth. First, he identifies the actions users are already repeating within the product. Then he asks: every time a user sends an email, schedules a meeting, or triggers a confirmation, is there a way to use that touchpoint to introduce Fyxer to someone new? When the answer is yes, the team builds and tests a loop around it.
Not every loop works. Fyxer has a scheduling feature, similar to Calendly, and Kameron hypothesized that sending booking confirmations could drive recipients back to Fyxer to sign up. In theory, it was a clean growth loop. In practice, users pushed back immediately. Fyxer’s entire value proposition is reducing inbox noise, and here they were adding another email on top of the Google Calendar and Outlook invites people already received. They killed the experiment and pivoted to a different approach. That willingness to test assumptions, even ones that look great on a whiteboard, is what separates a growth-minded team from one that ships on conviction alone.
Using AI to Scale Experimentation from Weeks to Hours
The mindset gets you to the right experiments. AI is what lets a team of four run them at startup speed.
Fyxer’s experimentation stack is built around a few key tools, with Claude as the central hub. The growth team shares Claude skills across the team, so common workflows, like turning a GrowthBook experiment result into a Slack post or generating a hypothesis from a data analysis, are reusable and consistent. They’ve connected Claude to their internal systems through MCP integrations, including GrowthBook’s API, so experiment data flows directly into their AI workflows.
For development, they use Cursor across the full stack. But the real unlock has been Cursor’s desktop mode with virtual environments. Here’s why that matters: traditionally, even a simple experiment requires a developer to write the code, pull it down locally, run the app, and manually check that the new upsell panel or copy change looks right. With Cursor desktop, the tool runs the app in a virtual environment and shows Kameron a video of what the experiment will look like. He reviews it, signs off, and moves on, without ever pulling down the code himself.
This means he can run five or six experiments in parallel, as long as they’re relatively contained changes. For even simpler experiments, like backend configuration changes or one-line feature flag adjustments, they use Claude Opus, Codex, and Tembo to one-shot the implementation entirely.
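Those contained changes are possible because feature-flag platforms assign users deterministically. The sketch below shows the generic hash-bucketing technique such platforms (GrowthBook included) rely on; the function names are illustrative, not GrowthBook’s actual SDK API:

```python
# Generic sketch of deterministic hash bucketing for a 50/50 experiment.
# A user's variant depends only on (user_id, experiment_key), so assignment
# is stable across sessions and a one-line flag change rolls out safely.
import hashlib

def assign_variant(user_id: str, experiment_key: str,
                   variants=("control", "treatment")) -> str:
    """Hash user + experiment to a stable bucket in [0, 1), then pick a variant."""
    digest = hashlib.sha256(f"{experiment_key}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # first 32 bits -> [0, 1]
    return variants[int(bucket * len(variants)) % len(variants)]

print(assign_variant("user-123", "annual-plan-default"))
```

Because assignment is a pure function of the IDs, killing or shipping an experiment is genuinely a one-line change to the flag configuration, with no per-user state to migrate.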
The AI acceleration extends beyond development. On the data side, Fyxer uses Dot, an AI data analyst that connects to their BigQuery warehouse and lives in Slack. The data team documented their table schemas, columns, and relationships, and Dot uses that context to answer complex questions, from segmentation analysis to survival curves to custom queries, for anyone on the team. Non-technical stakeholders can get answers in seconds without waiting for the data team, which unlocked a bottleneck that plagues almost every growing company.
The experimentation lifecycle itself is increasingly automated. Cursor automations fire when PRs are opened, daily jobs check for stale experiment code that should be cleaned up, and product release docs are generated automatically. When a key metric dips unexpectedly, the data team uses the GrowthBook API combined with Claude to cross-reference recent experiment launches and diagnose whether an experiment caused the problem.
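The cross-referencing step is simple enough to sketch. In practice Fyxer pulls the experiment list from the GrowthBook API; the data and field names below are invented for illustration:

```python
# Hypothetical sketch: given the date a key metric dipped, flag experiments
# launched shortly before it as suspects worth investigating.
from datetime import date, timedelta

# Invented example data; in practice this comes from the experimentation platform.
experiments = [
    {"key": "trial-length-split", "launched": date(2024, 3, 1)},
    {"key": "card-gate-v2",       "launched": date(2024, 3, 10)},
    {"key": "annual-default",     "launched": date(2024, 2, 1)},
]

def suspects(dip_date: date, window_days: int = 7) -> list[str]:
    """Experiments launched within `window_days` before the metric dip."""
    lo = dip_date - timedelta(days=window_days)
    return [e["key"] for e in experiments if lo <= e["launched"] <= dip_date]

print(suspects(date(2024, 3, 12)))  # only card-gate-v2 falls in the window
```

An AI assistant with API access can run exactly this kind of filter and then pull each suspect’s metrics to confirm or rule it out.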
The net effect: AI compresses the entire experimentation loop. Research that took days happens in hours. Development that took a week happens in an afternoon. Analysis that required a data scientist can be done by anyone on the team through Slack. That’s how four engineers run 360 experiments in a year.
What 541 Experiments Actually Produced
Volume without results is just busywork. Here’s what Fyxer’s experimentation program actually delivered:
- Increasing free-to-paid conversion from 5% to 35% by adding a credit card gate before the free trial.
- Growing the share of paying customers on annual plans by 2.3x; annual plans now account for 50% of subscribers.
- Increasing the trial start rate for personal email users by 65% by segmenting trial lengths based on signup type.
- Creating a referral growth loop in which 33% of invites are accepted.
None of these were obvious in advance. The credit card gate, for example, contradicts conventional wisdom about reducing friction in signup flows. But Kameron noticed that many AI apps were already asking for credit cards upfront, and Fyxer’s users had high intent because they were connecting their email. They also made the paywall optional during the experiment, drawing design inspiration from Canva’s checkout flow by showing users a clear timeline: what happens today, in 5 days, and in 7 days. The result was essentially free revenue on existing traffic.
The annual plan shift followed a similar pattern. The original UI defaulted to monthly billing with a modest 8% annual discount. Kameron tested defaulting to the yearly plan, increasing the discount to 25%, and displaying the effective monthly price. It’s the kind of change that takes a few hours to implement and test, but has a massive compounding effect on retention and cash flow.
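The pricing arithmetic is worth spelling out. The $30/month list price below is invented purely for illustration; only the 8% and 25% discounts come from the article:

```python
# Illustrative only: compare the old 8% annual discount to the tested 25%,
# including the "effective monthly price" shown to users.
monthly_price = 30.00  # assumed list price, not Fyxer's actual pricing

def annual_terms(discount: float) -> tuple[float, float]:
    """Return (total charged per year, effective monthly price)."""
    yearly = monthly_price * 12 * (1 - discount)
    return yearly, yearly / 12

for discount in (0.08, 0.25):
    total, per_month = annual_terms(discount)
    print(f"{discount:.0%} annual discount -> ${total:,.2f}/yr (${per_month:.2f}/mo effective)")
```

Displaying the effective monthly price reframes the same yearly charge as a visibly better deal than the monthly plan, which is the lever the experiment pulled.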
That’s the compounding advantage of high-velocity experimentation: you find the counterintuitive wins that your competitors are leaving on the table because they’re still debating whether to test.
Where Fyxer’s Growth Team Is Headed Next
Fyxer is scaling the growth engineering team from 6 to 13 this year, with a target of 1,000 experiments. But the real multiplier isn’t headcount. It’s continued investment in AI-powered developer performance: more reusable skills, more automated workflows, and tighter integration between their experimentation platform and their AI tooling.
Their revenue target of $100M to $150M ARR would represent another 3–4x leap. If the pattern holds, that growth won’t come from a single breakthrough. It will come from the compounding effect of hundreds of experiments; most will fail, but the ones that win will change the trajectory of the business.
Key Takeaways
- You don’t need to be big to experiment. You need to be disciplined about testing the things that carry the most risk.
- A/B testing at a startup is primarily a learning tool. It’s how you build customer intuition fast, especially when you’re new to a market.
- AI doesn’t just make development faster. It compresses the entire experimentation loop, from hypothesis to analysis, making high-velocity testing possible with a small team.
- A 25% win rate is a feature, not a bug. It means you’re testing bold ideas and catching the failures before they ship to everyone.
- The combination of the right mindset and an AI-first approach to tooling is a genuine competitive advantage, and one that’s accessible to any team willing to invest in both.
Want to hear the full conversation? Watch Kameron’s episode on The Experimentation Edge podcast, where he goes deeper on Fyxer’s growth loops, AI tooling stack, and advice for growth engineers starting at a new company.
Fyxer runs their entire experimentation program on GrowthBook, the open-source feature flagging and A/B testing platform. If your team is looking to scale experimentation without scaling headcount, get started for free or request a demo.