Why A/B Testing Is Good for Your Company Culture

Adopting a robust A/B testing program not only drove growth, it also improved our company culture

Six years ago, my previous company had a problem. We had a habit of building large, time-intensive projects based on perceived market trends or tenuous research, and far too often these projects failed to grow our business. We also spent too many meetings debating small details like the position of buttons or which colors to use, and some of those meetings got contentious. Agile delivery counteracts a lot of these problems, but without the right systems and tools in place, we still ran into issues that slowed our progress toward our goals. The solution we came up with evolved slowly, but looking back, a few simple changes to our process and a handful of new tools had a remarkable impact on accelerating our growth. As a bonus, we noticed positive effects on our company culture as well.

The first step we took was to get our data in order. We evaluated many BI (Business Intelligence) and analytics tools and ultimately decided that none of them gave us a deep enough understanding of our users. We opted instead to create a data warehouse where we could run our own queries and analytics, and then built reports on top of it with data visualization tools (which, by the way, are orders of magnitude cheaper than the BI tools). As an added bonus, this meant the data was already in (roughly) the right shape as we grew our data team.
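
To make this concrete, here is a minimal sketch of the kind of question we could now answer directly against the warehouse. The events table and its columns are hypothetical, and SQLite stands in for a real warehouse like Snowflake or BigQuery:

```python
import sqlite3

# SQLite stands in for a real warehouse; the events table,
# its columns, and the sample rows are all hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE events (user_id TEXT, event TEXT, occurred_at TEXT);
    INSERT INTO events VALUES
        ('u1', 'signup', '2018-03-01'),
        ('u1', 'lesson_complete', '2018-03-02'),
        ('u2', 'signup', '2018-03-01');
""")

# Of the users who signed up, how many came back and completed a lesson?
retained, total = conn.execute("""
    SELECT
        COUNT(DISTINCT CASE WHEN e.event = 'lesson_complete' THEN e.user_id END),
        COUNT(DISTINCT s.user_id)
    FROM events s
    LEFT JOIN events e
        ON e.user_id = s.user_id AND e.occurred_at > s.occurred_at
    WHERE s.event = 'signup'
""").fetchone()

print(f"retention: {retained}/{total} = {retained / total:.0%}")
```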

The second step was to add an A/B testing platform. At the time, there weren’t any A/B testing platforms on the market that we were happy with, so we built our own. Building it in-house meant that A/B testing could be integrated easily and deeply into our code, a huge advantage.
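
The heart of any such platform is deterministic assignment: hash the user and experiment together so the same user always lands in the same variant, with no assignment state to store. Here is a minimal sketch of the idea (not our actual implementation; the experiment and user names are hypothetical):

```python
import hashlib

def get_variant(experiment: str, user_id: str, variants: list[str]) -> str:
    """Deterministically bucket a user: the same user always sees the same
    variant of a given experiment, with no assignment state to store."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Integration at the call site reduces to a one-line branch:
if get_variant("signup-cta-copy", "user-42", ["control", "treatment"]) == "treatment":
    print("show the new call to action")
else:
    print("show the existing call to action")
```

Because assignment is a pure function of the experiment and the user, adding a test anywhere in the code stays cheap, which is what made the deep integration pay off.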

Finally, we changed our product process to incorporate a lot of testing. For each new project or iteration, teams were required to form a testable hypothesis about why it would be successful and define specific success metrics (for example, increasing retention or engagement rates). Then the hypothesis was tested against the control with our A/B testing framework to see which variant won. After a few cycles of this, we were able to drastically increase the cadence of our testing.
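
Deciding which variant “won” ultimately comes down to a statistical comparison of the success metric between control and treatment. As one common approach, here is a two-proportion z-test using only the standard library; the conversion counts are made up:

```python
from math import erf, sqrt

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates,
    using the pooled normal approximation."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Standard normal CDF via erf; two-sided test.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical counts: control converts 200/5000, treatment converts 260/5000.
p = two_proportion_z_test(200, 5000, 260, 5000)
print(f"p-value: {p:.4f}")  # ~0.004, low enough to call the treatment a winner
```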

Lessons learned

Becoming data-driven had remarkable, wide-ranging effects on our product process and company culture. Let’s go over the top four.

Alignment

Adopting the steps above allowed us as a company to focus on the metrics that would drive our business success. Having clearly defined goals removed a lot of ambiguity from projects because everyone knew what success looked like. By defining our success metrics at the start of each planning cycle, we could drive alignment around the goals. This helped keep the inevitable scope creep and pet features from inserting themselves, or at least gave us the framework to say “yes, but not now.” Knowing what success meant also allowed our developers to integrate the necessary tracking from the very beginning, rather than forgetting it or bolting it on as an afterthought. We also got into the habit of building, in parallel, the reports and dashboards that showed current behavior (if any), so they were ready to track each project’s effects and share results with the company.
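
As a sketch of what “tracking from the very beginning” can look like, feature code emits structured events that land in the warehouse alongside the feature itself; the event name, fields, and sink below are all hypothetical:

```python
import json
import time

def track(event: str, user_id: str, **properties) -> None:
    """Emit a structured analytics event. In production this would be sent
    to the warehouse ingestion pipeline rather than printed."""
    record = {"event": event, "user_id": user_id, "ts": time.time(), **properties}
    print(json.dumps(record))  # stand-in for the real event sink

# Instrument the feature as it is built, tagged with its experiment context:
track("signup_completed", "user-42",
      experiment="signup-cta-copy", variant="treatment")
```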

Speed

The meetings where we reviewed designs or implementations became much faster to get through because we would often just say “let’s test it.” Previously, people tended to let opinion or bias drive their decisions. The biggest problem was that, most of the time, we were not our users; we were essentially guessing what our users would prefer, and a guess is just an opinion. When two opinions clashed, it was easy to take the disagreement personally or defer to the HiPPO (Highest Paid Person’s Opinion). By focusing on which metrics defined success, we could remove ego from the decision process. Whenever we found ourselves with an intractable difference of opinion, the answer became “let’s A/B test it.” This allowed us to move on, sped up development, and shortened the time between iterations. When properly implemented, A/B testing should be straightforward and add minimal overhead to a project.

Intellectual humility

If there is one thing I’ve learned from our A/B testing, it’s that most people are really bad at predicting user behavior, myself included. For example, as an education company, we had to be sure that our members were over 13 to legally register. We had quite a number of ideas for how to do this, from a small checkbox on the signup form to a full screen stating “You must be over 13 to continue.” When it came time to A/B test these ideas, I felt certain that the minimal treatment would win. After all, the accepted wisdom is that you should reduce friction on user forms by simplifying them as much as possible. However, the winner was the full-screen version, and by a lot! A/B testing is nothing if not humbling. This was great news for our company, as the attitude became part of our culture: good ideas can come from anywhere, and no one person is the arbiter of what is “right” or “wrong.” The HiPPO’s ideas are just as likely to be right (or wrong) as anyone else’s.

Team collaboration

Initially, we had a single growth team coming up with all the test ideas. However, since the odds of any one A/B test succeeding were (and are) low, having just one team generate ideas fundamentally limited our chances of big wins, and we wanted to run as many potentially high-impact tests as we could. Inviting ideas from the entire company surfaced approaches we hadn’t considered, which in turn led to some really good results. As a bonus, inviting everyone into the process also improved our culture of inclusion.

In closing, moving to a data-driven process focused on hypothesis-based A/B testing had a remarkable impact on our growth and our culture. We were able to choose the metrics we wanted to move and focus the teams on moving them, and the results were directly measurable: some single tests increased our revenue by ~20%, and the excitement of those wins was palpable. The teams started executing faster, with greater alignment and collaboration, which was good for the company and great for our users.

Want to give GrowthBook a try?

Whether you use our cloud or self-host, GrowthBook can be set up and ready for feature flagging and A/B testing in under two minutes.