Improving a digital product with A/B testing
We always want to improve the KPIs of our digital product: engagement, conversion, time spent on the site, and so on. But there is one issue: how do we know whether the new version is going to be better than the current one? This is where A/B testing comes into play. (A/B testing is a method of comparing two versions of a product to see which one performs better.) We should make a habit of testing a new feature before releasing it to all users, to verify that it will achieve its goal or improve the KPIs. By using the data derived from the results of an A/B test, we take a data-driven approach to deciding whether or not to release the feature. This practice is common at many major companies, such as Facebook (especially on the Growth Team), Google, and the BBC. No one wants to release a new version of a product only to have it perform worse than the previous one. A/B testing helps every product manager, and even every executive, make better product decisions based on data.
To run an A/B test, we need at least two versions: version A and version B. Version A can be the existing product, and version B can be the existing product with a new feature. We can also compare variations of the new feature. For example, if we want to add a new button but aren't sure about its color, we can test version A with a red button against version B with a blue button to see how they perform. Apart from having two versions, we normally don't run the test on all users; instead, we experiment with anywhere from 20% to 70% of users so that the test won't hurt the existing product's KPIs or severely impact the business.
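The traffic split described above can be sketched as a deterministic bucketing function, assuming each user has a stable ID. This is only an illustration of the idea, not the implementation any particular tool uses; the function and bucket names are made up.

```python
import hashlib

def assign_variant(user_id: str, experiment: str, exposure: float = 0.5) -> str:
    """Deterministically assign a user to a bucket for one experiment.

    `exposure` is the fraction of all users included in the test;
    included users are split evenly between versions A and B.
    """
    # Hash user_id together with the experiment name so that each
    # experiment buckets users independently of other experiments.
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    if bucket >= exposure:
        return "not_in_test"  # this user keeps the unmodified product
    return "A" if bucket < exposure / 2 else "B"
```

Because the assignment is a pure function of the user ID and experiment name, the same user always sees the same variant across visits, which is essential for a valid test.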
With a digital product like a website or mobile app, there is a very easy way to run an A/B test to find ways to improve the product or to validate a new idea: use an off-the-shelf A/B testing tool such as Optimizely, VWO, or Google Optimize. These tools plug into the front end of the product and let us create a variation based on the existing product. We can then publish the experiment to users and compare the results to determine which version performs better against the goal. As long as the app has a sizable number of users exercising the tested feature, determining the winner shouldn't be an issue.
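Determining the winner usually means checking that the difference in a conversion-style metric is statistically significant rather than noise. The tools above handle this for you, but a minimal sketch of the underlying idea, using a standard two-proportion z-test on made-up counts, looks like this:

```python
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for a difference in conversion rates.

    conv_* are conversion counts, n_* are visitor counts per variant.
    Returns (z, p_value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))       # two-sided p-value
    return z, p_value

# Illustrative numbers: 120/2400 conversions for A vs. 156/2400 for B.
z, p = two_proportion_z_test(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
```

Declare a winner only when the p-value falls below a threshold chosen before the test starts (commonly 0.05); real tools also handle sample-size planning and guard against peeking at results early.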
There are various ways to optimize a product, and they depend on that product's KPIs. For example, if the product is a content website, you may want to improve pageviews per session, time spent on the site, or bounce rate. For an e-commerce product, you may want to test the CTA click rate or the cart checkout completion rate. Another good example is a video streaming app, where you may want to increase the number of videos played per user or the video view completion rate.
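As a small illustration of how such KPIs might be computed from raw data, here is a sketch for the content-website case, assuming each session is summarized by its pageview count (the function and field names are illustrative):

```python
def content_site_kpis(pageviews_per_session: list[int]) -> dict:
    """Compute two content-site KPIs from raw session data.

    Takes a list with one pageview count per session; treats a
    single-pageview session as a bounce.
    """
    sessions = len(pageviews_per_session)
    return {
        "pageviews_per_session": sum(pageviews_per_session) / sessions,
        "bounce_rate": sum(1 for pv in pageviews_per_session if pv == 1) / sessions,
    }

content_site_kpis([1, 3, 1, 5, 2])
# {'pageviews_per_session': 2.4, 'bounce_rate': 0.4}
```

In a real test you would compute the same KPI separately for the users in version A and version B, then compare the two values.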
To start testing, if you don't already have your own custom testing method, you may want to use Optimizely, VWO, or Google Optimize. Once you integrate the A/B testing tool into your app, you should be able to modify your app easily by updating elements via the tool's visual editor. The visual editor is really helpful for non-technical people, letting them create an experiment without needing a software engineer. However, the visual editor is limited to changes on the front end of the app; you still need your engineering team if you want to run a more extensive test, such as one that updates a backend algorithm.
Lastly, A/B testing is a very powerful tool that helps product people make the right decision for the next version of the product, since data never lies.