Tracking the right metrics enables you to monitor the progress of your beta test and steer it towards success. After the test is completed, analyzing the data you collect from it unlocks further insights that you can use to improve future iterations of your app and its beta test.
Besides the KPIs and metrics you normally track for your app, there are a few beta test metrics that are important to keep an eye on. These metrics focus on the quality of your app and the activity of your beta testers so you can proactively resolve issues that may pop up.
Beta Test Metrics
Total issues reported by LOC
Provided that your testers are active, the number of issues uncovered gives you an overall view of the quality of your app and the efficiency of your internal testing. Normalizing the number of issues by lines of code (LOC) lets you compare your numbers against industry benchmarks. Although this number also reflects the activity of your testers, other metrics give better insight into that.
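As a quick sketch, the normalization described above is usually expressed as defects per thousand lines of code (KLOC); the figures below are illustrative, not benchmarks:

```python
def defect_density(issues_reported: int, lines_of_code: int) -> float:
    """Defects per thousand lines of code (KLOC), a common
    normalization for comparing against industry benchmarks."""
    return issues_reported / (lines_of_code / 1000)

# Example: 45 issues uncovered in a 30,000-LOC app
print(defect_density(45, 30_000))  # 1.5 defects per KLOC
```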
Fixed vs. reported issues
This metric speaks to the efficiency of your team in handling the incoming issues, but it is also affected by the efficiency of your bug reporting process. When the reports you receive lack the necessary details to pinpoint the offending line of code, the number of bugs that remain open will definitely rise.
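This metric is typically tracked as a simple ratio; a minimal sketch (the sample counts are made up):

```python
def fix_rate(fixed: int, reported: int) -> float:
    """Share of reported issues that have been resolved.
    Returns 0.0 when nothing has been reported yet."""
    return fixed / reported if reported else 0.0

# Example: 36 of 48 reported issues fixed so far
print(fix_rate(36, 48))  # 0.75
```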
Time to fix
Likewise, the average time your team takes to fix the reported issues helps you gauge the efficiency of your team and bug reporting process.
Number of issues by severity
A bunch of minor UI bugs might not have a big effect on the user experience, but a critical showstopper can leave your app unusable. To put the number of issues into context you have to dig deeper, and severity is one of the first factors to consider.
Number of issues by component
Looking at the distribution of issues by app module or component can reveal problematic areas in your app. If a particular feature contains a high number of issues, you can revisit it and identify where the issue stems from.
Number of issues by root cause
By analyzing each defect’s root cause, you can discover where in your development process the bug was injected. This helps you identify the stages of your process that need improvement so you can stop bugs at the source.
Bug leakage
Bug leakage is the number of bugs that found their way into a release compared to the number of bugs that were detected by QA. This gives an indication of the efficiency of your internal testing process and reveals areas for improvement.
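Definitions of bug leakage vary slightly between teams; one common formulation, sketched below, is escaped bugs as a fraction of all bugs found (escaped plus caught by QA):

```python
def bug_leakage(bugs_in_release: int, bugs_caught_by_qa: int) -> float:
    """Bugs that escaped into a release as a fraction of all bugs found.
    Some teams instead divide escaped bugs by bugs caught by QA."""
    total = bugs_in_release + bugs_caught_by_qa
    return bugs_in_release / total if total else 0.0

# Example: QA caught 45 bugs, 5 more surfaced after release
print(bug_leakage(5, 45))  # 0.1 → 10% of bugs slipped past QA
```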
Daily/Weekly/Monthly Active Users (DAU, WAU, MAU)
The absolute number of users your app has is mostly a vanity metric that can easily mislead you. You are better off tracking the number of users who are active within a certain interval. Depending on the type of your app, the interval can be daily, weekly, monthly, or a combination of them.
Session duration
Tracking how long your users stay in-app is another indicator of their engagement. Again, this will be affected by what kind of app you’re developing; long sessions on a mobile game are great but might indicate an issue for a task reminder app.
Time between sessions
Tracking the time between sessions is another way to look at user engagement. This metric is more useful for apps like task reminders that typically see frequent short sessions from an active user.
Number of issues reported per tester
While this might seem like a technical metric, it says more about your testers’ engagement and commitment to the beta test. Remember to keep this number in the context of metrics like DAUs and session duration/frequency. If the average per tester is low but DAUs and session duration look fine, that might indicate an issue with your reporting process, or there may simply not be many bugs to find in the first place. You can use the average number of issues reported per tester as a benchmark to measure individual tester activity against and identify your star testers and your poor performers.
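The benchmarking idea above can be sketched as follows; the 2x and 0.5x thresholds and tester names are purely illustrative:

```python
from statistics import mean

def flag_testers(reports_per_tester: dict[str, int]) -> tuple[list[str], list[str]]:
    """Compare each tester's report count to the group average to spot
    star testers and underperformers. Thresholds are illustrative."""
    avg = mean(reports_per_tester.values())
    stars = [t for t, n in reports_per_tester.items() if n >= 2 * avg]
    laggards = [t for t, n in reports_per_tester.items() if n <= 0.5 * avg]
    return stars, laggards

counts = {"ana": 12, "bo": 3, "cy": 5, "dee": 1}
print(flag_testers(counts))  # (['ana'], ['dee'])
```

Keep in mind, as the article notes, that a laggard by this measure may simply be using a build with few bugs to find.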
Feature adoption
Measuring the number of users that are using each feature is a great way to see which features your users value the most and which ones need further iteration. However, make sure to put the numbers in context and examine all explanations. A low adoption rate for a particular feature could mean that you need to work on its discoverability rather than its function.
Screenflow
Tracking your users’ screenflow gives you great insight into how people are using your app. You can identify the common screenflow patterns and screens that have a high exit rate to determine the most common use cases and find out where users drop off.
Survey response rate
This is sometimes a better KPI to determine tester activity since it doesn’t depend on the tester finding a bug and filling out a report. Responding to a quick survey is one of the easiest tester feedback responsibilities and can show you how many of your testers are at least somewhat committed to your beta test. Keep in mind that the efficiency and ease of your survey process will also affect the response rates.
Responsiveness to direct communication
Another thing you need to keep an eye on is how many testers respond to your communication, and how quickly. This metric becomes more important the smaller and more focused your beta test is. Small tests might not generate enough data to reveal many insights, so direct communication with testers becomes key.
Churn rate
This measures how many testers abandon your app over a specific time period compared to the overall number of testers, and provides an overall view of your app’s retention. A high acquisition rate can skew your churn rate to make it look lower than it is, so take that into account and try to exclude new testers when calculating churn.
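One way to implement the advice above is to restrict the calculation to the cohort that was already active at the start of the period, so new sign-ups never enter the denominator; a minimal sketch with made-up numbers:

```python
def churn_rate(active_at_start: int, still_active_at_end: int) -> float:
    """Churn over a period, measured only against the cohort that was
    active at the start, so new testers joining mid-period don't mask
    abandonment. still_active_at_end counts only that starting cohort."""
    lost = active_at_start - still_active_at_end
    return lost / active_at_start

# Example: 200 testers at the start of the month; 170 of them remain active
print(churn_rate(200, 170))  # 0.15 → 15% monthly churn
```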
Revenue retention rate
If you charge testers for your beta app or utilize in-app purchases, revenue retention can shine more light on your churn rate. Your overall churn rate might be low, but if it is your high spending users that are churning, your revenue retention rate will let you know.
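A gross revenue retention sketch illustrates the point: the headcount churn below is low, but because a high-spending user left, revenue retention drops much further. All figures are hypothetical:

```python
def revenue_retention_rate(mrr_start: float, mrr_retained: float) -> float:
    """Gross revenue retention: recurring revenue still coming from the
    period's starting users, as a share of their starting revenue.
    New users and expansion revenue are excluded."""
    return mrr_retained / mrr_start

# Example: 1 of 20 testers churned (5% user churn), but that tester
# accounted for $150 of $1,000 starting MRR
print(revenue_retention_rate(1000.0, 850.0))  # 0.85 → 15% revenue churn
```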
Net Promoter Score
Now regarded as the industry standard for measuring customer satisfaction, NPS measures how likely your users are to recommend your app to a friend. Rather than waiting for testers to abandon your app or its beta program, NPS tries to detect their dissatisfaction earlier. Scores below seven signify detractors who will advise their friends against your app, while those above eight are promoters who will evangelize for your product. Sevens and eights are passive users who will not speak for or against your app.
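The bands described above translate into the standard NPS formula: the percentage of promoters (9-10) minus the percentage of detractors (0-6), yielding a score from -100 to 100. The sample responses are invented:

```python
def net_promoter_score(scores: list[int]) -> float:
    """NPS over 0-10 survey responses: % promoters (9-10)
    minus % detractors (0-6); passives (7-8) are ignored."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

responses = [10, 9, 9, 8, 7, 7, 6, 5, 9, 10]  # 5 promoters, 2 detractors
print(net_promoter_score(responses))  # 30.0
```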
How can Instabug help?
Instabug is the top beta testing tool for bug reporting and user feedback in mobile apps. It provides the most useful metadata on the market, exceptional customer support, and an in-app communication channel to chat with your beta testers.
- Integrations: Jira, Slack, Trello, GitHub, Zendesk, and more.
- Pricing: Free. Paid plans start at $41 per month.
Bug and Crash Reporting
With each report, you automatically receive comprehensive data to help fix issues faster, including steps to reproduce errors, network request and console logs, and environment details. For bug reporting, your beta testers can also send screen recordings and annotatable screenshots to provide further context.
In-App Surveys and Feature Request Management
Collect user feedback from your beta testers right inside your app to minimize interruptions and boost participation rates. Get powerful insights to enhance your product roadmap with surveys that you can target to specific tester segments and feature request voting to understand user pain points and desires.
While these metrics will help you see how your beta test is performing, they are not the only metrics you should be tracking. You will also need to track the acquisition, engagement, and retention of your app itself, not just the beta test. The common advice is: track everything but don’t report everything. For new apps or features, engagement and retention are especially important since you want to make sure that users like your app and find enough value to stick with it.
Your data contains an abundance of insights that allow you to make the kind of decisions that lead to an amazing app. However, beware of focusing on the wrong metrics that don’t tell you the whole story and can lead you astray. Keep your metrics in check by adding context; find the “why” behind the numbers and understand their effect on your app.