Today, let’s explore how to dig deeper into your survey data, spot trends, and look for connections using basic data analysis techniques. We’ll take a look at analyzing some Instabug survey data, create a few visualizations, and talk about how to tie different types of data together. Combining your survey results with contextual details will help you get the big picture and make informed decisions about your app performance.
In this post, we’ll be using stories and data from InstaBite, a fictional food discovery app, as an example. Its main features are food reviews, Meal Match (an AI-powered food suggestion engine), catering, and, most recently, online ordering and delivery.
Let’s start with a survey
Let’s imagine it’s the end of Q3 and we recently surveyed 300 random users about their favorite InstaBite feature. It was a simple one-question survey with multiple-choice answers. Our survey looked like this:
Your Instabug survey dashboard automatically generates a pie chart to show the distribution of multiple-choice answers. Let’s look at an example below.
Proportions of a whole: the pie chart
First, the pie chart is a good starting point for visualizing proportions of a whole, and it’s one of the first things you can do when analyzing survey data. We can see that nearly half of our users selected online ordering as their favorite feature, with the next most popular being food reviews, the original main feature of the InstaBite app.
But this pie chart doesn’t really give us the full picture. In fact, some people feel very strongly about pie charts being the worst, and they have a point: pie charts alone don’t tell us much. We can understand a lot more about our situation if we add another variable. Let’s look at how our users’ favorite features have changed over time. For our next graph, let’s add the data from the previous rounds of this survey in Q1 and Q2. If you want to work with your Instabug results in a spreadsheet, it’s easy to download them as a CSV from the survey results page.
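If you’d rather build the pie chart yourself (say, from a CSV export), here’s a minimal Python/matplotlib sketch. The feature names come from the survey above, but the counts are made up to roughly match the distribution described, so swap in your own numbers:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

# Illustrative response counts (n=300) -- substitute values from your
# own Instabug survey results or CSV export.
counts = {
    "Online ordering": 142,
    "Food reviews": 81,
    "Meal Match": 46,
    "Catering": 31,
}

total = sum(counts.values())
shares = {feature: 100 * n / total for feature, n in counts.items()}

for feature, share in shares.items():
    print(f"{feature}: {share:.1f}%")

plt.pie(counts.values(), labels=counts.keys(), autopct="%1.1f%%")
plt.title("Favorite InstaBite feature, Q3")
plt.savefig("favorite_feature_q3.png")
```

The same `counts` dictionary can feed the bar charts later in this post, so keeping your tallies in one place pays off.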
Changes over time: the bar chart
This bar chart shows us how the distribution of feature preferences has changed over time. Online ordering is the clear favorite in Q3, but its popularity has grown steadily since it was introduced in Q1. We can see that preference for food reviews has declined gradually as more users discovered online ordering.
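A grouped bar chart like this is straightforward to reproduce from your CSV exports. Here’s a sketch with illustrative percentages standing in for the three quarters of survey data (the trends match the story above, but the exact numbers are invented):

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt
import numpy as np

quarters = ["Q1", "Q2", "Q3"]
# Percentage of respondents choosing each feature per quarter (illustrative)
shares = {
    "Food reviews":    [42, 35, 27],
    "Online ordering": [18, 33, 47],
    "Meal Match":      [28, 21, 15],
    "Catering":        [12, 11, 11],
}

x = np.arange(len(quarters))
width = 0.2  # one cluster of four bars per quarter
for i, (feature, values) in enumerate(shares.items()):
    plt.bar(x + i * width, values, width, label=feature)

plt.xticks(x + 1.5 * width, quarters)
plt.ylabel("Respondents (%)")
plt.legend()
plt.savefig("feature_preference_by_quarter.png")
```

Grouping bars by quarter makes the quarter-over-quarter movement of each feature easy to scan, which is exactly what a single pie chart hides.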
Something interesting I’ve noticed here is the steady decline in preference for Meal Match. This could be due to our users simply deciding that online ordering is their favorite feature of the app, or it might be an indication that something else is happening. To start, let’s take a closer look at our Meal Match data.
Let’s try to identify the cause of this steady decline in Meal Match preference since Q1. Thankfully, Instabug has given us a lot of data to work with. One of the most common explanations for a decline in preference for a feature is bugs or UI updates. Sure, maybe the introduction of online ordering was simply enough to draw users away from Meal Match, but let’s find out if something else is going on.
Spotting trends: the line graph
We’re halfway through analyzing our survey data. Let’s look at line graphs. As mentioned above, one of the most common explanations for a decline in preference for a certain feature is bugs. Bugs can turn a great experience into a frustrating one. So, let’s start with the simplest hypothesis first. Let’s plot Meal Match fans against bug reports and see if there might be a relationship. Instabug allows users to categorize bug reports by feature (and developers can also add tags to bug reports), so it’s easy for us to find out how many bug reports are associated with Meal Match.
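Because respondent percentages and raw bug counts sit on very different scales, a dual-axis line plot is one way to overlay them. Here’s a sketch with made-up numbers (your real figures live in your Instabug dashboard):

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

quarters = ["Q1", "Q2", "Q3"]
meal_match_share = [28, 21, 15]  # % of respondents choosing Meal Match (illustrative)
bug_reports = [35, 70, 74]       # Meal Match bug reports per quarter (illustrative)

fig, ax1 = plt.subplots()
ax1.plot(quarters, meal_match_share, marker="o", color="tab:blue")
ax1.set_ylabel("Meal Match fans (% of respondents)")

# Second y-axis so bug counts don't flatten the percentage line
ax2 = ax1.twinx()
ax2.plot(quarters, bug_reports, marker="s", color="tab:red")
ax2.set_ylabel("Meal Match bug reports")

fig.savefig("meal_match_vs_bugs.png")
```

The `twinx()` call gives each series its own vertical scale, so you’re comparing shapes of the two trends rather than absolute values.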
We only have three quarters’ worth of data here, but there’s already an interesting pattern to be seen. It looks like the appeal of Meal Match started declining at the same time that bug reports for Meal Match doubled. One of the first lessons here is that you should bring in other data sources and look for clues when you see survey results that surprise or mystify you.
Additionally, it’s important to note that this is not a strictly scientific experiment. We’re doing our best at analyzing our survey data and looking for trends, but we’ve got to remember the difference between correlation and causation. Although it certainly looks like there’s a relationship between the numbers here, we need to do a little more digging to confirm that this is actually the case. Plotting data won’t give you all the answers—but it’ll give you leads to explore and verify.
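If you want to put a rough number on that visual impression, you can compute a Pearson correlation coefficient. With only three quarterly data points this is nowhere near conclusive, so treat it as a cheap sanity check, not proof. The numbers below are illustrative:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient for two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

meal_match_share = [28, 21, 15]  # illustrative survey percentages, Q1-Q3
bug_reports = [35, 70, 74]       # illustrative bug report counts, Q1-Q3

r = pearson_r(meal_match_share, bug_reports)
print(f"r = {r:.2f}")  # strongly negative for these numbers
```

A strongly negative r is consistent with the chart, but as the paragraph above says, it tells you where to dig, not what caused what.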
Interpreting the results
Based on the data in the line graph above, it looks like one or more new bugs that showed up in Q2 might be affecting Meal Match enjoyment. From your survey dashboard, you can click on individual respondents’ Instabug profiles and view more details. You can find out if they’ve filed any bug reports or answered any other surveys in the past.
In this situation, we discover that many of the users who used to love Meal Match have something in common: they’ve filed bug reports for Meal Match. Looking at their other profile information reveals another common thread: they were all using the same OS. We can surmise that users of a particular OS suffered from a disproportionately large number of Meal Match bugs, and this impacted their enjoyment of the feature. At this point, you might check in with your development team to find out if the situation has been addressed.
This scenario has been simplified for the blog, but every situation is different. This example illustrates the need to gather contextual details from other sources to piece together the full feedback story. Instabug gathers rich data from multiple angles of the user experience—bugs, crashes, surveys, feature requests, and help messages. With all that, you’ll have the information you need to understand your app performance and user experience.
Understanding the why: combining quantitative and qualitative data
As we’ve seen above, context makes it easier to understand complex situations. It also helps you validate hypotheses you might have about why you’re getting certain survey responses. But one of the most helpful things you can do to get the user perspective is to hear it in their own words.
The NPS survey is one of the most widely used surveys out there. It starts with a quantitative question: “On a scale of 0-10, how likely are you to recommend this app to a friend or colleague?” In the next example, InstaBite has sent out NPS surveys to random users (n=300) and we’re going to look at the data. The second question in the survey was a qualitative one: “What was your main reason for giving InstaBite this score?”
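The score itself has a simple standard formula: the percentage of promoters (scores of 9-10) minus the percentage of detractors (0-6); passives (7-8) count toward the total but toward neither group. A quick sketch with illustrative scores:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

sample = [10, 9, 9, 8, 7, 6, 5, 10, 8, 3]  # illustrative responses
print(nps(sample))  # 10.0: 4 promoters, 3 detractors out of 10
```

The result ranges from -100 (all detractors) to +100 (all promoters), which is why a single NPS number is easy to track quarter over quarter.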
Here are the results from our dashboard.
The responses in full are recorded on our survey results page. But if we want to get a quick look at our score distribution—plus the reasoning behind the scores—all at once, we can create a stacked column chart and combine it with the most insightful quotes from each category of respondents. This simple tool is a great way to share user experiences with your team and stakeholders at a glance. It keeps the main ideas at the forefront, preserved in your users’ own words and paired with their quantitative scores.
In this post, we’ve covered why interacting with your data is just as important as collecting it. There are countless other ways you can combine your findings from multiple sources to get the full story on your app. We’ll cover some more of them in future posts.
More survey resources
- The Beginner’s Guide to In-App Surveys for User Feedback
- In-App Survey Questions: Guidelines and Templates
- How to Segment and Target Users for Max Survey Feedback