

How to Analyze User Feedback for Your Mobile App

If you’re using an in-app feedback tool, you’re bound to see an increase in the volume of feedback you receive (some Instabug users report over 750% more feedback!). This bump in responses is great, but what do you do with it all? How do you turn all those words into actionable data? Let’s take a look at some simple yet effective feedback analysis techniques that will help you keep your eyes on the big picture and make informed roadmap decisions.

Don’t worry, this isn’t an intro to statistics course. But when you’re staring at a mountain of feedback, starting with simple analysis will often shed light on what you should do next. The following are straightforward tips for analyzing your feedback and using it to make data-driven decisions.

Open Text or “Verbatim” Feedback


This category includes anything with a text field for freeform answers written by your users. Each answer is known as a “verbatim.” If you’re using Instabug, there are three ways you can get verbatims. Your users can shake their phones to send you app feedback, respond to an open-ended survey question sent by you, or answer an open-ended follow-up question in an NPS survey.

The biggest challenge with open feedback is finding quantitative patterns in pure verbatim responses. Thankfully, there are straightforward methods you can use to transform the sentiments you receive into quantifiable data.


Word clouds

Instabug users get classic text analysis on the spot for open survey responses and NPS feedback. You’ll see your answers displayed in a word cloud in the Analytics box at the top of your survey page. The words are sized according to their relative frequency, with the words used most appearing the largest. This method will help you choose themes for your feedback according to how often they are mentioned and see what’s making an impression on your users.
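If you’re not using a tool that builds the cloud for you, the frequency count underneath one is easy to reproduce. Here’s a minimal sketch in Python, assuming your verbatims are plain strings (the sample responses and stopword list are illustrative):

```python
from collections import Counter
import re

# Hypothetical sample of verbatim survey responses.
responses = [
    "Love the detailed bug reports, very helpful",
    "The dashboard feels slow and the UI is clunky",
    "Detailed bug reports save our QA team hours",
]

# Common filler words to exclude so meaningful terms stand out.
STOPWORDS = {"the", "and", "is", "a", "our", "very"}

words = []
for text in responses:
    # Lowercase and keep only alphabetic tokens that aren't stopwords.
    words += [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]

# The most frequent words would be drawn largest in a word cloud.
for word, count in Counter(words).most_common(5):
    print(f"{word}: {count}")
```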


[Image: word cloud of survey responses]

Word clouds are a great way to identify general themes for your feedback, but they shouldn’t be the be-all, end-all for feedback analysis! This tool measures frequency, not sentiment, so you may discover that a lot of people are discussing your customer service, but not know the ratio of good to bad comments. Use this as a launchpad to discover which issues require a deeper look. You can combine this with a tagging process like the one outlined below for a deeper, more substantive analysis.


Manual tagging

Manually tagging (also called coding or labeling) your responses is an excellent way to peer deeper into your data. It’s similar to the word cloud example above, but you’ll get a more nuanced analysis that both quantifies open feedback and digs into the deeper meaning and intention behind the words. By manually categorizing your responses, you add human intelligence and language processing capabilities to your analysis.

Manual tagging works best for data sets under 1,000 responses, but you can also reduce the workload by taking a few minutes each day to tag feedback as it arrives so it never piles up. Categorizing your responses will help you condense the content of each verbatim and summarize your overall feedback.


[Image: tagging hierarchy for text analysis]

To dive in, first you’ll need to download your data (if you’re using Instabug, it’ll be exported as a CSV). Open it in your favorite spreadsheet program. Next, start assigning categories to the themes you see in comments. Some people simplify this by using numbers or colors for specific categories. Be sure that the tags you choose are broad enough to cover multiple comments but specific enough to be insightful later on. Some comments will have multiple applicable tags.

If you’re working with a team, it’s helpful to discuss your tags when you begin so you can agree on tag formats and the desired level of specificity. That way, you don’t end up with multiple tags describing the same thing. For instance, you wouldn’t want your spreadsheet to be cluttered with tags for “friendly,” “Friendly service,” “nice agents,” “Friendly customer support,” or “FRIENDLY.” Keep your tags consistent, and your data will be much easier to group.
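If you want to enforce that consistency programmatically, a tiny normalization pass over your tag column can collapse variants before you start counting. A minimal sketch in Python; the variant-to-canonical mapping is a made-up example:

```python
# Map variant tags (like the ones in the example above) onto one
# canonical tag. The variants and the canonical name are illustrative.
CANONICAL = {
    "friendly": "Support: Friendly",
    "friendly service": "Support: Friendly",
    "nice agents": "Support: Friendly",
    "friendly customer support": "Support: Friendly",
}

def normalize_tag(tag: str) -> str:
    """Return the agreed canonical tag, falling back to a title-cased original."""
    return CANONICAL.get(tag.strip().lower(), tag.strip().title())

print(normalize_tag("FRIENDLY"))          # Support: Friendly
print(normalize_tag("Friendly service"))  # Support: Friendly
```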

One strategy that works well for complex feedback is to group your tags in a hierarchical structure, like the one pictured above. This allows you to group answers by category or feature, sentiment, and theme.

Here are some real-life examples from Instabug reviews and how they can be tagged (your tagging strategy might be different, which is totally fine, just as long as you’re consistent!):


[Image: tagged verbatims from Instabug reviews]

So we’ve got tags. Now what?

Once you’ve assigned tags to your data, you can add them up and see how many responses there are per category. You’ll get a quantitative look at what your most prevalent feedback issues are. And if you’re interested in what people are saying about one specific aspect of your product, it’ll all be nicely organized so you can dive straight into the “why” phase of your analysis, covered in the root-cause analysis section below.
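Tallying is a one-click pivot table in a spreadsheet, but if you’ve exported your data, a short script does the same job. A sketch assuming a hypothetical CSV export with a comma-separated tags column (the file name and column name are illustrative):

```python
import csv
from collections import Counter

tag_counts = Counter()

# Assumes a hypothetical export with a "tags" column holding
# comma-separated tags, e.g. "Details, Positive".
with open("feedback_export.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        for tag in row["tags"].split(","):
            tag = tag.strip()
            if tag:
                tag_counts[tag] += 1

# Most prevalent feedback themes first.
for tag, count in tag_counts.most_common():
    print(f"{tag}: {count}")
```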

Just from a quick glance at the above sample, you can see that one of the most common feedback tags is Details, which tells us that users are finding value in how much data they’re getting from Instabug. This kind of insight has helped us understand which of our features to emphasize in our marketing materials—data-rich bug reports are a major draw for a large percentage of our most dedicated users. We also learned that our dashboard UI was a little clunky, so we used this feedback to design a more intuitive experience.


Be aware that you can make this process as simple or as complex as you want it to be. The example insights are fairly broad and basic, but the more detailed your tagging system is (and the more feedback you collect), the deeper you can dig into specific issues. Integrating other information, such as tagging the user segment, device, or version, is also incredibly helpful when you’re trying to make fully contextualized, data-driven decisions. Here we’ve described applying tags to feedback you’ve already received, but you can also look for deeper insights by segmenting your audience before you send your survey.

In fact, proper segmentation is so important that it truly can change the entire outcome of your survey. Rahul Vohra’s enlightening piece on product-market fit gives a crystal-clear example of how smart segmentation will shape your analysis—and when performed correctly, can drive your product development in the right direction, guiding your features roadmap and differentiating your product in the market.


Root-Cause Analysis and 5 Whys


Have you got a problem? (It’s okay. We all do.)

Do you know what’s causing it?

5 Whys is a tried-and-true problem-solving technique: a classic qualitative method for identifying the root causes of problems by examining the cause-and-effect relationships within your system. Developed in Japan at Toyota by Sakichi Toyoda (but let’s be real: humans have been asking “why?” forever), it has been described as “the basis of Toyota's scientific approach: by repeating why five times, the nature of the problem as well as its solution becomes clear.”

The process is as simple as its name. The idea is that you have a problem (the root cause of which you don’t know) and symptoms. Your goal is to identify the root cause, which is done by repeating the question “why?” Every time you answer the question, you’ll ask again until there are no more whys to be asked and you have hypothetically reached the root of the problem. Root-cause analysis is essential for getting to the bottom of customer complaints, addressing them, and preventing the same complaint from happening again in the future.

Let’s look at an example.


Problem:

You’re the product manager for a popular productivity app. Your developers worked hard to crank out a calendar integration feature, but weeks after its release, few users have adopted it.

Why?

You’ve learned from a survey that most of your users were unaware that a new feature had been released.

Why?

Because they didn’t realize they had received a notification about the new feature.

Why?

A notification was sent, but it wasn’t visible.

Why?

Your notification symbol blends in too easily with the rest of your menu.

Why?

It wasn’t bright or bold enough.


We can keep going with the “whys” (you might ask as few as three or as many as ten!), but this is a reasonable place to stop and conclude that your notification scheme isn’t working for you.

In response, your developers decide to mark new notifications in bright red and to send a modal message about the new feature on the user’s next login. The result? Awareness and adoption increase, and soon your users are integrating their calendars with your app, living more organized and productive lives, and leaving you positive reviews.

Congrats! You discovered the root cause of the issue.


Additional guidelines for running an effective 5 Whys session:

  • When you can, work in groups to avoid biases and knowledge gaps.
  • Clarify the problem and make sure everyone actually understands it.
  • Follow logic when identifying factors in a cause-and-effect relationship. Test it by reversing your sentences, using “therefore” or “so” instead of “because.” If it still makes sense after the reversal, it’s probably solid logic.

Here’s an example: “The users didn’t realize there was a new feature because the notifications weren’t obvious enough” becomes “the notifications weren’t obvious, so the users didn’t realize there was a new feature.” The result is still a logical statement, so the link holds (the sketch after these guidelines applies the same test mechanically).

  • Don’t skip steps or make logical leaps. Move one step at a time so you don’t miss anything, and don’t fill in the blanks with assumptions.
  • Base your conclusions on an examination of the process, not the people, and take the customer’s point of view.

For example: The root cause in the example problem is not “the customers don’t care about calendars, so they didn’t notice it.” The root cause of the customer’s issue was simply that they hadn’t realized the feature had been introduced — because the notification system needed to be improved.
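To make the reversal test from the guidelines above concrete, you can record each link of a why-chain as an (effect, cause) pair and print both readings. A minimal sketch; the chain is the hypothetical notification example from earlier:

```python
# Each link of the why-chain is an (effect, cause) pair.
# The chain below is the hypothetical notification example from the text.
chain = [
    ("users didn't realize there was a new feature",
     "the notification wasn't obvious enough"),
    ("the notification wasn't obvious enough",
     "the symbol blends in with the rest of the menu"),
]

# Print each link both ways; if the "so" form still reads logically,
# the cause-and-effect link is probably solid.
for effect, cause in chain:
    print(f"{effect.capitalize()} because {cause}.")
    print(f"{cause.capitalize()}, so {effect}.\n")
```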


This method isn’t foolproof, but it works well as long as you’re aware of the potential pitfalls. Knowledge gaps are one risk factor: many people will fail to identify the root cause of a problem if they don’t have enough information to diagnose it, which is why teamwork is recommended. Stopping at the symptom level instead of the root cause is another potential issue. You can avoid it by seeking verification for your answers at every step and continuing to ask “why?” until you have reached a logical root cause.

There’s also the potential failure to find all root causes if there are multiple issues contributing to the problem. There are ways to avoid this trap, though, such as drawing an Ishikawa diagram, outlined below.


Ishikawa Diagrams


A simple fishbone-shaped diagram will help you visualize the possibility of multiple root causes contributing to your problem and organize your flow of ideas. The problem or defect forms the spine of the fish, with potential causes branching off as the ribs. You can add sub-branches for deeper causes, with as many levels as you need to reach the real root. Plotting issues and hypothetical root causes visually can help inspire ideas.
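The structure maps naturally onto nested data, which can be handy for sharing a fishbone as text. A minimal sketch; the problem, ribs, and causes are all hypothetical:

```python
# A fishbone diagram as nested data: the problem is the spine,
# categories are the ribs, and causes (with optional sub-causes) branch off.
fishbone = {
    "problem": "Low adoption of calendar integration",
    "ribs": {
        "Notifications": {
            "symbol not visible": ["blends into menu", "not bold enough"],
        },
        "Onboarding": {
            "feature not announced in-app": [],
        },
    },
}

# Print the diagram as an indented text outline.
print(fishbone["problem"])
for category, causes in fishbone["ribs"].items():
    print(f"  {category}")
    for cause, sub_causes in causes.items():
        print(f"    - {cause}")
        for sub in sub_causes:
            print(f"        * {sub}")
```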


[Image: Ishikawa (fishbone) diagram]

Like 5 Whys, this exercise is best done in teams. Also like 5 Whys, fishbone diagrams aren’t perfect (what is?)—they can sometimes fail to recognize relationships between causes, and are also limited by potential knowledge gaps in the team. Despite these issues and its occasional tendency to oversimplify, it has become one of the most famous quality control tools over the decades due to its ease of use and effectiveness. Part of this method’s strength is in its versatility; it can be used to diagnose issues both technical and behavioral.

This visual brainstorming aid was popularized in Japan in the 1960s by Kaoru Ishikawa at the Kawasaki shipyards. For a powerful example of the technique in action, check out this in-depth look at fishbone diagrams, focused on a detailed IBM case study. It’s even said that Mazda engineers used these diagrams to aid in the design of the sleek, popular MX-5 Miata sports car, continuously in production for over 30 years.

Though imperfect at times, 5 Whys and fishbone diagrams are powerful and simple diagnostic techniques that can help you gain more insight into addressing your challenges. All feedback analysis techniques require your human intelligence to close the gap between information and actionable insights.


Multiple Choice Questions

Giving your users multiple-choice questions is a great way to collect straightforward feedback because it’s low-effort for them (so you’ll get more responses), and it forces your users to state their priorities when given predetermined options. Instabug automatically plots your survey results in a pie chart for a quick visual breakdown of your responses.

The most important step you can take to get a deeper look at this type of feedback comes before the questions are asked, not after. Be strategic about how you frame your questions, and ask meaningful ones that will yield actionable answers. Choose your target audience and timing instead of blasting a survey out to everyone.

You can weigh the answers according to their importance to you. If you ask the same survey question to multiple segments, check out how responses are distributed within segments rather than averaged across all groups. You may discover interesting trends about how people are using your product differently.
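For instance, here’s a minimal sketch of comparing answer distributions within segments rather than pooling everything; the segments, choices, and counts are made up:

```python
from collections import Counter

# Hypothetical multiple-choice answers keyed by user segment.
answers_by_segment = {
    "free": ["Reminders", "Reminders", "Calendar", "Reminders"],
    "pro":  ["Calendar", "Calendar", "Sharing", "Calendar"],
}

# Compare distributions within each segment instead of one overall average.
for segment, answers in answers_by_segment.items():
    total = len(answers)
    dist = Counter(answers)
    print(segment, {choice: f"{count / total:.0%}" for choice, count in dist.items()})
```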

For a well-rounded approach that’s both quantitative and qualitative, complement this type of data with free-form feedback from your customers. Additional comments can provide context and clarifying examples, especially when you’re presenting your data to your team or stakeholders.

NPS Surveys


Ah, the Net Promoter Score survey. Such a simple, widely used question that yields powerful results. It’s generally seen as the primary metric for measuring customer loyalty. We’ve all seen it at some point or another: “on a scale from 0–10, how likely are you to recommend this app to your friends?”

Those who answer 9 or 10 are called promoters, 7 or 8 are considered passives, and those who rate you between 0–6 are your detractors. Rather than being an average out of 10, your total score could be anywhere from -100 to 100.


So what’s a good NPS score?

Well, that’s relative. But to find out, you’ll first need to calculate it. If you’re using Instabug, this step is done for you automatically and your score is displayed in the analysis tab. If not, it’s quite easy to do yourself: just subtract the percentage of detractors from the percentage of promoters. Generally speaking, any resulting number above zero is considered good. But people tend to answer this question differently across industries, so for some companies, a zero just isn’t going to cut it, while for others, a score of -5 might make them industry leaders. The average NPS in the software industry was 31 in 2018. Where do you fit in?
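The arithmetic fits in a few lines. A sketch, assuming you have the raw 0–10 scores in hand (the sample scores are made up):

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Hypothetical batch of responses on the 0-10 scale:
# 4 promoters, 2 passives, 2 detractors -> 100 * (4 - 2) / 8 = 25.
print(nps([10, 9, 9, 8, 7, 6, 3, 10]))  # 25
```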

The first thing you might be tempted to do is compare your NPS score to your industry’s benchmarks. In fact, that’s what we just did. But in order to make a fair comparison, you’ll need to pit your results against other scores that are relevant to your app category, location, and customer segment. A simple industry-wide benchmark could be misleading, as NPS averages can vary for any one of these factors; some countries even average 10 points lower on all NPS scores, regardless of the industry!

An NPS survey typically doesn’t end there, though. Many companies will ask their customers a follow-up question or send them to the app store for a rating. If you’re using Instabug, we apply logic to route your users to conditional follow-up actions after they rate your app: by default, your promoters will be asked if they’d like to leave you a review in the app store, while passives and detractors will be asked what you could do better. The exact wording can be changed in your survey settings, but this approach is extremely effective in raising your app store rating and controlling churn.

One of the most important steps in the NPS survey process comes after the feedback is received. Don’t forget to close the loop. Instabug allows you to write back to your users straight from the dashboard, so you can dig deeper into their thought processes and let them know that you care about their experiences with your product. By continuing to ask questions, you’ll unlock insights that your customers didn’t realize you needed.

A popular variant of the NPS question is: “How would you feel if you were no longer able to use this app?” If enough people answer with “very disappointed,” you’ve probably got a winner. Prominent growth hacker Sean Ellis puts this figure at 40%. Whatever the number, it’s a good idea to pay close attention to the segment that answered “very disappointed.” These are the people who love and are invested in your product. Weigh their opinions more heavily than those who wouldn’t notice if your app disappeared tomorrow.

Since NPS questions come in two parts (the rating and the follow-up question), don’t forget that you can apply the previously mentioned textual analysis to these answers as well. Instabug will capture the main themes from your NPS responses in a word cloud. You can also apply the 5 Whys here to identify the root causes of the issues your customers are experiencing. It’s also worth cross-referencing individual survey responses with other feedback events from the same user: if they’ve reported a bug, suggested a feature, or sent you other commentary, those events may well have influenced their NPS response.

Impact-Effort Matrices


Everyone’s got opinions; your app users are no exception. You’re certain to get feedback with ideas, and some of those ideas will be good. Some might even be great!

Instabug’s feature request function captures your users’ suggestions in one place and lets other users vote and comment on them (comments can be turned off in your preferences). You can sort the results by number of votes, last activity, or keywords. You can also see some information about the person who posted the feature request: their User ID number, profile, and any attribute tags you’ve applied (great for segmentation!).

So you’ve received some requests and discovered that some ideas have garnered a lot of votes. What’s next? How do you decide what to build and when? Should you implement the most popular idea?

Well, not necessarily. You can take this analysis deeper than just counting votes.

If there are a few ideas that stand out to you, you can create an impact-effort matrix to decide which ones are worth implementing.

This exercise is best done in a group, so you can gather as many perspectives and as much information as possible. Together, work through your “maybe” list, one feature at a time. First, decide how much effort it will take to implement the idea. Some factors that will go into this might include cost, manpower, complexity, resources, and so on.

Then decide together how impactful the new feature will be. Did it earn lots of votes? Is it something many people have asked for? Are the people who are voting for this feature in your target segment? All of these questions and more can be worked into this exercise.

You can probably see where this is going. Draw a box with four quadrants, plotting Impact on one axis and Effort on the other. Once you’ve determined your two variables for each feature, plot them on your chart.
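Once the team has agreed on scores, even the plotting can be reduced to a quadrant lookup. A minimal sketch; the features, scores, and quadrant labels are all hypothetical:

```python
# Features with team-agreed (impact, effort) scores on a 1-10 scale.
features = {
    "dark mode": (8, 3),
    "calendar sync": (9, 8),
    "custom themes": (3, 7),
    "export to CSV": (4, 2),
}

def quadrant(impact, effort, midpoint=5):
    """Classify a feature into one of the four matrix quadrants."""
    if impact >= midpoint:
        return "quick win" if effort < midpoint else "big project"
    return "fill-in" if effort < midpoint else "time sink"

for name, (impact, effort) in features.items():
    print(f"{name}: {quadrant(impact, effort)}")
```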


[Image: impact-effort matrix]

This simple team exercise is an easy, effective way to sort through the noise among all these ideas, and make prioritization decisions crystal-clear. Choosing what to add to your features roadmap next doesn’t need to be a complicated process.

Don’t forget to pay attention to both positive and negative feedback. The positive feedback you receive will boost your team morale and motivate them to keep doing well. It also reinforces your roadmap decisions. The negative comments will give you an opportunity to reach out with Instabug (or your preferred feedback tool), close the feedback loop, identify problem areas, and reduce churn.
