One of the most important steps in the mobile app development process is gathering qualitative user feedback. It’s an ongoing task that’ll stay with you through all stages of your app’s development. And it can be challenging, but it comes with a huge payoff.
Getting in touch with the customer’s point of view is crucial to building an app with happy users. And for product owners and designers, gaining that empathy is essential to your success. Once you’ve heard enough opinions, you’ll need to use them to make informed decisions about your product.
But how do you collect qualitative feedback efficiently and effectively?
How do you ask the right survey questions?
How do you make quantitative sense out of words and opinions?
If you have questions about this process, don’t worry. We’ve prepared a detailed manual covering how to handle your qualitative feedback process from beginning to end.
In this guide, you’ll learn:
- Which sources of feedback are best for building mobile apps
- How to collect feedback efficiently
- How you can optimize your process with a feedback tool
- How to design effective surveys
- How to ask actionable feedback questions
- How to analyze qualitative feedback
- How to make data-driven roadmap decisions
Qualitative Feedback Manual Contents
Types of Qualitative Feedback You Need for Your Mobile App
Part of the challenge of building a mobile app is creating a product that’s both useful and beloved by your users. So how do you discover what your customers want? It all starts with research. From gathering audience statistics to asking for user feedback, data collection will help you make smarter and better-informed decisions.
But information comes in many forms. What kind of data should you be going for? Let’s find out.
Before we get down to choosing what type of feedback you’re going to gather, let’s talk about the definitions for qualitative and quantitative data. Most of us will have some vague recollection that quantitative data is about numbers and qualitative data is about descriptions. But let’s dig deeper.
What are the differences between qualitative and quantitative data?
Quantitative information is what most of us think of when we hear the word “data.” Numbers, percentages, and charts might come to mind. And that’s pretty accurate. Quantitative data is all about the “whats” of your situation: the numerical, quantifiable aspects of the item you’re researching. This type of raw and objective data is easy to record, categorize, and represent visually.
Typically, people (your stakeholders in particular) gravitate towards quantitative data because of how clear and objective it seems. The fact that it’s easy to visualize in charts and graphs is another reason it’s so powerful—we humans are visual creatures, after all. But without context, quantitative data can be prone to misinterpretation, which is why it isn’t always as objective as it appears.
Qualitative feedback reveals the “whys” and “hows” behind the numbers. This type of information is expressed with language, not numbers. It’s descriptive and in the case of user feedback, primarily subjective. Qualitative feedback could include comments, anecdotes, suggestions, or complaints; emotionally-driven responses that will deliver the context you need to make informed decisions.
Here’s a quick cheat sheet with key points for each type of data:
| Quantitative User Research | Qualitative User Research |
| --- | --- |
| Expressed numerically | Described with language |
| Measurements | Emotions + thoughts |
| What happened | Why it happened |
Why do I need both types of data?
Qualitative data is sometimes described as being “fuzzy” data due to how subjective it can get. Yes, it’s often emotionally-driven and difficult to quantify. Is it a daunting task for number-oriented people? Sometimes!
There’s a modern proverb that goes like this: “The plural of anecdote is not data.”
You’ve probably heard that one before. But next time you do, remember this: frequency matters. If you hear similar comments from numerous users, you’re going to infer that there is a trend. And sometimes all it takes is one particularly helpful comment to unlock hidden potential in your app.
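One lightweight way to act on that is to tag each comment with the themes it touches and tally them; themes that recur across many users point to a trend. A minimal sketch in Python (the comments and tags below are invented for illustration):

```python
from collections import Counter

# Hypothetical sample: each piece of qualitative feedback has been
# tagged with one or more themes during review.
feedback_tags = [
    ["onboarding", "confusing"],
    ["dark-mode"],
    ["onboarding", "slow"],
    ["dark-mode"],
    ["checkout", "crash"],
    ["dark-mode"],
]

# Count how often each theme appears across all comments.
theme_counts = Counter(tag for tags in feedback_tags for tag in tags)

# Themes mentioned by more than one user suggest a trend worth investigating.
for theme, count in theme_counts.most_common():
    if count > 1:
        print(f"{theme}: mentioned {count} times")
```

Even this tiny tally makes the "frequency matters" point concrete: a theme raised by three separate users deserves a closer look than a one-off remark.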
Quantitative data and qualitative feedback are better together. They’re like peanut butter and jelly. And when you combine both types of user feedback, you’re going to make a really insightful sandwich. Understanding the “whys” behind user interaction with your product will go a long way towards influencing the “hows” of your next steps.
A Real-Life Example
Imagine you’ve just released a new feature for your app. Your developers have really poured their hearts into this one, and you’re convinced it’ll be the next big thing. But weeks pass after the update and you see that the adoption rate for this feature is lower than expected (that’s quantitative data!). The numbers tell you something is wrong, but not what. Qualitative feedback (survey responses, in-app comments, feature requests) reveals the why: maybe users can’t find the feature, or maybe it doesn’t solve the problem you thought it did. That context is what turns a worrying metric into a clear next step.
What kinds of qualitative research methods can I use for my app?
There are many ways you can reach out to your users, both in-app and externally. Gathering multiple sources of information, using multiple methods, ensures that you’re not accidentally missing out on part of your user base. First, let’s look at the types of qualitative feedback you can gather straight from the app using third-party software such as Instabug.
In-App Qualitative Feedback Using Instabug
In-App Surveys: You can send custom surveys and choose to receive either multiple choice answers or open-ended text responses. You can also send Net Promoter Score (NPS) surveys (“On a scale from 0-10, how likely are you to recommend this app to a friend? Why?/What could we do better?”). The first part of the NPS survey (the score) is a quantitative metric, but the second part (the follow-up question) is entirely qualitative. All of these survey types are valuable for gaining insights about your users and target audience. For a guide to asking effective survey questions, see PART 4, In-App Survey Question Templates.
Feature Requests: Your users can propose new features and vote on those suggested by other users in your app. If you allow commenting (it builds a sense of community, though some businesses prefer to disable it), you’ll also hear their opinions and reasoning. You’ll definitely score a few new ideas, but there’s more. This type of feedback is helpful to compare to your roadmap: How do your plans compare to how your customers use and perceive your product?
User-Generated Bug Reports: If you’ve ever seen a bug report from Instabug, then you know how data-heavy they can be. But aside from all the technical information, they’re also a source of valuable qualitative feedback. Your users can send written messages and attach screenshots, video recordings, and voice notes. Bug reports are not only a great way to cut down on the time your mobile developers spend debugging—they’re also loaded with information about how your users interact with your app, not to mention their comments and concerns.
In-App Feedback: These user-generated comments function the same way as bug reports and come with the same contextual data like device and session details, but they have the added benefit of being bucketed as a separate category of feedback from the start. When users shake their phones (or invoke Instabug another way), a menu appears where they can select to “Report a problem” (submit a bug report) or “Suggest an improvement” (share general feedback). These comments from real users will spark new ideas, make you think twice about your assumptions, and give you more information to make better product decisions.
In-App Chat: Your customer support team is one of your biggest assets when it comes to exploring the hearts and minds of your users. In-app chats allow you to send follow-up questions to your survey respondents or bug reporters. You can ask for more details or simply have a conversation that reveals more about their needs and concerns. A two-way communication channel empowers you to investigate interesting comments, dig a little deeper into user motivations and their perception of your product, and identify root causes of issues. Actively engaging with your user base is important not only from a customer satisfaction perspective but from a research perspective as well. Remember that your users may not always know what information you need, but by guiding the conversation, you can unlock insights with details they didn’t realize were important.
External Qualitative Feedback
All of that chatting and reporting happens during the live and beta phases of your product. But does that mean that all your qualitative research should come from within the app? Definitely not—let’s look at some other avenues to consider when building a well-researched product.
Focus Groups: This method can be expensive, but if you start with a careful sampling of users and well-defined goals before each meeting, you’ll gain invaluable insights. You can also bypass market research companies and gather your own focus group made up of friends and family. At any time during your product lifecycle, you can use these meetings to ask about user needs, their perception of your app, your messaging and strategy, and where you can improve. Focus groups help you gather information you can’t get from individual responses alone. You’re more likely to hear from people who communicate face-to-face more effectively than typing, and they’ll also interact with each other and build on each other’s ideas during the conversation.
User Testing: In addition to the group conversations above, you can have participants test your app live and think out loud while using it so that you can understand their thought process and emotional reactions at every step of their journey. You can test specific flows end to end, including new experiences that you’re thinking about building or variations of existing flows. Then at the end of each test, they can fill out an in-app custom survey to answer your specific questions about their experiences.
Interviews: Dig into your users’ experiences through in-depth conversations with individuals using your app. By interacting one-on-one with your users, you’ll learn their stories and gain insights into their relationship with your product from the very beginning.
Expert Opinions: There’s no reason not to get on a first-name basis with experts in your field. If you know someone with domain knowledge relevant to your product, don’t settle for just visiting their blog for ideas. Schedule meetings with people you look up to if you can, and ask them plenty of open-ended questions. You’ll want to do this continually as you grow, but it’s especially relevant when you’re just starting.
There are numerous methods to conduct qualitative research. But they all have one thing in common: they try to reach the hearts and minds of your users. Analyzing this data requires strong empathy skills and solid logic—but is it hard?
It doesn’t have to be! This guide will walk you through the key concepts of survey creation, targeting, data collection, and analysis. In PART 5, we’ll look at some simple yet strategic ways to unpack all your data.
How In-App Feedback Supercharges Your Research Process
In the last section, we learned about the types of qualitative feedback that can help influence your app development process. Now let’s talk about how to get it.
What’s the best way to hear from your audience?
Chances are you’re reachable via plenty of channels—emails, social media, blog comments, and more. Thanks to the internet, we’re at no loss for finding ways to communicate.
But there’s a better answer. A mobile app feedback tool is about to become your indispensable research partner. In-app feedback will help you nail your product positioning and move your roadmap in the right direction, faster.
So let’s take a look at how in-app feedback will bring you closer to your goals.
1. You’ll get more and better feedback—quantity and quality
To your users, an in-app feedback tool is the path of least resistance between you and them. Typical response rates for mobile web surveys are as low as 1-3%—opening a new window to fill out a survey is simply more effort than the average user is willing to make, if they even open your email to begin with.
But when you reduce the barrier to participate and eliminate extra steps that your users have to take to fill out the survey, the number of responses you receive increases. In fact, Instabug users report survey response rates as high as 50% and, in general, receive over 750% more user feedback after installing the SDK.
When a survey or request for feedback is quick, easy, and painless, your users are much happier to answer your questions. Well-timed and placed prompts make it easy for your users to respond, which widens your data pool. And the data you’ll get won’t only be higher in quantity—it’ll also be better quality.
2. You’ll understand your customers’ pain points better
Knowing your customer is crucial to building a successful app beloved by its users. The easiest way to understand their needs and desires is by speaking directly to them. You’ll probably ask your sales and customer support teams for insights about your users, but for a sophisticated and well-rounded understanding, you’ll want to conduct your own research.
In many cases, you’ll find that the answers you get cover points that are overlooked by the product team.
To start connecting with your users, you can ask open-ended survey questions and allow your users to respond. This is a low-friction, easy way to collect qualitative feedback about what you can do better and how you can do it. Because native in-app surveys are quick and unobtrusive, you are less likely to lose respondents than through other forms of outreach. You’ll get a deeper look at how to ask actionable user feedback survey questions in PART 4: In-App Survey Question Templates.
For a more quantitative approach, you can send Net Promoter Score (NPS) surveys or questions with numerical scales, like 1-5 star ratings, which are more delightful for your users than plain numbers but still quantifiable for you. When you use these types of questions, you get measurable feedback that’s simple to analyze and share with your team members and stakeholders.
Instabug’s NPS surveys do the dual duty of collecting data and helping you build your reputation: you choose the segment and timing, and promoters will be directed to the app store to write you a positive review. Detractors will be asked for improvement suggestions, rather than leaving you a negative review. You’ll learn more about NPS surveys in PART 5: How to Analyze Your User Feedback.
3. You’ll have happier, more engaged users
This goes hand-in-hand with understanding your customers’ needs. In the process of learning about what problems they’re trying to solve, you’ll also build personal, emotional connections with your users.
With in-app chats, you can reply to your survey respondents and get them to reveal their concerns and underlying motivations. A two-way communication channel will make your users feel acknowledged and valued. You’re giving them a chance to feel like their voice has an impact on the product, and this is a powerful driver for boosted satisfaction and engagement rates. You’ll connect more quickly with your users in-app, and can easily follow up on their comments and dig deeper for answers. Fast, seamless communication sends the message loud and clear: We care about what you have to say.
If your feedback tool includes the ability to suggest and vote for features, it will also build a sense of community around your product. Instabug’s feature request system, for example, introduces voting and (optional) comments to the mix, so your users feel involved, noticed, and can share ideas.
One other important benefit of two-way customer feedback is that your team can lasso problems before they start, calm unhappy customers, and collect their complaints before they slam your product with negative reviews.
A proactive communication strategy not only controls and reduces churn; it makes your product stronger.
4. You’ll see patterns faster with organized data
One of the biggest challenges associated with collecting user feedback (other than the task of getting it!) is picking out patterns from multiple sources of input. A good feedback tool will streamline your analysis process. You’ll get clear, uniform answers to specific questions, and they’ll be organized and easily accessible—without extra work on your part. No more compiling spreadsheets from various disconnected feedback channels or trying to find patterns in jumbled data.
Not only will a unified feedback channel tame the chaos of a disorganized backlog, it’ll also get you more usable data. When conducting research, uniform data obtained under the same conditions is always more reliable. You’ll measure your success more accurately with easy-to-parse results. When you send surveys, you’ll receive standardized answers in the format you specify, and spend less time combing through unrelated sources trying to piece information together. It’ll be easier for you to find trends in your customer needs by identifying patterns in their responses.
A feedback tool will also help keep your team on the same page. For example, Instabug’s dashboard allows for multiple team members to access and tag feedback with notes and status. You’ll all be able to effortlessly keep your eyes on the big picture by keeping your data grouped and trackable.
5. It will help you continuously validate your product strategy
Once you’ve collected your data, you’ll need to analyze it (you’ll learn more about processing your data in PART 5: How to Analyze Qualitative Feedback). Product strategy shouldn’t be based on speculation—continuous feedback allows you to avoid that pitfall. Feedback from surveys and feature requests give you opportunities to regularly review your strategy.
You can ask yourself the following questions:
- Are these responses in alignment with my original product vision?
- Are my customers using the product’s features for their intended purposes?
- Is the customer perception of my product in line with its positioning?
If any of the answers you get surprise you, then you’ll know it’s time to adjust your product strategy and either make some small changes to your roadmap or even consider pivoting. When you use the right feedback tool, you’re empowering yourself to make informed choices about what should be done and when; all based on real data. Collecting user feedback early on allows you to test hypotheses and reach conclusions before building a feature no one wants. This is how messaging app Kik does it.
A mindset of continuous discovery allows you to stay on top of what your users are thinking and feeling. Running quick, easy-to-analyze qualitative feedback experiments between releases allows you to iterate faster. You’ll be continuously validating your product during the discovery phase, a faster and cheaper choice than waiting until a release to collect feedback. You’ll also avoid the nightmare of working hard to build a specific feature, only to discover that your users don’t want or need it.
6. You’ll capture details your users aren’t directly telling you
The more powerful your feedback features, the less challenging it is for you to collect data that matters. When you use a convenient user feedback platform like Instabug, you’ll also glean insights into how people are using features within your product.
You can reply individually in-app to any user who responds to a survey or sends a bug report or feedback. Sometimes you may receive an answer that intrigues you. You’ll be able to respond to the user, dig deeper, and look for the underlying motivation behind a response. This simple proactive act of communication not only makes your customers feel heard; it also helps you uncover the truth about what they want.
Now that you’re acquainted with the benefits of installing a mobile app feedback tool, let’s learn more about how to create effective surveys that get you higher response rates and actionable answers.
Best Practices for In-App Survey Creation
In-app surveys are an essential tool for collecting user feedback for customer validation and customer-driven development. The more you interact with your app users and understand their needs, the better you know what to build and how.
Why In-App Surveys?
Surveys help product owners proactively get qualitative feedback from their users. One of the biggest challenges with collecting user feedback is low response rates, with some survey response rates cited as low as 1-3% for mobile web surveys.
In-app surveys aim to tackle this challenge by reducing friction through a native experience. Some Instabug users report response rates as high as 50%. In-app surveys reduce the number of steps users need to take to complete the survey and eliminate the need for users to leave the app in order to submit their feedback. Keeping people in your app increases the chance that users will participate in the survey, and it also has the added benefit of increasing user engagement.
Another factor that has an effect on survey response rates is audience targeting. With a customizable, in-app survey tool like Instabug, you can target specific segments of your app users based on various criteria, such as app version, event completion, or other custom attributes.
Benefits of In-App Surveys
In-app surveys are versatile and their insights can be valuable for a range of teams, including product, customer success, and marketing. Survey responses can help guide product development, measure customer satisfaction, and shed light on your user personas.
Here are some of the benefits of using Instabug’s in-app surveys:
- Get customer validation for specific features.
- Help prioritize your product roadmap.
- Identify user pain points and come up with ideas for new features.
- Gather insights about the user experience of your app.
- Understand app usage and adoption of specific features.
- Identify unhappy users, prevent churn, and boost user retention.
- Identify loyal users and drive five-star reviews.
- Optimize your monetization and pricing strategy.
- Optimize your conversion rates.
- Conduct pre-release market research.
- Let your users know that their feedback matters.
Tips to Boost Response Rates for In-App Surveys
In order to draw reliable conclusions about your app users from your surveys, you’ll need a representative sample of responses. The more responses you receive, the stronger the insights will be. Here are some tips to help encourage participation.
- Don’t prompt new users too soon.
- Don’t prompt users in the middle of completing a task in your app.
- Don’t prompt your users when they have just experienced an app event that could be considered negative.
- Don’t be too pushy. Instabug’s in-app surveys are displayed to users a maximum of two times, don’t block the app’s UI, and are easily dismissible with a downward swipe, so they don’t disrupt the user experience.
Send your surveys to specific segments, or use your surveys to segment your users. For example:
- If you’re asking about your app’s shopping cart, only survey the segment of users who have made a purchase.
- Send an NPS survey to all users, then segment your users based on their responses. With Instabug’s NPS Surveys, Promoters are asked to rate the app in the app store, while Detractors are asked for feedback about how to improve. This catches negative comments before they end up in app store reviews and gives you a chance to fix any issues.
Shorter is better. The longer the survey, the more likely your respondents are to drop off. Multiple, focused surveys sent to relevant users and spread out over time are more likely to get higher response rates than one long survey sent to all of your users.
Best Practices for In-App Survey Design
In addition to asking the right questions, there are certain dos and don’ts that you should keep in mind about the structure of your surveys. Here are some best practices.
- Have a clear objective to keep your surveys focused.
- Use mutually exclusive multiple-choice answers.
- Allow “other”, “N/A”, “neutral”, and “not sure” answers.
- Start with an easy question to encourage users to continue with the survey.
- Ask the most important questions at the beginning in case users drop off before your survey ends.
- Avoid leading questions.
- Avoid subjective questions (unless you want subjective answers).
- Avoid overly general questions (with the exception that sometimes you’ll want to ask open-ended questions).
Commonly-Used Survey Formats
Your surveys can take many forms, from a quick yes/no to a series of questions that get progressively more complex. The format you want will depend on the audience you’re targeting and the depth of feedback you expect. Here are some options:
Text Field
Use this for open-ended questions, usually general or catch-all questions like, “What would improve your experience with this app?”
Multiple Choice
Use this when you want users to select only one answer from a list of possible choices, such as a numerical ranking.
Star Rating
Ask your users to rate your app or an aspect of it from 1 to 5. For example, “How would you rate your last purchasing experience?” Use this to get feedback from your users with the least friction possible. This works best when you’re not really concerned with details, but you want as much participation as you can get.
Take it one step further and identify those users who replied with five stars, then ask them to rate your app in the app stores. Or have your customer success team contact those users who replied with one star to see how you can prevent them from churning.
Multi-Question Surveys
Use this when you want comprehensive and detailed qualitative feedback from your users.
Net Promoter Score (NPS) Surveys help you measure your users’ level of satisfaction and loyalty on a scale of 0-10.
By default, Instabug’s NPS Surveys consist of two questions, and you can also adjust the wording:
“How likely are you to recommend [App Name] to a friend or colleague?” (0-10 indicating least to most likely)
“How can we improve?”
Those who respond with 9 or 10 are your “Promoters”, loyal customers who like your app enough to recommend it to other people. To help you drive five-star reviews, we prompt these users to review your app in the app store.
“Detractors”, those who respond with 0-6, are typically not satisfied with your app. We ask these users the second question about how to improve in order to catch negative comments before they reach the app stores.
“Passives” are those who respond with 7-8. We also ask them the second question. We recommend that you target these users and aim to turn them into promoters as quickly as possible in order to retain them and prevent them from becoming detractors.
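Scoring the results is straightforward once responses come in: NPS is the percentage of promoters minus the percentage of detractors, so it runs from -100 to +100. Here’s a minimal sketch (the response batch is made up for illustration):

```python
def nps(scores):
    """Compute a Net Promoter Score from 0-10 responses.

    Promoters score 9-10, passives 7-8, detractors 0-6.
    NPS = % promoters - % detractors, so it ranges from -100 to +100.
    """
    if not scores:
        raise ValueError("no responses to score")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Hypothetical batch of responses: 4 promoters, 2 passives, 2 detractors.
responses = [10, 9, 9, 8, 7, 6, 3, 10]
print(nps(responses))  # (4 - 2) / 8 * 100 = 25.0
```

Note that passives count toward the total but not toward either percentage, which is why converting them into promoters moves your score twice as fast as it looks.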
Targeting is an essential part of the survey process: it matters not only that you ask the right questions, but that you ask them to the right people at the right times. For example, Instabug offers manual targeting to give you the most control over the SDK. With manual surveys, you can show in-app surveys programmatically, however and whenever needed, through a token.
Define one or many rules in the Instabug dashboard with default and custom conditions. With targeted surveys, you have two options:
Auto Show: Any user satisfying any or all of the default conditions you set will be prompted with the survey after 10 seconds.
Manual Show: Any user satisfying any or all of the custom conditions you set will be prompted with the survey. These conditions include:
- Application Versions: Target users running a specific version of your app. For example, use this when your survey is related to a new feature added to the latest version of your app.
- User Emails: Target specific users.
- User Attributes: Instabug allows you to add custom attributes to help categorize users. For example, you can add custom attributes specifying different interests. With targeted surveys, you can send them only to users who are interested in certain topics.
- User Events: Instabug allows you to add custom events. With targeted surveys, you can trigger them to appear after specific custom events. For example, when a user adds items to their cart and then cancels, you can send them a survey asking why they canceled.
Select Manual Show when you want to have full control from your application’s code. Published surveys have unique tokens that you can use in your code to specify which surveys should be shown when. For instance, you can show a survey on certain pages or views.
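The exact call depends on your platform SDK, so the sketch below is platform-neutral and every name in it (the `show_survey` stand-in, the token, the screen names) is hypothetical. It illustrates the kind of gating logic you might wrap around a survey token: show a survey only on a specific view, at most once per session:

```python
# Hypothetical sketch of token-based manual survey triggering.
# `show_survey` stands in for your platform SDK's call; the token and
# screen names are made up for illustration.

CHECKOUT_SURVEY_TOKEN = "example-token"

shown_tokens = set()  # track per-session so a survey appears at most once

def show_survey(token):
    print(f"showing survey {token}")

def on_screen_opened(screen_name):
    # Only trigger the survey on the post-checkout screen, and only once.
    if screen_name == "order_confirmation" and CHECKOUT_SURVEY_TOKEN not in shown_tokens:
        shown_tokens.add(CHECKOUT_SURVEY_TOKEN)
        show_survey(CHECKOUT_SURVEY_TOKEN)

on_screen_opened("home")                # no survey
on_screen_opened("order_confirmation") # survey shown here
on_screen_opened("order_confirmation") # not shown again this session
```

The design choice worth copying is the once-per-session guard: even with full programmatic control, a survey that reappears on every visit quickly becomes the pushiness the earlier tips warn against.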
Getting valuable qualitative feedback from in-app surveys takes more than just increasing response rates and targeting the right users. You also have to ask the right questions. In the next section, we’ll look at how you can get more useful answers from your testers and users.
In-App Survey Questions and Templates
When it comes to collecting qualitative feedback, in-app survey questions are one of the most effective tools you can use. They introduce very little friction to your users while allowing you to ask for specific feedback. Moreover, research has shown that more than half of mobile users expect companies to directly ask them for feedback.
However, the amount and quality of feedback you receive from surveys will depend on the quality of those surveys. Knowing what, how, and when to ask is crucial. In this section, we will look at some general guidelines for designing survey questions, followed by some templates for common questions.
Before the In-App Survey Questions
The Goal of the Survey
The first order of business is to determine what you are trying to learn from this survey. This will help you decide what types of questions to use in your survey.
When you have enough testers, it is considered good practice to segment them and send more targeted surveys. However, keep in mind that a small target sample can’t provide you with quantitative data, and qualitative data from a large one can be too much to analyze. Besides the size of the sample, also consider the type of people in the sample. Things like age, gender, technical level, etc. will change how you should frame your survey questions.
If you are going to send a survey to only a subset of your beta testers, it is important to avoid bias in selecting the sample. This bias is introduced when the sample is not representative of your larger target population, like surveying power users to represent all testers for example. A randomized selection is the best approach to avoid this type of bias.
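In practice, a simple random draw over your full tester list is enough to avoid selection bias. A quick sketch (the tester IDs are made up; the fixed seed is only so the example is reproducible):

```python
import random

# Hypothetical list of beta tester IDs.
testers = [f"tester-{i}" for i in range(500)]

# Draw a simple random sample so every tester has an equal chance of
# selection, instead of surveying only power users or early sign-ups.
rng = random.Random(42)  # fixed seed purely for reproducibility in this sketch
sample = rng.sample(testers, k=50)

print(len(sample))  # 50 distinct testers, drawn without replacement
```

If your tester base has known strata (say, free vs. paid users), sampling each stratum separately in proportion to its size is a reasonable refinement of the same idea.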
In-App Survey Question Writing Guidelines
Open-Ended vs. Closed-Ended Questions
Open-ended questions are good for qualitative feedback and exploratory research to find out what your testers think. They are also a good way to dig deeper into a specific area when coupled with a specific question. Moreover, they can generate useful feedback and insight from a small sample. However, they are unwieldy and time-consuming, both for your testers to submit and for you to go through, so don’t overuse them.
Closed-ended questions are good for quantitative feedback and for answering specific questions you might have. This quantifiable data shows you the “big picture” and is a good way to reveal trends and patterns in your app’s use. Additionally, they are the least time-consuming and offer your testers the least amount of friction to complete the survey.
Wording and Language
Ask only one thing per question, and keep your questions short and precise. Avoid using ambiguous wording, negatives, and double negatives as they can confuse your testers. Additionally, limit the number of questions you send per survey to around five questions to boost completion rates.
Use simple language and avoid using any technical jargon that might fly over your testers’ heads. Try to avoid using acronyms and abbreviations in your survey, or spell them out when you do. Additionally, don’t use language that might evoke emotion or lead your testers to a specific answer, to avoid biased results.
Giving a Choice
Avoid yes-or-no questions and other dichotomies in favor of a scale. This will give you more nuanced feedback and reveal more insights. Generally, it is good practice to use an odd-numbered scale that has a middle or “neutral” choice, but you can use an even-numbered scale to force a positive or negative choice when needed.
The answers you provide must be exhaustive and mutually exclusive, i.e. they should cover all the possible answers with no overlap between the choices. If you can’t cover every possibility, you can include an “other” option, with or without requiring testers to specify. For questions that can have more than one answer, consider limiting the number of choices testers can select. This will force them to prioritize their answers and give you a better idea of what matters most.
Here are a few quick tips to get more useful answers:
- Use multiple choice answers to make it easier for your users to complete the survey and increase response rates.
- Use numerical answer scales for more precise feedback.
- Ask specific questions to get specific answers.
- Aim for negative feedback to identify your app’s weak spots.
In-App Survey Question Templates
You can use your survey to collect feedback about many different aspects of your beta app. Here we will list some survey question templates sorted by what they are trying to address.
Star Ratings (1-5 stars)
How would you rate…
- …this app?
- …our customer support?
- …this new feature?
- When is the app most useful to you? (text field)
- What problem/goal are you trying to solve/achieve with the app? (text field)
- Did the app help solve your problem/achieve your goal? (scale of 1-5)
- What triggers would prompt you to use the app/feature? (text field)
- Which features of the app are most/least important to you? (multiple choice)
- How would you feel if you could no longer use the app? Why? (text field)
- Which features didn’t work as expected? (text field)
- Are there any features you expected to find but didn’t? (text field)
- What features would you like to add to the app? (text field for new ideas or multiple choice to prioritize roadmap)
- How satisfied are you with the stability of the app? (scale of 1-5)
- How satisfied are you with the security of the app? (scale of 1-5)
- How satisfied are you with the ability to integrate with other services? (scale of 1-5)
- How satisfied are you with the look and feel of the app? (scale of 1-5)
- What was your first impression of the app? (text field)
- How satisfied are you with the ease of use of the app? (scale of 1-5)
- How satisfied are you with the installation and onboarding experience of the app? (scale of 1-5)
- What confused/annoyed you about the app? (text field)
- How difficult was this button to find? (multiple choice)
- What price would you be willing to pay for this app? (multiple choice)
- How clear do you find our pricing? (scale of 1-5)
- How would you rate the app’s value for money? (scale of 1-5)
- What are the alternatives that you are considering to the app? (text field)
- How does the app compare with competitors? (scale of 1-5)
- How did you find out about the app? (multiple choice)
- How would you rate your customer support experience? (scale of 1-5)
- Did we solve/answer your issue/question? (yes or no)
- How satisfied are you with how quickly we addressed your concern? (scale of 1-5)
- What type of support communication methods do you prefer? (multiple choice)
- Overall, how would you rate the beta program? (scale of 1-5)
- How easy was it to understand your responsibilities as a tester? (scale of 1-5)
- How easy was it to report issues you encountered? (scale of 1-5)
- Do you have any comments/suggestions for the beta program? (text field)
- Which of the following features do you use most? (multiple choice)
- Which of the following features do you use least? (multiple choice)
- Which of the following new features would you most like to have in the app? (multiple choice)
- At which points did you feel bored with the game? (text field)
- Which parts of the game felt unnecessarily complicated? (text field)
- What was missing about your avatar/character? (text field)
- How much would you pay for this app? (multiple choice)
- How many coins would you pay for this feature? (multiple choice)
How likely are you to recommend this app to a friend or colleague? (scale of 1-10)
- What is the most important reason for your score?
- What is the one thing we should focus on the most?
- How can we improve?
- How would you rate the overall quality of the app? (scale of 1-5)
- How likely are you to recommend this app to a friend or colleague? (scale of 1-10)
- Do you have any additional comments/suggestions? (text field)
Tips for Designing In-App Survey Questions
- At the beginning of your survey, set your users’ expectations about its length and how much time it should take.
- Use a mix of open-ended and close-ended questions, but limit the number of text fields as much as possible.
- The sequence of your survey questions makes a difference. Keep questions that might affect the user’s answers towards the end of the survey.
- Try to put the easy, short questions towards the beginning of the survey, and the longer open-ended ones towards the end to boost completion rates.
- Make sure you ask only one thing per question, to avoid confusion and inaccurate results.
- Avoid asking hypothetical questions; answers to hypothetical situations are often inaccurate and might not represent users’ actual responses.
- Try to avoid framing answers on agree/disagree scales as they can be biased towards “agree”.
- Multiple choice questions can have a bias towards the first answer or choice, so try to randomize the order of the choices.
- Before you send your survey, test it out internally with your team and externally with a few testers if possible.
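The tip about randomizing choice order is easy to implement in code. Here's a minimal sketch (not tied to any particular survey tool; the answer choices are invented), which also keeps a trailing “Other”-style option pinned last:

```python
import random

def randomized_choices(choices, pin_last=True, seed=None):
    """Return a shuffled copy of the answer choices so the same option
    isn't always listed first (reduces first-choice bias). With
    pin_last=True, a trailing "Other"-style option keeps its place."""
    rng = random.Random(seed)  # seed only makes the shuffle reproducible
    head = choices[:-1] if pin_last else choices[:]
    tail = choices[-1:] if pin_last else []
    rng.shuffle(head)
    return head + tail

canonical = ["Dark mode", "Offline sync", "Home-screen widgets", "Other"]
print(randomized_choices(canonical, seed=7))  # "Other" stays last
```

Shuffling per respondent (rather than once globally) is what actually spreads the order bias evenly across your sample.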
Asking the questions is half the battle. Now that you’ve asked your users some purposeful questions, you’ll need to analyze the results before you can take action. How will you analyze your feedback? Let’s jump into that in the next section.
How to Analyze Qualitative Feedback
If you’re using an in-app feedback tool, you’re bound to see an increase in the amount of feedback you receive (some Instabug users report receiving over 750% more feedback!). This bump in responses is great, but what do you do with it all? How do you turn all these words into actionable data? Let’s take a look at some simple yet effective feedback analysis techniques that will help you keep your eyes on the big picture and make informed roadmap decisions.
Don’t worry, this isn’t an intro to statistics course. But when you’re staring at a mountain of feedback, sometimes starting with simple analysis will shed light on what you should do next. These are some straightforward tips on how you can analyze your feedback and use it to make data-driven decisions.
Open Text or “Verbatim” Feedback
This category includes anything with a text field for freeform answers written by your users. Each answer is known as a “verbatim.” If you’re using Instabug, there are three ways you can get verbatims. Your users can shake their phones to send you app feedback, respond to an open-ended survey question sent by you, or answer an open-ended follow-up question in an NPS survey.
The biggest challenge with open feedback is finding quantitative patterns in pure verbatim responses. Thankfully, there are straightforward methods you can use to transform the sentiments you receive into quantifiable data.
Instabug users get classic text analysis on the spot for open survey responses and NPS feedback. You’ll see your answers displayed in a word cloud in the Analytics box at the top of your survey page. The words are sized according to their relative frequency, with the words used most appearing the largest. This method will help you choose themes for your feedback according to how often they are mentioned and see what’s making an impression on your users.
Word clouds are a great way to identify general themes for your feedback, but they shouldn’t be the be-all, end-all for feedback analysis! This tool measures frequency, not sentiment, so you may discover that a lot of people are discussing your customer service, but not know the ratio of good to bad comments. Use this as a launchpad to discover which issues require a deeper look. You can combine this with a tagging process like the one outlined below for a deeper, more substantive analysis.
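If you want to see the tally behind a word cloud for yourself, it boils down to a frequency count over your verbatims. Here's a rough sketch (not Instabug's implementation; the stop-word list and sample responses are made up):

```python
from collections import Counter
import re

# A tiny hand-made stop-word list; real analysis would use a larger one.
STOP_WORDS = {"the", "a", "an", "is", "it", "and", "to", "i", "of", "my"}

def word_frequencies(verbatims):
    """Count how often each non-stop word appears across all open-text
    responses -- the same tally a word cloud is drawn from."""
    counts = Counter()
    for text in verbatims:
        words = re.findall(r"[a-z']+", text.lower())
        counts.update(w for w in words if w not in STOP_WORDS)
    return counts

feedback = [
    "Crashes every time I open the camera",
    "The camera screen crashes on launch",
    "Love the new design, but the camera is broken",
]
print(word_frequencies(feedback).most_common(3))
```

Note that, just like a word cloud, this measures frequency rather than sentiment, which is exactly why you'd pair it with tagging.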
Manually tagging (also called coding or labeling) your responses is an excellent way to peer deeper into your data. It’s similar to the word cloud example above, but you’ll get a more nuanced analysis that both quantifies open feedback and digs into the deeper meaning and intention behind the words. By manually categorizing your responses, you add human intelligence and language processing capabilities to your analysis.
This works best for data sets under 1000 responses, or you can reduce your workload by taking a few minutes each day to tag feedback as you receive it, so it never piles up. Categorizing your responses will help you condense the content of each verbatim and summarize your overall feedback.
To dive in, first you’ll need to download your data (if you’re using Instabug, it’ll be exported as a CSV). Open it in your favorite spreadsheet program. Next, start assigning categories to the themes you see in comments. Some people simplify this by using numbers or colors for specific categories. Be sure that the tags you choose are broad enough to cover multiple comments but specific enough to be insightful later on. Some comments will have multiple applicable tags.
If you’re working with a team, it’s helpful to discuss your tags when you begin so you can agree on tag formats and the desired level of specificity. That way, you don’t end up with multiple tags describing the same thing. For instance, you wouldn’t want your spreadsheet to be cluttered with tags for “friendly,” “Friendly service,” “nice agents,” “Friendly customer support,” or “FRIENDLY.” Keep your tags consistent, and your data will be much easier to group.
One strategy that works well for complex feedback is to group your tags in a hierarchical structure. This allows you to group answers by category or feature, sentiment, and theme. Here’s an example of how this could look:
Here are some real-life examples from Instabug reviews and how they can be tagged (your tagging strategy might be different, which is totally fine, just as long as you’re consistent!):
So we’ve got tags. Now what?
Once you’ve assigned tags to your data, you can add them up and see how many responses there are per category. You’ll get a quantitative look at what your most prevalent feedback issues are. And if you are interested in what people are saying about one specific aspect of your product, it’ll all be nicely organized so you can start diving straight into the “why” phase of your analysis, outlined in section C below.
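Adding up tags is a one-liner once your tagged export is loaded. Here's a minimal Python sketch (the verbatims and tag names are hypothetical examples, not a real export):

```python
from collections import Counter

# Each exported response row carries the tags you assigned by hand.
tagged_feedback = [
    {"verbatim": "Support replied in minutes!", "tags": ["support", "positive"]},
    {"verbatim": "Dashboard is confusing", "tags": ["dashboard-ui", "negative"]},
    {"verbatim": "Love how detailed the reports are", "tags": ["details", "positive"]},
    {"verbatim": "So much useful device data", "tags": ["details", "positive"]},
]

# Tally every tag across all responses to see which themes dominate.
tag_counts = Counter(tag for row in tagged_feedback for tag in row["tags"])
for tag, count in tag_counts.most_common():
    print(f"{tag}: {count}")
```

In a spreadsheet, a pivot table or COUNTIF over the tag column gives you the same totals.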
Just from a quick glance at the above sample, you can see that one of the most common feedback tags is Details, which tells us that users are finding value in how much data they’re getting from Instabug. This kind of insight has helped us understand which of our features to emphasize in our marketing materials—data-rich bug reports are a major draw for a large percentage of our most dedicated users. We also learned that our dashboard UI was a little clunky, so we used this feedback to design a more intuitive experience.
Be aware that you can make this process as simple or as complex as you want it to be. The example insights are fairly broad and basic, but the more detailed your tagging system is (and the more feedback you collect), the deeper you can dig into specific issues. Integrating other information, such as tagging the user segment, device, or version, is also incredibly helpful when you’re trying to make fully contextualized, data-driven decisions. Here we’ve described applying tags to feedback you’ve already received, but you can also look for deeper insights by segmenting your audience before you send your survey.
In fact, proper segmentation is so important that it truly can change the entire outcome of your survey. Rahul Vohra’s enlightening piece on product-market fit gives a crystal-clear example of how smart segmentation will shape your analysis—and when performed correctly, can drive your product development in the right direction, guiding your features roadmap and differentiating your product in the market.
Root-Cause Analysis and 5 Whys
Have you got a problem? (It’s okay. We all do.)
Do you know what’s causing it?
5 Whys is a tried-and-true problem-solving technique, a classic qualitative method that identifies the root causes of problems. Developed in Japan for Toyota by Sakichi Toyoda (but let’s be real: humans have been asking “why?” forever), this method has been described as “the basis of Toyota’s scientific approach: by repeating why five times, the nature of the problem as well as its solution becomes clear.” The process aims to identify root causes of problems by examining cause-and-effect relationships within your system.
The process is as simple as its name. The idea is that you have a problem (the root cause of which you don’t know) and symptoms. Your goal is to identify the root cause, which is done by repeating the question “why?” Every time you answer the question, you’ll ask again until there are no more whys to be asked and you have hypothetically reached the root of the problem. Root-cause analysis is essential for getting to the bottom of customer complaints, addressing them, and preventing the same complaint from happening again in the future.
Let’s revisit the example from Part 1, where we used qualitative feedback to identify the reason behind a new feature’s low adoption rate.
You’re the product manager for a popular productivity app. Your developers worked hard to crank out a calendar integration feature, but weeks after its release, few users have adopted it.
- Why has adoption been so low? You’ve learned from a survey that most of your users were unaware that a new feature had been released.
- Why were they unaware? Because they didn’t realize they had received a notification about the new feature.
- Why didn’t they realize? A notification was sent, but it wasn’t visible.
- Why wasn’t the notification visible? Your notification symbol blends in too easily with the rest of your menu.
- Why does it blend in? It wasn’t bright or bold enough.
We can keep going with the “whys” (you might ask as few as three or as many as ten!), but here’s a place where it’s reasonable to stop and consider that your notification scheme isn’t working for you.
In response to this incident, your developers decided to mark new notifications in bright red and send a modal message about the new feature upon the user’s next login. The result? Awareness and adoption increase, and soon, your users are integrating their calendars with your app, living more organized and productive lives, and leaving you positive reviews.
Congrats! You discovered the root cause of the issue.
Here are some additional guidelines for effective execution of 5 Whys:
- When you can, work in groups to avoid biases and knowledge gaps.
- Clarify the problem and make sure everyone actually understands it.
- Follow logic when identifying factors in a cause-and-effect relationship. Test it by reversing your sentences, using “therefore” or “so” instead of “because.” If it still makes sense after the reversal, it’s probably solid logic. Here’s an example: “The users didn’t realize there was a new feature because the notifications weren’t obvious enough” becomes “the notifications weren’t obvious, so the users didn’t realize there was a new feature.” The result is a logical statement.
- Don’t skip steps or make logical leaps. Move one step at a time so you don’t miss anything, and don’t fill in the blanks with assumptions.
- Your conclusions must be determined by examining the process, not the people, and from the customer’s point of view. For example: the root cause in the example problem is not “the customers don’t care about calendars, so they didn’t notice it.” The root cause of the customer’s issue was simply that they hadn’t realized the feature had been introduced — because the notification system needed to be improved.
This method isn’t foolproof, but it works well as long as you’re aware of potential pitfalls. Knowledge gaps are one such risk factor; many people will fail to identify the root cause of a problem if they don’t have enough information to diagnose it. That is why teamwork is recommended. Stopping at the symptom level instead of the root cause is another potential issue. This can be avoided by seeking verification for your answers along every step of the way, and continuing to ask “why?” until you have reached a logical root cause.
There’s also the potential failure to find all root causes if there are multiple issues contributing to the problem. There are ways to avoid this trap, though, such as drawing an Ishikawa diagram, outlined below.
A simple diagram will allow you to better visualize the possibility of multiple root causes contributing to your problem. Use this simple fishbone-shaped diagram to organize your flow of ideas. The problem or defect forms the spine of the fish, with the potential causes branching off as the ribs of the fish. You can use sub-branches for root causes, with as many levels as you need to reach the real root. Plotting issues and hypothetical root causes visually can help inspire ideas.
Like 5 Whys, this exercise is best done in teams. Also like 5 Whys, fishbone diagrams aren’t perfect (what is?)—they can sometimes fail to recognize relationships between causes, and are also limited by potential knowledge gaps in the team. Despite these issues and its occasional tendency to oversimplify, it has become one of the most famous quality control tools over the decades due to its ease of use and effectiveness. Part of this method’s strength is in its versatility; it can be used to diagnose issues both technical and behavioral.
This visual brainstorming aid was made popular in Japan in the 1960s by Kaoru Ishikawa of Kawasaki Shipyards. For a powerful example of this technique in action, check out this in-depth look at fishbone diagrams, focused on a detailed IBM case study. It is even said that Mazda engineers used these diagrams to aid in the design of the sleek, popular Miata MX-5 sports car, continuously in production for over 30 years.
Though imperfect at times, 5 Whys and fishbone diagrams are powerful and simple diagnostic techniques that can help you gain more insight into addressing your challenges. All feedback analysis techniques require your human intelligence to close the gap between information and actionable insights.
Multiple Choice Questions
Giving your users multiple-choice questions is a great way to collect straightforward feedback because it’s low-effort for them (so you’ll get more responses), and it forces your users to state their priorities when given predetermined options. Instabug automatically plots your survey results in a pie chart for a quick visual breakdown of your responses.
The most important step you can take to get a deeper look at this type of feedback comes before the questions are asked, not after. Be proactively strategic about how you frame your questions and ask meaningful questions that will give you actionable answers. Choose your target audience and timing before sending a survey out to everyone.
You can weigh the answers according to their importance to you. If you ask the same survey question to multiple segments, check out how responses are distributed within segments rather than averaged across all groups. You may discover interesting trends about how people are using your product differently.
For a well-rounded approach that’s both quantitative and qualitative, try to complement this type of data with free-form feedback from your customers. Additional comments can provide context and clarifying examples, especially when presenting your data to your team or stakeholders.
Ah, the Net Promoter Score survey. Such a simple, widely used question that yields powerful results. It is generally seen as the primary metric for measuring customer loyalty. We’ve all seen it at some point or another: “on a scale from 0–10, how likely are you to recommend this app to your friends?”
Those who answer 9 or 10 are called promoters, 7 or 8 are considered passives, and those who rate you between 0–6 are your detractors. Rather than being an average out of 10, your total score could be anywhere from -100 to 100.
So what’s a good NPS score?
Well, that’s relative. But in order to find out, first, you’ll calculate it. If you’re using Instabug, this step will be done for you automatically and your score will be displayed in the analysis tab. If not, it’s quite easy to do it yourself: just subtract the percentage of detractors from the percentage of promoters. Generally speaking, any resulting number above zero is considered good. But people tend to answer this question differently across industries, so for some companies, a zero just isn’t going to cut it. For others, a score of -5 might make them industry leaders. The average NPS in the software industry was 31 in 2018. Where do you fit in?
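If you'd rather roll your own calculation, the math fits in a few lines. Here's a minimal Python sketch, assuming the standard 0–10 NPS scale (the sample ratings are invented):

```python
def nps(ratings):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6),
    yielding a score between -100 and 100. Passives (7-8) count toward
    the total but belong to neither group."""
    if not ratings:
        raise ValueError("need at least one rating")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

# 10 responses: 5 promoters, 3 passives, 2 detractors -> 50% - 20% = 30
print(nps([10, 9, 9, 10, 9, 8, 7, 7, 4, 6]))  # 30
```

Notice that passives still dilute the score: adding more 7s and 8s pulls both percentages down.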
The first thing you might be tempted to do is compare your NPS score to your industry’s benchmarks. In fact, that’s what we just did. But in order to make a fair comparison, you’ll need to pit your results against other scores that are relevant to your app category, location, and customer segment. A simple industry-wide benchmark could be misleading, as NPS averages can vary for any one of these factors; some countries even average 10 points lower on all NPS scores, regardless of the industry!
An NPS survey typically doesn’t end there, though. Many companies will ask their customers a follow-up question or send them to the app store for a rating. If you’re using Instabug, we apply logic to route your users to conditional follow-up actions after they rate your app: by default, your promoters will be asked if they’d like to leave you a review in the app store, while passives and detractors will be asked what you could do better. The exact wording can be changed in your survey settings, but this approach is extremely effective in raising your app store rating and controlling churn.
One of the most important steps in the NPS survey process comes after the feedback is received. Don’t forget to close the loop. Instabug allows you to write back to your users straight from the dashboard, so you can dig deeper into their thought processes and let them know that you care about their experiences with your product. By continuing to ask questions, you’ll unlock insights that your customers didn’t realize you needed.
A popular variant of the NPS question is: “How would you feel if you were no longer able to use this app?” If enough people answer with “very disappointed,” you’ve probably got a winner. Prominent growth hacker Sean Ellis puts this figure at 40%. Whatever the number, it’s a good idea to pay close attention to the segment that answered “very disappointed.” These are the people who love and are invested in your product. Weigh their opinions more heavily than those who wouldn’t notice if your app disappeared tomorrow.
Since NPS questions come in two parts (the rating and the follow-up question), don’t forget that you can apply the previously mentioned textual analysis to these answers as well. Instabug will capture the main themes from your NPS responses in a word cloud. You can also apply the 5 Whys here to identify the root causes of the issues your customers are experiencing. It’s also recommended to cross-reference individual survey responses with other feedback events from the same user—if they’ve reported a bug, suggested a feature, or sent you other commentary, these events may have influenced their NPS response.
Everyone’s got opinions; your app users are no exception. You’re certain to get feedback with ideas, and some of those ideas will be good. Some might even be great!
Instabug’s feature request function captures your users’ suggestions in one place and lets other users vote and comment on them (comments can be turned off in your preferences). You can sort the results by number of votes, last activity, or keywords. You can also see some information about the person who posted the feature request: their user ID, profile, and any attribute tags you’ve applied (great for segmentation!).
So you’ve received some requests and discovered that some ideas have garnered a lot of votes. What’s next? How do you decide what to build and when? Should you implement the most popular idea?
Well, not necessarily. You can take this analysis deeper than just counting votes.
If there are a few ideas that stand out to you, you can create an impact-effort matrix to decide which ones are worth implementing.
This exercise is best done in a group, so you can gather as many perspectives and as much information as possible. Together, work through your “maybe” list, one feature at a time. First, decide how much effort it will take to implement the idea. Some factors that will go into this might include cost, manpower, complexity, resources, and so on.
Then decide together how impactful the new feature will be. Did it earn lots of votes? Is it something many people have asked for? Are the people who are voting for this feature in your target segment? All of these questions and more can be worked into this exercise.
You can probably see where this is going. Draw a box with four quadrants, plotting Impact on one axis and Effort on the other. Once you’ve determined your two variables for each feature, plot them on your chart.
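The quadrant logic above can be sketched in a few lines. This is just an illustration, assuming your team scores each idea from 1 to 5 on both axes (the feature names and scores are hypothetical):

```python
def quadrant(impact, effort, threshold=3):
    """Place a feature idea on a simple impact-effort matrix.
    Scores are 1-5 team estimates; the threshold splits low from high."""
    high_impact = impact >= threshold
    high_effort = effort >= threshold
    if high_impact and not high_effort:
        return "quick win"       # build these first
    if high_impact and high_effort:
        return "major project"   # worth it, but plan carefully
    if not high_impact and not high_effort:
        return "fill-in"         # do when there's slack
    return "thankless task"      # probably skip

ideas = {"calendar sync": (5, 2), "full redesign": (5, 5), "new icon set": (2, 2)}
for name, (impact, effort) in ideas.items():
    print(f"{name}: {quadrant(impact, effort)}")
```

The threshold is the judgment call; the value of the exercise is in the group discussion that produces the two scores, not in the arithmetic.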
This simple team exercise is an easy, effective way to sort through the noise among all these ideas, and make prioritization decisions crystal-clear. Choosing what to add to your features roadmap next doesn’t need to be a complicated process.
Don’t forget to pay attention to both positive and negative feedback. The positive feedback you receive will boost your team morale and motivate them to keep doing well. It also reinforces your roadmap decisions. The negative comments will give you an opportunity to reach out with Instabug (or your preferred feedback tool), close the feedback loop, identify problem areas, and reduce churn.
How User Feedback Will Shape Your Product Roadmap
How much influence should users have on your final product?
There’s no one-size-fits-all answer to this question, but it’s generally accepted that your user feedback should influence your product development… without driving the train. Hearing the customer voice is crucial to building an app that users love, but balancing feedback with overall strategy can be challenging. In this part, we’ll discuss how to manage your users’ relationship with your roadmap:
- Should you make your roadmap public?
- Feedback sources and analysis
- Product validation and continuous discovery
- Feedback segmentation and prioritization
Should you make your roadmap public?
Your roadmap is a visual reference for your product goals and development all throughout its life cycle. Trello is a popular roadmap tool because it’s free and so easy to manage, modify, and move cards. It’s also a great place to host your roadmap online if you decide to make it public.
A public roadmap sends your customers a message of transparency. It keeps them informed about your future together—and this is also powerful for attracting prospects who are on the fence about your product and might be waiting for new features. It’s also a valuable resource to provide during customer support conversations.
Some product managers advise caution with public roadmaps because they reveal your strategy and progress to your competitors. This is certainly a potential pitfall, though you can manage how much you choose to reveal, if anything. But to others (including major players like Slack), the community engagement and atmosphere of trust are worth the risk. Whether or not you choose to make your roadmap public is completely your call; either way, it will become an invaluable asset to you and your team.
An alternative to a purely public roadmap is creating a public feature request board, which will stir up community engagement and get users more invested. If you allow commenting, it’ll become a new feedback avenue for your most devoted fans. You’ll be receiving valuable feedback while keeping your users in-app. If you’re using Instabug, you can mark your feature requests with status updates, so your users know what’s in progress and which features have been added. This involvement will get them excited about the next steps in your product development, but you have the advantage of not completely revealing your roadmap.
If you’re using Trello and you choose to release your roadmap, all you have to do is set your board visibility to public and choose whether or not to allow visitor voting and comments on your roadmap. For an in-depth look at public product roadmaps, check out this study from Product Coalition.
Here are a few companies you might already know and love who have well-designed, informative public roadmaps (they might give you a few ideas!):
Product validation and continuous discovery
Feedback analysis is an opportunity to give yourself a reality check. By keeping up with your customer thoughts and feelings, you’ll be continuously validating your product strategy throughout its development. Check in with your progress by considering:
- What do your users think of your product?
- What is your product vision?
- Is the customer experience in alignment with your vision?
Keep your perspective fresh (and real) by frequently listening to your users. Continuously build upon valid feedback to inch your product’s reality closer to your vision. Gathering data will also reduce aimless speculation and keep you on an informed path toward your goal—or in some cases, cause you to pivot. This situation is different for everyone, but in the section on prioritizing feedback, we’ll address how much weight should be given to customer voices.
Your users’ perception of your product matters. It’s important for you to examine whether their thoughts and interaction with your app are in alignment with your positioning and messaging. If your most common use cases and your marketing efforts are a total mismatch, then one of them might need to change.
Keeping watchful eyes on your feedback during the development process will help you build a successful product. It’s helpful to adopt an outlook of continuous discovery; to be evolving and iterating incrementally, guided in the right direction by feedback from your users.
Feedback segmentation and prioritization
Receiving large amounts of feedback can sometimes be overwhelming. You’ve probably heard from many users with diverse or even contradictory opinions. What do you do with all of this data? How do you decide which voices to listen to?
As mentioned throughout these articles, customer segmentation is key to balancing these many viewpoints. It’s important to know where all these opinions are coming from, what users with similar viewpoints have in common, and how they relate to other users.
Your app users aren’t all the same: you’ll have some power users, casual users, those who signed up but rarely log on, freemium users, paid users, and so on. There are numerous ways you can divide them up. Just remember that not all opinions carry the same weight. By getting granular with your segmentation, you’ll find out if patterns start to emerge within different groups.
Choosing the right group is context-dependent, according to Kik product manager Ashton Rankin, who uses Instabug for user feedback and bug reporting. Here’s what she has to say about power users:
It’s important to remember that the loudest voices are not necessarily your most valuable users, and segmentation will help you sort through the noise. You’ll likely give less priority to requests that come from casual users than to those from your most frequent or paid users. Balance feedback intensity with quantity, and to prevent feature creep, avoid taking too much advice from your users.
Wildly successful e-commerce platform vente-privee streamlines its feedback process by segmenting users before collection even begins. Segmentation should be goal-oriented: for vente-privee, that meant soliciting opinions specifically from its highest-value clients.
You should also keep tabs on the developers’ vision for the app and how the features you’re considering tie in with it. You and your developers should discuss potential feature additions and decide together which ones are compatible with your goals and abilities.
Once you’ve determined which requests are worth your consideration, it’s time to prioritize them. An impact-effort matrix is a simple way to sort your features into “NO” vs. “GO” by stacking difficulty and cost against the potential outcome. You can check out the full walkthrough for matrices in the feedback analysis guide in PART 5.
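To make the matrix concrete, here’s a minimal sketch, assuming each candidate feature has been scored from 1 to 10 for expected impact and required effort. The scoring scale, threshold, and quadrant labels are assumptions for illustration; the full walkthrough in Part 5 covers the method in depth.

```python
def classify(impact: int, effort: int, threshold: int = 5) -> str:
    """Place a feature into one of the four impact-effort quadrants."""
    if impact >= threshold and effort < threshold:
        return "GO: quick win"        # high impact, low effort
    if impact >= threshold:
        return "GO: major project"    # high impact, high effort
    if effort < threshold:
        return "MAYBE: fill-in"       # low impact, low effort
    return "NO: thankless task"       # low impact, high effort

# Hypothetical feature requests with (impact, effort) scores.
requests = [
    ("Dark mode", 8, 3),
    ("Offline sync", 9, 9),
    ("Animated splash screen", 2, 2),
    ("Custom emoji packs", 3, 8),
]

for name, impact, effort in requests:
    print(f"{name}: {classify(impact, effort)}")
```

Even this rough cut quickly separates the quick wins from the thankless tasks, so your team spends its debate time only on the ambiguous middle.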
In this guide, you’ve learned the elements of becoming a feedback fighter. You now have direction on what to do when it’s time to collect feedback, analyze it, and implement thoughtful, data-driven changes.
Remember that customer feedback spans many channels, and it’s important to collect opinions from multiple sources—but in-app responses are your most indispensable qualitative data source. Balance this feedback thoughtfully with your quantitative analytics data. Using a feedback tool like Instabug allows you to chat with your users in-app, suggest features, and design highly customized surveys.
By focusing on in-app feedback, you’re going to learn so much about your app from the user’s point of view. It’s highly likely that you’ll discover issues that may have gone unnoticed by your team. You’ll receive a higher volume of feedback from strategically targeted segments, and the low-friction approach will gather more responses and widen your data pool.
You’ll also build valuable relationships with people who are invested in your product development. Creating a sense of community and human connection with your users will not only expose you to broader viewpoints—it’ll also increase your customer engagement and satisfaction.
And making users happy is what building apps is all about!