How Mobile A/B Testing Can Complement Your Customer Feedback Strategy

By Jillian Wood

Leveraging qualitative and quantitative mobile user data to make product decisions is easier said than done.

Mobile product managers often face a shortage of straight-from-the-horse’s-mouth user feedback and an overabundance of raw usage data to wade through for patterns and insights.

However, when app makers can leverage both types of user data effectively, they tend to create stickier mobile products than those who favor one over the other.

That’s why mobile A/B testing and user feedback collection strategies should go together like peanut butter and jam. Qualitative feedback collection lets users tell you what they (think they) want; A/B testing shows you what they (really) want.

By bringing both together, mobile development teams improve their apps faster, prioritize their mobile product roadmap, and increase the amount of high-quality feedback they receive.

In this guest post, we’re sharing insights for connecting these two strategies based on learnings from the Taplytics Enterprise Services Team, who help some of the world’s top brands optimize their apps.

1. Solve UI/UX issues faster

Wading through user complaints or feedback to understand pain points and prioritize fixes can be challenging—especially if you can’t follow up with users easily to ask more questions.

However, A/B testing lets you test theories about UI/UX fixes and confidently determine whether your changes will positively impact the user experience, instead of shipping something that doesn’t end up resolving the issue or accidentally causes a drop in engagement or usage.

For instance, if users are struggling to make it past a certain point in onboarding, you can A/B test changes to the screen flow to see which one best solves the problem.

While it may sound like A/B testing is just another step before actually implementing a fix, many mobile A/B testing solutions offer visual editors that make copy, button, CTA, or color changes quick and easy. You may also find that some variations have little impact on behavior, meaning it may take a few iterations to land on the optimal solution (which is why having a mobile A/B testing solution is handy).
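
To make this concrete, here is a minimal sketch of how a deterministic variant assignment and outcome event for an onboarding-flow test might look. The bucketing logic, experiment name, and trackEvent helper are illustrative assumptions, not any particular vendor’s API:

```kotlin
// Hypothetical onboarding experiment: do users complete a shorter
// screen flow more often than the current one?
enum class OnboardingVariant { CONTROL_FIVE_SCREENS, SHORTER_THREE_SCREENS }

// Deterministic bucketing: hashing the user ID with the experiment name
// keeps each user in the same variant across sessions. The 50/50 split
// is an illustrative assumption.
fun assignOnboardingVariant(userId: String): OnboardingVariant {
    val bucket = Math.floorMod((userId + ":onboarding_flow_test").hashCode(), 100)
    return if (bucket < 50) OnboardingVariant.CONTROL_FIVE_SCREENS
    else OnboardingVariant.SHORTER_THREE_SCREENS
}

// Stand-in for whatever analytics call your stack provides.
fun trackEvent(name: String, properties: Map<String, Any>) {
    println("event=$name props=$properties")
}

// Log the outcome per variant so completion rates can be compared.
fun onOnboardingFinished(userId: String, completed: Boolean) {
    trackEvent(
        "onboarding_result",
        mapOf(
            "variant" to assignOnboardingVariant(userId).name,
            "completed" to completed,
        )
    )
}
```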

2. Prioritize feature releases

If users request a feature, you can run a series of low-code A/B tests to gauge how popular it would be: break the larger idea down into smaller, lower-effort changes and test the engagement levels with each.

For instance, if users request a “Wish List” or “Save For Later” feature, you could add a low-effort, low-code visual test, such as a “Like” or “Heart” button, to your app. If lots of users tap the button, that may indicate that a saved list of favorites is a worthwhile feature to build.
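
To illustrate the measurement side of such a “painted door” probe, here is a minimal sketch. The event names, trackEvent helper, and tap-rate math are illustrative assumptions:

```kotlin
// Hypothetical "painted door" probe: the Heart button exists only to
// measure intent, so we just log who sees it and who taps it.

// Stand-in for whatever analytics call your stack provides.
fun trackEvent(name: String, properties: Map<String, Any> = emptyMap()) {
    println("event=$name props=$properties")
}

fun onHeartButtonShown(userId: String) {
    trackEvent("heart_button_impression", mapOf("userId" to userId))
}

fun onHeartButtonTapped(userId: String, itemId: String) {
    trackEvent("heart_button_tap", mapOf("userId" to userId, "itemId" to itemId))
}

// Taps divided by impressions approximates demand for a real
// saved-items feature before you invest in building it.
fun tapRate(taps: Int, impressions: Int): Double =
    if (impressions == 0) 0.0 else taps.toDouble() / impressions
```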

Setting up a series of these iterative tests can help you prioritize your product roadmap and decide whether more complex user requests are worth building out before you commit resources to them.

3. Make new product launches more successful

A/B testing can help you introduce new features or changes to users slowly over time. This way, their experience—and your key metrics—aren’t negatively impacted.

Some mobile A/B testing platforms include feature flagging capabilities that make this type of testing easy. You can target the release to specific user segments to see how power users react to a release vs. newer users, or release in small markets and glean learnings before launching to your entire user base.
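
As a sketch, a staged-rollout gate along those lines might look like the following. The segment rules, pilot market, and rollout percentage are all illustrative assumptions; most feature-flagging tools expose an equivalent check:

```kotlin
data class User(val id: String, val isPowerUser: Boolean, val market: String)

// Hypothetical staged-rollout gate: power users and one pilot market
// see the feature first, plus a small slice of everyone else.
fun isNewFeatureEnabled(user: User, rolloutPercent: Int = 5): Boolean {
    if (user.isPowerUser) return true          // power users see it first
    if (user.market == "NZ") return true       // small pilot market
    // Deterministic percentage rollout for the rest of the user base.
    val bucket = Math.floorMod((user.id + ":new_feature").hashCode(), 100)
    return bucket < rolloutPercent
}
```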

You could also target qualitative feedback requests at users who engaged with the new feature to learn more about their experience before a broader launch.
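
A targeting rule for that kind of follow-up survey can be very small. This standalone sketch assumes a hypothetical shouldShowFeatureSurvey helper and an arbitrary three-use threshold:

```kotlin
// Hypothetical targeting rule: only request qualitative feedback from
// users who have the flag and have actually used the feature a few times.
fun shouldShowFeatureSurvey(hasFeatureFlag: Boolean, featureUseCount: Int): Boolean =
    hasFeatureFlag && featureUseCount >= 3 // threshold is an arbitrary example

fun main() {
    println(shouldShowFeatureSurvey(hasFeatureFlag = true, featureUseCount = 5))  // true
    println(shouldShowFeatureSurvey(hasFeatureFlag = false, featureUseCount = 5)) // false
}
```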

4. Use mobile A/B testing to get more qualitative feedback

Through mobile A/B testing, you can optimize your requests for user feedback to increase the quantity and quality of the feedback you receive.

You can run A/B tests around the following elements:

  1. When you request feedback. Test the timing of when you send messages by time of day, time zone, etc., or by how new or old a user is (ex: Are existing users more or less likely to share feedback?). You can also base it off of specific user actions to see when users are more likely to share feedback (ex: Is a survey after onboarding more effective than after they’ve been using the app for a few days? After they’ve made a purchase?). A sketch of this kind of timing test appears below the list.
  2. The content of your request. A/B test the copy, images or buttons within your feedback requests so you can see which calls-to-action get more engagement. Framing a request in the right way can be the difference between users taking part or dismissing it. Our advice: Avoid generic requests as much as possible and explain the value of giving feedback before requesting it.
  3. How you approach inactive users. Don’t forget to test how and when you reach out to inactive users so you can learn what may be blocking them from engaging more. You might base this on days since last login or the time since they last took a specific action (ex: weeks since last purchase; days since viewing content; etc.).
  4. How targeted you make your requests. Play around with where you put surveys in your app. Making the feedback prompt specific to the section of the app a user is interacting with may garner more useful feedback than a generic questionnaire (ex: ask about the other channels they use in the “Connect To Social” section of your app; ask about their in-store experience after they search for a nearby location in your app; etc.).
  5. Your collection methods and length. A/B test different collection methods. Do users favor quick and easy star ratings or longer surveys? While the answers may seem obvious, sometimes longer, more distinctive interfaces can perform better than a generic rating request. You can also use goal tracking in some mobile A/B testing platforms to see where users drop out of your survey, allowing you to design a different flow that increases completions.

Note: If you’re using a cross-channel A/B testing platform, you could also test which channels are the most effective between email, push, web, and in-app collection methods.
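
To make point 1 above concrete, here is a minimal sketch of a timing test, again using hypothetical names rather than any specific platform’s API:

```kotlin
// Hypothetical timing experiment: does a survey convert better right
// after onboarding, or only after a few days of real usage?
enum class SurveyTiming { AFTER_ONBOARDING, AFTER_THREE_DAYS }

// Deterministic bucketing keeps each user in the same arm across sessions.
fun assignSurveyTiming(userId: String): SurveyTiming {
    val bucket = Math.floorMod((userId + ":survey_timing_test").hashCode(), 100)
    return if (bucket < 50) SurveyTiming.AFTER_ONBOARDING
    else SurveyTiming.AFTER_THREE_DAYS
}

// Decide at app open whether this user's arm says to prompt them yet.
fun shouldPromptNow(
    userId: String,
    finishedOnboarding: Boolean,
    daysSinceInstall: Int,
): Boolean = when (assignSurveyTiming(userId)) {
    SurveyTiming.AFTER_ONBOARDING -> finishedOnboarding
    SurveyTiming.AFTER_THREE_DAYS -> daysSinceInstall >= 3
}
```

Whichever platform you use, the comparison is the same: survey completions divided by prompts shown, per arm.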

In closing

Bringing your quantitative A/B testing data and qualitative user feedback together can make building an engaging app less of a guessing game and more of a predictable process.

Get your research, product, engineering and marketing teams together to think about how you can use A/B testing to improve feedback collection across platforms and channels.

Learn how Taplytics’ mobile A/B testing and feature flagging solutions can support your app optimization process here.

