Good afternoon, good morning, and good evening, everybody. I'm really excited to talk about AI, but honestly, I'm just as excited to talk to a group of people who care as much about mobile apps and mobile app development as I do. It's a personal passion of mine and the reason I work at a great company like Instabug, so it's great to see a collection of mobile app developers all in one place.
I'm going to discuss how AI is transforming the way world-class mobile teams develop their apps, introduce a concept we call zero-maintenance apps, and dive into how AI is evolving not just the way mobile development teams work on apps, but the way users interact with them.
So without further ado, here's the agenda for today:
- I'm going to set some background
- I'm going to introduce you a little bit to Instabug
- I'm going to define a vision and a direction for where AI mobile observability is headed
- I'm going to highlight some of Instabug's current capabilities in this regard
- Talk a little bit about where we're headed
- Recap
- Then we'll do a Q&A at the end
Okay, let's kick things off!
There has been a lot of hype, and some fear, about AI —I think we're more in the hype phase, to be honest— and what we're seeing is that in lots of different places there are specific use cases where AI excels, particularly when it comes to development, whether that's building scaffolding like you might do in an AI code editor, or responding to and analyzing large amounts of data like you might see in a customer support tool or a user feedback analysis tool.
The reality is that more and more code is going to be generated by AI. Just recently, Google held their Q3 earnings call and mentioned that 25% of all new code written at Google in the last quarter was written with AI. We're in the early stages of AI's involvement in the code development process, and I think it will have a particular impact on mobile.
It comes down to efficiency. There are a whole lot of problems and struggles in mobile app development because the environment in which you're deploying your software is so varied, and it comes down to confidence that you're pushing the quality bar of your mobile app to where your customers increasingly expect it to be. These transformations are going to take time, but we think it's important to establish a vision and make sure the industry pursues one that focuses on what we think are the most important problems AI can solve first.
I'm going to say something that's very obvious to all of you: maintaining your mobile app, and particularly its quality, is increasingly challenging. Before I jump into why, which, like I said, will be fairly obvious to most of you, I want to double down on the word "quality", because quality can mean a lot of things and I want to make sure you understand what I mean by it. From Instabug's perspective —we've been around for a long time— we believe the definition of mobile app quality has been evolving since the iPhone was introduced in 2007.
At the start, if you were in mobile app development, you know that the most important quality metric was stability. Crash-free rate was the primary metric, and as soon as we started seeing more and more mobile apps —particularly when the apps were competing for the same user base— there was a lot of competition on app quality. Think of the Lyft and Uber days, when the two were fighting for market share and one of the primary ways they differentiated was the quality of the app. Did users enjoy it? Did it not crash? Were they able to successfully book rides?
Since then, if you fast forward, you will see that more and more performance metrics are starting to be considered key indicators of a quality app.
At the start, maybe five or six years ago, there was a lot of focus on app launch times and the impact of app launch times on users' perceptions of quality. Now we're talking about screen load times, specific network requests, and specific flow times. All of this is not just because of the competition amongst mobile apps, but also because user expectations of what makes a great mobile app are continuously set by the best mobile apps out there.
You know this from using your phone. When you switch from one of the world's top apps to an app you might not use as frequently, your perception is that that app —which doesn't have tens of millions of monthly active users— should perform on par with the one you just used. You're multitasking and switching between apps, which leads you to believe all apps should perform at the same level. So the user perception of what it means to be a quality app continues to evolve.
The best mobile teams in the world don't just think about stability anymore; they think about broad performance metrics and performance expectations. They're also thinking about other signals of quality: signals of frustration rather than just definitive, deterministic behavior in the app's performance. Add to this the fact that the definition of quality keeps evolving, and that you're operating software across millions of permutations of device, operating system, and network configuration and state, and it becomes very difficult to maintain a high quality bar.
You all know this. From the research we have, 40% of your development time is spent maintaining app quality. Traditional organizations respond to this in a very reactive way: they receive crash reports, bug reports from users, and negative feedback from users or leaders in their organization, and they react to it. What we think AI can do is substantially change that relationship into much more proactive management of app quality.
It's a game changer because it lets you become more proactive. You can spot quality issues sooner, you can spot quality issues you wouldn't have spotted deterministically, and you can —by spending less time on quality— make sure your team's focus shifts from the quality maintenance burden to delivering the things we all, as mobile developers, love to deliver: innovative user experiences directly on users' phones.
I want to pause after that background about the state of AI, the state of mobile quality, and the burden of mobile quality maintenance, and give you a little bit of a word from the sponsor about Instabug. Instabug is a platform built specifically for mobile, with AI built in. We cover a whole suite of products and capabilities that I'll go through briefly, but I want to highlight that our platform is built to deliver on the promise the world's largest and most advanced mobile teams deliver on: a very high bar of app quality. We think of mobile teams at various stages of maturity in their focus on quality, and a lot of that depends on the quality signals they can get from their app and the tools they use. We're built to help teams advance that maturity to the state where, as I mentioned, the world's largest apps live.
Our platform empowers teams to make proactive app maintenance a reality: identifying issues before they impact users and managing the quality bar in organizations where ownership of the app is spread across multiple teams. In the end, it delivers a better-quality app, better business results from that app, and a smoother experience not just for your users but also for your developers, so that less time goes to the toil of app maintenance and more to delivering the innovative experiences I mentioned before.
As I mentioned, Instabug is built exclusively for mobile. We focus on making sure our SDK always supports the latest OS versions, cross-platform frameworks, and declarative UI frameworks, for example. We're also release-centered: we organize the timeline of your mobile app around releases, not calendar time, because, as you know, users can be on various release versions at any point in time.
We are really focused on capturing this next wave of user-centered insights —I'll talk about some examples in a minute— and we capture types of frustrating experiences that no other tools do because we're so focused on mobile. We do all of this with an SDK that doesn't require any breadcrumbing throughout the app. We're also built for larger teams, where your mobile app is often split amongst teams —different experiences or screens owned by different groups— and we give those organizations tools to triage, assign, and track the quality of the specific portions of the app assigned to each team.
I want to briefly go through some of the capabilities of Instabug. We have a Bug Reporting tool that's used in both pre-production and production environments. It gives you really rich details to actually reproduce and debug the bugs being reported by your users or by a UAT (User Acceptance Testing) or beta test program, for example. We also have an AI feature called Auto-Detect UI Issues that I'll jump into later.
We have a Crash Reporting tool that captures not just standard crashes, but also OOMs, ANRs, and a unique experience we call frustrated user terminations, where a user closes your app and restarts it within three seconds. That's clearly a sign something was frustrating, but not long enough to register as an ANR, for example. We also include visual reproduction steps to help you see exactly what happened before the crash, and we have super intelligent, best-in-breed crash grouping to make sure you and your team focus on the right root cause rather than triaging occurrences that don't really belong together.
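To make that heuristic concrete, here's a minimal sketch of a relaunch-window check in Kotlin. It's purely illustrative (the object and method names are hypothetical, and Instabug's real detection is more involved), but it shows the shape of the signal:

```kotlin
// Illustrative sketch only, not Instabug's implementation: flag a relaunch
// that happens within three seconds of the previous exit as a likely
// "frustrated user termination". A real detector would persist the exit
// timestamp to disk and distinguish user kills from normal backgrounding.
object FrustratedTerminationDetector {
    private const val THRESHOLD_MS = 3_000L
    private var lastExitAtMs: Long? = null  // in practice, persisted across launches

    fun onAppBackgrounded() {
        lastExitAtMs = System.currentTimeMillis()
    }

    fun onAppLaunched(): Boolean {
        val exitedAt = lastExitAtMs ?: return false
        return System.currentTimeMillis() - exitedAt <= THRESHOLD_MS
    }
}
```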
We have a Performance Monitoring tool that really helps marry stability and performance. It follows the Apdex model, where you set thresholds for specific network requests, screen loads, or app launches and then categorize whether each individual occurrence was frustrating for the user or not, and we give you rich debug tools, including flame graphs and breakdowns of network requests and their payloads, to help you diagnose what's going on when you're trying to figure out a performance problem.
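Apdex itself is a public standard, so the categorization is easy to sketch. Here's a minimal Kotlin version, assuming a single target threshold per operation (the function name is ours, not Instabug's):

```kotlin
// Minimal sketch of the standard Apdex calculation: occurrences at or under
// the target T are "satisfied", those at or under 4T are "tolerating", and
// anything slower is "frustrated". Apdex = (satisfied + tolerating/2) / total.
fun apdexScore(durationsMs: List<Long>, targetMs: Long): Double {
    if (durationsMs.isEmpty()) return 1.0
    val satisfied = durationsMs.count { it <= targetMs }
    val tolerating = durationsMs.count { it > targetMs && it <= 4 * targetMs }
    return (satisfied + tolerating / 2.0) / durationsMs.size
}

// Example: with a 2-second screen-load target,
// apdexScore(listOf(900L, 1800L, 3500L, 9000L), targetMs = 2000L) == 0.625
```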
We have a Session Replay tool that, again, categorizes the entirety of the user session as frustrating or not and gives you highlights of the specific moments in that session identified as frustrating. We also help debug app ratings by capturing when a user is prompted for an app rating and automatically connecting the session to it. So, instead of just responding to your users and asking for logs, you can look up the user's session, figure out what happened before a negative app rating review, and actually respond with something, or debug it and make sure it doesn't happen again.
And then lastly, we have a Release Management capability. This allows mobile teams to control how they roll out through the app stores based on quality signals. So, maybe you want to prevent a rollout if one of your key flows has a drop-off, or pause the rollout if your crash rate increases on that app version. These are patterns we heard again and again from mobile teams as a very standard way of deploying. On top of our release management capabilities, we also have the ability to manage and connect to feature flags; as we know, more and more mobile teams are rolling out their software to users using feature flags.
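As a rough sketch of what such a quality gate can look like (all names and thresholds here are hypothetical, not Instabug's API):

```kotlin
// Hypothetical rollout gate: pause the store rollout when the new release's
// health regresses past an allowed delta versus the previous release.
data class ReleaseHealth(
    val crashFreeRate: Double,        // e.g. 0.9987
    val keyFlowCompletionRate: Double // e.g. 0.92
)

sealed interface RolloutDecision
object ContinueRollout : RolloutDecision
data class PauseRollout(val reason: String) : RolloutDecision

fun evaluateRollout(current: ReleaseHealth, baseline: ReleaseHealth): RolloutDecision =
    when {
        current.crashFreeRate < baseline.crashFreeRate - 0.001 ->
            PauseRollout("crash-free rate regressed versus the previous release")
        current.keyFlowCompletionRate < baseline.keyFlowCompletionRate - 0.02 ->
            PauseRollout("key flow drop-off exceeded the allowed delta")
        else -> ContinueRollout
    }
```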
We do all this with a whole bunch of integrations to your standard systems, whether that's Jira, ServiceNow, Slack, or Teams, and we also take pains to integrate with your backend observability tools so you have end-to-end observability across your users' entire experience, including when a request went through a backend service. We make that debugging handoff seamless by giving mobile teams shortcut links to backend dashboards, so they can hand off to a backend team or go diagnose a specific transaction on a backend dashboard themselves.
Okay, so that's a little bit about Instabug. That was the word from our sponsor. Now, let's get back to our AI vision.
I want to start with a fundamental fact: you all, as developers, understand that AI's ability to write code keeps improving. You're probably using it in your IDEs and other editors today and, like most of us, probably for scaffolding work or for understanding code you didn't write. We believe the first place we'll see more automated use of AI-generated code is in mobile, and in mobile app quality specifically. That's because the quality problems you see in mobile tend to be the types of problems AI can diagnose quickly, and they tend to have more standard resolutions.
So our investment in this vision of zero-maintenance apps isn't predicated just on the fact that we're a mobile-specific company. We have made specific investments because we believe AI can and will be transformational for mobile app development, often, we expect, before broad-based development.
We imagine a world where your mobile app quality platform practically improves and maintains quality by itself. You can imagine a tool that captures crashes, spots them, proposes pull requests, runs those pull requests through your standard tests, and —if successful— pushes them to your production app via a controlled rollout, for example. This is the vision behind zero-maintenance apps, and we believe AI makes it possible not just by fixing crashes in an automated fashion, but also by anticipating crashes, by building a more robust testing engine that is typically very hard for mobile teams to build and maintain, and by catching these kinds of issues in pre-production, spotting them in beta or UAT-type testing environments.
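To make the loop concrete, here's a conceptual sketch in Kotlin. Every name and stub body is hypothetical; the point is the shape of the pipeline, and note that nothing ships without passing your existing tests and review:

```kotlin
// Conceptual sketch of the zero-maintenance loop, with stubbed-out stages.
data class CrashGroup(val signature: String)
data class PullRequest(val id: Int, val ciPassed: Boolean, val approved: Boolean)

fun analyzeRootCause(crash: CrashGroup): String =
    "likely null dereference in ${crash.signature}"       // AI triage over occurrences

fun openFixPullRequest(rootCause: String): PullRequest =
    PullRequest(id = 42, ciPassed = true, approved = true) // generated patch + CI run

fun zeroMaintenanceLoop(crash: CrashGroup) {
    val rootCause = analyzeRootCause(crash)
    val pr = openFixPullRequest(rootCause)
    // Human-controllable exit: merge only if CI passes and the team approves.
    if (pr.ciPassed && pr.approved) {
        println("Merging PR #${pr.id} and starting a controlled rollout")
        // A controlled rollout would pause automatically on quality regression.
    }
}
```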
For mobile teams, this reduces workload, leads to happier users and faster time-to-market, and, most importantly, gives development organizations more time to focus on the most important thing they can deliver for their business: great user experiences. Every mobile developer we've ever met wants to spend their time building great experiences that users can hold in the palm of their hand, and too often we spend a significant amount of time fixing app quality issues instead.
From a technical perspective, this means both proactive issue identification and resolution. We're going to talk about some Instabug features that catch things that wouldn't normally be caught by human bug reporting or by traditional, let's call them deterministic, SDKs. We believe we can eliminate repetitive and time-consuming tasks, reducing overhead so developers can focus on what's most important for their apps, and we believe the mobile development platform we've built integrates seamlessly with how developers perform this work today.
And so, it enables technical organizations and leaders to reduce the time spent on maintenance, make that maintenance more efficient, and be more confident in the reliability and quality of the app they're releasing; and again, the core result is delivering a great experience for your mobile users.
The business benefits are obvious: detecting issues and resolving them quickly, preventing them from propagating to users for an extended period of time —they can result in negative reviews of your app, which we know can doom many mobile apps— reducing maintenance costs, shipping new capabilities to market faster to out-evolve your competitors, and delivering a great user experience. Mobile teams have various end business goals; some produce revenue, some create engagement, and some reduce engagement in other channels, and we believe that with AI and zero-maintenance apps you can achieve all of those results faster and at lower cost.
I want to go over two of the features embedded in Instabug today. The first is what we call SmartResolve. This is the AI agent that produces a pull request to reduce or eliminate a crash we've spotted. It goes through a three-step process, where we —and this is actually part of our SmartResolve 1.0— carry out the first step of performing a root-cause analysis based on the data we have and suggesting the general idea of a fix, in words, without having access to the app's code.
This is actually a task we found AI to be fairly good at, because the number and types of crashes that can occur in an app are fairly finite, and with the crash patterns we can derive from tens of thousands of occurrences of a crash, for example, we can easily spot what's unique about that crash's occurrences relative to the rest and suggest a fix. General-purpose AI models are great at this, and we continue to fine-tune them so they're geared specifically for mobile apps; the pipeline we've built to feed them the specific information we collect from mobile apps has allowed us to get really robust results here.
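One simple way to picture this kind of pattern-spotting (our illustration, not Instabug's actual pipeline) is to compare how often an attribute value appears among crash occurrences versus a baseline of all sessions, and rank by the lift:

```kotlin
// Sketch: rank attribute values (device model, OS build, network state, ...)
// by how over-represented they are in crash occurrences versus all sessions.
fun attributeLift(
    crashAttrs: List<String>,     // one attribute value per crash occurrence
    baselineAttrs: List<String>   // the same attribute across all sessions
): List<Pair<String, Double>> {
    val crashFreq = crashAttrs.groupingBy { it }.eachCount()
        .mapValues { it.value.toDouble() / crashAttrs.size }
    val baseFreq = baselineAttrs.groupingBy { it }.eachCount()
        .mapValues { it.value.toDouble() / baselineAttrs.size }
    return crashFreq
        .map { (value, p) -> value to p / (baseFreq[value] ?: (1.0 / baselineAttrs.size)) }
        .sortedByDescending { it.second }
}
// If 80% of occurrences are on one OS build that is only 10% of your traffic,
// a lift of ~8x is a strong hint about where the root cause lives.
```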
The next thing we added —which was part of SmartResolve 2.0— is generating the code fix. To be honest, the main technical challenge here was our retrieval pipeline: making sure we grab the right parts of the code. Mobile apps are complex; they typically live in one large code base, and it's hard to make sure you're grabbing the right functions and classes involved in the specific crash you're seeing. But through a lot of fine-tuning, we've developed a really robust pipeline that makes our retrieval, and therefore our fix generation, much more successful. We create the pull request in your app repository and kick off the CI jobs you've already set up to test that fix.
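Here's a hedged sketch of what such a frame-to-code retrieval step can look like, assuming the crash is already symbolicated (none of these names are Instabug's):

```kotlin
// Sketch: map symbolicated stack frames to source snippets, keeping only the
// app's own frames and capping the context passed to the model.
data class StackFrame(val module: String, val symbol: String, val line: Int)

fun retrieveContext(
    frames: List<StackFrame>,
    codeIndex: Map<String, String>,   // symbol -> source snippet
    appPackagePrefix: String,         // e.g. "com.example.app"
    maxSnippets: Int = 5
): List<String> =
    frames
        .filter { it.symbol.startsWith(appPackagePrefix) }  // skip OS/SDK frames
        .mapNotNull { codeIndex[it.symbol] }                // drop unindexed symbols
        .distinct()
        .take(maxSnippets)                                  // keep the prompt small
```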
As I mentioned, the benefits here mirror the benefits overall. It reduces the mental overhead of working through everything I just described: looking for a specific pattern, trying to reproduce the crash, asking AI or searching the internet and Stack Overflow archives to see how others have confronted this crash, and generating different fixes based on knowledge of a code base you might not be familiar with. It skips those steps and starts with: here's something we think will fix this; do you agree?
So, it really reduces the time you end up spending on this, and what we've found is that the primary benefit, the one that makes teams say "Aha, this is great", is that it shrinks the time from responding to a crash to having it fixed. Every moment a crash is out in the wild, especially with an app store review process in the way, can have a serious business impact.
The next feature I want to highlight is Auto-Detect UI Issues. We noticed that, with our bug-reporting users, there was a class of issues clearly present within mobile apps but not being reported: UI issues. In this image you can see a mis-cropped image, misaligned text, and mismatched font styles. These can all come from minor tweaks or adjustments missed during development, and it turns out that UAT and beta testers usually just skip over them. These are the kinds of paper-cut issues that can add up to an overall dissatisfying user experience. So what Auto-Detect UI Issues does is use AI to catch them. AI is actually fairly good at spotting this kind of "something is wrong with this image" experience, especially when trained on what typical mobile apps look like.
We report these as a separate class of issues because they're not really the same class as a user-reported bug. But it really helps —particularly in beta or UAT environments— for development teams to spot these early and fix the paper cuts before they ship to the store, and it frees up a lot of developer time. This is largely because —it's a funny anecdote we almost always hear— these types of issues tend to get reported by somebody else in your company: an executive, a CEO, a leader who spots them, and then the team has to respond because it feels very urgent. It really helps the mobile team stay ahead of those paper-cut issues so they're not interrupted and forced to scramble for a fix outside their regular workflow.
So, some key takeaways: we have a fundamental belief that automated code development and quality management are going to come to mobile first, and we believe AI can really reduce what is a particularly heavy quality-maintenance burden for mobile app teams. We're creating tools that let mobile developers, and the teams that support them like QA and product managers, see the benefits of this vision. We believe it will reduce the cost and effort of maintaining a high-quality app and let you pursue the innovation that makes a real business impact.
And we believe that standing still isn't going to get anyone anywhere, because the bar keeps rising. The bar for what a quality mobile app looks like is never going to stop rising, and if you stand still, not investing in reducing the effort you spend on mobile maintenance, you're going to fall behind quickly.
With that, that's the end of our webinar today. Thank you everybody for joining! I'm happy to take any questions or discuss the AI challenges that you and your teams are experiencing, whether that's about Instabug specifically or about the industry in general. Thank you very much!
I see quite a few questions in our Q&A, but for the rest of the attendees, if you have any other questions feel free to raise your hand as well. I'll try to unmute you and we can talk live, but first of all, let's get to some of the existing questions.
- How will code-based security be handled? Will any of this data be shared with the model for training purposes?
No. The data is not shared with the model. Code base security is the standard security you'd have sharing GitHub access with any other integration: we use a standard GitHub integration to gain that access, and we do not send your data to the model for training. We use the code base only for the retrieval step, to make sure we can generate a code fix, not for training purposes.
- Has this crash resolution been implemented in any real-world applications? If so, what percentage of these fixes were effective in resolving the issue?
So, we have a handful of beta customers. I don't have data on the specific percentage of fixes that were effective, but we do have a benchmark that we're actually in the process of publishing ourselves: a mobile app AI crash-fix benchmark. We can share where our solution stands against it, and we hope any other tool focused on fixing crashes publishes where they stand against it too.
- Does SmartResolve work with native crashes as well?
Yeah, I think the answer is yes. I don't remember seeing any NDK restrictions on SmartResolve; it can handle any type of crash. SmartResolve 1.0 would at least give a general recommendation, though maybe not for an NDK crash. So I think that's a yes, but let us get back to you and confirm.
- Assuming you tried different LLMs to generate code suggestions, are there any learnings you can share in terms of what worked well and what did not?
Yeah, this is going to be part of that benchmarking report. There are a lot of lessons learned about our RAG (Retrieval-Augmented Generation) pipeline and which specific models generated better fixes. Our current model is Claude, but we've been testing a handful of them and we'll publish the results with the benchmarking report. So, great question! In the time between SmartResolve 1.0 and 2.0, we've done a lot of work figuring out which models, training setups, and RAG pipelines produce the best results, and we plan to share all of it.
- What security measures are taken when AI processes the code base to provide fix proposals?
I think the real question is whether the code is being used for training purposes, and I've answered that. Beyond that, we have standard security protocols for transmitting the code to the AI models. But yes, the primary question people ask is whether we use this for training, and we do not.
- Is there a thought around UI validation of the app being done when compared with some design tools like Figma or something similar?
Yeah, I love that question! I don't think so at the moment, but I love the idea. Just to clarify, I assume this is about validating the app's UI against some standard, similar to running UI tests against functional specs. It's a great idea: you could imagine exporting a Figma file and running it through a CI job that also exercises the app and validates that the UI matches the Figma file. I love the idea, but we have not started working on that at all.
- We've been using Instabug for a couple of years now and it's been our primary tool for getting feedback from users. I want to know if you support configuration variables or if there is any plan to introduce that in the future.
I'm not exactly sure what configuration variables are, but first of all, thank you for being a customer. I love hearing that you're using it to get feedback from users. I'd suggest contacting our support team with that specific question so we can get you a direct answer. Configuration variables sounds a bit like a concept we have called user attributes, where you can tag any user session with a specific set of attributes so that when you look at a bug report, for example, you can see things like whether this is a VIP customer. You could also use it for something like configuration variables: essentially, what the state of the app was when the user submitted the bug report. Look in our docs for user attributes, but if that's not really the solution, please contact our support team and we'll make sure we get it solved.
- What's the plan for these new AI tools to support Flutter apps?
Right now it just supports native apps, but we're working on Flutter and we've been leading the way as a mobile SDK for Flutter and React Native support. I expect that to be coming shortly.
- Any stats about the impact on app performance?
The thing I want to highlight is that Instabug is on, I think, four billion devices at this moment, and we spend a lot of our time —probably 30% of our engineering effort— making sure that, as an SDK meant to monitor your app's quality, we are not impacting your app's quality. We take special pains to keep the impact low. Obviously, for things like capturing a screenshot and running it through visual UI issue detection, some of that we can do on the backend, but we're hoping to run more models on the device so there's less transmission of potentially personally identifiable information. We're always going to keep the impact of our SDK on your app's quality in mind; we have one of the world's leading quality SLAs as an SDK provider.
- What about encrypted builds? Will they require any additional handling?
A lot of people have asked about encrypted builds. As you all know, you do encrypt your builds when you submit them to the store. We have a process of symbolicating crashes with dSYM files, so yes, that is handled today. There's already a process within Instabug for uploading dSYM files so we can symbolicate your crash reports, and then we can match that in SmartResolve to the code base you've connected from GitHub.
- Can AI be linked with A/B testing to confirm solutions?
I'm not sure I completely understand that question. We have a capability where you can view the performance and quality impact of an A/B test or a feature flag that you're rolling out; that's part of our feature flag management capability. You can see the quality difference between two feature flag variants and decide whether to continue. We could add AI to help you make that decision quickly, and you could even imagine automating the rollout of a feature flag as long as the B variant performs better than the A variant from a quality perspective. But that's not something we showcased today or are working on immediately.
- Was the UI validation tool also trained on other types of languages, like right-to-left languages such as Hebrew and Arabic?
I don't believe it was trained on those specifically, but I wouldn't expect the results to be too different.
- Could this be trained on proprietary architectures which could be completely different from open-source resources?
By "architecture" I hope the question doesn't mean using different NDKs or something like that, but yes. What we need is access to the code that, in a specific crash example, caused or contributed to that crash, so that we can suggest a solution. If underlying modules or packages involved in the crash are proprietary and we don't have access to them, or you don't want to give us access, it's difficult for us to suggest a fix. The code doesn't have to be open source, but if a proprietary SDK or module was involved in a crash, it would be harder for us to suggest a fix. We would still capture the crash, including the frames that can't be symbolicated.
- Is this capability part of the core product offering or is it an add-on?
It's part of our core offering. At the moment, it's in beta, but if you sign up for our core offering you can request access to the beta test.
- Does any personal user data or sensitive info, like payment info, get captured by Instabug, intentionally or unintentionally? How do you make sure information use is compliant?
We have a very sophisticated masking capability within our SDK. You can turn screenshots on or off, you can mask all UI elements or only specific ones on specific screens, and you can intercept and reformat the network request payload. We have all of these tools because we don't want anyone to send us sensitive information: payment data, PII, or otherwise. We have defaults, what we call auto-masking, that we believe will prevent that 99% of the time, but you can always override or extend that masking as you see fit using our SDK tools. As I mentioned, as an SDK that's on billions of devices, we've taken great pains to give developers the tools to ensure they're not transmitting PII or other sensitive information to us.
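For a flavor of the payload-scrubbing side of this, here's a hypothetical sketch; the function and key names are illustrative, not Instabug's SDK API (see their docs for the real masking and auto-masking settings):

```kotlin
// Hypothetical recursive scrubber: redact values under sensitive keys before
// a request/response body is handed to any monitoring SDK.
private val SENSITIVE_KEYS = setOf("password", "card_number", "cvv", "ssn", "token")

@Suppress("UNCHECKED_CAST")
fun scrubPayload(payload: Map<String, Any?>): Map<String, Any?> =
    payload.mapValues { (key, value) ->
        when {
            key.lowercase() in SENSITIVE_KEYS -> "***REDACTED***"
            value is Map<*, *> -> scrubPayload(value as Map<String, Any?>)  // nested objects
            else -> value
        }
    }
```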
- Can developers modify code fixes suggested by SmartResolve?
Yes. In the pull request, you can add another commit and further modify it, or, if you're in GitHub, you can add comments and make suggestions just as you normally would. The idea is that it generates the suggested fix, but then you, as a development team, run your standard code review process: make sure it's up to your code standards, passes your CI, and gets any adjustments before merging.
- What platforms are supported by SmartResolve? Is it both Android and iOS?
Yes, correct. Right now it's the native platforms Android and iOS and we're working quickly to add support for Flutter and React Native.
- Does SmartResolve require source code access?
Yeah, as mentioned, it's right now through GitHub and it's a standard GitHub application. It has all the security and authentication required of any GitHub application and that is what allows us to generate a fix and then actually propose that fix as a pull request.
- Can you explain in a nutshell how SmartResolve generates a code fix with the AI capability?
Yeah. It first assesses and outlines a suggested root cause in almost human terms, then pushes that suggested root cause, along with the specific code blocks we believe are involved in the crash based on the frames we're seeing, to an LLM that generates the fix. Actually, it generates three different fixes, selects the best one, and generates a pull request for it. As I mentioned, the real smartness is in the pipeline that retrieves the right amount of code so we're able to generate a fix; that's what we've been working on for the last three months.
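That last selection step is easy to picture with a small sketch; the scoring below is a stand-in of ours, not Instabug's actual logic:

```kotlin
// Sketch of "generate several candidates, pick the best": discard fixes that
// don't build, then prefer the one that passes the most tests.
data class CandidateFix(val diff: String, val compiles: Boolean, val testsPassed: Int)

fun selectBestFix(candidates: List<CandidateFix>): CandidateFix? =
    candidates
        .filter { it.compiles }
        .maxByOrNull { it.testsPassed }
// The winner becomes the pull request your team reviews.
```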
- Is there a recommended path to enable this to be tried on a lower environment first?
Yeah, I think that's reasonable. With a lower environment, particularly for crashes, you might not get enough occurrences; if the lower environment is a beta environment with tens of thousands of users, that might be useful. Like all crash debugging and resolution, it helps to have more occurrences of a common crash. The other thing I'd say is: in a higher or production environment, because we're proposing a pull request that you can choose to merge or not, it's not going to impact your production environment in any way. The only concern would be that, if you're capturing a crash in your production environment, you'd want to make sure you're taking all the safeguards to avoid sending PII to Instabug. As long as that's the case, I wouldn't say you definitely have to start in a lower environment. You can, but you won't get as many occurrences to validate it and really take full advantage of the capability.
- Can SmartResolve work with multi-repository apps?
Today, you have to connect us to one specific repository, but we're working on the ability to add multiple repos. The question is whether we assume everything lives in one specific repo to search, and the answer today is yes, because of the way our GitHub integration works.
- Is it possible to enable this for only a segment of users?
You can enable Instabug Crash Reporting for only a segment of users; our SDK lets you control whether our crash reporter is listening and sending crashes to our backend. You could also say, "this crash only impacted a certain set of users, so I'll use the SmartResolve button on only this crash", but otherwise it takes all occurrences of the crash reported to Instabug and suggests a fix based on all of them.
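Here's a minimal sketch of that kind of segment gating, with a hypothetical reporter object rather than Instabug's real API:

```kotlin
// Illustrative only: decide per user whether the crash reporter should run.
data class User(val id: String, val segment: String)

fun shouldEnableCrashReporting(user: User, enabledSegments: Set<String>): Boolean =
    user.segment in enabledSegments
// You'd pass this result to something like crashReporter.setEnabled(...)
// during SDK initialization; consult your SDK's docs for the real method.
```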
Learn more:
- AI-Enabled Mobile Observability: A Future of Zero-Maintenance Apps
- Transforming Business KPIs with AI-Powered Mobile Observability
- 9 Privacy Must-Haves Before Adding AI to Your App
- The 7 Best AI-Powered AppSec Tools You Can’t Ignore
Instabug empowers mobile teams to maintain industry-leading apps with mobile-focused, user-centric stability and performance monitoring.
Visit our sandbox or book a demo to see how Instabug can help your app.