About a year ago, I was helping out our recruiting team at a college job fair when a candidate earnestly asked me “Why are you looking for Android engineers? I mean, the Twitter app is fully built, so what are they working on?” The question caught me a little off guard, but we went on to have a pretty good conversation about the improvements we’re continually making to the app, and how the development work is never really done.
There’s always lots to work on in any app: fixing bugs, or starting to use new tools and OS-level APIs; improving load times and resilience under poor network conditions; making accessibility and internationalization updates; or experimenting with new layouts.
Often, there’s a business goal behind what you’re doing, whether it’s explicitly stated or not. Fixing bugs is a good thing to do for its own sake, but it’s also key for retention, since people tend to delete or stop using slow, crashy apps. Proper internationalization is important for adoption because many people don’t use apps that aren’t available in their country or don’t work in their language. Tweaks in user experience and design can have massive impact on engagement or purchases.
Whenever you make significant changes, it’s important to keep track of how they’re affecting the people using your app. Talking to your customers, in person or online, is always important. Feedback conversations provide nuance and details that you can’t always capture with a few top-line metrics. You can and should use qualitative research to shape your approach to customers, but you can’t always be in touch with each segment of people that’s using your app. Logging metrics about the changes you make is critical, because analytics are user feedback at scale.
When you’re ready to start moving past your MVP and shipping major updates, we have a few suggestions for how to do it right:
It’s hard to measure the impact of changes you make if you don’t know how you’re doing to start with. Before going into any experiments, you should already know key numbers like your current DAUs, MAUs, retention rates, and conversion rates on events like purchases or social shares (bonus: our free analytics tool Answers gives you these stats in real time).
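To make the baseline concrete, here's a minimal sketch of how DAU and a conversion rate fall out of a raw event log. The event names and tuple shape are hypothetical; in practice these numbers would come from your analytics pipeline rather than hand-rolled code.

```python
from datetime import date

# Hypothetical event log: (user_id, date, event_name) tuples.
events = [
    ("u1", date(2016, 3, 1), "open"),
    ("u2", date(2016, 3, 1), "open"),
    ("u1", date(2016, 3, 2), "open"),
    ("u1", date(2016, 3, 2), "purchase"),
    ("u3", date(2016, 3, 2), "open"),
]

def dau(events, day):
    """Distinct users with any activity on the given day."""
    return len({user for user, d, _ in events if d == day})

def conversion_rate(events, day, target="purchase"):
    """Share of that day's active users who fired the target event."""
    active = {user for user, d, _ in events if d == day}
    converted = {user for user, d, e in events if d == day and e == target}
    return len(converted) / len(active) if active else 0.0

print(dau(events, date(2016, 3, 2)))              # -> 2
print(conversion_rate(events, date(2016, 3, 2)))  # -> 0.5
```

MAU and retention are the same idea over wider windows: distinct users across a 30-day span, and the overlap between one period's users and the next's.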
Some changes don't require an experiment: you can safely assume that you should fix bugs, or ship a version of your app that works with a screen reader. You should certainly try to measure the effects these types of changes have on your business metrics, but unless a change significantly alters the way your app operates, you may not need to spend much time testing it first.

What do you think might happen after you make this change? How will you know that it has or hasn’t happened? What do you consider success? Measure the effect you think it will have, but also keep an eye on your other key metrics — you may notice unintended effects alongside the ones you were testing.
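One lightweight way to "keep an eye on your other key metrics" is to write down the metric you expect to move alongside a few guardrail metrics that shouldn't, then compare before and after. The metric names and the 2% alert threshold below are purely illustrative assumptions, not from any real product:

```python
# Hypothetical before/after snapshots of the metrics you committed to watching.
baseline = {"signup_rate": 0.040, "day7_retention": 0.25, "crash_free": 0.995}
after    = {"signup_rate": 0.046, "day7_retention": 0.24, "crash_free": 0.993}

def relative_change(before, now):
    """Fractional change relative to the baseline value."""
    return (now - before) / before

target = "signup_rate"                       # the metric you expect to move
guardrails = ["day7_retention", "crash_free"]  # metrics that should stay flat

print(f"{target}: {relative_change(baseline[target], after[target]):+.1%}")
for g in guardrails:
    delta = relative_change(baseline[g], after[g])
    flag = "  <-- investigate" if abs(delta) > 0.02 else ""
    print(f"{g}: {delta:+.1%}{flag}")
```

In this made-up snapshot the target metric improved, but a guardrail (day-7 retention) slipped past the threshold, which is exactly the kind of unintended effect worth investigating before declaring victory.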
As in other parts of this series, we're assuming that you're operating lean and don't have a whole data science team behind you to help create and manage these tests. Here are a few important things to keep in mind when you run your own experiments:
Running a statistically robust experiment isn't as simple as it looks on the surface. This blog post from ConversionXL gives some useful details on common mistakes people make, including ignoring validity threats and increasing the chance of false positives by testing too many variations at once. A/B testing frameworks like Optimizely help take a lot of the guesswork out of running tests and bake in some best practices around measurement and interpretation of the results, so you can make better decisions. There are also lots of great tips from around the web about running robust tests for your business.
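The "too many variations" problem is easy to see with a quick simulation. If each variant-vs-control comparison has a 5% false-positive rate and there's no real effect, the chance that at least one variant "wins" is 1 − 0.95^k for k variants. The sketch below approximates each comparison as an independent 5% coin flip (a simplifying assumption) and shows how a Bonferroni correction, dividing alpha by the number of comparisons, pulls the rate back down:

```python
import random

random.seed(1)

def any_false_positive(k, alpha=0.05):
    """True if at least one of k no-effect comparisons comes up 'significant'."""
    return any(random.random() < alpha for _ in range(k))

def simulated_rate(k, alpha=0.05, trials=20000):
    """Monte Carlo estimate of the chance of at least one false positive."""
    return sum(any_false_positive(k, alpha) for _ in range(trials)) / trials

for k in (1, 5, 10):
    naive = simulated_rate(k)                      # test each variant at 0.05
    corrected = simulated_rate(k, alpha=0.05 / k)  # Bonferroni: 0.05 / k
    print(f"{k:2d} variants: naive ~ {naive:.2f}, corrected ~ {corrected:.2f}")
```

With 10 variants the naive approach declares a spurious winner roughly 40% of the time, which is why frameworks that bake in these corrections are worth using.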
What key things do you measure for your app? Tweet using #MobileAppPlaybook to tell us what’s important to you!