Mobile Performance Validations in CI/CD

August 27, 2021

Every team would like to uncover performance issues before going live by automating app profiling on Android and iOS, but it’s not always straightforward to implement.

Most teams who use Apptim for local app profiling ask us: 

Does Apptim work in CI/CD? That would be great! 

How can we automate performance validations? 

Should we run these validations using emulators or real devices? 

What pass/fail criteria should we set? 

These are just some of the questions we get about integrating mobile performance validations into a CI/CD pipeline. In this post, we’ll answer a few of the most frequently asked ones and share best practices to get started.

What do we mean by mobile performance validations?

Mobile performance validations refer to measuring and analyzing app performance on the client side (meaning on the app and device itself). It’s important to understand how the app uses resources such as CPU, memory, battery, and storage I/O, and to track metrics like FPS, thread count, end-user response times, app startup time, and crashes or ANRs on Android. Developers and testers typically access some of this data by profiling their app locally with a tool like Android Profiler or Instruments (Xcode).

You can analyze these mobile performance KPIs in different environments by changing the device/OS or the network connection (by simulating different bandwidths and latencies). The goal is to detect, early on, potential issues your users might face when using your app in real-world conditions, and to fix them before a new version is released.

How can we automate mobile performance validations?

You first need a way to run automated profiling of your app (so you don’t have to manually start profiling, run a test on the app, stop profiling, and so on). The easiest way to get started is by reusing the automated functional tests that were created for functional validation. These tests run at the UI level against a packaged version of the app that can be installed on a real device or emulator, and they simulate a real user journey. While a test runs, you can capture performance data about what’s happening “under the hood”.
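
As a rough illustration of that idea, here’s a minimal sketch (in Python, not Apptim’s implementation) that samples an Android app’s memory with adb in the background while the existing UI test suite runs. The package name and test command are hypothetical placeholders.

```python
import re
import subprocess
import threading
import time

PACKAGE = "com.example.shop"  # hypothetical package name


def sample_memory(samples, stop_event, interval_s=2):
    """Poll the app's total PSS (in KB) via adb while the UI test runs."""
    while not stop_event.is_set():
        out = subprocess.run(
            ["adb", "shell", "dumpsys", "meminfo", PACKAGE],
            capture_output=True, text=True
        ).stdout
        match = re.search(r"TOTAL\s+(\d+)", out)
        if match:
            samples.append(int(match.group(1)))
        time.sleep(interval_s)


samples, stop = [], threading.Event()
sampler = threading.Thread(target=sample_memory, args=(samples, stop))
sampler.start()

# Run the existing automated UI test suite (placeholder command).
subprocess.run(["pytest", "tests/ui/test_checkout.py"], check=False)

stop.set()
sampler.join()
if samples:
    print(f"Peak memory (PSS) during the test: {max(samples) / 1024:.1f} MB")
```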

In our experience, most mobile teams already have some type of automated UI tests running in their CI pipeline. If your app is in beta or new to the market, you may be thinking about adding them in the near future. This is the best time to think about how to include mobile performance validations in your app release process.

For example, here’s what happens in an Appium test that runs a typical user journey in an e-commerce app: a user searches for a product, selects it from a list, adds it to a cart, navigates to the cart, and completes the checkout. This functional test might check that the correct product was added to the cart, that the quantity is correct, or that the checkout works properly. At the same time, we can validate the response time of a simple action like tapping the “add to cart” button, as well as memory usage when the action is performed several times. Will it cause an OutOfMemory error?
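
As an illustration, an Appium test in Python could time the “add to cart” action and then repeat it a number of times, like this. The element IDs, capabilities, and server URL are hypothetical, and client API details vary by Appium client version.

```python
import time
from appium import webdriver
from appium.options.android import UiAutomator2Options
from appium.webdriver.common.appiumby import AppiumBy

# Hypothetical capabilities; adjust to your app, device, and Appium setup.
options = UiAutomator2Options()
options.app = "/builds/shop-release.apk"
options.device_name = "Pixel_5"

# Appium 1.x servers typically expect ".../wd/hub" as the base path.
driver = webdriver.Remote("http://localhost:4723", options=options)
driver.implicitly_wait(10)
try:
    # Time a single user action: tapping "add to cart".
    add_button = driver.find_element(AppiumBy.ACCESSIBILITY_ID, "add_to_cart")
    start = time.perf_counter()
    add_button.click()
    # Wait until the cart badge reflects the new item (hypothetical element).
    driver.find_element(AppiumBy.ACCESSIBILITY_ID, "cart_badge_1")
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"'Add to cart' response time: {elapsed_ms:.0f} ms")

    # Repeat the action several times to see whether memory keeps growing.
    for _ in range(20):
        driver.find_element(AppiumBy.ACCESSIBILITY_ID, "add_to_cart").click()
finally:
    driver.quit()
```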

If your team doesn’t have any automated functional tests, we strongly recommend automating a small but valuable use case to start measuring performance over time. For example, measure the app’s startup time or the performance of the login flow.
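
For example, on Android you can get a rough cold-start measurement with nothing more than adb; the package and activity names below are placeholders.

```python
import re
import subprocess

PACKAGE = "com.example.shop"   # hypothetical package name
ACTIVITY = ".MainActivity"     # hypothetical launch activity

# Force-stop the app so we measure a cold start, then launch it and wait.
subprocess.run(["adb", "shell", "am", "force-stop", PACKAGE], check=True)
out = subprocess.run(
    ["adb", "shell", "am", "start", "-W", "-n", f"{PACKAGE}/{ACTIVITY}"],
    capture_output=True, text=True, check=True
).stdout

# `am start -W` reports TotalTime (ms) for the launch.
match = re.search(r"TotalTime:\s+(\d+)", out)
if match:
    print(f"App startup time: {match.group(1)} ms")
```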

Should we run these validations using emulators or real devices?

It depends. If you’re mostly interested in comparing or benchmarking your app’s performance metrics across versions (like v2.6 versus v2.5), your test environments should be as similar as possible. In particular, the devices used to test should be the same. You’ll want to minimize the noise that comes with using different environments and look at the differences in measured performance between versions. For this purpose, emulators can be of great help: you can specify the hardware and OS version of the emulated device and use the same emulator for every benchmark. They’re also a cost-effective alternative to real devices if you run frequent benchmarks.

On the other hand, if you’re looking to evaluate the real user experience, you need to be as close as possible to real-world conditions. This means testing on real hardware. In addition to looking for noticeable performance differences from one app version to the next, you’ll want to make sure the app’s performance is acceptable on specific devices. You can do this by defining thresholds per device: for example, memory usage cannot exceed 300 MB on a given device, or you get notified (and probably fail the build pipeline) if FPS drops below 10 on any screen.
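
As a sketch of that kind of device-specific check (the package name and threshold are placeholders, and the jank percentage reported by dumpsys gfxinfo is used here as a stand-in for a raw FPS measurement), a post-test CI step might look like this:

```python
import re
import subprocess
import sys

PACKAGE = "com.example.shop"   # hypothetical package name
MAX_JANK_PERCENT = 20.0        # illustrative threshold for this device

# Read the rendering stats Android collected while the UI test ran.
out = subprocess.run(
    ["adb", "shell", "dumpsys", "gfxinfo", PACKAGE],
    capture_output=True, text=True, check=True
).stdout

total = re.search(r"Total frames rendered:\s+(\d+)", out)
janky = re.search(r"Janky frames:\s+\d+\s+\(([\d.]+)%\)", out)

if total and janky:
    jank_percent = float(janky.group(1))
    print(f"{total.group(1)} frames rendered, {jank_percent:.1f}% janky")
    if jank_percent > MAX_JANK_PERCENT:
        print("Rendering threshold exceeded on this device")
        sys.exit(1)  # non-zero exit fails the build pipeline
```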

What pass/fail criteria should we use? 

This is one of the most common questions we get asked and, arguably, the most difficult to answer. Google and Apple provide some best practices for pass/fail criteria. For example, an app rendering at 60 FPS provides the best experience to the end-user. Does this mean you have a performance issue if your app renders at 30 FPS? Well, it depends on what type of app you have. A mobile game or an app that has heavy graphics will have higher FPS requirements. Transactional apps may not need high levels of FPS because knowing how fast certain transactions are completed is more important. Measuring the end-user response time of the login page or an action like adding an item to a cart is a good way to measure transaction speed.

Our recommendation is to define pass/fail criteria with the whole development team as non-functional requirements. This can be the number of crashes or errors, the average percentage of CPU usage (like under 50%), or the app startup time (like under 3 seconds). The end goal is to have more confidence in the quality of every build. If you’re meeting your pass/fail targets every time, you’ll have more certainty regarding the end-user experience. 
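
As a minimal sketch of such a gate (the criteria and measured values below are illustrative; in a real pipeline the results would come from the profiling step), a CI script could compare results against the agreed criteria and fail the build on any violation:

```python
import sys

# Pass/fail criteria agreed on by the team as non-functional requirements.
CRITERIA = {
    "crashes": 0,             # no crashes allowed
    "avg_cpu_percent": 50,    # average CPU usage under 50%
    "startup_time_ms": 3000,  # app startup under 3 seconds
}

# Results would normally come from the profiling run; illustrative values here.
results = {"crashes": 0, "avg_cpu_percent": 42.5, "startup_time_ms": 2650}

violations = [
    f"{kpi}: {results[kpi]} exceeds limit {limit}"
    for kpi, limit in CRITERIA.items()
    if results.get(kpi, float("inf")) > limit
]

if violations:
    print("Build failed performance validation:")
    print("\n".join(violations))
    sys.exit(1)  # fail the CI job
print("Build meets all performance criteria")
```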

How can I use Apptim in CI/CD?

For those of you who got this far and might still be wondering whether Apptim works in CI/CD, the answer is YES! We have a CLI tool that lets you run automated performance validations of your app, integrate them into CI/CD, and set pass/fail criteria for the KPIs that most affect user experience. If you’re interested in a demo, you can schedule one here or reach out to our team at hello@apptim.com.

Do you have experience integrating performance validations into your app release pipeline? What has and hasn’t worked for you? Share your thoughts below!


