Performance principles

Software performance is typically evaluated by measuring the time it takes to perform designated tasks, so most software optimization effort goes toward increasing speed to reduce turnaround time. However, just as high-performance vehicles are equipped with features beyond speed, other factors have a significant impact on software performance. This is particularly true of software developed for embedded systems, such as mobile devices, where hardware resources are usually limited. Two additional factors to consider are power and memory consumption.

If performance is characterized by the overall cost of executing the software, and the overall cost is the weighted sum of the costs of time, power, and memory consumption, then mobile software performance can be improved by optimizing the software to minimize the cost in the following equation:

Cost(software) = w1 * cost(time) + w2 * cost(power) + w3 * cost(memory), where w1 + w2 + w3 = 100%

The factors on the right side of the equation sometimes pose a design trade-off against one another (for example, consuming more memory for a speedier response). Although the main focus of this guide is lowering the cost of time, these trade-offs should be thoroughly evaluated before any design or implementation decision is made to improve overall software performance. The key to performance improvement is to measure accurately, before and after each optimization.
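As a sketch of the cost model above, the following C function evaluates the weighted sum. The weights and cost values used here are hypothetical; in practice, each cost term must come from profiling.

```c
#include <assert.h>
#include <math.h>

/* Illustrative weighted-cost model from the equation above.
   Weights are fractions (w1 + w2 + w3 = 1.0, i.e. 100%); the cost
   values are placeholders for profiled measurements. */
double overall_cost(double w_time, double w_power, double w_memory,
                    double c_time, double c_power, double c_memory)
{
    /* Sanity check: the weights must sum to 100%. */
    assert(fabs(w_time + w_power + w_memory - 1.0) < 1e-9);
    return w_time * c_time + w_power * c_power + w_memory * c_memory;
}
```

Shifting the weights models different priorities: a battery-sensitive design would raise `w_power`, accepting a higher time cost in exchange.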

Improve software algorithms

Any software task can be characterized by its computational complexity, commonly described with Big O notation in mathematics and computer science. Since execution time is a function of computational complexity over processing power, and processing power is limited and hardware dependent, much of the effort in software development goes into choosing better algorithms that reduce the computational complexity of a task, thereby reducing time and improving performance.
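As a generic illustration of reducing computational complexity (not a Brew MP API), the sketch below replaces an O(n) linear scan with an O(log n) binary search over a sorted array:

```c
#include <stddef.h>

/* Binary search over a sorted int array: O(log n) instead of the
   O(n) cost of scanning every element.
   Returns the index of `key`, or -1 if it is not present. */
int binary_search(const int *a, size_t n, int key)
{
    size_t lo = 0, hi = n;              /* search the range [lo, hi) */
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (a[mid] == key)
            return (int)mid;
        else if (a[mid] < key)
            lo = mid + 1;               /* discard the lower half */
        else
            hi = mid;                   /* discard the upper half */
    }
    return -1;
}
```

For a 10,000-entry table, this is roughly 14 comparisons instead of up to 10,000, and the gap widens as the data grows.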

Besides improving the software algorithms, there are other high-level programming best practices or principles to consider to reduce the time factor or to improve general user experience. These principles are the main focus of this document and serve as the basis for the recommended tricks and tactics.

Shift computation in time

You do not need to compute everything at once. Some recommendations include the following:

  • Precompute: compute results ahead of time and cache them; for example, a prebuilt contact list.
  • Evaluate lazily: compute a result only when it is needed.
  • Share expenses, or batch: compute a result once and share it across multiple tasks.
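The first two recommendations can be combined in a simple caching sketch. The `expensive_computation()` below is a hypothetical placeholder workload; the point is that the result is computed lazily on first request, then cached and reused:

```c
/* Lazy evaluation with caching: compute on first demand, reuse after. */
static int cached_value;
static int cache_valid = 0;

static int expensive_computation(void)  /* hypothetical workload */
{
    int i, sum = 0;
    for (i = 0; i < 1000; i++)
        sum += i;
    return sum;
}

int get_value(void)
{
    if (!cache_valid) {                 /* computed at most once */
        cached_value = expensive_computation();
        cache_valid = 1;
    }
    return cached_value;                /* later calls are O(1) */
}
```

If the underlying data can change, the cache needs an invalidation path (clearing `cache_valid`), which is the usual cost of this trade-off.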

Shift computation in space

The basic idea is to relax the requirements on the results, trading accuracy for time: sacrifice some result accuracy for faster turnaround. For example, sample touch events rather than processing every one.
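A minimal sketch of touch-event sampling, assuming millisecond timestamps; the 50 ms interval is an illustrative assumption, not a recommended value:

```c
/* Event sampling: skip events that arrive within SAMPLE_INTERVAL_MS
   of the last one processed, trading positional accuracy for less
   processing. Timestamps are assumed to be in milliseconds. */
#define SAMPLE_INTERVAL_MS 50

static unsigned long last_processed_ms = 0;

/* Returns 1 if the event at `now_ms` should be processed, 0 to skip it. */
int should_process_event(unsigned long now_ms)
{
    if (now_ms - last_processed_ms >= SAMPLE_INTERVAL_MS) {
        last_processed_ms = now_ms;
        return 1;
    }
    return 0;
}
```

The interval bounds the worst-case error (the finger can move only so far in 50 ms) while capping the event-handling rate.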

Make the common case fast

Amdahl's law, which is commonly applied to computer architecture design, describes this effect. It is used to find the maximum expected improvement to an overall system when only part of the system is improved. Assuming the problem size remains the same after the improvement, if P represents the proportion of the system the speedup improvement can be applied to, (1-P) represents the portion of the system unaffected by the improvement, and S represents the factor of improvement, the overall speedup of the system can be described as:

Speedup = 1 / ((1 - P) + P / S)

If an improvement applies to 30% of the system (i.e., P = 0.3) and makes that portion twice as fast (i.e., S = 2), the overall speedup is about 1.1765 (i.e., the improved system is roughly 1.1765 times as fast as the original). However, if another improvement applies to 70% of the system (P = 0.7) with the same factor of improvement, the overall speedup jumps to about 1.538. In a software system, a higher improvement factor can be achieved through more efficient coding, better algorithms, or other programming best practices. However, to gain better overall performance, Amdahl's law tells us to make the common case fast.
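The worked numbers above can be reproduced with a direct translation of the formula:

```c
/* Amdahl's law: P is the proportion of the system the improvement
   applies to, S is the speedup factor for that portion. */
double amdahl_speedup(double P, double S)
{
    return 1.0 / ((1.0 - P) + P / S);
}
```

With P = 0.3 and S = 2 this evaluates to 1 / 0.85 ≈ 1.1765; with P = 0.7 it is 1 / 0.65 ≈ 1.538, matching the figures above.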

Improve data management

Memory and file access are considered relatively expensive software operations, particularly for large amounts of data. Accessing data from the file system (EFS) is considerably more expensive than accessing data from memory. Therefore, passing a pointer to data rather than copying the data is commonly deemed the better practice for data sharing. Where and how data is stored also has a significant impact on the overall performance of an application.
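A small illustration of the pointer-versus-copy practice: passing a large structure by const pointer avoids duplicating it on every call. The `DataSet` type here is hypothetical, chosen only to make the size of the copy obvious:

```c
#include <stddef.h>

/* A deliberately large structure (~4 KB). Passing it by value would
   copy the whole array on every call; a const pointer shares it. */
typedef struct {
    int samples[1024];
    size_t count;
} DataSet;

long sum_samples(const DataSet *ds)   /* shared by pointer, read-only */
{
    size_t i;
    long total = 0;
    for (i = 0; i < ds->count; i++)
        total += ds->samples[i];
    return total;
}
```

The `const` qualifier documents that the callee only reads the shared data, which is what makes pointer sharing safe here.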

Manipulate human perceptions

The fundamental reason for improving software performance is to improve the user's experience with the software. However, there are limits to human perception that should be taken into account in software design, both to mask slow operations and to avoid processor-hogging tasks that produce no noticeable difference in user experience. For example, a simple animation can keep the user occupied while a slow loading process runs behind the scenes, and the calculation for rendering a faraway 3D object can be simplified because the details are barely noticeable to the human eye.
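The faraway-object idea can be sketched as a distance-based level-of-detail selector. The distance thresholds and detail levels below are illustrative assumptions, not values from any rendering system:

```c
/* Level-of-detail choice: spend rendering effort only where the eye
   can see it. Higher return values mean more detailed geometry.
   Thresholds (in arbitrary world units) are illustrative. */
int detail_level(double distance)
{
    if (distance < 10.0)  return 3;   /* close up: full detail */
    if (distance < 50.0)  return 2;
    if (distance < 200.0) return 1;
    return 0;                         /* barely visible: minimal mesh */
}
```

The same shape of logic applies beyond 3D rendering, e.g. updating an off-screen progress value less often than an on-screen one.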

Leverage efficient APIs

Brew MP offers a rich set of interfaces for applications to access system services and resources. Some interfaces have performance implications that may not be obvious to developers. This document captures such information so that developers can make more informed design and implementation decisions and minimize the performance impact on their applications.