The iPhone was revolutionary for its use of direct manipulation – the feeling that you’re really holding content in your hands and manipulating it with your fingertips. While many mobile platforms have touch, it is the realistic physics and fluid animation of the iPhone that set it apart from its competitors.
However, jerky scrolling ruins the experience. The new UI of Twitter for iPhone 4.0 contains many details that could impact performance, so we had to treat 60 frame-per-second animation as a priority. If you are troubleshooting animation performance, this post should provide some useful pointers.
Animation on iOS is powered by Core Animation layers. Layers are a simple abstraction for working with the GPU. When animating layers, the GPU simply transforms surfaces, a job the hardware was designed for. The GPU is not, however, optimized for drawing: everything in your view’s drawRect: is handled by the CPU, then handed off to the GPU as a texture.
Animation problems fall into one of these two phases of the pipeline: either the GPU is taxed by expensive operations, or the CPU spends too much time preparing a cell before handing it off to the GPU. The following sections contain simple directives, based on how we addressed each of these challenges.
An overburdened GPU manifests as a low but consistent framerate. The most common causes are excessive compositing, blending, and pixel misalignment. Consider the following Tweet:
A naive implementation of a Tweet cell might include a UILabel for the username, a UILabel for the tweet text, a UIImageView for the avatar, and so on.
Unfortunately, each view burdens Core Animation with extra compositing.
Instead, our Tweet cells contain a single view with no subviews; a single drawRect: draws everything.
We institutionalized direct drawing by creating a generic table view cell class that accepts a block for its drawRect: method. This is, by far, the most commonly used cell in the app.
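In sketch form, such a cell might look like the following. This is our illustration, not Twitter’s actual class: the names (BlockDrawCell, DrawnContentView, CellDrawBlock) are hypothetical, and ARC is assumed.

```objc
typedef void (^CellDrawBlock)(CGRect bounds);

// A plain UIView whose entire content is painted by a configurable block.
@interface DrawnContentView : UIView
@property (nonatomic, copy) CellDrawBlock drawBlock;
@end

@implementation DrawnContentView
- (void)drawRect:(CGRect)rect {
    if (self.drawBlock) self.drawBlock(self.bounds);
}
@end

// A generic cell hosting a single direct-drawn view and no other subviews.
@interface BlockDrawCell : UITableViewCell
@property (nonatomic, readonly) DrawnContentView *drawnView;
@end

@implementation BlockDrawCell
- (id)initWithStyle:(UITableViewCellStyle)style
    reuseIdentifier:(NSString *)identifier {
    if ((self = [super initWithStyle:style reuseIdentifier:identifier])) {
        _drawnView = [[DrawnContentView alloc]
                         initWithFrame:self.contentView.bounds];
        _drawnView.autoresizingMask = UIViewAutoresizingFlexibleWidth |
                                      UIViewAutoresizingFlexibleHeight;
        _drawnView.contentMode = UIViewContentModeRedraw; // redraw on resize
        _drawnView.opaque = YES;  // opaque layers composite without blending
        _drawnView.backgroundColor = [UIColor whiteColor];
        [self.contentView addSubview:_drawnView];
    }
    return self;
}
@end
```

On reuse, the controller assigns a new drawBlock and calls setNeedsDisplay on drawnView, so the whole row is repainted in one pass.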
You’ll notice that Tweets in Twitter for iPhone 4.0 have a drop shadow on top of a subtle textured background. This presented a challenge, as blending is expensive.
We solved this by reducing the area Core Animation has to treat as non-opaque: we split the shadow areas out from the opaque content area of the cell.
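The idea can be illustrated with a fragment in the style of the snippets above (view names and metrics are ours, assumed for illustration): keep the large content region in an opaque view, and confine blending to thin shadow strips.

```objc
// The big content region composites with no blending at all.
contentView.opaque = YES;
contentView.backgroundColor = [UIColor whiteColor];

// Only this thin strip above the cell needs alpha blending.
UIImageView *topShadow =
    [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"cell-shadow"]];
topShadow.frame = CGRectMake(0, -4, contentView.bounds.size.width, 4);
topShadow.opaque = NO;
```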
To quickly spot blending, run your app under the Core Animation instrument in Instruments and enable the Color Blended Layers option. Green areas are opaque; red areas are blended surfaces.
Spot the danger in the following code:
CGRect subframe = CGRectMake(x, y, width / 2.0, height / 2.0);
If width is an odd number, then subframe will have a fractional width. Core Animation will accept this, but it will require anti-aliasing, which is expensive. Instead, run floor or ceil on computed values.
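Continuing the snippet above, a minimal fix is to snap the computed dimensions to whole points:

```objc
// Snap computed dimensions to whole points so Core Animation never has to
// anti-alias a fractional edge. floor() rounds down; use ceil() to round up.
CGRect subframe = CGRectMake(x, y,
                             floor(width / 2.0),
                             floor(height / 2.0));
```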
In Instruments, check Color Misaligned Images to hunt for accidental anti-aliasing.
The second class of animation problem is the “pop” that occurs when new cells scroll into view. When a cell is about to appear on screen, you have only about 17 milliseconds (one frame at 60 frames per second) to provide its content before you’ve dropped a frame.
As described in the table view documentation, instead of creating and destroying cell objects whenever they appear or disappear, you should recycle cells with the help of -[UITableView dequeueReusableCellWithIdentifier:].
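The standard recycling pattern of the era looks like this (the reuse identifier and configuration step are illustrative):

```objc
- (UITableViewCell *)tableView:(UITableView *)tableView
         cellForRowAtIndexPath:(NSIndexPath *)indexPath {
    static NSString *kCellIdentifier = @"TweetCell";
    UITableViewCell *cell =
        [tableView dequeueReusableCellWithIdentifier:kCellIdentifier];
    if (cell == nil) {
        // Pay the allocation cost only when the reuse pool is empty.
        cell = [[UITableViewCell alloc]
                   initWithStyle:UITableViewCellStyleDefault
                 reuseIdentifier:kCellIdentifier];
    }
    // ...configure the cell for indexPath...
    return cell;
}
```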
If you are direct drawing and recycling cells and you still see a pop, measure the time spent in your drawRect: with Instruments. If needed, eliminate “nice to have” details, like subtle gradients.
Sometimes, you can’t simplify drawing. The new #Discover tab in Twitter for iPhone 4.0 displays large images in cells. No matter how simple the treatment, scaling and cropping a large image is expensive.
We knew #Discover had an upper bound of ten stories, so we decided to trade memory for CPU. When we receive a trending story image we pre-render the cell on a low-priority background queue, and store it in a cache. When the cell scrolls into view, we set the cell’s layer.contents to the prebaked CGImage, which requires no drawing.
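A hedged sketch of that prebake-and-cache flow follows; Story, renderStory:inRect:, and storyCache are hypothetical names of ours, not Twitter’s actual API. (UIGraphics image contexts are safe off the main thread as of iOS 4.)

```objc
- (void)prerenderStory:(Story *)story size:(CGSize)size {
    dispatch_async(
        dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), ^{
        // Opaque, screen-scale bitmap context rendered off the main thread.
        UIGraphicsBeginImageContextWithOptions(size, YES, 0);
        [self renderStory:story
                   inRect:CGRectMake(0, 0, size.width, size.height)];
        UIImage *baked = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        dispatch_async(dispatch_get_main_queue(), ^{
            [self.storyCache setObject:baked forKey:story.storyID];
        });
    });
}
```

When the cell scrolls into view, no drawing is required; its layer is simply handed the prebaked bitmap:

```objc
UIImage *baked = [self.storyCache objectForKey:story.storyID];
cell.layer.contents = (id)baked.CGImage;
```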
All of these optimizations come at the cost of code complexity and developer productivity. So long as you don’t paint yourself into an architectural corner, you can always apply them after you’ve written the simplest thing that works and collected actual measurements on hardware.
Remember: Premature optimization is the root of all evil.
-Ben Sandofsky (@sandofsky). Thanks to Ryan Perry (@ryfar) for the technical review, and to the Twitter mobile team for their input.