017 iPhreaks Show – Performance Tuning with Brandon Alexander

by admin on August 22, 2013

Panel

Discussion

01:19 – Brandon Alexander Introduction

02:00 – Performance Tooling & User Experience

04:30 – Reproducibility with Experiments

07:50 – Measuring Frame Rate

09:31 – CPU vs GPU

12:56 – Tools

  • Frames Per Second
  • Time Profiler

16:24 – OpenGL ES

17:35 – Performance Tuning for Memory-Bound Applications

19:26 – Memory Allocation

28:16 – Network Requests

36:14 – Visual Changes in iOS 7 and Performance Tuning

39:05 – Mocking and Stubbing

41:15 – Battery Life

45:24 – Profiling CPU-Bound Stuff

Picks

Next Week

Software Craftsmanship with Ken Auer

Transcript

[This show is sponsored by The Pragmatic Studio. The Pragmatic Studio has been teaching iOS development since November of 2008. They have a 4-day hands-on course where you'll learn all the tools, APIs, and techniques to build iOS Apps with confidence and understand how all the pieces fit together. They have two courses coming up: the first one is in July, from the 22nd - 25th, in Western Virginia, and you can get early registration up through June 21st; you can also sign up for their August course, and that's August 26th - 29th in Denver, Colorado, and you can get early registration through July 26th. If you want a private course for teams of 5 developers or more, you can also sign up on their website at pragmaticstudio.com.]

CHUCK: Hey everybody and welcome to Episode 17 of The iPhreaks Show! This week on our panel, we have Pete Hodgson.

PETE: Hello, hello from San Francisco!

CHUCK: Jaim Zuber.

JAIM: Hello from Minneapolis!

CHUCK: Andrew Madsen.

ANDREW: Hi from Salt Lake City!

CHUCK: Rod Schmidt.

ROD: Hello, hello from Salt Lake!

CHUCK: I’m Charles Max Wood from DevChat.tv. This week we have a special guest, and that is Brandon Alexander.

BRANDON: Hello! I’m coming from Atlanta, Georgia.

CHUCK: Since you haven’t been on the show before, do you want to give us a brief introduction, let us know who you are?

BRANDON: I’m currently an iOS and hopefully Mac developer for Black Pixel. I do a lot of the client development work and test as much of our products as I can. I’m also an author, a conference speaker, and I’ve done a training video that will appear soon.

CHUCK: Nice! Sounds like fun! What book did you write? I’m curious…

BRANDON: The book I wrote is called “Pro iOS 5 Tools”. It’s a couple of versions of iOS old, but the techniques in the book are still completely valid today.

CHUCK: Very nice. Alright, we’ll tell people to go check it out. We brought you on the show to talk about “Performance Tuning” for your iOS app. I think it’s interesting; we’re talking about a resource-constrained environment. Is it about the user’s experience? Or, are there other concerns as well that we’re trying to optimize for?

BRANDON: Ultimately, it’s about the user experience. If you try to implement something and the user doesn’t have a good experience with it, or it does something to the phone like draining battery life, you might want to rethink that feature or rethink the assumptions of your application.

CHUCK: That makes sense. So what kind of things do you recommend to people that they start with?

BRANDON: Well, performance tuning is actually a very scientific process. If you think about the scientific method, you come up with a hypothesis. In terms of an application, you might notice that, say, scrolling a TableView or CollectionView is rather sluggish. So you figure out what it could be, and you make some measurements beforehand. In terms of scrolling performance, it’s usually frames per second. It could be memory-related, so you look at how much memory your application is using with the various instruments available. And then you make a tweak to your code, so you’d make the modification. You basically go from your hypothesis of, “Okay, I think I’m using a ton of transparent views in my TableView or CollectionView,” so I make them not transparent and change the way I render everything on screen, then I test again. And then I look at my 2 pieces of data: did I improve by X% (that percentage is really up to you)? Or, if you’re trying to hit a certain frames per second mark — in most cases, it’s 60 frames per second — if you hit that, then you’re good to go. If you didn’t, rethink your hypothesis, rethink your approach, and go again.

PETE: I really like that way of thinking about it. I think it’s super good because otherwise you end up thrashing around and kind of making guesses, and then you don’t know if you’re improving things or not; having a little bit of rigor to this kind of stuff probably really helps keep you focused on the goal rather than just spending a day futzing around with ScrollViews.

BRANDON: Exactly.

PETE: I guess I have a question around that, the scientific method thing. One of the challenges is the reproducibility of experiments, if we’re going to keep that metaphor going. You measure something with the ScrollView, you notice the frames per second are off on your ScrollView or whatever, and then you make some change. But then when you retest, you need to make sure you’re testing the same way. Have you got any thoughts on good ways to do that, or pitfalls to avoid?

BRANDON: Yes. A good example of something that may affect scroll performance could be, say, you’re scrolling and all of a sudden, something in the background kicks off a network request, and then it starts parsing a bunch of JSON or XML. If that’s happening in the background while you’re scrolling, your performance could be impacted. So really, when it comes to testing these types of things, you need to make sure that you’re controlling your environment so the thing you’re testing, the thing you’re changing, is really the thing that you’re looking for.

In the case of parsing happening in the background, making sure that all of the data has been parsed and is ready to go before you perform your test would be a good thing to do. If your application refreshes data every 60-120 seconds, bump that interval up a little bit when you’re testing the actual scroll performance. If graphics are rendering, you want to test specifically for graphics rendering. If you think maybe your parsing code is impacting your scroll performance, for example, test your parsing code. It’s really just a matter of honing in on one particular component of your application — knowing that everything is going to impact performance, but honing in on one specific aspect of it and making it as good as possible. With all of those changes and tweaks happening through your application, it’s going to make performance a lot better.

I focus on scroll performance because that’s probably the number one thing that we can do as developers to make the application easier to use because a user has an expectation of, “I move my finger and the content moves with it,” compared to other platforms where you can have a little bit of lag, and the users are a little more forgiving because that’s something they’re used to on that platform. But with iOS, when you move your finger, the content better move.

PETE: Yeah. I guess the freedom you have to make it slow is not that much because even just a few milliseconds, the frames per second drop down; whereas if a page takes like a few more milliseconds to load, no one really notices. But if 5 TableView cells each take 5 milliseconds longer to refresh, then you’re going to feel that in a very human way, I suppose, because it’s going to feel not smooth.

BRANDON: Yeah, definitely. And we also need to remember that we have, I think, 15 or 16 milliseconds to render everything. So when your code is executing in the run loop and you want to render something, you have to have your code running in like 10 milliseconds, and then the rendering code has that extra 6 milliseconds to actually render it to the screen.
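The arithmetic behind those numbers is simple enough to sketch — at 60 frames per second the whole budget is 1000/60 ≈ 16.7 ms; the 6 ms render figure below is just the rough number from the discussion, not an official constant:

```c
/* At `fps` frames per second, the total per-frame budget is 1000/fps ms.
   Subtracting what the render pass needs leaves the time your own code
   has to finish in each frame. */
double app_budget_ms(double fps, double render_ms) {
    return 1000.0 / fps - render_ms;
}
```

With `app_budget_ms(60.0, 6.0)` you get roughly the 10 ms Brandon mentions.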

CHUCK: How do you measure this? How do you measure the frame rate or whatever?

BRANDON: With the iOS development toolchain, we have Xcode and we have Instruments. With Instruments, we can use the Core Animation Instrument connected either to the simulator — which is not really a good test for device performance, but it’s a pretty good test for memory performance, like finding leaks and other things, but I’m getting a little off the topic of scroll performance. When we’re talking about scroll performance, you hook up your device to Instruments, use the Core Animation Instrument, and you start scrolling around, and it’ll show you how many frames per second are being rendered by the graphics system.

There are also several other things you can do. You can have it Color Blended Layers, so as your View hierarchy is built, the more transparency you use, the redder certain pieces will get. So you want to try to have your application be as little red as possible for highly scrolled Views. If it’s something static, you can kind of fudge that a little bit. But when you’re scrolling things, you don’t want a lot of transparency, because when the compositor composites down to a flat raster image, it’s actually going to pass over all of those UI elements that are transparent that many times. So if you have 3 Views stacked on top of each other and they are all transparent, they’re all going to get rendered and they’re all going to have a pass done over them even though they are 100% transparent.
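A toy model of why Color Blended Layers flags transparency — this is an illustration of the idea, not how Core Animation is actually implemented:

```c
#include <stdbool.h>

/* The compositor walks a stack of layers from the top down. An opaque
   layer hides everything beneath it, so compositing can stop there;
   every transparent layer above it forces another blending pass.
   `is_opaque` is ordered bottom-to-top, so index n_layers-1 is topmost. */
int blend_passes(const bool *is_opaque, int n_layers) {
    int passes = 0;
    for (int i = n_layers - 1; i >= 0; i--) {
        passes++;
        if (is_opaque[i]) break;  /* nothing below shows through */
    }
    return passes;
}
```

Three transparent views stacked up cost three passes; make the top one opaque and the compositor only touches that one.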

PETE: Can you talk a little bit on — I think this is kind of related — the whole kind of CPU versus GPU thing like detecting when stuff is going to be rendered by the graphics processor versus the main processor?

BRANDON: That is a good question. To be honest, I’m not as familiar with how those things work — how some code goes to the GPU versus how some code goes to the CPU. I know doing some things, like setting a transform on a layer, will bypass the CPU and go to the GPU. But some of those things are, to me, hidden, because I don’t necessarily have to worry about them specifically. I’m not a game developer, so I can kind of rely on the CPU/GPU doing their things as appropriate. Knowing when it goes to the GPU would be nice, but I haven’t had the need to know that at this point. But that’s really a good question; I’d love to do some research and be able to answer that in the future.

PETE: Yeah. I’m definitely not an expert on this, and I have this kind of vague memory of one of the colors. [Chuckles] I remember when I was doing some performance tuning on scroll optimization, one of the colors — the reason it was a bad color when you turn on the coloring in Instruments — was because it meant that the main CPU had to do some work to render that View, and the CPU is super duper slow compared to the GPU, because the GPU is super duper optimized to do stuff with pixels, and the CPU, I want to say, is general purpose.

BRANDON: Are you referring to the offscreen-rendered –

PETE: Yes, that might be what it is.

BRANDON: What that does is, I think, it colors everything yellow — everything that’s rendered offscreen by the CPU.

PETE: Yeah, that’s what I was thinking.

BRANDON: So doing things like the corner radius — I’m not sure if scaling does it, but I know setting the corner radius on a layer will force that layer to be rendered offscreen. There are other techniques for doing corner radius. For example, in iOS 6, if you go into the Stocks app that ships with iOS, connect your device to Instruments, and turn on Color Offscreen-Rendered, you can actually see that on some of the TableViews that have a corner radius set, there’s actually a very, very small image that they used. That small image is being rendered offscreen instead of the whole View itself.

CHUCK: Hah!

PETE: Interesting.

BRANDON: So running some of these tools with Instruments and then launching other applications will actually show you how the other developers are doing this. So learning from Apple in the case of the Stocks app is a great thing to look at.

PETE: I remember when I first started using Instruments on the device, I was kind of shocked that you can actually just run Instruments against any app that’s installed on your phone. So you can run Instruments on the Facebook app and see how good they are at doing offscreen rendering and lining up the pixels and that kind of stuff. It was kind of funny.

CHUCK: It seems like you have to know kind of these areas where performance is impacted. Are there any tools that will just tell you you have a performance problem somewhere? Or, do you have to be looking for these kinds of things?

BRANDON: In my experience, you have to look for them. A lot of times, things are pretty, pretty apparent.

PETE: I guess one thing that feels obvious enough that maybe we didn’t even think to talk about it is the frames per second instrument in Instruments. You can just run it against the app, and it tells you what the frames per second are as you’re playing with the application. You can just launch Instruments, run your application, scroll some things and move around, and just kind of wander around your application, and then watch that graph; you’ll see very clearly when it drops. Even if you can’t totally sense it while playing with the application, you can see it in the graphs. That’s normally the first thing I would go to if I want to do a survey of the app and see where the potential problem areas are.

BRANDON: Another thing that I do when I’m doing any type of performance tuning, and when it comes to scrolling, is I use the Time Profiler. If I know that I’m doing a lot of resource-intensive processing while scrolling or during a specific animation, I will fire up the Time Profiler and it will show me what parts of my application are running most often. A good example that I have in my book uses the Fibonacci sequence: each term is the sum of the previous 2 terms, with the first 2 terms being 0 and 1. There are 2 ways to implement it. The first way is recursive — that’s the definition of it. I implement it recursively and then I put each number in a simple TableView, so I just have the text label show whatever the term is. And as you’re scrolling, when you get to a higher term, you start seeing the scroll performance suffer a lot. So when you look at the Time Profiler, you will see that your Fibonacci method is running all the time, so it could take up to like 100% of the CPU. And then a simple switch from a recursive implementation to an iterative implementation changes the performance such that when you run it again, you don’t have any scroll issues. That’s another way to look at your application. So if you’re doing a lot of data crunching while the user is scrolling, while you’re building a cell, you might want to look at possibly pre-computing or caching a lot of the data for those cells, if that makes sense.
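The recursive-versus-iterative switch Brandon describes can be sketched like this — plain C rather than the Objective-C from his book, with a call counter added to show the kind of hot spot the Time Profiler would surface:

```c
#include <stdint.h>

static long calls = 0;  /* what a profiler would see: call volume */

/* Naive recursion: matches the definition, but the number of calls
   grows exponentially with n — this is the TableView killer. */
uint64_t fib_recursive(int n) {
    calls++;
    if (n < 2) return (uint64_t)n;
    return fib_recursive(n - 1) + fib_recursive(n - 2);
}

/* Iterative: one pass, O(n) — the simple switch that fixes scrolling. */
uint64_t fib_iterative(int n) {
    uint64_t a = 0, b = 1;
    for (int i = 0; i < n; i++) {
        uint64_t next = a + b;
        a = b;
        b = next;
    }
    return a;
}
```

Both return the same terms; only the recursive one burns tens of thousands of calls for even modest n, which is exactly what shows up as ~100% CPU in the Time Profiler.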

ANDREW: So Time Profiler will really help you figure out any place performance is CPU-bound; you can figure out what the CPU is doing that’s taking up all the time.

BRANDON: Yes.

ANDREW: I find it quite helpful for all kinds of performance testing. Of course, for non-graphics related stuff, it’s the main tool, but even for graphics related stuff, it can tell you a lot. But it’s important to keep in mind that it’s not the whole story, and the Core Animation Instrument is also very useful, I think. Do you know very much about — so there’s an OpenGL ES Instrument; I don’t do any OpenGL, so I’m curious to know if that’s ever useful, because I know OpenGL is used internally, of course, for system drawing. So do you have any experience with that OpenGL ES Instrument?

BRANDON: I do not. Again, I’m an application developer, so I don’t get to spend a lot of time in OpenGL. It’s something that I would like to do, but with the abundance of free time that I have, I would prefer to focus on honing my current skills in application development.

ANDREW: Yeah, I know exactly the feeling.

BRANDON: But OpenGL is really cool. I know several other developers who are really, really big into OpenGL. In fact, a couple of my teammates at Black Pixel know a lot about OpenGL, and I would love to be able to pick their brains and know how OpenGL works behind the scenes.

CHUCK: That’s really interesting. I would love to get some perspective on that so we’ll probably ask you after the show about who you would recommend to talk to us about OpenGL.

BRANDON: Okay!

CHUCK: So we talked a little bit about CPU-bound performance issues. Are there other issues for like Memory-bound applications?

BRANDON: Yes. Another thing to watch out for when you’re doing performance tuning is the lovely feature that takes the user to the home screen, also known as “your application crashing”.

CHUCK: [Laughs] That’s a feature, huh?

BRANDON: Yes, it’s a feature. It takes the user to the home screen.

CHUCK: And never pass it somewhere, right?

BRANDON: Hopefully, the applications I write don’t have it. But I still haven’t unlocked that feature yet. When we’re talking about memory issues, there’s a broad spectrum of things you need to look out for. There’s the case where we’re allocating too much memory, where we’re just holding on to way too much memory. The system is going to give us a warning, then it’s going to really give us a warning, and then it’s going to kill us. That’s one issue.

Another issue is leaking memory. That is grabbing a reference to something and then not telling that object that it’s free to go away — not deallocating that object or releasing it properly — and then losing the reference. So that memory is still allocated according to the system, but we don’t hold any references to it. That’s a memory leak.

And the third issue is not really a performance or memory allocation issue; it’s actually having a reference to deallocated memory — that’s called a “zombie reference”. A good example is CollectionViews; a CollectionView’s delegate is an assigned property. If the ViewController for a CollectionView goes away while the CollectionView is still around and doing stuff, and it calls back to the delegate, you’re going to hit what’s called a “zombie” or “NSZombie” — that’s basically trying to dereference junk memory. Those are the 3 main issues. I guess I can talk about them in order.

The first one is all about memory allocation — simply allocating too much memory. Say you’re consuming a web service, and the web service sends back megabytes of JSON or XML and you have to parse that; you would normally want to keep some of that in memory so you can parse it. As you’re doing that, the system is going to watch your allocations. If your allocation grows too fast or past a certain point, it’s going to send you a memory warning, or maybe even just kill the application. That’s the watchdog process.

JAIM: Brandon, do you have a feel for how much memory you’re allowed to have in an application for an iPad or iPhone as your benchmark if you do different things?

BRANDON: That is a really good question I get asked a lot, and the answer I give is, I have no idea.

CHUCK: [Laughs]

BRANDON: Because Apple will not give you that information, and we really shouldn’t have to rely on that information. I know some game developers can count on knowing exactly how much memory they have on, say, a console, or on a desktop computer, which is essentially infinite. But in terms of an iOS device, we have to remember that our application is not running by itself. It’s running with a bunch of other applications on a resource-constrained device. So we have to be as stingy with memory as we can. I’m not saying you can’t allocate a bunch of megabytes and use it and then release it later, but try to limit the amount of time you do that.

CHUCK: Are there techniques then? Let’s go back to the example where you import a whole bunch of JSON. Are there techniques then for processing that, maybe a piece at a time?

BRANDON: First, I would ask why my data provider is giving me megabytes of JSON. If possible, talk to your server side team and see if you can chunk that down into multiple requests. This gets into a lot of balancing, trying to strike the right balance of, “Okay, network traffic is expensive, especially on an unreliable cell network” — especially if you’re in San Francisco around WWDC –

CHUCK: [Laughs]

BRANDON: So, knowing that making a bunch of requests is hard to do, but pulling down a ton of data is also not necessarily recommended, figuring out that balance and working with your server team is probably the best way to go. I can’t really give you a good rule of thumb, but experiment and see what works well in your application.

CHUCK: Yeah, it makes sense.

JAIM: In the cases where you don’t have control over maybe the [unclear] of the server side thing — maybe it’s a large image file or audio file — do you have any techniques for kind of managing memory at that point that you can share with us?

BRANDON: If it’s something like an image or an audio file — something that you don’t have to parse immediately — you can just stream that to disk. As it’s coming down, just take the NSData you have and append it to a file on disk, and then you can work with it later.
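A rough sketch of that spool-to-disk idea — plain C for illustration (on iOS you would append the incoming NSData to a file instead; `append_chunk` is a made-up name):

```c
#include <stdio.h>

/* Write one incoming network chunk straight to a spool file instead of
   accumulating the whole payload in memory. Returns 0 on success. */
int append_chunk(const char *path, const void *bytes, size_t len) {
    FILE *f = fopen(path, "ab");  /* append mode; creates the file if missing */
    if (!f) return -1;
    size_t written = fwrite(bytes, 1, len, f);
    fclose(f);
    return written == len ? 0 : -1;
}
```

Each chunk costs only its own size in memory; the full file is only touched later, when you actually need to process it.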

JAIM: Okay.

CHUCK: So you send it to disk, and that means that you don’t need to have it in memory?

BRANDON: Right.

ANDREW: Of course, there are times when you do need information in memory like when you’re actually going to use it.

CHUCK: Yeah, that’s true.

ANDREW: What about that? When you’re, for example, playing an audio file, how might you deal with keeping memory usage low or you’re playing the data in an audio file?

PETE: That kind of stuff, you’re going to be streaming it off the disk and not loading the whole thing into memory, right?

ANDREW: Yeah, hopefully, if you’re careful, right?

BRANDON: There are multiple things. I’m not a Core Audio expert; I’ve done just a very, very little bit. From what I understand, if you’re streaming it from the internet, you basically give Core Audio a buffer to use, and it manages that buffer by itself. In those instances, the Apple engineers are doing the hard work for you. But if you’re trying to do something like parsing XML — I say XML because, as much of a pain in the butt as XML is, streaming XML parsers are really nice. So you can actually parse a chunk of XML right as it’s streaming — use a streaming parser and just continue parsing it that way.

But if you’re doing something like JSON, JSON is a little harder to parse with the streaming parser. I know they exist, but I personally haven’t had to use it because all of my JSON payloads have been relatively small.

PETE: I think if your JSON file is so large that you need to process it in chunks or stream it, then you’re probably going to have to work with your server side guys to get it into a chunked form, because you have to get to the end of each kind of chunk before you can process it — JSON is kind of the same as XML in that way.

CHUCK: Yeah. Well, there are a lot of ways to handle that. You can do the same thing with XML, but you can break it up so that you’d basically say, “This subset of the data that you’re asking for is at this URL,” and then just split it out so it’s not so big. And if you have an object that is so large that it’s causing you problems even at one flat level, then maybe you need to rethink the way you’re structuring your data.

PETE: And normally, when I’m talking to a client about building — I guess I normally would [unclear] clients — but when I’m talking to them about building a mobile thing, I normally encourage them, the team that’s building the app, to have some server side layer that they have control over so that they can format the API in a mobile-friendly way. That means doing stuff like Chuck is saying, and also maybe chunking up requests where it makes sense: if you always call X and then call Y and then call Z in the same way, then do that on the server side and just have one API call that chunks that stuff up.

CHUCK: Yeah.

PETE: That’s actually kind of a weird thing to talk about when we talk about performance tuning. But having some kind of server side component that’s in your control maybe lets you avoid having to do lots of that stuff on the device if you can just do it on the server side instead.

CHUCK: Yeah. And having built a lot of APIs, there are a lot of ways that you can pare things down to give the client exactly what it needs and no more. A lot of times, people just take their entire object or data set and convert the whole thing to JSON, where in reality, you really only care about a handful of fields across the set of data, or a handful of fields on just the one object that you’ve requested. There are definitely techniques for that.

PETE: Have any of you guys used alternate data formats like binary JSON or Protocol Buffers or any of that stuff to try and help with this? I can imagine Protobuf is kind of designed to optimize this kind of stuff. I imagine it would be good, but I have a feeling that if you get to the point where you need to optimize that much, then that doesn’t feel like the lowest hanging fruit in your performance tuning efforts.

CHUCK: Yeah.

BRANDON: Right. And also, I’d like to take this opportunity to say that while we are talking a lot about paring down data and making data payloads coming back across the wire — or invisible wire — smaller, you don’t necessarily have to worry about that, I would say, 98% of the time, because modern devices — and I want to say modern devices include ones that are a couple years old — are pretty fast and have fairly large amounts of memory. So we don’t have to worry about this as much as we did 4 or 5 years ago, but it’s still an issue if we have a lot of data. This comes back to what I was talking about earlier with the whole scientific process of performance tuning: let’s make sure we have an issue first. So write your application, go ahead and develop it the way you think it should be developed, and then go back and analyze it, go back and look at your issues. That way, you’re not doing something that you might hear termed “premature optimization”.

ANDREW: Premature optimization is the root of all evil.

CHUCK: [Laughs]

BRANDON: Yes.

ANDREW: No, that’s a famous saying.

BRANDON: Yeah, Donald Knuth said that.

ANDREW: Right.

CHUCK: I have another question, and that is about doing these requests. Network requests can sometimes take time, so let’s say I tap a row in my TableView and it needs to load some data about whatever thing I tapped. Should I be making those calls before they tap? In other words, when I get the list, go and request as many of those as I can so that I have the information on my device? Or am I better off just making the request and telling the user, “Hang on, I’m getting that off the internet”?

BRANDON: That’s really a good question. It really depends on the application, and it also depends on your designer. Sometimes, designers want all of the data available at any given time. Something I wouldn’t recommend is spinning off a bunch of requests, because if you get, say, a list of 20 items and you make 20 more calls to get more data, and the user may only look at one of them, that could be problematic.

When I’m writing an application, I’m generally approaching it as, “Okay, I have a list of data. I just got it back from the service; it’s been turned into Core Data,” so I’m working in Core Data. I tap on an item, I go to the next View, and now I spin off a new request. If my server side is performant, and my local code is performant, the lag between having an empty View or a View that says, “Hey, I’m loading data,” may only be, maybe, less than a second, but it could be a couple of seconds. Ultimately, I like to have it where I load my data for View A, which is my list, I tap on an item, and then I make my request. I usually put that data in Core Data, so if they tap on it again, I can show them data and then make a request for updated data.
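A minimal sketch of that cache-then-refresh flow — plain C for illustration; the `Detail` record is a made-up stand-in for a locally cached Core Data object, and the names are hypothetical:

```c
#include <stdbool.h>
#include <string.h>

/* A stand-in for one locally cached record. */
typedef struct {
    bool has_cached;
    char body[64];
} Detail;

/* On tap: always kick off a refresh, but show the cached copy
   immediately if there is one, instead of an empty "loading" view. */
const char *on_tap(Detail *d, bool *start_request) {
    *start_request = true;  /* request updated data either way */
    return d->has_cached ? d->body : "Loading...";
}

/* When the request lands, store the data so the next tap is instant. */
void on_response(Detail *d, const char *fresh) {
    strncpy(d->body, fresh, sizeof d->body - 1);
    d->body[sizeof d->body - 1] = '\0';
    d->has_cached = true;
}
```

The first tap shows a loading state; every tap after that shows the cached data instantly while the refresh happens behind it, which is exactly the perceived-performance win discussed next.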

CHUCK: And so then depending on how critical it is that the data is immediately up to date, you might put some feedback in it that says, “This is what I have on the device and I’m updating it as we speak” kind of thing.

BRANDON: Yes.

PETE: One of the bigger points, I think, is what we originally said: the root of all of this performance tuning is the user experience. A lot of times, the way to improve the user experience is not necessarily to improve the performance, but to improve the perceived performance. So showing the user something and then filling in the details is going to feel faster than showing them all of the details at once, even if it’s actually technically faster to show them all the details rather than doing it in 2 steps.

CHUCK: Yeah.

ROD: Yeah. A common example is a list where each item has an image, so you completely fill in the list and then you load the images in the background, and they fill in as the user scrolls.

PETE: That one is really interesting as well, because I think some people’s instinct is to put a spinner or something to indicate all of the things that are loading. But I remember reading an Apple doc or something — maybe in the Apple style guide or the HIG — that it’s actually better to just leave a placeholder, like a kind of dotted square that isn’t obviously loading, and just fill it in, because a spinner kind of highlights the fact that there’s a load of stuff you’re waiting for, so it makes the user feel like there’s more stuff being loaded.

ROD: Uhm-hmm.

JAIM: Yeah, placeholders are a pretty good pattern to do that with.

BRANDON: And there are multiple techniques for loading the data. So, getting into the in-depth pieces of implementation: when the user is scrolling, say you have an NSURLConnection, and you spun off your requests using the asynchronous part of NSURLConnection as you’re scrolling. If you’ve configured your URLConnection with the default configuration on the run loop, the data that’s coming back as you’re scrolling is not going to get called back to the delegate, because as soon as you start scrolling, your run loop switches to a different mode. So depending on which mode you’re in, you get different callbacks and the system does different things. Does that all make sense?

JAIM: My head just exploded.

[Laughter]

JAIM: So what are the 2 different things?

BRANDON: A great example of that is, say, you have a TableView and you have a bunch of images that you need to load. When the table is just sitting there, the run loop is in the default mode. It’s in the mode where it says, “Okay, anything you throw at me, I will respond and call back” — that’s performSelector:afterDelay: and all of that stuff that gets attached to the run loop and will get called when the run loop is in that mode. I forget the details of what each mode is called, but I know that by default, everything will get called back. As soon as you start scrolling, it switches to a different mode, so it — not ignores, but queues up all network data that’s coming back across the wire. Say you have 10 images that are loading; as you’re scrolling, all 10 of those requests may have come back while your user is scrolling. But your application isn’t going to know until they release; then, when the mode switches back, all of those callbacks go to the URLConnection delegate. Does that make sense?

JAIM: Yeah, that makes sense. I’ve seen that in an app that I worked on. It threw a lot of stuff to a background thread — loading images or whatever — and if you scroll fast and then stop, you’ll still see the callback stuff come back for whatever’s on the screen –

BRANDON: Yes.

JAIM: So it kind of shows up at the right time. Okay.

ROD: Well, you don’t want to be updating all the cells as the user is scrolling either; you want to wait until they’re done, anyway.

JAIM: It sounds like that’s what the framework is doing for us. Is that right?

BRANDON: The framework is doing it for you, but you can – something I wouldn't recommend is configuring your URLConnection to run in 'all modes'. There may be a handful of cases where you want to do that, but most of the time, you want to just keep the default configuration of your URLConnection. You can also take an NSOperation, create a run loop inside that operation for that thread, attach the connection to that run loop, and have multiple run loops going. So you have one run loop in the background that is just in that common mode all the time, and then your main UI run loop.

There’s a bunch of different ways to approach this problem. 90% of the time, you’re going to attach your URLConnections to the main thread and then do all your processing in the background inside of NSOperation or something.
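To make the run-loop discussion concrete, here's a minimal sketch of the configuration Brandon describes (the delegate and URL are hypothetical stand-ins; the API names are from the iOS 6-era SDK):

```objective-c
// By default, an NSURLConnection started on the main thread delivers its
// delegate callbacks in NSDefaultRunLoopMode only, so data that arrives
// mid-scroll (UITrackingRunLoopMode) is queued until scrolling stops.
NSURL *url = [NSURL URLWithString:@"https://example.com/image.png"]; // hypothetical
NSURLRequest *request = [NSURLRequest requestWithURL:url];
NSURLConnection *connection =
    [[NSURLConnection alloc] initWithRequest:request
                                    delegate:self
                            startImmediately:NO];
[connection scheduleInRunLoop:[NSRunLoop mainRunLoop]
                      forMode:NSDefaultRunLoopMode];

// The "all modes" setup Brandon cautions against: callbacks now fire even
// while the user is scrolling, which can cost you scrolling performance.
// [connection scheduleInRunLoop:[NSRunLoop mainRunLoop]
//                       forMode:NSRunLoopCommonModes];

[connection start];
```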

JAIM: Okay, for those of us who are going to head to Google to look this stuff up, how do you search for this?

BRANDON: NSRunLoop modes –

JAIM: Okay.

BRANDON: Also, the documentation for NSURLConnection is pretty good. It’ll explain how it works, how to attach things to different runloops. If you want to look at code, a networking library that I use, it’s not well-known, but it’s called “ESNetworking”, it’s written by Doug Russell, who is Mattt’s colleague. He wrote that and he’s a really, really smart guy. If you want to see how to set up runloops and how to configure URLConnections with extra runloops or on the main runloop, check that out. It’s on GitHub.

ROD: Depending on what you’re doing, there are easy ways to do it. For example, AFNetworking has categories on UIImageView that will automatically load the image in the background.

JAIM: Yeah.

PETE: I have a random question that may not be related to this at all, but I’m just going to ask it anyway – and this is also a challenge because we’re still under NDA for a lot of the iOS 7 stuff. The change in the visual style in iOS 7 – do you guys think that’s going to make things easier or harder for performance tuning? Or is it just an irrelevant question?

BRANDON: No, that’s a really good question. First, I’m trying to figure out how much –

PETE: Yeah, right [laughs]. I’ve got a feeling I almost shouldn’t ask, because it’s so annoying trying to figure out, “What was said during the Keynote…”

CHUCK: They did show off that visual style, though.

PETE: Yeah, the style is definitely –

BRANDON: So one thing that I would say I’m pretty comfortable talking about, because there are multiple libraries popping up all over the place now on iOS 6 that have the slightly translucent views: say you have a navigation bar and you want your content to scroll behind it, so it’s sort of masked with a blur – a frosted-glass effect. Doing something like that is extremely performance-intensive. Trying to apply those effects while your user is scrolling is going to impact performance. One thing I do know is that the UI engineers who are doing all of that work in the upcoming version of iOS are really, really smart; they’re way smarter than I am. They’re going to do those things way better than I can. They also have access to all of the UIKit code.

PETE: And they have access to internal APIs as well, and they can add internal APIs if they need to, to help with that kind of performance tuning.

BRANDON: Yeah. It is going to impact performance. How much? It remains to be seen. I guess we’ll find out when iOS 7 is released in the wild.

PETE: It’s an interesting idea. I actually had a way more naïve view of it; I was just thinking that because it now has this kind of flat look and less of the silly skeuomorphism and gradients and all that stuff, maybe that means the rendering of that stuff would be quicker. But now that I think about it, really, for a modern phone, rendering a gradient or rendering an image for a fake leather look doesn’t really make any difference anyway. So maybe it’s going to be negative rather than positive.

CHUCK: Yeah, that makes sense. Alright, well, I think we’re toward the end of our time. Are there any other questions or –

ROD: I have a question. Back when you were talking about measuring performance, and you had background processes that you didn’t want interfering – do you ever use mock objects or stub things out to help you isolate things?

BRANDON: It really depends on the situation. In recent history, I really haven’t had a need to. When we’re developing applications, we develop all the features first and get everything working. At the end, we always try to have a good chunk of time so we can go in and do all the performance tuning.

I should also say, when you’re doing performance work like this: if you have one particular issue – say, a particular animation that is running fairly slow and is hard to reproduce – creating a quick project to pull some of that code over to, and using mock objects and mock data, can be useful. But try to test everything that you’re doing, when it comes to performance, in a real-world setting. If you’re testing your application in a full end-to-end pass and you’re checking network performance, you want to check whether the server is responding in time and whether your payload objects are too large. Doing that on a WiFi connection isn’t going to test a 4G or 3G connection, so turn on the Network Link Conditioner. That can be found in the Settings app under Developer if you have configured your device for development.

You also want to make sure that everything you’re testing in terms of scrolling performance is something that you would see in the real world. If you’re testing a photo application, take some real photos. Use the large images, and see if you need to go ahead and generate smaller sizes so you don’t have a lot of issues when you’re scrolling. Does that make sense?

ROD: Sure!

PETE: We mentioned battery life very early on in the call. That seems like a challenging thing for that kind of experimental approach of measuring something, making a change, and then measuring again. How does that work with battery life? Presumably you’re not going to run the app for 5 hours and see how much battery it uses up. What’s a good rule of thumb for optimizing for that kind of stuff?

BRANDON: With battery life, there’s actually an instrument for looking at power usage. If you have a device configured for development, you can actually go in and turn on power usage logging.

PETE: That’s cool! I didn’t know about that!

BRANDON: As soon as you turn that on, the device is going to track every event that happens that could affect battery. It’s going to detect when the screen comes on and off; it’s going to detect when some of the chips get turned on and turned off. It’s going to log everything that’s happening and every event that is triggered by an application. So a push notification comes down, the app will come to the foreground, do a little bit of processing, go back down, and show you the notification.

When you do all of that – when you’re using your application as you would in a real-world scenario – you’re going to actually see how push notifications impact battery life. If you’re developing a game, play the game with power logging turned on. You can see what the system estimates as to how much power you’re actually using. If all of the radios are turned on – GPS, WiFi, [unclear] – you might have a 10 out of 10, saying your battery is going to die in an hour. Or, if you’re doing something like Tweetbot, or just Twitter, or just random web browsing, it could be a 2 out of 10, which means it could die in 10-11 hours.

All of that information is in the Apple documentation. I also think it was talked about in the WWDC videos from last year or the year before. So watch the power logging videos. Also, if you wanted a handy resource, go pick up a copy of my book.

PETE: [Laughs] That’s awesome! I guess one of the main culprits is going to be firing up those radios – like if you’re constantly checking GPS or constantly making network calls, then it’s going to drain the battery super fast.

BRANDON: Yeah. Something that’s sort of interesting – and maybe a little surprising, though when you think about it, it makes sense: you have different types of radios in your device; you have a WiFi radio, and you have a 3G or EDGE radio. The EDGE radio takes the least amount of power, but it’s going to be on the longest, as it’s transferring data at a very slow rate. The 3G radio takes more power, and the WiFi radio takes the most power. But if you compare WiFi and 3G: since WiFi is higher speed, if you’re on a high-speed internet connection and you’re going over WiFi, a single connection making the round trip and getting all the data – going from powering up the chip to powering it down – actually takes less power than the 3G chip would doing the same thing. That’s on the old 3G network; I’m not sure about the whole LTE thing – I haven’t done any reading on how much power those chips use.

PETE: That’s really interesting.

BRANDON: But knowing how much power the chip uses versus how long it’s going to be on to do data transfer is going to impact battery life.

PETE: And presumably that’s the sort of stuff the operating system has the smarts built in to handle – we don’t need to do anything with that explicitly in our code, right?

BRANDON: Right.

PETE: That’s cool. I never thought of that before. Do you guys want to talk about profiling CPU-bound stuff? Do we want to do that, or do we want to go to the picks? I’m easy either way.

ANDREW: It’s interesting to me because we talked about performance profiling in terms of scrolling performance, which is sort of an animation or graphics area. But I’ve actually had to do just as much work on things that are just CPU-bound operations, not graphics, but just things that the app is doing, or jobs that the app is doing. So I think that might be useful.

[Crosstalk]

PETE: That stuff relates to the scrolling stuff as well because if you’re doing some kind of CPU-intensive operation as the scroll is going on, then it’s going to give you a jittery scroll, right?

ANDREW: Right.

PETE: So you need to know about one in order to fix the other sometimes.

JAIM: And if you’re pegging your CPU, your battery life is not going to be too good either.

PETE: True.

ANDREW: That’s true.

BRANDON: So the moral of the story is, everything impacts everything.

PETE: [Laughs]

CHUCK: [Laughs]

BRANDON: Everything is going to impact the user experience – from battery life to scroll performance to “My goodness! Why is my iPhone really, really hot?”, stuff like that. When we’re talking about Time Profiling and CPU-bound things, there are a number of things we can do to make sure we’re using the CPU as effectively as possible. In terms of making sure that we have a responsive UI: offload – well, not offload as much as you can to the background, but where it makes sense, do stuff in the background. GCD is amazing for doing that.

The downside to using something like GCD is you have no control over how many threads are going to be spun up by GCD. So you start off a couple of parse operations and group them together in, say, a GCD dispatch group, and then you look at your application while debugging and all of a sudden, “Oh, my gosh! I have 42 threads going!” That could impact performance, because the operating system has to bounce between 42 different threads and do a lot of context switching.

ANDREW: I think one thing to mention in that case is NSOperationQueue and NSOperation, which is a higher level API built on GCD.

BRANDON: Yes.

ANDREW: Maybe you could say a little about that.

BRANDON: Using an NSOperation…I can’t think of an application that I’ve developed recently that hasn’t used NSOperation; it’s a really simple API. The great thing about NSOperation is that you create an operation and throw it into an operation queue, and if you don’t need that operation anymore, you can cancel it – compared to GCD, where you send something off and it’s going to get done; you may not be able to cancel it, and if you can, there’s a lot of work that has to be done.

So using NSOperation is the first thing to reach for when you want to do any type of parsing in the background, for example. That’s the number one thing I’ve done in applications I’ve helped work on: we create operations when we’re doing parsing. So we get JSON back, parse it in the background, throw it into Core Data in the background, and then our UI gets updates from Core Data on the main thread.
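A minimal sketch of that parse-in-the-background, update-on-the-main-thread flow (`responseData` and the table view are stand-ins; the Core Data import is elided):

```objective-c
NSOperationQueue *parsingQueue = [[NSOperationQueue alloc] init];

NSBlockOperation *parseOp = [NSBlockOperation blockOperationWithBlock:^{
    NSError *error = nil;
    id json = [NSJSONSerialization JSONObjectWithData:responseData // stand-in
                                              options:0
                                                error:&error];
    if (!json) { return; }

    // ...import the parsed objects into Core Data on a background context...

    // Hop back to the main thread for UI updates.
    [[NSOperationQueue mainQueue] addOperationWithBlock:^{
        [self.tableView reloadData];
    }];
}];
[parsingQueue addOperation:parseOp];

// Unlike work handed straight to GCD, a queued operation can be cancelled:
// [parseOp cancel];                    // one operation
// [parsingQueue cancelAllOperations];  // everything still pending
```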

ANDREW: I think another nice thing about NSOperation and NSOperationQueue – given the limitation of GCD that you mentioned – is that NSOperationQueue will let you set the number of operations that run concurrently –

BRANDON: Yes.

ANDREW: So you can tell it you only want 2 things to happen at the same time, or whatever. In fact, you can even make that scale based on the number of cores in the device or something like that.

BRANDON: Uhm-hmm.

ANDREW: You can throttle it all the way down if you really need the CPU for one thing, or throttle down another thing that’s happening so it doesn’t take up so much time. It’s quite a nice API.
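The throttling Andrew describes is one property on the queue (the scaling line is just one possible policy, not a recommendation from the show):

```objective-c
NSOperationQueue *queue = [[NSOperationQueue alloc] init];

// Only 2 operations run at the same time; the rest wait in the queue.
queue.maxConcurrentOperationCount = 2;

// Or scale with the hardware instead of hard-coding a number.
queue.maxConcurrentOperationCount =
    (NSInteger)[[NSProcessInfo processInfo] processorCount];
```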

CHUCK: I want to jump in and clarify something for new people. GCD is Grand Central Dispatch.

ANDREW: Yeah. It’s an API introduced in iOS 4, I believe, that allows you to – well, that’s a topic for an entire show –

CHUCK: Yes, it is.

ANDREW: But it makes it easier to do multiprocessing – essentially multithreaded programming.

BRANDON: And if you’ve never worked with GCD before, go read about NSOperation. NSOperation is a much easier API to get started with. You create an operation – actually, you subclass NSOperation, or you can create an operation with a block – and then send that operation to an operation queue, and that work will get done. You can also configure it to notify you when the operation is complete, so you can get a completion block and various other things. NSOperationQueue will manage everything in the background for you, so you don’t really have to worry about creating threads.

You still have to worry about things like, “Are these two operations mutating the same array?” because NSMutableArray is not thread-safe, so we have to worry about things like that. But in general, creating and managing threads, which used to be a huge pain, is now a lot simpler.
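One common way to guard that shared array is to funnel every mutation through a serial queue that owns it (the queue label and `parsedObject` are made up for the sketch):

```objective-c
// NSMutableArray is not thread-safe, so serialize access to it.
dispatch_queue_t arrayQueue =
    dispatch_queue_create("com.example.results", DISPATCH_QUEUE_SERIAL); // hypothetical label
NSMutableArray *results = [NSMutableArray array];

// Every operation that wants to touch the array goes through the queue,
// so two operations can never mutate it at the same time.
dispatch_async(arrayQueue, ^{
    [results addObject:parsedObject]; // parsedObject is a stand-in
});
```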

CHUCK: Alright, good deal! Well, let’s go ahead and get into the picks. Thanks for coming, Brandon! It’s been a terrific discussion.

PETE: Yeah, super interesting.

JAIM: Right. Great stuff!

BRANDON: Thanks for having me!

CHUCK: Alright, Jaim, why don’t you start us off with picks this week?

JAIM: We talked a little bit about memory mapping. I have a pick that’s a system call, “mmap”. This really saved our bacon when I was taking a desktop app and bringing it to the iPad. If you have a large audio file that you have to keep in memory and play periodically, and you can’t really do streaming, you can save the data to a file and access it through mmap, which allows you to keep less memory actually active, but it’s still fast enough on iPad and iOS devices. It’s really useful for keeping your memory profile down and not getting crashed. That’s my pick – mmap.

ANDREW: You stole my pick, Jaim.

JAIM: Oh, no! I’m going to get you one.

ANDREW: I actually have something to add to that. That really was going to be my pick, but NSData actually has support for mapping files. It’s using mmap internally, but it’s an Objective-C API to do the same thing. When you create an NSData object with a file, you can pass in an option to tell it to map that file. What that really means is that the NSData object looks like a regular NSData object that you can read using all the regular methods, but it’s actually only pulling in data from the disk when you ask for a particular chunk of data, instead of reading the entire file. It’s an Objective-C API, whereas mmap is a low-level C API that leaves you responsible for freeing up that mapped memory when you’re done, that kind of thing.
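The option Andrew is describing looks like this (the file path is a placeholder):

```objective-c
NSError *error = nil;
NSData *audio =
    [NSData dataWithContentsOfFile:@"/path/to/large-audio.caf" // placeholder
                           options:NSDataReadingMappedIfSafe
                             error:&error];
// The data looks like any other NSData, but pages are faulted in from disk
// only as they're touched, so resident memory stays small for huge files.
```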

JAIM: Oh, yeah! Nice. Good stuff!

CHUCK: Awesome. Andrew, did you have any other picks?

ANDREW: Yeah, I’ve got a fallback pick and then one more. My first pick is “appledoc” – maybe this has been picked before, but I actually started using it seriously for the first time yesterday. Appledoc is a program written by a guy whose name I probably can’t say right, but he goes by Gentle Bytes – that’s his company. It’s a program that reads specially formatted, Javadoc-style comments in your code and generates documentation in exactly the same style as Apple’s documentation, and it will even create documentation sets that can be installed in Xcode so that you can read them in the regular Xcode documentation viewer. Also, CocoaPods uses this – well, CocoaPods doesn’t use it exactly; there’s a website called “CocoaDocs.org” that somebody wrote that goes through CocoaPods and automatically generates Apple-style documentation if the comments are there. That’s actually where you can get documentation for a lot of CocoaPods out there, like AFNetworking. I’m actually working on adding this to my open source stuff. It’s really easy to use, and there’s also a blog post about this by “Cocoanetics” that I’m going to link to, which talks a little more about how to use appledoc – because ironically, as good as appledoc is, its documentation is almost non-existent, so it’s left to third parties.

The other pick I had was a blog post by Wil Shipley. He does a regular series called “Pimp My Code” – or at least he used to; I think it’s still relevant today – where he talks about a bad performance problem, and even a crash, that he had in an application, and how he solved it. I think it’s really interesting. He ended up using some of the NSData stuff I was just talking about for mapped reading, and there’s also another option where you can do uncached reading, which I’ll leave to the blog post.

CHUCK: Awesome. Pete, what are your picks?

PETE: I have 4 picks, but I will be fast. My first pick has probably been someone’s pick before, but it’s the “WWDC Videos”, particularly the ones around performance tuning. I found watching those videos really, really helpful in explaining some of this stuff, particularly the stuff around Core Animation, how to optimize your UI, offscreen rendering, and all that stuff we talked about at the beginning of the show.

My second pick is the idea of having a “Device Lab”. That sounds really fancy, but what I’m talking about is having some physical devices that you’re testing with, and being quite conscious of what devices you want to support. If you want your app to work acceptably on an iPad 1, you need to get an iPad 1 and play with it. I learned this the hard way when we only started testing our app on an iPad 1 halfway through our project, and it ran like a dog because there’s not much memory and the processor is pretty weak and the rest of it. So having physical devices and consciously choosing which devices you’re going to support is [unclear].

Third pick is “Instapaper”. Totally unrelated to this topic, but I just used it 3 times today and realized how much I still love that app. It’s a read-later thing: you install a little bookmarklet on your phone and also in your browser, and when someone sends you a link to an interesting blog post about performance tuning or whatever, you press the button and it magically gets sent to your phone and to your Kindle so you can read it later on.

ROD: I use it all the time.

PETE: Yeah, me too. Super awesome. It’s one of the things that’s so awesome, you kind of don’t even notice how awesome it is because it just works and gets out of your way.

And then my fourth pick is the “Network Link Conditioner”. We mentioned this, and I think we’ve talked about it in previous shows, but it’s one of those little hidden gems of OS X that people – or some people – don’t know about. It’s a way for you to simulate poor network conditions. So if you want to do this kind of experimentation and consistently reproduce being on an EDGE network, the Network Link Conditioner is a great way to do that. That’s it!

CHUCK: Awesome.

ANDREW: Can I add one thing to Pete’s picks?

CHUCK: Yeah, go ahead!

ANDREW: On the WWDC sessions: there are 3 from last year. 2012 WWDC Session 238 and Session 242 are both performance sessions, and they’re full of good stuff related to what we talked about today. There’s one more that I don’t actually have handy. Anyway, I learned a whole, whole lot from those; one of them is about memory and the other is about Core Animation. I don’t know if we can link directly to WWDC presentations, because they’re behind a login –

PETE: Well, we can link to them. It’s just that someone’s not going to be able to get to them unless they’re logged in to the now-functional developer portal.

CHUCK: Yeah, I got an email saying it works! [Laughs] Happy things! Alright Rod, what are your picks?

ROD: Alright, my first pick is called “FormatterKit”, a library from Mattt Thompson. It’s a library for formatting strings for various things – formatting addresses, arrays, locations, all kinds of things. So, pretty useful.

My second pick is an article that I found called “The Mathematical Hacker”, which talks about the importance of mathematics to programmers and how we’ve kind of ignored mathematics for a while. One of the examples he gave that was very interesting: we talked earlier about how Fibonacci and factorials are typically done in a recursive fashion, but he gives a formula in this article that shows how to calculate Fibonacci in constant time. I had no idea it was possible. So, that was interesting.

CHUCK: Nice. I’ve got a couple of picks that I’m going to share. The first one is, if you’re looking at project management software, I really like “Pivotal Tracker”, so I’m going to pick them. But one issue that I’ve had with it is that, since they went paid, you can’t add more than like one person to a project, which is kind of silly. So unless the client wants to pay for Pivotal Tracker, I’ve switched over to Redmine, which is a Ruby on Rails application; it’s web-based project management software and it’s pretty good. That’s my other pick – “Redmine”. Brandon, what are your picks?

BRANDON: I’ve got a few picks. The first pick I have is “Xcode and Instruments”. We can’t do our jobs without Xcode and Instruments. I really can’t say much more than that.

The next pick I have is – because the documentation viewer in the current shipping version of Xcode is pretty terrible – an application called “Dash”. It’s available on the Mac App Store, and it lets you view documentation for UIKit, Cocoa, AppKit – all of the Apple documentation – plus Javadocs, Ruby documentation, HTTP documentation, Python documentation. It ships with a ton of different platforms’ docsets, so you can read the documentation from there.

And then the next pick I have is going to be a little self-serving, because Black Pixel developed it, but that’s “Kaleidoscope”. Kaleidoscope is a diff utility. I can’t think of a day where I haven’t used Kaleidoscope to diff and merge things as I’m developing. To me, it’s much more user-friendly than FileMerge and even the command-line diff utility. You can check it out.

The last one is sort of an obscure thing, and it has to do with debugging. At WWDC this year, I spent some time in the lab talking with the LLDB engineers about using their Python hooks. So you can run Python scripts to have custom LLDB commands, or even automate some of your debugging using Python.

CHUCK: Awesome.

BRANDON: So checking those out is great. The documentation is a little light, but all the source code for the Python stuff is actually in Xcode, so you can check it out, poke around, and see what you can break.

CHUCK: Cool. Alright, well, it’s been an awesome show. Thanks for coming again, Brandon. I feel smarter. I’ll just say that [chuckles].

BRANDON: Well, I spend a lot of time researching these things, and anytime I can impart some of this knowledge, be asked interesting questions that make sure I know what I’m talking about, and learn from other people so I can add to this body of knowledge – that’s great.

CHUCK: Awesome. Alright, well we’ll wrap up the show. I want to thank everybody for coming. We’ll catch you all next week!
