iPhone Applications Tune-Up
High performance tuning guide for real-world iOS projects
Loyal Moses
BIRMINGHAM - MUMBAI
iPhone Applications Tune-Up Copyright © 2011 Packt Publishing
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews. Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing, and its dealers and distributors will be held liable for any damages caused or alleged to be caused directly or indirectly by this book. Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.
First published: October 2011
Production Reference: 1041011
Published by Packt Publishing Ltd. Livery Place 35 Livery Street Birmingham B3 2PB, UK. ISBN 978-1-84969-034-8 www.packtpub.com
Cover Image by Vinayak Chittar ([email protected])
Credits

Author: Loyal Moses
Reviewers: Antonio Gomes Rodrigues, Hai "EvilJack" Nguyen
Acquisition Editor: Steven Wilding
Development Editor: Meeta Rajani
Technical Editor: Ankita Shashi
Indexer: Hemangini Bari
Project Coordinator: Zainab Bagasrawala
Proofreader: Aaron Nash
Graphics: Valentina D'silva
Production Coordinator: Melwyn D'sa
Cover Work: Melwyn D'sa
About the Author

Loyal Moses is an accomplished business owner and proven entrepreneur as well
as technical speaker, author, writer, and multi-lingual developer with more than 20 years of programming experience on every major operating system platform. With a long and diverse technical background, Loyal has a wide-ranging set of both basic and advanced skills from network security and professional hacking to desktop, web and mobile device development. Loyal is recognized as creating the first intrusion detection and correlation App for both the iPhone and iPad to complement Aanval, his globally successful Snort and Syslog network intrusion management creation. Additionally, Loyal has designed, developed and published several profitable iOS applications, which remain available today on the Apple App Store. Loyal also semi-regularly maintains his personal blog, which can be found at
http://www.loyalmoses.com.
I would like to thank my wonderful family and friends; my wife Eileen for her unconditional support, patience and care over our home while I was tucked away behind the keyboard, our children Van and Gwen whose happiness and joy rang throughout our home, my brother Landon for his excitement and encouragement, Peter for being awake when no one else was, Kenneth for his friendship and wisdom and finally my dear Heavenly Father for providing us this opportunity and making these experiences possible. I love you all.
About the Reviewers

Antonio Gomes Rodrigues earned his Master's degree at the University of
Paris VII in France. Since then he has worked in various companies with Java EE technologies in the roles of developer, technical leader, technical manager of offshore projects, and performance expert. He currently works on performance problems in Java EE applications in a specialized company. I would like to thank my girlfriend Aurélie.
Hai "EvilJack" Nguyen fits your typical engineering stereotype: scrawny, loves to program, and scared to death of women. He spends his free time tinkering with gadgets and updating his Facebook status.
After finishing graduate school at the University of Florida, Jack moved to Taiwan in mid 2003. Shortly thereafter SARS hit the Asia pacific region (unrelated to Jack's arrival, of course). He then joined a software company that worked on mobile phones (Aplix) and got a chance to play with all the latest phones and gadgets. Eventually he left that awesome job and moved to Korea a few years later (to chase a girl) and spent the better part of a year studying Korean. Shortly after moving there, North Korea began conducting tests of their nuclear stockpile (unrelated to Jack's arrival, of course). Eventually he moved back to the USA and began working for a voice over IP startup creating mobile applications for them. Shortly after moving back to the US (2007), the greatest financial crisis in almost a century occurred (unrelated to Jack's arrival, of course).
Jack currently splits his time between California and Florida while trying to avoid getting kicked out of (yet) another country. He is currently hiding away in his mother's basement writing iPhone apps. His mobile game company and personal projects can be found at www.infinitetaco.com and www.lolmaker.com, respectively.
www.PacktPub.com Support files, eBooks, discount offers and more
You might want to visit www.PacktPub.com for support files and downloads related to your book. Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and, as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at
[email protected] for more details. At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters and receive exclusive discounts and offers on Packt books and eBooks.
http://PacktLib.PacktPub.com
Do you need instant solutions to your IT questions? PacktLib is Packt's online digital book library. Here, you can access, read and search across Packt's entire library of books.
Why Subscribe?
• Fully searchable across every book published by Packt
• Copy and paste, print and bookmark content
• On demand and accessible via web browser
Free Access for Packt account holders
If you have an account with Packt at www.PacktPub.com, you can use this to access PacktLib today and view nine entirely free books. Simply use your login credentials for immediate access.
Table of Contents

Preface
Chapter 1: Performance, Bottlenecks, and Fundamentals
    Success and performance
    Perception of performance
    Performance fundamentals
    Approaching performance
    Process management
    Memory
    Storage
    Network
    User interface
    Be a good neighbor
    Application design and architecture
    Application performance
    Summary
Chapter 2: Design for Performance
    Preparing the project
    Project organization
    Project structure
    Groups and files
    Code structure
    Summary
Chapter 3: Maintainability
    Variable naming conventions
    Method naming conventions
    Camel case
    Syntax efficiency
    Readability versus compactness
    Dot syntax
    Re-factoring
    Library bloat
    LIPO
    Comments
    Documentation
    Summary
Chapter 4: Reliability
    Exception handling
    Error checking
    Unit testing
    Preparing a project for logic unit testing
    Preparing a project for application unit testing
    Summary
Chapter 5: Performance Measurement and Benchmarking
    Static analyzer
    Instruments
    Summary
Chapter 6: Syntax and Process Performance
    Iteration loops
    Object reuse
    Bitmasks
    Sorting
        Bubble sort
        Selection sort
        Bucket sort
        Quicksort
    Run loops
    Timers
    Semaphores
    Summary
Chapter 7: Network Performance
    Sockets
    Streams
    Protocols
    Bandwidth
    Compression
    Façade pattern
    Summary
Chapter 8: Memory Performance
    Garbage collection
    Alloc
    Dealloc
    Copy
    Retain
    Release
    Autorelease
    didReceiveMemoryWarning
    Summary
Chapter 9: Application and Object Lifecycles
    Mise en place
    Application lifecycle
    Application startup sequence
    Application execution
    Application termination sequence
    Application init
    awakeFromNib
    application:didFinishLaunchingWithOptions
    applicationDidBecomeActive
    applicationWillEnterForeground
    applicationWillResignActive
    applicationDidEnterBackground
    applicationWillTerminate
    Object lifecycle
    Object init
    Summary
Chapter 10: Animation, View, and Display Performance
    View performance
    Animated content
    Core Animation
    Item renderers
    Summary
Chapter 11: Database and Storage Performance
    Disk
    Cache
    Compression
    SQLite
    Core Data
    Summary
Chapter 12: Common Cocoa Design Patterns
    Why design patterns are critical
    Singleton
    Mediator
    Delegate
    Adaptor
    Decorator
    Model-View-Controller
    Summary
Chapter 13: The Xcode Advantage
    Distributed builds
    Dead code stripping
    Compiler
    Debugger
    Source code management
    Summary
Index
Preface

Every line of code presents an opportunity to improve upon the effective performance of an application. This book begins with the fundamentals of performance, demonstrating the impact that poor performance can have on the success of an application. Apple's App Store is riddled with applications that fall just short of success, and it isn't too much of a stretch to attribute many of these failures to a lack of optimization. Readers can expect to be led through each chapter, learning every aspect of performance tuning from simple syntax tips and tricks to advanced process management, network, and memory optimizations. In addition to theories, syntax, and detailed code examples, readers will learn to take advantage of Apple's powerful performance measurement and benchmarking utilities to identify the specific components of an iOS project that might need attention.
What this book covers
Chapter 1, Performance, Bottlenecks, and Fundamentals: Identify the core principles behind performance-driven development and the effects of poor application performance.

Chapter 2, Design for Performance: Learn the proper way to organize an Xcode project for stability and efficiency.

Chapter 3, Maintainability: In this chapter, we focus on the core principles of project and source code maintainability.

Chapter 4, Reliability: Learn why and how exception handling and unit testing will increase both project reliability and performance.

Chapter 5, Performance Measurement and Benchmarking: Take advantage of Xcode's native and powerful performance measurement and diagnostic tools.
Chapter 6, Syntax and Process Performance: Uncover hidden performance gains in basic syntax and other common coding operations.

Chapter 7, Network Performance: Learn the fundamentals of network performance along with when and how to use network sockets to increase performance.

Chapter 8, Memory Performance: Understand and take advantage of object retention and garbage collection to elevate application performance.

Chapter 9, Application and Object Lifecycles: A more in-depth look into item renderers, objects, and the object lifecycle and how proper use can affect performance.

Chapter 10, Animation, View, and Display Performance: Identify and affect performance gains with animated and layered content.

Chapter 11, Database and Storage Performance: Increase application performance with proper implementation and usage of cache, compression, SQLite, Core Data, and data synchronization.

Chapter 12, Common Cocoa Design Patterns: A deeper look into the most important and impactful design patterns every developer should be familiar with.

Chapter 13, The Xcode Advantage: In this chapter, we cover every facet of compiling, building, prepping, and releasing iOS projects with performance in mind.
What you need for this book
Any current Apple Mac capable of running the current major revision of Apple's Xcode 4. Readers interested in deploying code directly to iOS devices or to the Apple App Store should have an Apple Developer subscription.
Who this book is for
This book is for iOS application developers who are interested in resolving application performance bottlenecks in both new and existing Xcode projects. Readers should be familiar with the basic concepts and principles of iOS development, Objective-C syntax, and use of Apple's Xcode development environment.
Conventions
In this book, you will find a number of styles of text that distinguish between different kinds of information. Here are some examples of these styles, and an explanation of their meaning.
Code words in text are shown as follows: "We can include other contexts through the use of the include directive."

A block of code is set as follows:

- (void) tryCatchMeIfYouCan
{
    Automobile *myAuto = [[Automobile alloc] init];
    @try
    {
        [myAuto start];
    }
    @catch (NSException *exception)
    {
        NSLog(@"Caught exception %@: %@", [exception name], [exception reason]);
    }
    @finally
    {
        [myAuto release];
    }
}
When we wish to draw your attention to a particular part of a code block, the relevant lines or items are set in bold:

- (void) tryCatchMeIfYouCan
{
    Automobile *myAuto = [[Automobile alloc] init];
    @try
    {
        [myAuto start];
    }
    @catch (NSException *exception)
    {
        NSLog(@"Caught exception %@: %@", [exception name], [exception reason]);
    }
    @finally
    {
        [myAuto release];
    }
}
Any command-line input or output is written as follows: # cp /usr/src/asterisk-addons/configs/cdr_mysql.conf.sample /etc/asterisk/cdr_mysql.conf
New terms and important words are shown in bold. Words that you see on the screen, in menus or dialog boxes for example, appear in the text like this: "clicking the Next button moves you to the next screen". Warnings or important notes appear in a box like this.
Tips and tricks appear like this.
Reader feedback
Feedback from our readers is always welcome. Let us know what you think about this book—what you liked or may have disliked. Reader feedback is important for us to develop titles that you really get the most out of. To send us general feedback, simply send an e-mail to
[email protected], and mention the book title via the subject of your message. If there is a book that you need and would like to see us publish, please send us a note in the SUGGEST A TITLE form on www.packtpub.com or e-mail [email protected]. If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, see our author guide on www.packtpub.com/authors.
Customer support
Now that you are the proud owner of a Packt book, we have a number of things to help you to get the most from your purchase.
Downloading the example code
You can download the example code files for all Packt books you have purchased from your account at http://www.PacktPub.com. If you purchased this book elsewhere, you can visit http://www.PacktPub.com/support and register to have the files e-mailed directly to you.
Errata
Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in one of our books—maybe a mistake in the text or the code—we would be grateful if you would report this to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this book. If you find any errata, please report them by visiting http://www.packtpub.com/support, selecting your book, clicking on the errata submission form link, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded on our website, or added to any list of existing errata, under the Errata section of that title. Any existing errata can be viewed by selecting your title from http://www.packtpub.com/support.
Piracy
Piracy of copyright material on the Internet is an ongoing problem across all media. At Packt, we take the protection of our copyright and licenses very seriously. If you come across any illegal copies of our works, in any form, on the Internet, please provide us with the location address or website name immediately so that we can pursue a remedy. Please contact us at
[email protected] with a link to the suspected pirated material. We appreciate your help in protecting our authors, and our ability to bring you valuable content.
Questions
You can contact us at
[email protected] if you are having a problem with any aspect of the book, and we will do our best to address it.
Performance, Bottlenecks, and Fundamentals

In this chapter, we focus on understanding the fundamentals of performance and bottlenecks, which will improve the overall performance of your iOS project and application. Specifically, we will look into the following areas of performance bottlenecks:
• Performance fundamentals
• Process management
• Memory
• Storage
• Network
• User interface
• Being a good neighbor
• Application design and architecture
• Application performance
Success and performance
When you design and develop an iOS application, you want your users to find value in it. Additionally, you might intend for your application to not only be useful but also generate revenue from sales on Apple's App Store. Every successful application begins with a great idea. However, the idea alone is rarely enough to achieve greatness without an equal or greater amount of execution.
Almost any good idea has the potential to be worth a million dollars or more; in reality, however, an idea without proper execution is just that: an idea, nothing more. The most innovative and imaginative application can end up gathering dust as a complete failure without an adequate amount of execution. While proper execution of an idea is critical to the overall success of any application, in its entirety it is very much outside the scope of our particular focus. We will, however, touch lightly on a few key areas of proper execution as they relate to application performance.

One of the most important aspects of a successfully executed application is, without doubt, performance. Successfully tuning an application for performance requires a broad set of skills and a number of good tools. Tuning is all about gathering data, analyzing it, and adapting to the results to improve application efficiency.

Application performance measurements are arguably subjective and may vary significantly from one application to the next, relying heavily on intended usage and the expectations of end users. The performance threshold at which one application would be considered a failure might be more than adequate for another. Performance by business sector is also an important factor to consider. Knowing what limitations are acceptable in your particular application genre will help ensure you meet your users' expectations, and building this knowledge into your design and development process only serves to improve your application.

We live in a near real-time world in which performance requirements are greater than ever. We compete with user expectations defined by search engines that return result sets from hundreds of billions of web pages in only tenths of a second, and by mobile Internet devices that link our fingertips to more than 500 gigabytes of online data 24 hours a day, 7 days a week. The days of waiting by the mailbox for freeware and shareware archives on CD-ROM are gone, as are the boundaries and limitations of our users' expectations. Regardless of how we measure and define an application's overall performance, anything less than optimal simply results in not meeting a user's expectations.
Perception of performance
Application performance is not solely restricted to basic, logical, or computational limitations. A much overlooked and still rather important aspect of application optimization is the perception of performance: the feeling or sense that an application is, or is not, performing as efficiently as possible.

We've all experienced that strange or awkward feeling where an application seems overly sluggish or simply appears not to be performing as well as it should. Maybe touches and swipes aren't being recognized as quickly as we'd like, or button presses are slow and unresponsive. Regardless of the symptom, something has grasped our attention in a negative way, and this red flag absolutely has an effect on our overall perception of the application.

Even the most technically advanced application may be a failure in the eyes of its users if their perception of the application is negative. Negative perceptions of performance are enough to deter future usage of an application, even when they have no technical validity. To understand the basic importance of performance, let's outline an application with obvious symptoms of poor performance:
• Lengthy launch time: The application takes an inordinate amount of time to open or fully initialize once launched
• Unresponsive user interface: The application interface is often unresponsive to user input
• Blocking network communications: The application freezes or becomes temporarily unresponsive while network communications take place
• System busy symbol: The spinning pinwheel or hourglass appears often and for long periods of time
• Application freezes: The application is plagued with common lock-ups and freezing
• Wasted storage: Large amounts of storage and disk space are wasted or never reclaimed
• High CPU utilization: The application regularly consumes as much as 100 percent of the CPU
• Excessive memory consumption: Memory is consumed and never appropriately released back to the system
Without a doubt, we've all experienced each of these symptoms in real-world application usage and know the frustrations that result from such bottlenecks. Performance bottlenecks occur when an entire application is fundamentally limited by a single factor or a small number of factors. Each of these bottlenecks alone has the capability of significantly limiting an application's potential, and if combined, an application would almost certainly be considered useless.

Symptoms like those listed can be remedied by understanding the basic principles of designing and developing an application with performance in mind. Personally, I group these principles into a common concept that I like to call performance-driven development. The theory is that performance and ultimate optimization should be a critical portion of an application's core foundation, from design to deployment. Performance is not an accident; it is a facet of development and should never be treated as an afterthought or bolt-on addition to an application. As a core focus, performance goes hand in hand with leading development best practices and can be a guide when making decisions that affect development architecture and direction.

In parallel, application performance may be likened to security. When application security considerations are not part of the design and development process, an application is more likely to fall victim to weaknesses in security. An application developed without an understanding of core application security vulnerabilities is more likely to be negatively impacted than an application designed and developed to prevent exploitation. In recent years, organizations throughout the world have placed a great amount of focus on application security, no doubt in part because of the instant and interconnected nature of our environments. Negative impressions, experiences, and bad press are of course serious considerations that drive these decisions.

Performance, like security, must be considered a critical component of the development lifecycle. Performance has a significant effect on perception, which in turn affects impression and ultimately affects an application's market penetration. Not often does an application have a monopoly over a concept or sector. Choices are readily abundant, and applications that do not deliver will surely be replaced by those that do.
Just as a positive initial impression of an application may lead to great success, a negative initial impression will surely lead to measurable failure. Apple's App Store may be considered the ultimate testing ground for these basic principles. A good first impression may be all that your application needs to find traction and succeed, while at the same time that single impression may be the only one you get. Of course, not every successful application in Apple's App Store is a refined piece of beautifully executed code; the handful of successful exceptions proves the rule.
Performance fundamentals
Imagine a car designed with a complete focus on top-end speed and performance. Each component is selected solely based on its individual performance characteristics: the lightest frame, a body chosen solely for its aerodynamic properties, and an engine and transmission designed to put as much horsepower to the wheels as possible.

Initially, the idea of the ultimate performance-focused car sounds absolutely wonderful, until we dig a bit deeper. All bolts and screws were selected based upon their weight and not their load or shear strength. To save additional weight, the interior is barely large enough to carry more than a couple of average-sized adults and no longer includes standard safety features. Crumple zones, air bags, bumpers, mirrors, and air conditioning are no longer included, not to mention an audio system. The overall integrity of the vehicle for anything other than this specific performance purpose has been completely jeopardized. Its ultimate functionality is limited and the potential for failure has increased.

Not only does a car need to perform well, there are basic requirements for which it must function. Minimums and maximums for speed, undercarriage clearance, carrying capacity, fuel efficiency, and safety, as well as the terrain on which it may operate, must all be considered. Without seats, windows, heating, or gauges, the sole focus on performance begins to make the vehicle less attractive to the general user. The overall functionality of the vehicle suffers, and in turn the lack of market acceptance is the result of earlier poor design and development decisions. Similarly, an application built on these same poor design principles will struggle much the same way such a vehicle would in real-world usage scenarios.
An application with a complete focus on performance is unlikely to be functional, while an application designed solely for functionality will more than likely perform poorly. Finding an appropriate balance between functionality and performance is an important factor in success.

As a whole, a car can be said to perform great, but if any area is lacking in performance then the entire car may fail to deliver. Even so, a car is not designed as a whole; it is designed component by component, much like an application. These components are designed to work closely together to achieve the highest levels of performance and functionality. Both physically and logically tethered, a vehicle passes energy, data, and functionality from one component to another while minimizing as much loss as possible. An application developer should strive to deliver solutions that respect similar principles.

In practice, we should design applications to run at 200 mph, but design them to be just as effective at 5 mph as they are at 55 mph. Develop with exceptions in mind: our application might be used outside of the limitations for which we've designed it. The potential exists for our application to break through a barrier, fall thousands of feet, and explode into a million little pieces. We should anticipate such failures and develop sound architectural solutions to prevent users from experiencing such crashes. Much like performance requirements vary from application to application, the delicate balance of functionality and performance will be unique to each application and just as important.
Approaching performance
A common approach to developing functionality with performance in mind is to first approach the situation from the needs of the feature. When the feature is clearly outlined, we can begin to look at potential performance bottlenecks and make appropriate design changes while being careful to sacrifice as little performance as possible.

For performance-driven development, our goal should be to design finely tuned applications with absolute perfection in mind: applications that perform at the highest level of optimization and utilize the language in its most efficient form.
To some, producing perfect code may sound impossible to achieve; however, regardless of its possibility, we should maintain perfection as the ultimate goal for every line of code. Performance isn't achieved by setting our sights lower or by choosing mediocre remedies to programming problems. Additionally, performance is not achieved only by selecting the quickest or most efficient components. Performance is a delicate balancing act that requires a great amount of attention and understanding.

Finely tuned code is art. Take pride in your code and don't accept less than perfect when you don't have to. Clean, well-formatted, and optimized code is productive code that contributes to the overall health of an application. Spending a good portion of our time ensuring that our code is cleanly formatted, commented, and indented correctly has a good chance of paying off much later in a project. Inversely, messy and unorganized code has a great probability of introducing bugs, inefficiencies, and a myriad of performance-related problems.

Application performance isn't always about a single theory or concept; there isn't one silver or magical bullet that solves all performance issues. More often than not, it isn't one single problem that causes an application to perform poorly; it's many small things compounded into a larger issue. The aeronautics industry describes a similar principle, in which a catastrophic event is rarely caused by a single failure; it is the culmination of many events and failures that leads to the ultimate breakdown or disaster. Poor performance is very similar in theory: it is the combination of poor memory management, bad data model design, improper logic, and other small and seemingly passive coding mistakes. In a project of five or ten thousand lines of code, problems such as these have a limited effect; however, code that approaches 500,000 lines or reaches into the millions is much more likely to be seriously affected and much more difficult to remedy.

Any particular performance bottleneck that can affect an application may be separated into the following categories:
• Process management: Issues and solutions related to processes and threads
• Memory: Issues and solutions that affect or are affected by memory
• Storage: Issues and solutions related to methods or structures for the storage of data, both temporary and permanent
• Network: Issues and solutions relating to all forms of network communication
• User interface: Issues and solutions relating specifically to the components and functions that a user interacts with

Each of these categories is quite broad in scope but encompasses virtually every potential programming bottleneck we will encounter. In the following sections, we'll break down each of these core performance categories in greater detail and discuss their overall importance in iOS application development and optimization.
Process management
Process management performance, at its most basic, is the ability of an application to effectively balance each of its processes and threads to maintain a consistent performance profile. Ideally, a successful and consistent performance profile is one that stays within safe process execution boundaries and is maintained over the lifetime of the running application or process.

An application is a logical grouping of instructions, while a process is an instance of instructions executing within an application. A process may be composed of a single thread or, in many cases, multiple threads that execute processing instructions concurrently. In general, an application thread is a method of implementing multiple concurrent lines of processing within a single application; less technically, it is the principle of doing more than one thing at a time.

Every application utilizes at least one process and one thread. However, as important as processes and threads are, understanding and effectively managing them is likely one of the most misunderstood topics in the programming world. Knowing when to employ threading and when to rely upon alternative language features like notifications, timers, and asynchronous functions is a skill that is widely underdeveloped. That lack of understanding regarding processes and threads is a leading factor in poor performance for many iOS applications. All developers should take the time to fully comprehend the ramifications of process and thread management.
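As a minimal sketch of what deferring work off the main thread can look like, the following hypothetical method uses Grand Central Dispatch to run an expensive task on a background queue and then returns to the main queue to update the interface. The method and property names (processLargeDataSet, resultLabel) are illustrative assumptions, not part of any particular project:

- (void)reloadReportInBackground
{
    // Run the expensive work on a background queue so the main thread,
    // and therefore the user interface, stays responsive.
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        NSString *summary = [self processLargeDataSet]; // hypothetical long-running task

        // UIKit objects should only be touched from the main thread.
        dispatch_async(dispatch_get_main_queue(), ^{
            self.resultLabel.text = summary;
        });
    });
}

Whether a background queue, an operation, or a simple timer is the right tool depends entirely on the work being performed; the point is that the decision should be deliberate rather than a reflexive reach for threads.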
For instance, the assumption that threading is the only option whenever concurrent processing is necessary is not accurate. Threading, much like any programming principle, has its own relative benefits and drawbacks, especially as they relate to general application performance. Improper process and thread management can significantly impact an application's performance. Spawning an abundance of unnecessary threads increases complexity and application overhead, while underutilizing threads in complicated architectures will have the opposite effect. Without an appropriate understanding of process and thread management, both extremes, thread abuse and thread underuse, are more likely to occur.

Process and thread management is similar in concept for virtually every device and language that supports this functionality; however, we must pay close attention to and respect the capabilities and limitations of the device when working with threads and processes.

iOS devices utilize a mobile processing architecture known as RISC (Reduced Instruction Set Computing). RISC (pronounced risk) is a CPU design principle in which simplified or limited instruction sets result in higher levels of performance. Regarding RISC instruction sets, the terms 'reduced' and 'limited' should not be confused to mean reduced or limited functionality; these terms commonly refer to the amount of work any single instruction performs when executed. Although in many instances limited instruction sets may be relatively more powerful than their desktop cousins, RISC processors are designed for optimal performance in restricted environments. Physical space limitations, power consumption, heat, and cost are only a few of the factors considered when RISC architecture is selected for a mobile device platform.

For completeness, many years ago an intellectual architecture battle was in full stride as proponents of CISC (Complex Instruction Set Computing) waged war against the increasingly popular RISC architecture. The principle behind CISC is that more instructions per request are better and more efficient; compiling several operations together into a single instruction is efficient, but only if all operations within those instructions are necessary. Unused operations waste limited resources, and this truth opened the door for RISC to step forward as the dominant architecture for nearly all modern processors, both mobile and desktop. However, the common usage of the term RISC implies processors that benefit from a truly reduced instruction set.
Simple programming mistakes that would otherwise go unnoticed in a desktop application are magnified when executed on RISC-based mobile processors. As developers, we should be aware of the target architecture we are developing for and understand the ramifications of programming shortcuts or simple logic and programming errors. As an example, instantiating an object within a simple for loop several dozen times may not be an issue within a desktop application, which seemingly has unlimited resources when compared to a mobile device. However, this same technique may have extreme consequences on overall performance when running on an iOS device. Greater care must be taken to ensure our code is as optimized as possible; small performance issues can quickly add up, creating more complicated bottlenecks that are difficult to profile and locate.

The following table lists iOS devices along with their integrated processor architecture:

iOS device              Processor
iPhone                  620 MHz ARM
iPhone 3G               620 MHz ARM
iPhone 3GS              833 MHz ARM Cortex-A8
iPhone 4                1 GHz ARM Cortex-A8 Apple A4
iPod Touch (1st gen.)   620 MHz ARM
iPod Touch (2nd gen.)   620 MHz ARM
iPod Touch (3rd gen.)   833 MHz ARM Cortex-A8
iPod Touch (4th gen.)   1 GHz ARM Cortex-A8 Apple A4
iPad                    1 GHz ARM Cortex-A8 Apple A4
iPad 2                  1 GHz dual-core ARM Cortex-A9 Apple A5
Apple TV (2nd gen.)     1 GHz ARM Cortex-A8 Apple A4
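To make the loop-allocation point above concrete, here is a minimal sketch, assuming manual reference counting and hypothetical timestamps and labels collections, of the same work written wastefully and then with the allocation hoisted out of the loop:

// Wasteful: a new formatter object is allocated and destroyed on every pass.
for (NSDate *timestamp in timestamps) {
    NSDateFormatter *formatter = [[NSDateFormatter alloc] init];
    [formatter setDateFormat:@"yyyy-MM-dd"];
    [labels addObject:[formatter stringFromDate:timestamp]];
    [formatter release];
}

// Better: allocate once and reuse the same object inside the loop.
NSDateFormatter *formatter = [[NSDateFormatter alloc] init];
[formatter setDateFormat:@"yyyy-MM-dd"];
for (NSDate *timestamp in timestamps) {
    [labels addObject:[formatter stringFromDate:timestamp]];
}
[formatter release];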
Actual processing performance differences between iOS devices aren't overly considerable. Users may find that a newer device feels more responsive, but this is more likely due to hardware and operating system improvements rather than an application's inability to perform across different devices.

In addition to language and device capabilities, it is important to understand and consider the feature sets provided by the host operating system. iOS 4 includes a much anticipated multitasking feature, which allows multiple applications to remain in memory while providing an interface for users to easily switch from one application to another.
While this newly available feature is very convenient for a device user, it adds to the importance of ensuring applications developed for iOS devices are as highly optimized as possible for performance and efficiency. With this new feature comes a greater level of responsibility for iOS application developers. The days of having sole control over process and memory while your application was running are in the past, and developers will now be required to be less intrusive on system resources in order to be a good neighbor.
Memory
Memory performance is achieved when an application limits the amount of memory it consumes to an absolute minimum, while not crippling functionality or performance. Memory performance in relation to mobile devices is extremely important due to the limited amount of resources available to any given running application.

When we refer to the memory configuration or capabilities of an iOS device, we are specifically referencing the amount of RAM (Random Access Memory) available within the device. RAM is the limited memory used for the temporary storage of data throughout the life of an operating system and its host applications. RAM is a type of volatile memory, in which the data contained within is lost when power is removed. While executing, an application retains and releases considerable amounts of memory for every operation that is executed. In general, an application that has access to greater amounts of memory will have increased potential for data processing and execution.

Memory-related programming issues are among the most common problems a developer will experience, and of these the memory leak is the most notorious. A memory leak, in its simplest definition, occurs when an application consumes memory but does not release it back to the operating system when it is finished or no longer needed. An application leaking memory will continue to consume greater amounts of memory over the lifetime of the running application, affecting the application's performance and eventually the performance and ultimate stability of the host operating system.
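As a minimal sketch, assuming manual reference counting (the pre-ARC model discussed below), the following contrived method leaks because the allocated object is never released; balancing the alloc with a release fixes it:

- (void)logGreeting
{
    // Leak: the string is allocated (retain count +1) but never released,
    // so this memory is lost every time the method runs.
    NSMutableString *greeting = [[NSMutableString alloc] initWithString:@"Hello"];
    [greeting appendString:@", world"];
    NSLog(@"%@", greeting);
    // Missing: [greeting release];
}

- (void)logGreetingFixed
{
    NSMutableString *greeting = [[NSMutableString alloc] initWithString:@"Hello"];
    [greeting appendString:@", world"];
    NSLog(@"%@", greeting);
    [greeting release]; // ownership is balanced once the object is no longer needed
}

With ARC enabled, the compiler inserts the release automatically, but the underlying retain and release cycle is exactly what is being managed on our behalf.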
Memory leaks account for a great portion of programming errors and are more common in languages that do not have full implementations of automatic garbage collection. Leaks happen when an object in memory is retained but never released for reuse or garbage collection. Memory leaks are quite possibly the leading cause of performance-related issues on iOS devices.

Although Objective-C 2.0 supports optional garbage collection, the iPhone implementation of Objective-C prior to iOS 5 did not provide it due to the significant overhead of the feature, which would have negatively impacted battery life and general iOS performance. Garbage collection in most major implementations directly affects performance because of the basic premise behind the feature: collection is executed by the operating system when resources are in need of reallocation and release, and determining when, or if at all, it is going to happen is not straightforward. When the operating system chooses to perform collection operations, it is not necessarily aware of the most appropriate, or worse, the most inappropriate times for these operations to take place. Intense graphics, animations, video, audio, and user interactions can be significantly impacted if garbage collection spins up when mobile processors are at peak utilization. Other mobile device and operating system vendors have chosen to implement and allow garbage collection at the expense of potential performance issues.

Within its new LLVM compiler, Apple has implemented and enabled the new memory and resource management option ARC (Automatic Reference Counting), an intelligent form of garbage collection that keeps track of retains and releases to automatically free resources without the need for developer involvement or a full-blown garbage collection implementation. Whether ARC is garbage collection by definition is currently being heavily debated throughout the development communities; however, if we look at garbage collection as a form of automated resource management, ARC quite definitively falls within this sphere.
The following table lists iOS devices along with their available memory configurations:

iOS device              Memory
iPhone                  128 MB
iPhone 3G               128 MB
iPhone 3GS              256 MB
iPhone 4                512 MB
iPod Touch (1st gen.)   128 MB
iPod Touch (2nd gen.)   128 MB
iPod Touch (3rd gen.)   256 MB
iPod Touch (4th gen.)   256 MB
iPad                    256 MB
iPad 2                  512 MB
Apple TV (2nd gen.)     256 MB
As we've touched upon, memory management can be tedious and time-consuming, but it contains enormous potential to cause almost limitless performance and stability issues. One particular strategy that I've found quite useful is to micro-manage memory consumption while programming: in practice, the concept of paying strict attention to all memory that is retained and appropriately releasing it when necessary. Personally, I've found it much easier to manage memory consumption and prevent leaks if I micro-manage consumption while I develop, rather than coming back hundreds or even thousands of lines later and attempting to remember where I need to release objects.

With iOS 5, this micro-managing tactic isn't as necessary; unless you have specific circumstances that require detailed knowledge of memory consumption and object management, you can do without it and simply rely upon ARC to do this for you. However, let's not use this as an excuse not to understand the impact of memory management.

In addition to the full suite of performance and optimization tools that Xcode provides, we have access to a powerful memory analyzer and profiler that will help us locate possible memory leaks. Analyzing our source code is as easy as building our project and is available from the same build menu.
The "Build and Analyze" feature was introduced in Xcode 3.2 and allows users to run the Clang Static Analyzer at the same time the project source is compiled.
Static analysis, in Apple's own words, is a "revolutionary feature" that provides deep inspection of source code to locate bugs prior to application execution. The analyzer is extremely helpful in locating logical programming errors such as memory leaks and other bugs unrelated to syntax errors. Used in conjunction with standard compiler warnings, the analyzer gives us increased control over the performance and stability of our code base. Because static analysis is built directly into Xcode, results are integrated directly into the development environment by way of message bubbles and graphical direction displays. This innovative integration allows us to walk through identified issues and understand why our logic may be causing memory leaks or performance problems.

Traditionally, developers for iOS devices had to have a greater understanding of application memory usage and memory management techniques, not only because the operating system lacked automated garbage collection features but also due to limitations in device memory. However, as mentioned earlier, Apple's newly added automatic reference counting is going to create an additional layer of stability without the costly losses in performance that any traditional garbage collection service would bring. Developers can enable ARC within the new LLVM compiler and be nearly worry-free when it comes to managing memory. Personally, I fear we'll see an increasing number of applications that simply waste device memory, because it's easy to offload memory management responsibility to the compiler and be less disciplined. Even with recent iOS changes making memory management simpler, it remains as relevant as ever to understand the object retain and release cycle in case an exceptional situation arises.
Storage
Storage performance is the capability of an application to read and write core data structures without limiting application functionality or degrading overall performance. Storage performance has a wide-ranging effect on an application, both positive and negative. An application riding upon a poorly designed data model will have limited feature capabilities, resulting in overall application performance degradation, while an application with an appropriately designed data model will have near limitless data access capabilities and greater overall performance and future scalability.
Storing data correctly for use with your application is essential. Data storage schemes should be well planned and integrated during the design process of an application. Poorly laid out data storage designs are much more difficult to change once they have been deployed and data is in use. Developers who frequently change data models after an application has been deployed will find themselves writing thousands of lines of data migration code and taking unnecessary risks with their users' data.

The iOS SDK provides developers with many options for data storage and persistence. Among them, Core Data is the most powerful, fully integrated, and supported persistence framework available for local iOS storage. As defined by Apple, "Core Data provides a flexible and powerful data model framework for building well-factored Cocoa applications based on the Model-View-Controller (MVC) pattern. Core Data provides a general-purpose data management solution developed to handle the data model needs of every kind of application, large or small. You can build anything from a contact-management application to a vector-art illustration program on top of it. The sky is the limit."

Core Data was migrated from traditional OS X desktop development to iPhone OS 3.0 and has since become the de facto standard for local data storage on all iOS devices. Of course, traditional storage of files and data still exists and should be a consideration if necessary.

Core Data is a complete data modeling solution that allows developers to continue focusing on application development and not be concerned with the gritty details behind data storage. It creates an abstraction layer between the data model and the application, providing almost instant access to common functions such as create, save, restore, undo, and redo. Behind the Core Data curtain, objects are serialized and stored using XML, binary, SQLite, or even custom storage formats. Core Data acts as the glue that binds an application to its data model, providing a developer with a single point of interaction.

Take the necessary time to seriously consider all factors that may affect current and potential data storage requirements and build around these needs. Consider taking advantage of Core Data to enhance overall application performance as well as to maintain easily understood code. Applications with a poor data model design will suffer significant performance loss as well as the inability to adequately support new functionality that may be outside the boundaries of that poorly designed model.
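As a small, hedged sketch of that single point of interaction, the following fragment fetches and sorts objects through a managed object context; the "Contact" entity, the lastName attribute, and the managedObjectContext variable are illustrative assumptions rather than part of any project in this book:

// Fetch all "Contact" objects, sorted by last name, through Core Data.
NSFetchRequest *request = [[NSFetchRequest alloc] init];
[request setEntity:[NSEntityDescription entityForName:@"Contact"
                               inManagedObjectContext:managedObjectContext]];

NSSortDescriptor *byLastName = [[NSSortDescriptor alloc] initWithKey:@"lastName"
                                                           ascending:YES];
[request setSortDescriptors:[NSArray arrayWithObject:byLastName]];

NSError *error = nil;
NSArray *contacts = [managedObjectContext executeFetchRequest:request error:&error];
if (contacts == nil) {
    NSLog(@"Fetch failed: %@", error); // storage problems surface here, not in model code
}

[byLastName release];
[request release];

How and where the objects are actually persisted is decided by the persistent store configuration, not by this code, which is precisely the abstraction described above.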
A major benefit of developing for performance on mobile devices is their use of more advanced storage mediums than traditional hard disk drives. iOS devices utilize flash memory for data storage, which is a form of solid-state, non-volatile memory. Unlike a traditional storage disk, flash memory does not have any moving parts, which makes it an ideal storage solution for portable devices that may be prone to significant movement as well as unfortunate sudden stops. Flash memory, in addition to its many benefits, is significantly faster at reading and writing data, which increases overall usage performance and potential functionality.

Not widely known is that, over time, flash memory chips wear out and become less efficient, requiring more frequent bad block remapping and resulting in erase and program operations taking longer than normal to complete. The performance loss of worn-out flash memory isn't grossly significant; however, it is important to understand, as it can affect performance-related optimization and application benchmark results.

Since the release of the iPhone, iOS devices have had a limited selection of storage options, ranging from 4 gigabyte flash disks to as much as 64 gigabytes of flash storage in recent iPod Touch updates and the iPad. The following table lists iOS devices along with their available storage configurations:

iOS device              Storage
iPhone                  4, 8, and 16 GB
iPhone 3G               8 and 16 GB
iPhone 3GS              8, 16, and 32 GB
iPhone 4                16 and 32 GB
iPod Touch (1st gen.)   8, 16, and 32 GB
iPod Touch (2nd gen.)   8, 16, and 32 GB
iPod Touch (3rd gen.)   32 and 64 GB
iPod Touch (4th gen.)   8, 32, and 64 GB
iPad                    16, 32, and 64 GB
iPad 2                  16, 32, and 64 GB
Apple TV (2nd gen.)     8 GB
iOS devices, much like any mobile media device, may frequently be loaded down with music and movies in addition to podcasts, TV shows, applications, books, e-mail, and more. Understand that storage resources can be restricted before and during the execution of an application and, much like with any other traditional disk or storage system, we must adequately monitor these conditions for changes that might negatively impact an application.
Network
Network performance is the ability of an application to effectively transmit and receive data while simultaneously allowing the core functions of the application to continue unaffected. Network communication is very much a dark art, and without proper knowledge or technical understanding, network performance can quickly become a bottleneck for even the simplest of applications. Networking is an entire field of its own, and without a great amount of sector experience, common networking mistakes are made and never really identified as the source of performance issues.

iOS devices, along with any other interconnected mobile platform, rely upon network communications as their lifeline to the rest of the world. Asynchronously sending and receiving network packets over Bluetooth, Wi-Fi, and carrier networks is an underdeveloped skill and closely tied to application performance.

Imagine an application in which, every time it accessed a remote network to retrieve or store data, the application paused until the operation was complete. In truth, we don't have to strain to imagine this scenario, as we've all experienced it, either in our own applications or others. I've spent a great amount of time watching a spinning gear on my iPhone, wondering why the developer chose to make me wait while the application downloads data. It's less noticeable over an adequate Wi-Fi network, but extremely frustrating with a spotty carrier network signal. Both technical and design solutions readily exist to limit these performance issues.

Network performance isn't simply about asynchronous versus synchronous communications; performance is also greatly affected by poor application architectures in which unnecessarily large amounts of data are sent and received. Additionally, application performance and stability can be seriously affected by network data models that become out of sync or by network operations performed at inappropriate times.
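As a minimal sketch of the non-blocking approach, assuming a hypothetical URL, a receivedData property, and a handleDownloadedData: method, the following uses NSURLConnection with delegate callbacks so the main thread never waits on the network:

- (void)startDownload
{
    NSURLRequest *request = [NSURLRequest requestWithURL:
        [NSURL URLWithString:@"http://example.com/data.json"]]; // illustrative URL
    self.receivedData = [NSMutableData data];

    // Returns immediately; data arrives through the delegate callbacks below.
    [NSURLConnection connectionWithRequest:request delegate:self];
}

- (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data
{
    [self.receivedData appendData:data]; // accumulate chunks as they arrive
}

- (void)connectionDidFinishLoading:(NSURLConnection *)connection
{
    [self handleDownloadedData:self.receivedData]; // hypothetical handler
}

- (void)connection:(NSURLConnection *)connection didFailWithError:(NSError *)error
{
    NSLog(@"Download failed: %@", error);
}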
An example of poor network communications design is an application that performs dozens of small network transactions when a single, less frequent, and more precise operation would suffice. Inversely, performing large network data transfers during the initialization of an application, when smaller transactions would be more appropriate, is equally unforgiving. Poor network data management is one of the primary reasons that an application works over Wi-Fi yet fails to function, or has functionality disabled, when a device is utilizing a carrier network.

Choosing an appropriate network communications strategy is situation specific, and one single theory may not be effective for every application, nor for every distinct communication need of a particular application. Developers must take into account the types and speeds of network access available to the application and either make appropriate accommodations or prevent network communications from taking place, to protect end users from poor experiences.

It is common for desktop application developers, accustomed to wired network connections and what would seem an unlimited amount of connectivity when compared to a mobile device, to have difficulty in managing network performance. Mobile devices traditionally have tighter tolerances to work within, and as a developer it is important to understand these limitations and exercise great restraint when it comes to over-utilization of network communications. It is the responsibility of the developer to architect a communications solution that provides the necessary features and functionality that an application requires, and that is optimized to transmit and receive only necessary data, and to do it efficiently.

Understand that if your application supports communication over a carrier network, an end user may have resource limitations as well as costly charges for exceeding their data limits. Frequent or constant data polling to remote network servers can have a significant impact if an application is left running. We must assume that the potential exists for a user to use an application as much as 24 hours a day, 7 days a week, and factor this consideration into our overall design.

Mobile applications utilizing network communications are directly affected by a device's network signal strength. Bandwidth levels are uncertain and cannot be guaranteed, but must be accommodated for. Monitoring bandwidth levels and network availability is essential for all mobile applications to perform effectively as a user moves from location to location, in and out of network availability.
In particular, data caching and compression, if implemented correctly, can help reduce the amount of network round-tripping that an application might otherwise need. Caching, compression, and tighter protocols can have a significant effect on overall bandwidth usage at the cost of processing power. Decisions such as these should be made during the design phase of application development; however, they may be easily integrated after the fact for quick performance gains. In general, reducing the total amount of data communication that an application performs will ultimately increase performance and decrease the potential for failure and performance bottlenecks.
User interface
User interface performance is the combination of how well an interface performs with regard to processor, memory, and display, as well as how efficient a user is with a particular interface. A user interface is the point at which a user interacts with, views, or controls a particular object, system, or environment.

The user interface can quite arguably be the most important component of an application. This of course heavily depends upon the type and focus of an application; however, a poorly designed and developed user interface will have a significant impact on user adoption and market penetration. Poor adoption and lack of market penetration will prevent an otherwise brilliant application from being successful, and an interface that is less than adequate will surely lead down this road.

Designing and integrating an optimized user interface into an application requires a well-planned, systematic approach. Frequently, developers are forced into refactoring code due to modifications of user interface concepts or design, delaying product development and possibly introducing bugs as a result of changes in direction. It is understandable that changes throughout the development lifecycle will take place; however, good planning and design strategies will limit these necessary evils and help maintain the health of the project.

With hundreds of thousands of applications available for iOS devices, a user's first impression of an application will decide whether or not it makes it onto their device. Other than a product description, a well-designed and presented user interface may be the difference between success and ultimate rejection.
An interface is not simply a visual or artistic element of an application; it is as important as the technology that supports it. An interface is just as capable of performing poorly as unoptimized source code might be. Poorly designed user interfaces can be found just about anywhere, and rarely does an application with a poor interface achieve any real level of measurable success. A quick browse through the Apple App Store is proof enough that interface design is directly related to performance and success.

Apple has provided its iOS developers with an abundant resource of intuitive and consistent interface components. These components have been designed to allow users to comfortably navigate between iOS applications without interface learning curves for each new application. An interface should be instantly usable; a learning curve for an application's interface should not exist. Users should instantly recognize and feel comfortable navigating throughout your application. Familiarity is achieved by utilizing standardized controls and interface components that users may already be experienced with from other iOS applications. Apple's own recommendation is to utilize these standardized components to take advantage of the consistent iOS experience they provide. Unless an application has specific user interface requirements, it is highly recommended that they be utilized. Remember, Apple has spent millions of dollars in research and development on user interface design. If possible, take advantage of their recommendation, experience, and industry research.

Within virtually every application, our experience using an iOS device is filled with some form of motion or animation. Scrolling through contacts, swiping through photos, and even unlocking an iOS device presents the user with the illusion of motion. Core Animation is the underlying framework that iOS uses to perform all of the default animation eye-candy that we are used to. Scrolling through the pages of an application, the animation as we rotate from portrait to landscape, and applications that appear to zoom in and out as we open and close them are all effects of Core Animation. The Core Animation framework is designed to allow developers to build dynamic and animated user experiences with minimal coding effort. Abstracting much of the hard work behind simple-to-use classes, Core Animation benefits can be implemented just about anywhere and in as little as a few lines of code. Core Animation is extremely efficient and should be utilized wherever basic animations and transitions are required.
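As a small illustration of how little code a basic animation can require, the following sketch fades out and nudges a hypothetical detailView using the block-based UIView animation API, which is backed by Core Animation:

// detailView is an assumed UIView owned by the current view controller.
// Core Animation interpolates the intermediate frames on our behalf.
[UIView animateWithDuration:0.3
                 animations:^{
                     detailView.alpha = 0.0f;
                     detailView.frame = CGRectOffset(detailView.frame, 0.0f, 40.0f);
                 }];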
Of the three human senses we use with an iOS device, two of them interact with a single component, the display. Sight and touch are both integral to the usage of iOS devices and rely heavily on the technical characteristics of the display. Unlike desktop development, the limited screen size of mobile devices requires a greater level of user interface design creativity to ensure that a user's experience is not restricted. Interface components must be large enough to manipulate with our fingers, but not so large as to obscure the display's main view or limit usable real estate. Performance is not just how the application performs; it can also be thought of as how efficient your users can be with it. The following is a table listing iOS devices and their respective technical display specifications:

iOS device               Resolution   Aspect   PPI   Size
iPhone                   480 x 320    3:2      163   3.5"
iPhone 3G                480 x 320    3:2      163   3.5"
iPhone 3GS               480 x 320    3:2      163   3.5"
iPhone 4                 960 x 640    3:2      326   3.5"
iPod Touch (1st gen.)    480 x 320    3:2      163   3.5"
iPod Touch (2nd gen.)    480 x 320    3:2      163   3.5"
iPod Touch (3rd gen.)    480 x 320    3:2      163   3.5"
iPod Touch (4th gen.)    960 x 640    3:2      326   3.5"
iPad                     1024 x 768   4:3      132   9.7"
iPad 2                   1024 x 768   4:3      132   9.7"
Usability and user interface design is a complicated and artistic field, which usually falls outside the responsibility of a development team. If usability or interface design is not something you are comfortable with, consider taking advantage of a dedicated usability expert to locate more efficient methods of displaying or presenting data.
Be a good neighbor
Both the Apple iPhone and iPad are cohesive operating environments that rely heavily on a wide range of applications expected to play along nicely with one another. Designed around a consistent user experience, these devices represent refined pieces of cutting-edge technology that users trust to operate at their highest level.
Poor performance by your application can significantly impact the performance of the overall device as well as other applications and processes that may be running at the same time as yours. An important aspect of performance tuning is to ensure your application follows the good neighbor concept. You should make it a point to be aware that other applications may be running in the background or alongside yours, and make appropriate accommodations. By following the principles of performance-driven development outlined in this book, your applications will no doubt be the good neighbor that we all wish we had. These simple steps will ensure that when the iPhone does begin to have resource-related performance issues, you are not the problem.

In a perfect iOS world, our developer neighbors would feel just as strongly about protecting the integrity of the operating environment and be focused on performance-driven development much like we are. However, both history and reality tell us that this is not the case. Much like a student driver is taught the concept of defensive driving, we must learn to develop defensively and protect the integrity of our applications. Not only is it important for us to play nicely with one another and be aware of our application's effect on the performance of the overall device, it is equally important for us to defend against poorly written applications that may negatively impact our performance. Take advantage of native programming language features to identify when system resources become limited or unavailable and act accordingly. Monitoring memory, network connectivity, CPU utilization, and storage availability will help us manage real-time performance more effectively. Keep in mind, the more performance we squeeze out of an application, the worse everyone around us can behave before our application is hindered.
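One concrete defensive measure is to respond to low-memory notifications and release anything that can be rebuilt later. A minimal sketch in a view controller might look like the following; the cache and image properties are hypothetical, not part of any framework:

- (void)didReceiveMemoryWarning
{
    [super didReceiveMemoryWarning];

    // Release re-creatable data so the system, and our neighbors, can breathe.
    [self.thumbnailCache removeAllObjects];   // hypothetical NSCache of thumbnails
    self.preloadedImages = nil;               // hypothetical array of decoded images
}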
Application design and architecture
Think of performance tuning as a feature of your application, a feature that every user of your application relies upon a great deal; a feature that is essential to the operation of the application and should be integrated from design to deployment. Design is just as critical to developing a high-performance application as its actual lines of code.
An application in its simplest form is a compilation of components that are tied together using frameworks and design patterns. Each component in itself might be comprised of smaller components, frameworks, and libraries, all resulting in the bulk of the greater application. Stepping back into our vehicle analogy for a moment, performance is not an after-market option; it is a characteristic of the vehicle itself. It is a primary focus of the vehicle from the design board to final production, and ultimately as it rolls off of the assembly line. Each component of a vehicle is selected for its individual characteristics as well as for its potential for integration and support of the vehicle as a whole.

Designing a finely tuned application requires an understanding of the problems that might arise during development and knowing what solutions are available to resolve them. Design patterns are programming guidelines and general theories that aid in development by providing answers to the most common programming problems a developer might experience. Design patterns are solutions that are project- and time-tested to be functional and stable. Becoming familiar with and taking advantage of common programming design patterns will increase overall efficiency and application performance. Try not to spend frustrating hours attempting to re-invent the wheel when a common design pattern is available to provide a remedy to your exact problem. Understand that the majority of an application is not revolutionary code, and that the common and frequent problems you may be experiencing have more than likely already been resolved, with solutions made readily available.

From a pure design principle, when we develop an application it should have the features and functionality that users require, yet run and perform at its highest level. Every application we develop should be the ultimate application; not the ultimate application from the perspective of its genre or type, but from a pure logical and architectural angle. As we've mentioned before, an application should be designed to run at 200 mph and take into account how unintended usage may negatively affect the application and its overall performance. For instance, we might anticipate a UITableView handling a few dozen cells or more. However, we should plan for this UITableView being used at 200 mph with 1,000 cells. In this scenario, we would ensure that we are properly recycling cells for reuse and not instantiating 1,000 new cells as a user scrolls through the view.
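A minimal sketch of that cell-recycling pattern is shown below; the items array is a hypothetical data source, but the dequeue-then-create structure is the standard approach:

- (UITableViewCell *)tableView:(UITableView *)tableView
         cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
    static NSString *CellIdentifier = @"StandardCell";

    // Reuse an off-screen cell whenever one is available instead of
    // allocating a brand new cell for every row scrolled into view.
    UITableViewCell *cell =
        [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
    if (cell == nil) {
        cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault
                                       reuseIdentifier:CellIdentifier] autorelease];
    }

    cell.textLabel.text = [self.items objectAtIndex:indexPath.row];  // hypothetical data source
    return cell;
}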
Design your application architecture not around the perfect user, but around the exception user, the one who will push your application to its breaking point. Design for that 1 percent of users who will push the application past its means. Design and scale the application to take the abuse of the most aggressive users, and it will perform nearly perfectly for the ideal user.

Developing functionality for 200 mph does not mean that every feature and function must execute at that speed. Imagine for a moment a convertible sports car cruising along at 140 mph when the button to release the convertible top is pressed. We should understand that there are features of an application that should not be performed at high speed or at the maximum limits of our application. Just as a car has a speed governor, seating limits, and a finite fuel capacity, an application should constrain its users in the same fashion. Gently guide, instruct, or prevent the user from performing these functions or utilizing these features under the more aggressive conditions. Guide them by preventing them from making mistakes that would negatively impact performance. Show them how to maximize performance in the way the application was designed to be used.
Application performance
Application performance loss and gain happens throughout the entire application development lifecycle. Performance may be severely affected from the initial concept and design through to the development, testing, and final release of the application. Balancing performance is a process of continual give and take, where we may have to accept a minor performance loss in one particular area of an application in the hope of regaining a performance advantage in another. Although in many cases technology and code can compensate for poorly designed application architectures, it isn't the ideal solution. Source code written to overcome poor design is never a solid solution and will only lead to further feature- and performance-related issues. Regardless of how hard we try, some performance issues will be identified much later in the development lifecycle, and many might even be discovered after the initial release. At this point, core application design changes are no longer an option and we must choose solutions which achieve the greatest levels of performance gain without compromising the integrity of the application or project. The two most common schools of thought for application performance tuning are 'resolve now' and 'resolve later'.
Resolving performance issues as you identify them has the benefit of keeping your code up to date and well tuned; however, spending a great amount of focus on performance during your first development pass may impede productivity. When resolving performance-related issues as they are identified, we could easily fall into the never-ending tuning trap, in which we spend hours and days squeezing every last bit of performance from each of our methods and functions and never progress towards the ultimate completion of the component or application.

Resolving performance issues during a later development cycle has the benefit of allowing us to focus on core application functionality, but creates an opportunity to forget when and where optimization techniques should have been employed as a project grows in size and complexity. A good practice when resolving performance issues at a later time is to comment in great detail the code you believe to be weak and mark it for cleanup and performance review. Revisit this marked code on a second or third development pass, when you have the time and focus to be concise and efficient.

Achieving great performance from an application is not an accident. It's quite uncommon to stumble across perfect and fully optimized code. It takes a dedication of time, energy, and understanding to create and mold good code into great code that runs as efficiently as possible. Tuning takes time, and time is always a factor when developing an application. Whether this time is a looming deadline or a contractor's billable hours, time is an important part of the decision on when we choose to optimize our code.

The decision on when to resolve performance issues is a personal developer preference and may change from application to application as project needs or development requirements evolve. The ideal approach is to choose the solution that works best for your development style and build a strong habit and strict process for performance tuning. Focus on good patterns and design architecture and never settle for anything less than optimal; a band-aid design pattern does not exist. Making a list of performance priorities for your application early on allows you to more quickly identify whether a performance issue is related to design or implementation. Knowing when and where you can sacrifice performance or make suitable trade-offs is important. If we identify opportunities that might increase application performance without requiring great effort, we should take the necessary few moments to address them immediately.
Personally, I use what I refer to as the 5-minute rule. If I can resolve the problem in 5 minutes or less, then I will make the necessary changes and continue. If the problem or situation requires a greater amount of time, I simply mark the code with detailed comments and return during a later development pass when time and focus permit. Of course, there are times when even a 2-minute stop to optimize code may break a fast-moving train of thought or interfere with our overall productivity. When this situation arises, our developed habit and process for performance tuning will guide us in the most appropriate direction, which in my case is to comment the code and move on.

Just as we learned basic development skills, we need to learn from our tuning experiences and results and adapt our development cycle accordingly. Take this knowledge and apply it to each new line of code, ultimately increasing our development efficiency and overall application performance and quality. Take advantage of automated testing platforms, profilers, and analyzers that will help you locate and resolve bad code. Learn from the remedies provided by these tools and add them to your programming repertoire. An application that performs well does more than just one specific thing efficiently; it does everything efficiently.
Summary
In this chapter, we covered the fundamentals of performance-driven development and identified that performance is not an accident; it should be an integral part of the design and development process. We've identified the basic categories into which virtually all performance-related bottlenecks may be classified, and touched upon their overall importance regarding application performance. Tuning an application is a lengthy and time-consuming process; however, it is a worthy cause and can be made easier by following the basic programming principles outlined in this chapter. Achieving optimal performance from an application is our ultimate goal, and our application's users will measure our success.
More specifically, the following is a list of the areas that we focused on in this chapter:
• Basic principles of application performance and bottlenecks
• Applications must be developed to perform well at all things, not a select few
• A single bottleneck has the potential to cripple an entire application
• iOS architecture and the impact of effective process management
• iOS memory limitations and the importance of efficient memory usage
• iOS storage capacity, application data models, general storage, and Core Data
• Impact of network related bottlenecks
• A user interface is just as critical to performance as the underlying code
• Being a good iOS neighbor and coding defensively
• Building performance from architecture and design to development and deployment
• Learn and employ common design patterns when applicable
• Performance management is a continual process
Design for Performance

Designing an application is much more than selecting user interface components and choosing color schemes. Although these decisions may be rather important for the overall success of an application, we will be specifically focusing on the performance aspect of design, and how creating a stable foundation and making solid decisions early will pay off in the end. With performance in mind, this chapter will focus on the primitive concepts of designing your project from the ground up for maximum performance. Specific areas on which our concepts will focus are as follows:
• Preparing the project
• Project organization
• Project structure
• Groups and files
• Code structure
The design phase of development is typically where we take into account any element of an application that may have a significant impact on the overall architecture of the final product. Project structuring, required functions, preferred features, hardware specifications, interoperability, and logical limitations are all factors that should be considered within this phase. Elements not regularly included during the design phase may include visuals, color schemes, intricate feature details, and other interchangeable aspects of the final product. When designing with performance in mind, you must take into account the desired characteristics and levels of performance you are looking to achieve in your application.
Knowing precisely where your application's performance needs are and focusing greater attention on those areas is the basic premise of the performance-tuning concept. Identifying the areas where performance tuning is necessary may, in many circumstances, be the most difficult part. Obvious areas like memory, database, and network communications may stand out and be somewhat simple to diagnose; however, less common user interface or architectural issues may require profiling and even user feedback for identification. For instance, a database-laden application would be expected to be as optimized as possible for efficient querying, while an application tailored towards video recording and playback may not necessarily require a focus on database efficiency. Similarly, a project that may end up with as little as a few thousand lines of source code may not require a great deal of project structuring and framework planning, while a much larger project will need more time dedicated to these areas.

Overloading your application and testing for weaknesses by pushing it beyond its capabilities can prove to be extremely valuable. As an example, databases and table views can be loaded with overly large datasets to identify missing keys or object misuse.

The design phase may help you identify potential bottlenecks, giving you an opportunity to alter the layout and design of your project before any development has taken place and it becomes too cumbersome to resolve in the midst of coding. Bottlenecks that are unavoidable can be highlighted as areas in your application where you may want to spend more time squeezing out efficiency. Bottlenecks that are identified early stand a good chance of being resolved much more easily than those discovered after a project is secured and in motion.
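As a rough illustration of the overload testing described above, a table view's data source can be temporarily fed far more rows than normal use would ever produce; the tableData property below is hypothetical:

// Stress-test data: 10,000 rows instead of the expected few dozen.
NSMutableArray *stressTestRows = [NSMutableArray arrayWithCapacity:10000];
for (NSUInteger i = 0; i < 10000; i++) {
    [stressTestRows addObject:
        [NSString stringWithFormat:@"Test record %lu", (unsigned long)i]];
}
self.tableData = stressTestRows;    // hypothetical array backing the table view
[self.tableView reloadData];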
Preparing the project
To take full advantage of Xcode means to understand in depth the philosophy behind the Xcode user interface. Becoming proficient with Xcode will have a great impact on your effectiveness as a developer. Like any tool, knowing its capabilities as well as its limitations allows you to make smarter decisions more quickly.

A car is not designed from the interior to the exterior or from the roof to the tires; it is designed from the core outward. Any good vehicle gets its start from a well-engineered, tested, and proven frame. The frame is the single key component to which all other components will be bolted and attached. A poor frame design will lead to various structural issues, which in turn lead to more granular problems as components get further away from the frame.
An application project is quite similar; without a solid frame to build an application upon, the quality of the final product will surely be affected. Source code, files, and other resources become cluttered, which has the potential to create similarly damaging granular issues later in the development lifecycle. Just as one single automotive frame is not the answer for every vehicle on the road, developers are free to organize a project in the way that is most beneficial for the application as well as for the workflow and preference of the developer. Although refactoring has come a long way and organizational project changes can be made during the development phase, it is highly recommended that project decisions be made early on to limit problems and keep productivity as high as possible.

A large portion of project management, as far as iOS applications are concerned, is handled by and through Xcode, Apple's standard integrated development environment. Xcode is an extremely powerful and feature-rich integrated development environment with dozens of configuration options that directly affect an individual project. Xcode is not limited to iOS development and is quite capable of creating virtually any type of application, including applications for OS X, command-line utilities, libraries, frameworks, plugins, kernel extensions, and more. Xcode is regularly used as a development environment for varying compiled languages as well as nearly all mainstream scripting languages.

For those of you who keep regular tabs on Apple and Xcode, you are more than likely well aware of the release of Xcode 4 and may have actually followed it throughout the beta process as well. Xcode 4 is an entire rewrite of the popular development environment, making needed changes to a tool that was begging for upgrades. Xcode 4 follows the paradigm of single-window applications, in which all development and testing is performed within the single Xcode 4 interface. Most notable is the integration of Interface Builder into the core Xcode 4 interface, which brings all of the functionality of these previously separate tools together, integrating them completely.

Xcode's abilities far surpass the needs of iOS application development, and it is again very important to understand the development environment in great detail in order to maximize its benefits. One particularly useful project configuration option is the ability to treat compiler warnings as errors. Warnings are the compiler's way of telling the developer that something is happening that it may not understand, or that isn't quite bad enough to prevent an application from running but is still noteworthy enough to inform the developer.
Good programming practice suggests that every developer strive to produce warning-free code. Warning-free code is simply healthy code, and the practice of resolving warnings as early as possible is a habit that will ultimately help in producing code that performs well. Within the Build Settings for a specific target, we can enable the Treat Warnings as Errors option to nudge us in the proper direction for maintaining healthy code. Although this feature can have a slight impact on development and testing time, it comes highly recommended and should be considered by developers interested in high-quality and well-performing code. In addition to helping create higher quality code, it's a forced education that may be priceless for career programmers and weekend code warriors alike. It is shown in the following screenshot:
Project organization
Every feature, function, and activity that is performed within Xcode revolves around the project. Much like any other project concept we have been exposed to, Xcode uses projects to organize files, resources, and properties for the ultimate purpose of creating applications and products.
For most intents and purposes, the default project settings of Xcode will be sufficient for the average developer to create a multitude of applications with relatively little issue. However, we are interested not in achieving averages but in tuning, optimizing, and grabbing every bit of performance possible. We're also interested in streamlining the development process as much as we can. This is precisely why a better-than-average understanding of the development environment we will be working in is critical.

Obviously, the majority of an application project is going to be made up of its classes, libraries, and other source code specific components. Organization of source code is a core principle for any project that is more than a handful of classes and libraries. Once a project begins to mature into dozens or hundreds of files, the importance of well-organized code becomes more apparent. Inevitably, without some type of organizational form, source code and general project resources become unruly and difficult to find. We've all experienced giant monolithic source code and project structures with resources wildly strewn about without order. Personally, I find this level of chaos revolting and believe order and organization to be among the many characteristics of quality.
Project structure
Xcode's project source code structure is an open canvas, one that doesn't force a developer to use any particular method for organizing code. Unlike various programming environments, Xcode provides the freedom for a developer to build in virtually any way they like. While this level of freedom allows a developer to use the project structure that best fits their style and experience, it leaves plenty of room for mess and confusion, if not entirely setting them up for failure. The solution to this problem is to have a well-organized and thought-out plan of action for how a project and its resources will be laid out. Remember that no single project structure will work, nor should it, for every proposed project; however, knowing what options are available and the positive and negative effects they might have is quite important. To understand the basic principles of how to organize an Xcode project, we must first understand how a default Xcode project is structured.
Initially, the appearance of a new project within Xcode looks well structured and organized, when in fact the underlying file structure of the project is much more simplistic. Xcode's underlying project directories utilize a relatively flat directory and file structure, while the Xcode interface uses a concept called groups to logically organize project resources. Groups are logical containers used within Xcode to make the organization and overall location of source code files more efficient. Essentially, the file structure that you might see within Xcode is not necessarily the actual file structure in the underlying operating system's project directory. The following is a screenshot of a new default Xcode project in which the structure in the left-panel appears to be well organized:
Contrast the logical organization of the Xcode interface with the project's underlying file structure and the grouping principle becomes clearer. Xcode stores a logical reference to the project's underlying file structure and uses groups to help developers visualize order within the development environment. In other words, what you see within Xcode is not what is actually happening on disk. The structure within Xcode is comprised of references to the files and directories on disk. This additional layer of abstraction allows developers to group or relocate a project's resources within Xcode for easier management without affecting the actual disk structure of the project, as shown in the following screenshot:
At first glance, the underlying structure looks rather clean and simplistic, but imagine this directory in a few days, weeks, or even months' time, with dozens more classes and resources. Now, one might argue that as long as the logical representation of the project is clear and concise, the underlying file architecture is unimportant. While this might be true for smaller and less complicated application projects, as a project grows in size there are many factors to consider other than how data is represented within a development environment. Consider the impact that a flat storage architecture might have throughout the life of an Xcode project. The free naming of classes and other resources may be significantly limited, as all files are stored within the same base folder. Additionally, browsing source code within a source code repository like GitHub or Google Code may become difficult and tedious. Choosing exactly how a project is laid out and how its components will be organized is akin to selecting the right vehicle frame on which our project will be based.
Groups and files
One particular strategy of creating a well-organized source code structure is to create the underlying file structure according to your project needs and then import these changes into your Xcode project. We do this by opening the underlying file structure of our project, which can be done by right-clicking the main project file in the left-side Groups & Files panel of Xcode and selecting Show in Finder, shown as follows:
As you might have guessed or experienced, this opens the underlying disk structure of our Xcode project. With the underlying project file structure in view, we are free to create the structure that best fits our needs and then simply import this structure into Xcode. The following is a sample project file structure that I use very regularly for nearly every iOS project I develop. This structure includes modifications to the default project directory as well as the addition of a Resources directory. Under the default project directory, the following directories are created:
• App
• Controllers
• Helpers
• Models
• Resources
Under the newly created Resources directory, create directories for all the types of resources that you might expect to use throughout the life of the project. These directories might include the following as well as anything else specific to the project:
• Audio
• Animations
• Images
• NIBs
• Databases
This is shown in the following screenshot:
We can then create the necessary directories and move source code and resources as we like. There are many benefits to organizing source code and resources in this particular manner. As an example, if you had a series of images that were named similarly yet dependent upon a particular application variable, you could easily separate these resources into individual directory structures and utilize the same file naming convention for simplicity and maintainability.
Another benefit of using the same principle is localization, in which you could store all necessary translation resources in their individual, respective directories and achieve a much greater level of structure, something similar to the following directory structure: /root of project/Resources/language/
The languages stored in the previous directory are English, Dutch, German, French, and Spanish. Because Xcode stores the logical structure of the project separate from the underlying directory structure, it is important for us to inform Xcode of the changes we have made. At this point, unless of course this is a newly created project, I highly recommend having a current backup of your project and any related resources prior to making any of the following changes. To do this, select all of the child items and groups of your project within Xcode, press the Delete key, and select Remove References Only to ensure that only the group references are deleted from the project and the actual files themselves remain in their original locations, shown as follows:
Remember to only remove references or risk losing your project data.
Now that we have removed the default group references from Xcode, we can add the changes to the underlying directory structure by dragging and dropping the underlying disk structure that has been altered into the root of our Xcode project.
When prompted, we will ensure that the Copy items into destination group's folder (if needed) checkbox is NOT selected and the radio button for Create groups for any added folders is selected, and click Finish, shown as follows:
Xcode will now recursively create the appropriate groups for each of our underlying folders. This will give us group references to the underlying and newly created directory structures to achieve our desired organizational effect. The changes we've made are now reflected within Xcode. The App, Controllers, Helpers, Models, and Resources directories effectively give us a cleaner and more organized project, both within Xcode and on disk.
Once completed, your Xcode project's new group structure should look similar to the following screenshot:
With a place for everything and everything in its place we can already see the overall effect a solid file and group structure will have on the overall performance and future of the project. Keep in mind, any future modifications to the underlying directory structure will not be immediately recognized within Xcode. Should changes be required, simply follow the same previous steps of removing the groups by reference only within Xcode and then dragging and dropping the newly changed folders into the base Xcode project. This may seem a bit of a hassle, but is well worth the few extra moments it takes once a project becomes large enough for a lack of organization to begin impacting development.
Code structure
A consistent code structure is very important to the overall health of any application. The order in which declarations and methods are placed has a direct impact on maintainability as a project grows in size and scales beyond a few thousand lines of code. Each of us has experienced the time-wasting frustration of not being able to quickly locate methods and properties. Scrolling and searching our way through classes with thousands of lines of code is not a particularly great use of our time. Having a consistent class structure will keep source code neat and tidy and aid in shortening the transit time we spend jumping between class methods.

One particular strategy that I find helpful for organizing @implementation source code is to use a simple and consistent class structure recipe. Regardless of project, using this same class recipe allows me to more easily locate and manage class contents and ultimately increases my own development performance. The class structure recipe is shown as follows:

@implementation class structure
imports
init
public methods
delegate methods
dealloc
The key to a successful class structure recipe is obedience to the rules and guidelines you apply to yourself. A good recipe will have no effect if it is not adhered to 100 percent of the time. Take the time to implement a class according to your preferred recipe. Adopt the recipe into your development habit and stick with it.

Downloading the example code
You can download the example code files for all Packt books you have purchased from your account at http://www.PacktPub.com. If you purchased this book elsewhere, you can visit http://www.PacktPub.com/support and register to have the files e-mailed directly to you.
A look at the following basic class example demonstrates how simple a consistent recipe for class structure can be:

#import "MyClass.h"

@implementation MyClass

#pragma mark -
#pragma mark init

- (id)init
{
    self = [super init];
    if (self != nil) {
    }
    return self;
}

#pragma mark -
#pragma mark Public Methods

- (void)myPublicMethod
{
}

- (void)myOtherPublicMethod
{
}

#pragma mark -
#pragma mark Delegate Methods

- (void)didReceiveMemoryWarning
{
    [super didReceiveMemoryWarning];
}

#pragma mark -
#pragma mark dealloc

- (void)dealloc
{
    [super dealloc];
}

@end
Of course, this strategy may work best for my particular development style; however, any variation of the previous recipe will surely help increase overall source maintainability if you are not currently using any type of class recipe or guide.
Regardless of how organized your class recipe may be, a class of several thousand lines can be an absolute nightmare to navigate without a little assistance. To remedy this problem, we can take advantage of one of the more underused elements of the Xcode interface: the source code selection menu, also known as the symbol pop-up menu. This menu is available within all source editing windows of Xcode and provides a handy interface to quickly jump between methods and sections of code, shown as follows:
We can extend the basic functionality of this selection menu by taking advantage of an additional source code organization technique, known as the pragma mark statement. Within Xcode, the pragma mark statement can be used to provide details to Xcode on how source code is organized. Originally, the pragma directive was solely a compiler-specific feature that various compilers used for additional instructions. However, integrated development environments now commonly take advantage of these directives for things like source code access, formatting, and delineation. Most compilers ignore unrecognized pragma statements like #pragma mark, leaving Xcode free to interpret it at will. Within Xcode the syntax for a #pragma mark is as follows:

#pragma mark label_name
The following example code helps Xcode format the direct access source code selection menu, as shown in the following screenshot:
#import "MyClass.h" @implementation MyClass - (id) init { self = [super init]; if (self != nil) { } return self; } #pragma mark #pragma mark Examples - (void)exampleMethod1 { } - (void)exampleMethod2 { } [ 50 ]
Chapter 2 #pragma mark More Examples - (void)exampleMethod3 { } - (void)exampleMethod4 { } #pragma mark #pragma mark dealloc - (void)dealloc { [super dealloc]; } @end
Looking at the previous code, we can see the #pragma mark in action. The following are the details of exactly how these statements are interpreted by Xcode:
• #pragma mark -: Draws a horizontal separator line within the direct access source code selection menu
• #pragma mark Examples: Creates a new bolded section heading called Examples within the direct access source code selection menu
Summary
In this chapter, we introduced the importance of creating, organizing, and designing an Xcode project for optimal performance. We covered a few selected steps for organizing a project from core project settings to source code structure, and leading practice techniques for increasing project maintainability. More specifically, we covered project preparation and organizational guidelines that affect the life of an Xcode project. Additionally, we laid out an improved project and file structure that can help organize resources for projects of any size. A project that is built upon a strong and well-designed frame is more likely to be easier to maintain and produce higher levels of performance than an application without dedicated planning. Organization is the key to quality and ultimately success, while chaos leads to unpredictable results. Additionally, organization will no doubt garner points for you on a development team or in situations where others may have to come in and work with code that you have developed. In the next chapter, we will introduce and focus on the core principles of project and code maintainability and how poorly managed code can have a direct and significant impact on the performance and success of any project.
Maintainability

Maintainable software is software that is designed to be easily read, understood, and updated by its author as well as future potential authors. Maintainable code is developed to be processed and understood by humans just as it is designed to be understood by machines. Maintainability is achieved through a delicate balance of clear and concise coding practices along with common sense use of syntax and detailed commenting and documentation.

Software maintainability isn't something that most developers can quickly relate to application performance. However, it should be common knowledge that unkempt and messy source code may be riddled with bugs and surprises that ultimately affect an application's overall level of performance. Productivity in many circumstances might be impacted the most, as unmaintainable code can be a nightmare to work with. A developer should not only produce code that works well but also code that is easily understandable.

Under most circumstances, a developer has the option of developing complicated code or choosing simplicity for the same effect. When possible, selecting simplicity will have a more positive effect on the overall success of the application. Simplicity is easier to revisit and follow when source code lies untouched for days or weeks at a time. Additionally, simple and well-formatted code is easier to comment as well as produce documentation for.

As an analogy, public speaking is ultimately about communicating a message to the audience in an easily understood and acceptable manner. A speaker should not use public speaking as an opportunity to demonstrate their infinite vocabulary but should focus on delivering a clear message. A well practiced speaker chooses the words that will be most effective for the audience they will be speaking to. Similarly, a developer would benefit by applying these same principles to programming.
Avoid flexing your syntactical programming muscles on every line; there are always more appropriate times. Unless a project is intentionally short lived, there is no excuse for not developing an application with maintainability in mind.

In the open source community, there is a maintainability theory about the life span of a community-developed project. The theory is quite simple: if a project has been around and is in continued development even after a great number of years, chances are that it is easily maintainable. In short, the primary reason behind this theory is that if the project is still in active development, then somewhere along the line someone has re-factored or completely re-written the project with maintainability in mind. This is the reason that so many open source projects rely upon the code of other neighboring projects, yet stability and maintainability still exist. It is quite possible that we can all learn something from the open source community, if only about correctly implementing maintainable source code and project practices.

Remember that maintainability, performance, productivity, and consistency are all factors in the success of any application project. In this chapter, we take a deeper look at the following coding practices that may have a significant effect on the overall maintainability of an application:
• Variable naming conventions
• Method naming conventions
• Camel case
• Syntax efficiency
• Readability vs compactness
• Dot syntax
• Re-factoring
• Library bloat
• LIPO
• Comments
• Documentation
Variable naming conventions
Choosing an appropriate name for variables isn't as easy as you might think. Variable names mean absolutely nothing to a compiler, but to those of us who interact with them, they need to relate to the information they store or point to in a meaningful way.
First, let's cover a few of the basic rules that affect the naming of variables within Objective-C. Like C and other syntactically similar languages, Objective-C variable names cannot start with numbers or contain spaces. Additionally, variable names cannot include special characters other than the underscore. In addition to these basic rules of variable naming, Apple has provided developers with several best-practice guidelines that can be followed to ultimately produce more effective code. Apple recommends that class, category, and protocol names begin with uppercase letters, while method and instance variable names begin with lowercase letters. Most developers are familiar with the convention of beginning instance variables with an underscore; however, because virtually all instance variables within Cocoa are protected, it is actually a practice that is discouraged by Apple. In fact, method names beginning with an underscore are reserved for Apple's use, according to Apple's "The Objective-C Programming Language" documentation.

Variables need to accurately reflect the type of information they intend to store. A programmer spends far more time reading code than actually writing code, and as that is the case, variable names that do not effectively convey their purpose make it that much more difficult to read, debug, and re-factor. Over the years, I've spoken with a good number of programmers who don't place a great amount of value on naming conventions, choosing instead to use abbreviations for words, acronyms, or silly pseudo code variable names like foo and zot. In my experience, running across variable names like those in an application of several hundred thousand lines of code is extremely frustrating. Even within local scope, these variable names have no place, and they easily exacerbate slow reading and understanding of foreign code.

As an analogy to demonstrate the importance of consistency and standards, imagine you are purchasing a new home and have two equally attractive options presented to you: the first designed and built using industry standards, and the second built without a blueprint and without adherence to any regulations. From the curb, both of these housing options may appear equal, but beneath the paint, drywall, and flooring, one of these homes is a potential rat's nest of safety and maintenance concerns. The same is true when it comes to choosing standards and conventions for application development. Without a good solid plan for approaching common programming pitfalls, the end result will surely be the maintenance and bug nightmare that could have been avoided by doing things the right way.
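A small sketch of these conventions in practice might look like the following; the class and its members are purely illustrative:

// Class, category, and protocol names begin with an uppercase letter
@interface SensorListViewController : UIViewController <UITableViewDelegate>
{
    NSMutableArray *activeSensors;          // instance variable in lower camel case
}

// Method and property names begin with a lowercase letter
@property (nonatomic, retain) NSMutableArray *activeSensors;

- (void)refreshSensorReadings;

@end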
If you were to create a variable that would hold the name of your favorite car, you might use myFavoriteCar or favoriteCarName as the variable name. These names are descriptive, and the possible value they hold is more easily understood than with the abbreviated variable names favCar or cName. A few examples of descriptive and well-formed variable names are as follows:
• myFavoriteCar
• numberOfCylinders
• currentCarColor

A few examples of non-descriptive and poorly formed variable names are as follows:
• myfavoritecar
• noc
• current_car_color
Taking those few extra keystrokes to clearly spell out a descriptive variable name will pay off many times over compared to using a shortcut variable name, as well as earn you points of respect from other developers who may find themselves reading your code. An additional and no less important tip is to use English when naming variables, classes, methods, and functions. This isn't an unwarranted bias, but an important convention that follows the example of almost every mainstream development language today. These languages are written in English for compatibility and standards, and we should ensure we adopt and hold to this wisdom. Regardless of what particular convention you follow for naming, consistency is the ultimate key. Select a style that you find comfortable and deviate from it as little as possible.
Method naming conventions
Just as important as variable naming, the naming of class methods can have a significant impact on the usability of a code base. We have all experienced the wrath of poorly named methods and how difficult it can be to recall them when needed. Code completion is of course extremely helpful, but only if we know the first few characters of the method we are looking for. Attempt to use code completion for a method name that may or may not start with get, retrieve, download, or some other random verb, and you are likely to become frustrated that you are filling your brain with unimportant and inconsistent method names that could have easily been avoided with just the slightest focus on naming conventions.
As an example, the following few method names might be potential names for the same hypothetical method:
• - (NSDictionary *)sensorList;
• - (NSDictionary *)getSensorList;
• - (NSDictionary *)retrieveSensorList;

From this list we can easily see that a standardized naming convention would make recalling the correct method from memory much quicker, as well as making code completion features that much easier to use. None of the options in the previous list is incorrect; it is simply a preference. However, be consistent in your usage and it will be obvious to those who find themselves reading or maintaining your code. Knowing exactly how to name class methods is something that many developers take for granted. Mastery of syntax is not an automatic qualification for properly naming variables and methods. It is an under-addressed skill that all programmers should focus more time on. A common convention for the naming of methods is to use a verb followed by a noun, as in the following:
• - (NSDictionary *)loadSensor;
• - (void)saveSensor;
• - (void)createSensor:(NSDictionary *)newSensor;
Following the verbNoun convention for naming methods creates a greater level of consistency as well as predictability later on for you and your team. If you are not yet using a common convention for naming methods and variables, I encourage you to sit down and write out a clean and clear convention that you are capable of adhering to, or simply follow the basic conventions outlined in this chapter. The benefits of doing so will begin to appear almost immediately.
Camel case
Case matters; case plays a significant role in almost all current languages including Objective-C. Not only does case apply syntactically, it is a critical factor in the readability of source code.
Our eyes are trained to use case as signals to differentiate between start and end points. Outside of programming, we obviously use spaces to separate multiple words that relate to one another. But of course programming language limitations do not allow these more traditional separation techniques, which makes the importance of case that much more significant. Without spaces or case a sentence is nearly unreadable. The same is true with case and source code. Imagine the following case-crazy and exaggerated pseudo code:

foR (i = 0; I < cOuNT(aRRayofiteMS); i++) {
    DOSoMethINg();
    iF (i > 1000) bREaK;
    dOnOthing(ArRaYoFITems[I]);
}
Compared to the following case-correct pseudo code:

for (i = 0; i < count(arrayOfItems); i++) {
    doSomething();
    if (i > 1000) break;
    doNothing(arrayOfItems[i]);
}
Even comparing the following lower-case pseudo code to the previous case-correct pseudo code can have a significant impact on readability:

for (i = 0; i < count(arrayofitems); i++) {
    dosomething();
    if (i > 1000) break;
    donothing(arrayofitems[i]);
}
Obviously, the previous case-crazy pseudo code could never unintentionally exist, but what I am hoping to emphasize here is that a consistent use of case has a significant role in readability and maintainability. Imagine debugging code where every variable is written with different variations in case and the underscore is randomly used as a word separator. Honestly, we shouldn't have to imagine very hard as there is a legitimate reason for including such a seemingly silly concept in this chapter. What would appear to be over-exaggerated silliness contains a good portion of truth as I am sure we have all written or experienced poor case use in one project or another.
Leading variable naming practices suggest that method and variable names begin with lowercase letters and use uppercase letters when a new word begins within the compound name. This practice is widely known as camel case or camelCase and is used to make the reading and recall of variables and methods significantly easier. The naming of camel case comes from upper case letters in the middle of words, somewhat resembling the humps of a camel, like the following:

NSString *humpyBumpyCamel = @"my camel's name is hermann";
I don't know about you, but I always picture a camel when I look at internally capitalized compound words. One particular area of camel case usage where I see the most confusion and deviation is when dealing with acronyms. Acronyms are almost always in uppercase form in their natural environment, and this appears to cause a bit of confusion when two or more of them find themselves next to one another within a camel case name. To demonstrate, the acronyms XML, DB, and FTP will be used in a variety of names. First, the inconsistent naming examples that I continue to see each and every day are as follows:
• writeXmlToFtp
• dBConnect
• updateFtpDb

For clarity and simplicity I prefer to leave acronyms as nature intended them and simply include them in my camel case convention. Now, my preferred and acronym-friendly naming convention examples are as follows:
• writeXMLToFTP
• DBConnect
• updateFTPDB
Whether you agree with my preference or have a custom case convention of your own, the primary point I am trying to make here is that consistency is the key. No matter how you choose to use case, ensure that you and your team are in agreement. Nothing is more frustrating than working on code where half of the methods use underscores and start with uppercase letters while the other half attempts to use some sort of consistent naming scheme.
Case convention may not necessarily have anything to do with the actual performance of an iOS application; however, it will have a significant impact on the productivity of the developer, as it becomes easier to decide how method and variable names are to be written, not to mention that it decreases the time spent recalling names while developing. Another relevant note is that when the first letter of a word is capitalized, the style is known as Pascal case.
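Under the Cocoa conventions most projects follow, class names use Pascal case while method and variable names use camel case; the class and method below are hypothetical and shown only to illustrate the difference:

@interface NetworkManager : NSObject
- (void)connectToHost:(NSString *)hostName;
@end

The Pascal-case class name NetworkManager is immediately distinguishable from the camel-case method connectToHost: and its hostName parameter.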
Syntax efficiency
Earlier in this chapter, we covered the importance of choosing simplicity over syntactic complexity for the sake of maintainability and easy-to-read code. However, as with any good principle it must be used with moderation. The over-simplification of code will certainly have a negative and significant impact on application performance, and similarly, over-simplified code is more than likely to have an equally negative impact on developers who find themselves reading it. Choosing an alternative to overly complex code doesn't mean writing code that insults the intelligence of project maintainers. Our ultimate goal in maintainability is to find a balance between both of these extremes. The average developer will have a moderate threshold up to which they can easily comprehend advanced coding techniques, and we should strive to develop as closely to that threshold as possible while at the same time not giving up performance or syntax efficiency. Syntax efficiency is also closely related to how syntax is interpreted and compiled. Poor choices in syntax or code placement can have a significant impact on overall performance. Let's look at a few examples of syntax efficiency and how they relate to maintainability and application performance. The @"string" directive defines a compile-time constant NSString object. Because this object is a compile-time constant and managed by the compiler, it does not need to be manually released. An example of the @"string" directive is shown as follows:

NSString *myString = @"hello world";
The previous myString object was not allocated in the traditional sense using alloc. Because of this, the developer is not responsible for releasing myString and should not attempt to do so.

NSString *myString = [[NSString alloc] initWithString:@"hello world"];
The previous string object has been manually allocated and will need to be manually released. However, simply taking advantage of NSObject's autorelease method is a quick and convenient solution should you need to allocate an object without the worry of remembering to release it later on, like the following:

NSString *myString = [[[NSString alloc] initWithString:@"hello world"] autorelease];
A general rule to remember is that unless the method call includes new, alloc, retain, or copy, there is no need to release the object, and in fact doing so may cause runtime errors.
Readability versus compactness
Any developer with experience in Perl knows the slogan "There's more than one way to do it" and the same holds very much true for almost any language, including Objective-C. There are near limitless options for achieving the same result programmatically; however, this does not mean that every solution is equally good or bad. I've written better, seen better, written worse, and of course seen worse coding. The artistic side of programming allows a developer to express themselves as eloquently as in their native spoken language, while at the same time figuratively allowing them to program their foot into their mouth. Compact code is code written in as few lines as possible with little fluff and padding. Compact code is quite literally the complete opposite of easily readable code, as it is designed solely for compaction. Do not make the mistake of thinking that compact code is more efficient. Ultimately, code efficiency is determined by the compiler and how closely the language is used as it was intended to be. Readable code is code written for the human behind the curtain. As the name implies, it is written to make Saturday afternoon debugging sessions more tolerable by being easy to read and understand. Try not to confuse readable code with lazy code. Lazy code may in fact be just as easy to read, but there is no excuse for not having a firm grasp of syntax and using a language as it was meant to be used. With that said, compact code in my opinion is simply beautiful. I would like to think that I am not the only developer who gets a little giddy over extremely compact and efficient code. Nested function and method calls, as well as classes and methods done in as little code as possible, put a tiny grin on my face.
However, reality as well as history demonstrates that extremely compact code may be nothing more than a thinly veiled nightmare in most circumstances. Debugging, re-factoring, and even simply reading deeply nested and compacted code can have you pulling your hair out. Let's take a look at a working example of compacted code:

[[[self navigationItem] rightBarButtonItem] setTitle:[[[NSString alloc] initWithFormat:@"%d", [[NSNumber numberWithFloat:[[liveRepeatingTimer fireDate] timeIntervalSinceNow]] intValue] + 1] autorelease]];
Although it's quite beautiful, this single line of code is trying just a little too hard. Not only is it difficult to quickly read and comprehend at a glance, it is entirely unmaintainable. Knowing that my brain would not be able to process that line of code as quickly in as little as a few days, it was quickly re-factored into a semi-long-hand form to make reading, debugging, and commenting a little simpler. Now we'll take a look at a more readable working example of the previous compacted code:

int secondsRemaining = [[NSNumber numberWithFloat:[[liveRepeatingTimer fireDate] timeIntervalSinceNow]] intValue] + 1;
NSString *newTitle = [[[NSString alloc] initWithFormat:@"%d", secondsRemaining] autorelease];
[[[self navigationItem] rightBarButtonItem] setTitle:newTitle];
Even in semi-long hand it is much easier to see what is happening in the previous code. We are calculating the remaining number of seconds from the liveRepeatingTimer and using this value as the title for the rightBarButtonItem in our navigationItem. Good commenting or documentation can augment compact code in many situations, but ultimately readable code is much more easily maintainable and gives the developer more control over code performance. Readable code leaves more opportunity for precise commenting, which helps everyone who has to look over the code. I myself have discovered hundreds of bugs over the years while re-factoring compact code and commenting as I long-hand. Additionally, we must realize that eventually we may not be the only developer working on this code. We can imagine that these future developers will be grateful for our restraint and use of common sense when it comes to the delicate balance of compact versus readable code.
Of course, as we all know, there are exceptions to every rule. Compact code does have its place and I find it extremely useful in methods which have been debugged and are very rarely revisited. Compact and deeply nested code, as I mentioned earlier, is something that I find great joy in, but I do use it within the limitations that we've covered here. An additional benefit to long-handing code is the option of being able to comment out specific lines and alter them while debugging. With deeply nested code we lose this valuable convenience. Take those few extra moments to decide when readable code may be necessary or required. An important tip when long-handing Objective-C code is to always be sure to release objects when you are finished with them, or remember to use the native NSObject autorelease method to prevent memory leaks and performance degradation. In my experience with both writing and reading Objective-C code written by others, simplifying code for easy readability has a tendency to introduce unforeseen bugs. Be careful.
Dot syntax
The dot syntax is an alternative to the familiar Objective-C square bracket notation for invoking accessor methods. In Apple's own words, the dot syntax is simply "syntactic sugar" as these calls are actually transformed by the compiler to invoke proper accessor methods and not actually provide direct access to the underlying instance variables, in turn preserving encapsulation. An example of dot syntax is as follows:

myObject.name = @"John Doe";
NSLog(@"Name: %@", myObject.name);
While the same would normally be accomplished with square bracket notation as follows:

[myObject setName:@"John Doe"];
NSLog(@"Name: %@", [myObject name]);
Square bracket notation is the standard Objective-C method invocation syntax that we are all very much familiar with. An advantage of the dot syntax over square bracket notation is that the compiler can raise an error if a write attempt is made on a read-only property, while the same is not necessarily true when using square bracket notation to update a non-existent property.
Regarding performance, because both the dot syntax and square bracket notation generate equivalent code, both solutions compile and perform equally. There is no performance loss or gain attributed to either method under these circumstances. Although dot syntax is quite useful and extremely helpful for those with experience in other object oriented languages, it should be used only on accessor methods and in a consistent manner. If you prefer to use dot syntax over the standard method invocation syntax, be consistent so that your code remains predictable and easy to follow. Jumping randomly between syntax styles is a sign of sloppy coding practices and will ultimately lead to further problems.
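It is worth remembering that dot syntax resolves to declared accessor methods, so the earlier name example assumes a property along the lines of the following sketch; the class is hypothetical and shown in manual retain/release style:

@interface Person : NSObject {
    NSString *name;
}
@property (nonatomic, copy) NSString *name;
@end

@implementation Person
@synthesize name;
@end

With the property synthesized, myObject.name and [myObject name] compile down to the same accessor call.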
Re-factoring
A sometimes-overlooked portion of application maintainability is the all too familiar and sometimes unenjoyable process of re-factoring code. Although most developers have a highly negative view of code re-factoring, it is an essential component in the overall stability of an application. With an introduction to re-factoring like that, you might find it surprising that I quite enjoy the process. Throughout the lifetime of a project I dedicate a large portion of time to reviewing and updating existing code to squeeze as much performance and organization from it as possible. Over a year's time, I find that significant portions of my time are spent upgrading applications to the latest SDK revisions as well as cleaning up code and putting to good use the knowledge that I continually seek. In fact, the single largest factor behind tuning up an application is the re-factoring of an application's source code. It's highly probable that if you are indeed tuning up an existing application, you intend for the internal operations to be optimized while the resulting application's functionality remains the same. Generally, re-factoring code is the process of updating the internal workings of source code without affecting the overall functionality or expected behavior of the code itself. Simply put, we're looking for better source code without changing the way the application works. There are many reasons to step in and re-factor source code, and for most developers these may be the most common:
• Performance
• Maintainability
• Core language or SDK updates
• Increases in knowledge
• Code smell
Performance and maintainability related re-factoring will be a common thread throughout this book as we focus on maximizing performance and tuning up iOS applications. In general, re-factoring for performance and maintenance is the simple process of looking at our existing source code and identifying cleaner and more efficient ways of accomplishing the same task. SDK updates are of course a common reason to re-factor code, whether due to deprecation of functionality or feature additions. Once a new language version has been properly vetted, I urge you to begin re-factoring and making compatibility changes at your earliest opportunity. There is nothing more frustrating than to write absolutely beautiful code only to find out that it needs to be tossed in the bit bucket because of a new SDK revision. Lack of knowledge and programming experience may be the single biggest cause for the need to re-factor code. Gains in knowledge, updated concepts, and new frameworks create the need for a developer to re-factor and take a look at each portion of an application's source code to identify better technical methods of achieving the same functional result. If you've ever thought to yourself that the code you were working on didn't smell right, then you have experienced code smell first hand. This is a sign or symptom that a deeper problem exists but you just haven't quite identified it yet. Whether or not a developer is familiar with the coined term code smell, it is still more than likely the primary reason they are ultimately re-factoring in the first place. The following is a short list of common reasons that something doesn't smell quite right:
• Complex code: Unnecessarily complex code that needs to be simplified
• Duplicate code: The duplication of functionality throughout source code
• Lazy code: Poor syntax, unnecessary or shortcut code
• Lengthy code: Methods and classes that are absurdly lengthy
• Tight coupling: Methods and classes with too much internal knowledge of one another
At the risk of sounding desperate, I want to stress the importance of code re-factoring one last time. As I mentioned in the introduction of this chapter, the open source community has adopted a strong focus on maintainability. Maintainability is not possible without re-factoring. Code is very rarely written perfectly the first time around, and a good project is continuously being re-factored at some level or another.
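As a small, hedged illustration of a performance-motivated re-factor, consider hoisting an invariant message send out of a loop condition; the collection and the processItem: method below are hypothetical:

// Before: -count is sent on every pass through the loop
for (NSUInteger i = 0; i < [items count]; i++) {
    [self processItem:[items objectAtIndex:i]];
}

// After: the invariant count is read once, outside the loop
NSUInteger itemCount = [items count];
for (NSUInteger i = 0; i < itemCount; i++) {
    [self processItem:[items objectAtIndex:i]];
}

The behavior is unchanged, which is the defining characteristic of a re-factor, but the loop no longer pays for a redundant message send on every iteration.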
Library bloat
Sometimes referred to as library abuse, library bloat is the practice of mashing applications together using a large number of libraries that may or may not duplicate efforts. Projects which are library-centric have a higher probability of bugs and maintenance issues, as a developer is relying upon the code of others for what are, more often than not, core features and functionality. One particular problem that I see quite often with library-heavy mash-ups is the problem a developer encounters when an update of one library is necessary because of a bug, security issue, or other concern. This update is rarely ever simple and almost always affects other libraries within the application, ultimately leading down a path of library mayhem. It's not unheard of to have dozens of libraries all working cohesively and playing well together; however, be aware of library bloat and avoid it when you can. Additionally, another negative impact of library bloat is the introduction of large numbers of classes and methods that for the most part go unused, but are still compiled into your application. Including a library for a single class or method is in many cases a waste of resources. Consider alternative libraries or even writing the code yourself. Researching and writing the code yourself has the added benefit of exposing you to a greater level of knowledge and understanding. Over the years I've had many debates with developers over the use of off-the-shelf libraries versus learning and writing the code. There is always the chance that as you re-invent the wheel, so to speak, you may introduce issues that have been solved many times over; however, with experience and by taking the time to understand what it is you are writing, the bugs are nothing a seasoned developer isn't used to avoiding, and in the end your code belongs to you and is much more likely to stand the tests of time and integration.
As with most things, libraries serve a purpose, and building applications by squeezing libraries together is not actually developing; it's, well, squeezing applications together using libraries. Take the time to understand which libraries are necessary, how they are being used, and what they are actually capable of.
LIPO
Lipo is a command-line utility on Mac OS X that creates and manipulates universal binaries. Also referred to as a fat binary, a universal binary is a combination of multiple computing architecture instruction sets in a single program or application binary. Please note that the lipo utility will only ever produce an output file and never alter its input files.
The lipo command-line utility's basic functions are as follows:
• List the architecture types within a universal file
• Create a new universal file comprised of multiple architectures
• Thin a universal file down to a specific architecture
• Replace and / or remove architecture types from a universal file
The primary benefit of using universal binaries for iOS development is to have single library files within Xcode, which include compiled code for multiple architectures. The capability of a single binary to work across multiple platforms allows a developer to focus on more application specific issues rather than continuously juggling libraries within Xcode. The only real negative associated with universal binaries is the size of the final binary. Because it can contain multiple architectures, the size of the binary is a multiple of the architectures it contains. A universal binary containing two architectures would be twice the size of the original library, while three architectures would result in a binary triple the size, and so on. Xcode's iOS simulator requires compiled libraries to be available for x86, while iOS devices require libraries to be compiled for ARM. A universal binary created using the lipo command is a compound file that contains all configured architectures in a single binary. This fat binary will only need to be included once in any iOS project, allowing both device deployment and simulator testing without any additional configuration changes.
Additionally, because lipo-built libraries are pre-compiled, using them can save several seconds to as much as a few minutes of compile time within Xcode for larger projects or slower hardware. The basic steps to create a universal library using the lipo command are as follows, with a concrete example after the list:
• Configure and compile your library for x86
• Configure and compile your library for ARM
• Use the lipo utility to combine the compiled library files into a single universal file
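Assuming the two compiled static libraries are named libExample-i386.a and libExample-armv7.a (both names are hypothetical), the combination step might look like the following in Terminal:

lipo -create libExample-i386.a libExample-armv7.a -output libExample-universal.a
lipo -info libExample-universal.a

The -info invocation simply confirms which architectures ended up in the combined file.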
Once the universal library has been compiled, simply include the library in any iOS project and compile as you normally would. For more information and specifics on the capabilities of the lipo utility, simply open the terminal utility and type 'man lipo'.
Comments
Source code commenting, although simple to understand in principle and purpose, is more than likely a developer's least exercised skill. Comments throughout source code can have great purpose. They are there to provide a quick overview of the intent or purpose of a particular line or block of code. Comments help a developer or working team of developers to quickly understand the functionality of unfamiliar code without the need to read and analyze each line individually. Some might say that the time spent commenting code could surely be allocated to better use, like actually writing code. These same developers might say that commenting code is redundant, that it serves little purpose because the code itself shows exactly what is happening and what the purpose of the code is. In part, this might be true, as some code is simple enough not to need or require commenting. However, a good portion of any project contains many lines of source code which may appear cryptic if a developer is not intimately familiar or up to speed with the project. A developer who returns to source code days, weeks, or months later can easily find themselves scratching their head in confusion as to why they happened to do things in a particular manner. Team development makes commenting even more critical, where it can be downright frustrating to read another developer's code that lacks comments that would otherwise provide insight, saving everyone's time.
Poor commenting or a complete lack of commenting has the possibility of reflecting negatively upon a developer. Commenting is a necessary development skill that needs sharpening and exercising like all others. Uncommented code can be considered messy or incomplete and in many project circumstances, developers are not authorized to commit uncommented source code for this very reason. An afternoon of productivity can quickly cease when you find yourself reading through a thousand lines of source code trying to figure out exactly what it was you or someone else was trying to accomplish. Within most languages there are two common types of accepted commenting methods, which are as follows:
• Line comment
• Block comment
The line comment as its name suggests is used for simple, single line comments that span a single line of source code. The line comment begins with two forward slashes (//) and terminates at the end of the same line:

// This is a line comment and can be used to quickly describe intent or purpose
alternativeInitialization();
The block comment is used to span multiple lines of source code and is traditionally used for more detailed commenting. The block comment begins with a single forward slash followed by an asterisk (/*) and ends with the forward slash and asterisk in the reverse order (*/):

/*
 This is a block comment and can be used to describe in great detail
 the intentions or purpose of a particular section or block of code.
*/
alternativeInitialization();
Traditional coding practices place comments above lines or blocks of code; however, there are no hard and fast rules, as comments can be placed just about anywhere. It is common to see comments placed on individual lines, at the end of lines, at the start of source files, as well as throughout and at the end. Both line and block comments, in addition to their intended purpose, are frequently used to temporarily disable a line or section of source code, as the following two examples show.
The following example shows a line comment used to disable a single line of code:

//alternativeInitialization();

The following example shows a block comment used to disable a block or section of code:

/*
alternativeInitialization();
anotherMethod();
*/
Not only is commenting important, but commenting effectively can be an art. Remember, the purpose of comments is to quickly depict the intentions or purpose of the commented code. Although this happens more often than we would like to admit, be careful not to comment the obvious for the sake of commenting. Comments should simplify the reading process, not duplicate it, as in the following comment and code snippet:

// Call initializeSensors
[self initializeSensors];
The previous code snippet is calling the initializeSensors method and its comment provides absolutely no insight into why we are calling initializeSensors, while the following comment would be much more useful:

// Prepare all sensors (local and remote) for incoming data
[self initializeSensors];
With adequate commenting, we easily understand the necessity of the previous method call and can of course step through to this method to read and understand precisely what steps are being performed within the called method. When commenting it is helpful to think of your comments as simple, high level hints that will keep the reader moving through your code without getting caught up in the intricacies of each method or operation. Of course, like any good principle there are definite exceptions when detailed comments are necessary. Knowing when verbosity is required and when a simple nudge in the right direction is needed is the real art. As mentioned earlier regarding naming conventions, comments should be written in English as much as possible to ensure we are adhering to the standards and conventions of those before us. An additional note is to remember comments when re-factoring code; take the time to ensure that all comments accurately reflect the changes that have been made. Even comments describing what was changed can be helpful.
Documentation
Generally, documentation comes in many forms, but the two particular types of documentation we want to look at more closely in this chapter are application architecture and design documentation and technical documentation. Documentation in general is any type of written text that describes how something operates as well as how it should be used. Although documentation is critical to the overall success of almost anything in the real world, it is one of the most overlooked portions of the development cycle. Before we go too much further, I will admit that documentation for me has always been a sore spot. I have rarely ever looked forward to hours and hours of writing what would seem plain and obvious technical documentation. Of course, the primary reason the subject appears obvious to me is that I wrote what is being documented in the first place. It is for this reason I understand the great importance of well documented applications and projects. Preferably, architecture and design documentation is something that should have been laid out well in advance at the start of a project. However, in reality this isn't always the case, and thus it is never too late to begin writing documentation. Architecture and design documentation is an extremely high level yet detailed description of an application's larger picture. This documentation rarely mentions actual code or specific syntax but does cover how an application will be put together. Architecture documentation provides an overview of application flow. It demonstrates how various components of an application should work together and the needs and requirements of these specific areas. Technical documentation is a much more low level method of describing the application at its core. One step above code commenting, technical documents describe how each portion of source code should operate. Technical documentation should be quite thorough, yet not so thorough that it becomes overwhelming and / or difficult to maintain. Technical documents, like source code itself, must be easily maintained. It isn't difficult to imagine how outdated technical documents can cause the introduction of software bugs or even mass data destruction or loss. As unfortunate as it may be, technical documentation is best written by the author of the source for which the documentation is needed. Simply put, if you write the code, you have the responsibility of writing the technical documentation to go along with it.
This dependence on the developer increases overall workload but on the whole adds a beneficial level of usability and maintainability. Unlike architecture and design documentation, which is to a great extent limited to manual creation, the capability to create technical documentation dynamically is available. Documentation generation tools are available for nearly every language and Objective-C / Cocoa is no exception. Documentation generation tools like Doxygen and autogsdoc accompany Apple's own provided documentation tool, headerdoc. Automatic documentation generation tools work by reading and processing commented source code files and generating a series of text blocks as well as fully independent documentation viewing systems. For Objective-C and Cocoa applications, Doxygen is the most widely used and accepted automated documentation generator. For this reason alone, we will only cover Doxygen. There are literally dozens of other documentation generation tools available for Objective-C and a quick Google search will help identify them. Doxygen is a GNU GPL licensed suite of documentation generation utilities that are capable of reading source code from multiple languages including Objective-C, C, C++, Java, PHP, Python, C#, and more. Doxygen is capable of creating an online HTML documentation browser as well as outputting in various formats including HTML, XML, RTF, Man, PDF, and more. Doxygen and more information may be found by quickly searching Google or browsing to the following URL: http://www.stack.nl/~dimitri/doxygen/.
Let's take a quick look at the proper way to comment a simple Objective-C class to function with automated utilities like Doxygen and others, shown as follows:

/**
 Basic Automobile class
 */
@interface Automobile : NSObject {
    /** The manufacturer name of the car */
    NSString *manufacturerName;

    /** The model name of the car */
    NSString *modelName;

    /** The vehicle identification number of the car */
    NSString *vehicleIdentificationNumber;
}
@end
As you would expect, the class name is clearly commented along with each of the class's instance variables. Comments are consistent and are written assuming that they will be published into documentation. Running Doxygen on the previous class and its encompassing project in Xcode provides a complete HTML documentation browser, which includes our Automobile class and comments.
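Methods can be documented in the same style; Doxygen recognizes tags such as @param and @return inside the comment block, as in the following hypothetical method declaration:

/**
 Starts the automobile's engine.
 @param useRemoteStart YES to start the engine remotely
 @return YES if the engine started successfully, NO otherwise
 */
- (BOOL)startEngine:(BOOL)useRemoteStart;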
As you can clearly see, Doxygen generates clean and beautiful technical documentation directly from project source code, making life just a little easier for all of us. Maintainable documentation and line comments need to be built into our development process. They should be thought of as being as critical as any other component of our application or project. Consistent documentation will ultimately result in more maintainable code, which will result in greater levels of performance as well as increased overall productivity and compatibility. If it isn't clearly obvious by now, I may not like the act of documentation writing, but I fully understand its benefits and necessity. Take my advice: if you haven't included documentation commenting in your development process, do so as quickly as time permits; it is never too late and the benefits begin almost immediately.
Summary
In this chapter, we focused on the most common and critical areas of application maintainability and how they relate to performance and tuning. We covered the basics of naming conventions for both variables and methods and how poor choices in naming can lead to messy and unproductive code. Remember to select a convention that works for you and your development team and be diligent in its use. Additionally in this chapter, we paid close attention to the importance of writing code that is easy to read and not overly complex for the sake of intellectual expression. It is equally important to focus on writing code that works while keeping maintainability high on the priority list. Re-factoring is a continual process and a good portion of any development time should be dedicated to the improvement of existing source code. Take the time to update, optimize, and comment as you progress through a project. It will surely pay off in the end. Maintainability is a key factor in the success of any application project and should not be viewed as optional to the development process. Maintainability is directly related to the overall health and life of a project and the areas on which we focused are no doubt beneficial. In the next chapter, we move into project and source code reliability and how exception handling and unit testing can and do improve application performance.
Reliability
In this chapter, we will cover the importance of application reliability and how understanding and implementing a few basic concepts can increase overall application performance. Specifically, we will touch upon the following:
• Exception handling
• Error checking
• Unit testing
• Preparing a project for both logic and application unit testing
By definition, reliability is the ability of a system to perform routine tasks under a variety of circumstances while maintaining a desired threshold of operation. It only makes sense that the more complex a system is, the greater likelihood that reliability issues may arise. Inversely, a system or application with only a single operation is less likely to fall victim to reliability related problems. The simple fact is, with each operation we introduce new possibilities for exceptions and failures within our code. Of course, this is more than likely not our intention; it is just the nature of complexity. Although application size does relate to reliability, overall reliability isn't defined by limiting the number of operations our applications perform. Neither is it gained by limiting the number of source code lines in an application. To describe reliability simply, it is the ability of an application to appropriately handle exceptions that might otherwise lead to poor user experience or failure. Reliability begins in the design and architecture phase of application development and continues throughout the entire development process.
The design process is incredibly important as it relates to reliability. Applications that are designed poorly don't benefit from exception handling or error checking nearly as well as applications that were designed with reliability in mind. Much like a carpenter likes to measure twice and cut once, taking the extra time to think through a solid design will pay off tenfold. Let's take a look at the following few factors that help to define reliability in an application:
• Design: The majority of an application's reliability can be attributed to the design and initial architecture of an application. When poor design is a factor, developers commonly find themselves resolving reliability issues in code rather than resolving them at the source. To revisit our automotive analogies, a vehicle that is designed with reliability as an attribute is less likely to find itself in the local repair shop having diagnostic tests run. Reliable products are not accidents, just as unreliable products aren't accidental either; they are the results of our efforts.
• Quality: Success is synonymous with quality. Almost any successful product, even outside of software development, will have an element of quality that surpasses averages. Quality should be a focal point, an objective, and ultimately a characteristic we seek for every aspect of our applications.
• Testing: It goes without saying that testing is a necessary and critical procedure when it comes to success, reliability, and quality. Without adequate testing, the success of a product is simply a gamble, and a poor one at that. Gambling the sum of design and development without testing is an amateur's mistake.
Exception handling
Exception handling is the term that describes the basic mechanisms for handling special conditions within an application. An exception is commonly defined as an event that interrupts the normal flow of program execution. Exception handling and error checking are commonly confused as being similar, when in fact exception handling, as mentioned earlier, is the concept of catching and properly dealing with behavior outside the realm of normal operation, while error checking is used well within the limits of normal application operation to direct natural logic.
As we touched upon within the introduction of this chapter, a significant portion of reliability is the ability of an application to adequately handle exceptions. Regardless of severity, a single exception is more than capable of bringing an application to its virtual knees. Nearly every application we use on a daily basis suffers from a lack of adequate exception handling. Case in point, just this week my son and I were enjoying a short multiplayer game on the iPad when several unwelcome crashes interrupted our game play. This no doubt is a classic example where exception handling could or should be used to catch and handle application exceptions more appropriately. Within Objective-C, the handling of exceptions is very much centered around four basic compiler directives. These directives are @try, @catch, @finally, and @throw. We wrap application logic in @try / @catch blocks when the potential for an exception is apparent. Needs for exception handling range from simply accessing an out of bounds array element to poor mathematical calculations that might otherwise be invisible to the compiler, such as division by zero, and so on. Most developers are not taught or instructed to rely upon exception handling and in many ways this isn't such a bad thing; however, adding exception handling to your resume will likely increase the quality of your future code, which is never a bad thing. An @try block wraps code that you might wish to test for exceptions, while an @catch block will be executed to handle a potential exception. The @finally directive is used post @try / @catch to execute code regardless of whether an exception is thrown or not. @throw is, as you might have guessed, used to throw application exceptions.
An example of a simple @try / @catch / @finally block is demonstrated as follows:

- (void) tryCatchMeIfYouCan
{
    Automobile *myAuto = [[Automobile alloc] init];

    @try
    {
        [myAuto start];
    }
    @catch (NSException *exception)
    {
        NSLog(@"Caught exception %@: %@", [exception name], [exception reason]);
    }
    @finally
    {
        [myAuto release];
    }
}
The tryCatchMeIfYouCan method begins by creating a myAuto object from the Automobile class. The @try block attempts to execute the start method on the myAuto object and, in the event an exception is caught, the @catch block will be executed and logged. Lastly, the @finally directive will be executed regardless of an exception, wherein myAuto will be released from memory. Proper exception handling is critical to the overall stability of any application and in almost any degree cannot be over-applied. Throwing an exception is just as important as catching one; after all, without @throw there can be no @catch. Sorry, I couldn't resist.

NSException *exception = [NSException exceptionWithName:@"StartFailed" reason:@"Fuel tank is empty" userInfo:nil];
@throw exception;
In the previous exception-throwing example, we create an NSException object and raise it using the @throw directive. An important and useful tip to remember is that you are free to re-throw an exception within a @catch block without additional arguments, as the following code demonstrates:

@catch (NSException *exception)
{
    NSLog(@"Caught exception %@: %@", [exception name], [exception reason]);
    @throw;
}
Within Objective-C, you are not limited to throwing NSException objects. You are free to throw any Objective-C object.
Error checking
Error checking differs from exception handling in that it is used to check for errors during the normal operation of an application. Even the basic premise of the simplest of if / then statements is based upon the handling of logic or errors that take place while an application is executing. An example of error checking would be a function that expects a string and returns 0 if something other than a string is received. An additional example would be to test the value that a method returns before attempting to utilize it or pass it down the logic chain. Error checking goes hand in hand with exception handling and, as you can imagine, is an essential component in coding reliable software. Think of error checking as an extra layer of conditional control within your application. As a developer, you get to create as many layers of error checking as you like, and with every level a greater amount of stability and predictability is gained. If you are familiar with validation and sanitization, you are aware of the benefits of error checking. If we get ultra simple here for a moment, an application is nothing more than complicated levels of if / then conditions. The more we internally validate data as it is passed within our application, the more reliable our application will be.
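A minimal sketch of this style of checking might look like the following, where a return value is validated before it is passed further down the logic chain; the method and variable names are assumptions made purely for the example:

NSString *userName = [self fetchUserName];

if (userName == nil || [userName length] == 0) {
    // The value is unusable; stop here rather than passing bad data along
    NSLog(@"fetchUserName returned no usable value");
    return;
}

[self displayWelcomeMessageForUser:userName];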
Unit testing
Unit testing, for those unfamiliar with the subject, is the concept of testing individual units of an application for functionality. A unit is most often described as a small and manageable section of code that, when tested, provides insight into the overall integrity of the application. Xcode provides two different options for unit testing by default, application and logic tests, which are explained as follows:
• Application unit tests: Application tests are primarily used to test an application's code while the application is executing. An application test can be used to verify the relationships between interface components or to test the functionality of outlets and actions. Application unit tests may also be used to test data model and object controller code, among other things.
• Logic unit tests: A logic test runs outside of your application and is designed to check the core functionality of source code operations. Logic tests are highly specific and designed to execute code at a much more granular level than application unit tests. Logic tests may be used to identify areas in your code that may be producing wide-ranging results under varying circumstances. You may use logic testing for stress testing as well as error checking purposes, in addition to countless other validation and reliability reasons. Within Xcode, logic tests are run during the target's build phase, unlike application tests, which are run on the actual iOS device.
Unit tests in general are designed to assist a developer in producing stronger and more reliable code, and a solid commitment to unit testing can easily become a developer's best friend. It is nearly impossible for a developer to anticipate every conceivable situation that may arise from a particular piece of executing code. Unit testing removes a considerable amount of guess work and allows a developer to remain confident that their changes will be fully tested against pre-determined unit testing metrics that ensure the integrity of the application is not compromised. Unit testing frees a developer to erase the inner workings of specific areas of an application's source code from their mind, with the knowledge that reliable testing procedures will alert them to potential problems. The freedom to quite literally forget about complicated inner code and focus on the current tasks at hand is an essential part of large application production. One might argue that fully tested and operational code that includes detailed comments would not necessarily benefit from unit testing. In many cases, this might be true. However, besides the obvious differences between comments and unit tests, comments require a developer to read and re-familiarize themselves with a particular section of code prior to modification or interaction. Unit testing frees a developer from quite honestly caring about the inner workings of specific classes and methods until a problem arises. When a project is comprised of hundreds of classes as well as hundreds of thousands of lines of code, this freedom is invaluable. Xcode's unit testing environment is based upon the SenTestingKit framework, an open source project that has been around for quite some time. Likely due to the popularity of the project, Apple has since assumed responsibility for the SenTestingKit project and integrated its functionality directly into Xcode. The benefit of using Xcode's internal unit testing mechanisms is that you write your tests just as you would write your application's source code. Unit tests live in parallel to your application's code and can be executed at will, while not necessarily being compiled into your application's release builds.
In my experience, the only real drawback of unit testing is that it has the potential to be a great consumer of time. In the early stages of implementing unit testing operations, you can find yourself spending significant amounts of time getting everything 'just right'. In addition, re-factoring good-sized portions of code can create the necessity to rebuild or alter unit tests, which of course adds additional overhead. It isn't uncommon for in-depth unit testing to require as much as 3 or 4 lines of testing code for each line of source code being tested. In any event, even with the overhead cost of time being a slight factor, unit testing benefits easily outweigh the drawbacks and make it an essential component of any development process. Without even minor unit testing, an application must rely upon manual testing procedures as well as bug reports and significant beta testing. Of course, bug reports, beta testing, and manual testing should be part of every application's development and release process; unit testing adds a robust layer of validation that should never be overlooked. Unit testing is only effective if the results of the testing are acted upon and the entire process is integrated into the development workflow. Take the time to alter your regular workflow and integrate a routine process of testing and mitigating identified issues. In fact, a common development process which heavily depends on unit testing is known as TDD, test-driven development. This approach begins with a developer creating unit tests prior to the functional code. When a developer is writing actual application logic, TDD allows the developer to have insight into what lower threshold will be acceptable. The developer knows what failures they must handle and prepare for, allowing them to focus more on improving application logic.
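As a small illustration of the TDD mindset, a logic test such as the following could be written before the method it exercises even exists; the Calculator class, its addNumber:toNumber: method, and the expected value are all assumptions made for this sketch:

- (void)testAddition
{
    Calculator *calculator = [[[Calculator alloc] init] autorelease];
    STAssertEquals([calculator addNumber:2 toNumber:2], 4,
                   @"addNumber:toNumber: should return the sum of its arguments");
}

Until the method is implemented to satisfy this assertion, the test fails, which is exactly the signal TDD relies upon.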
Preparing a project for logic unit testing
Implementing application and logic unit tests within Xcode is quite simple. To demonstrate the simplicity and power of the SenTestingKit framework, we will walk through the creation and usage of both application and logic unit test cases. The steps are given as follows. First, create a new unit test bundle target for our unit tests to reside in:
1. Select your project in the navigation panel and click the Add Target button.
2. Select the Cocoa Touch Unit Testing Bundle template from the iOS Other category and click Next:
3. Name the target LogicTests, as well as add an appropriate Company Identifier, and click Finish:
[ 82 ]
Chapter 4
4. Make the newly created LogicTests target the active target:
5. Within the newly created LogicTests group, create a new test case class for our logic unit tests.
6. Right click this newly created group and select New File.
7. Select the Objective-C test case class template from the iOS Cocoa Touch category and click Next:
8. Name the class as ExampleLogicTest.
9. Ensure that the LogicTests target is the only target selected and click Finish:
10. Edit ExampleLogicTest.h and disable the default USE_APPLICATION_UNIT_TEST variable, which determines the unit tests that will be executed in the class's implementation file.
11. Change USE_APPLICATION_UNIT_TEST to 0.
12. Successfully test the project.
13. Save changes.
14. Ensure the iOS Simulator target device is selected.
15. Select Test from the Build menu.
16. Edit the logic unit test class ExampleLogicTest to intentionally cause the build process to fail.
17. Alter the equation code within the testMath method of ExampleLogicTest.m to the following:

STAssertTrue((1+1)==3, @"Compiler isn't feeling well today :-(" );
18. The resulting ExampleLogicTest.m class should appear similar to the following:

//
//  ExampleLogicTest.m
//  iOS_Project
//
//  Created by Loyal Moses on 11/15/10.
//  Copyright 2010 __MyCompanyName__. All rights reserved.
//

#import "ExampleLogicTest.h"
#import "Helpers/BasicAutomobileClass.h"

@implementation ExampleLogicTest

#if USE_APPLICATION_UNIT_TEST // all code under test is in the iPhone Application

- (void) testAppDelegate
{
    id yourApplicationDelegate = [[UIApplication sharedApplication] delegate];
    STAssertNotNil(yourApplicationDelegate, @"UIApplication failed to find the AppDelegate");
}

#else // all code under test must be linked into the Unit Test bundle

- (void) testMath
{
    STAssertTrue((1+1)==3, @"Compiler isn't feeling well today :-(" );
}

#endif

@end
19. Verify that the product test process fails due to our changes in the testMath method.
20. Save changes.
21. Select Test from the Build menu.
22. Review test results within source code or the log navigator.
Preparing a project for application unit testing
The steps for preparing a project for application unit testing are as follows:
1. Make a duplicate copy of your application's primary build target.
2. Right click the primary project build target and select Duplicate:
3. Rename the newly created duplicate target, appending a _Testing suffix. In our case and for our purposes, it has been renamed to iOS_Project_Testing.
4. Rename the automatically created scheme to match the newly created build target by clicking Manage Schemes from the Schemes drop down menu:
5. Create a new application unit test target.
6. Right click the project's Targets group and select the Add Target button.
7. Select the Unit Test Bundle template within the iOS Cocoa Touch category and click Next.
8. Name the new test bundle target as ApplicationTests and click Finish:
9. Create a dependency between the newly created ApplicationTests target and the _Testing target.
10. Drag the ApplicationTests target into the _Testing target's Target Dependencies, located under its Build Phases tab:
11. Embed the ApplicationTests bundle into the _Testing bundle.
12. Drag and drop the ApplicationTests.octest file into the _Testing bundle's Copy Bundle Resources group:
13. Create a new test case class for our application unit tests.
14. Right click the automatically created ApplicationTests group and select New File.
15. Select the Objective-C test case class template from the iOS Cocoa Touch Class category and click Next.
16. Name the class as ExampleApplicationTest.
17. Ensure that the ApplicationTests target is the only target selected and click Save:
18. Make the newly created _Testing target the active target:
19. Build and run the project while monitoring the Xcode console.
20. The project should have built and executed successfully, along with displaying console results indicating that all application unit tests passed, such as the following:

Test Suite 'All tests' finished at 2011-05-10 06:59:27 +0000.
Executed 2 tests, with 0 failures (0 unexpected) in 0.001 (0.036) seconds
21. Edit the testAppDelegate method within ExampleApplicationTest.m to intentionally cause the application unit tests to fail.
22. Add the following line to the end of the testAppDelegate method within ExampleApplicationTest.m:

STFail(@"Fail!");
23. The resulting ExampleApplicationTest.m class should appear similar to the following:

//
//  ExampleApplicationTest.m
//  iOS_Project
//
//  Created by Loyal Moses on 11/15/10.
//  Copyright 2010 __MyCompanyName__. All rights reserved.
//

#import "ExampleApplicationTest.h"
#import "Helpers/BasicAutomobileClass.h"

@implementation ExampleApplicationTest

#if USE_APPLICATION_UNIT_TEST // all code under test is in the iPhone Application

- (void) testAppDelegate
{
    id yourApplicationDelegate = [[UIApplication sharedApplication] delegate];
    STAssertNotNil(yourApplicationDelegate, @"UIApplication failed to find the AppDelegate");
    STFail(@"Fail!");
}

#else // all code under test must be linked into the Unit Test bundle

- (void) testMath
{
    STAssertTrue((1+1)==2, @"Compiler isn't feeling well today :-(" );
}

#endif

@end
24. Save changes.
25. Build and run the project while monitoring the Xcode console.
• The project should have built successfully; however, the Xcode console should indicate failure during execution and show our failure message:

Test Suite 'ExampleApplicationTest' started at 2011-05-10 07:06:29 +0000
Test Case '-[ExampleApplicationTest testAppDelegate]' started.
/Users/loyalmoses/Dropbox/code/Xcode Projects/iOS_Project/ApplicationTests/ExampleApplicationTest.m:21: error: [ExampleApplicationTest testAppDelegate] : Fail!
Test Case '-[ExampleApplicationTest testAppDelegate]' failed (0.001 seconds).
Test Suite 'ExampleApplicationTest' finished at 2011-05-10 07:06:29 +0000.
Executed 1 test, with 1 failure (0 unexpected) in 0.001 (0.004) seconds
Summary
From this chapter, we understand the impact that designing and coding for reliability can have on an application. Reliability is not accidental and has a great effect on the overall stability and net success of an application. The ability of an application to adequately handle exceptions on the fly should be a requirement that we focus on adhering to. Doing so will increase our application's levels of stability and ultimate performance. We introduced three factors that ultimately improve the reliability of applications:
• Design
• Quality
• Testing
We covered both logic and application unit testing, how they improve the reliability of an application, as well as preparing and integrating these tests into our Xcode projects. Additionally, we detailed the differences between exception handling and error checking and how they fit into the overall application design and development process. The following is a list of the topics we covered in this chapter that relate to reliability and application performance:
• Exception handling
• Error checking
• SenTest unit testing framework
• Logic unit testing
• Application unit testing
• Project preparation for both logic and application unit testing
Performance Measurement and Benchmarking
In this chapter, we will take a look at the following two tools that can have a significant impact on development efficiency and application stability:
• Static analyzer
• Instruments
How many times have we found ourselves logging out timestamps to determine the amount of time a particular process or thread is taking to complete? More than likely if you are like me, the answer to that question is 'a lot'. As inefficient as we all know this practice is, we continue to use it because it is simple to employ in a variety of situations. Regardless, its metrics are limited, but easy to understand and perhaps this combination of positive and negative factors is one of the primary reasons it is still commonly used. Overall, manual testing techniques fall victim to a lack of consistency and the key to accurate measurement is to have a controlled baseline or known standard of performance. Any seasoned developer knows the hair pulling that takes place when trying to locate bugs or performance issues in unpredictable source code. The simple truth here is that you cannot really achieve the level of accuracy that you may require until you employ the tools designed for the job. Manual techniques are the offspring of troubleshooting skills. And many of us programmers take the ability to troubleshoot complicated programming problems for granted. Not every programmer that we interact with will measure up to our relative troubleshooting skill level. Opting for automated or systematic measuring processes over simpler manual techniques will ensure that regardless of who is sitting behind the keys, the benchmarks will remain relatively the same.
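For reference, the manual timestamp technique mentioned above usually amounts to something like the following, with a hypothetical method standing in for whatever work is under suspicion:

NSDate *start = [NSDate date];

[self performExpensiveOperation];   // hypothetical method being measured

NSLog(@"performExpensiveOperation took %f seconds",
      [[NSDate date] timeIntervalSinceDate:start]);

It works, but every measurement like this has to be added, interpreted, and removed by hand, which is exactly the inconsistency the rest of this chapter tries to move away from.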
For situations that require granular, more detailed inspection and measurement, we need to turn to the tools and techniques which are designed specifically for this purpose. Automated tools bring us this greater level of consistency and allow us to more quickly locate areas of concern that might otherwise cause confusion or even go unnoticed, or, even worse, be identified in a release or production version by an end-user. Additionally, automated tools can bring decades of experience to our fingertips and allow us to not only find problems with our code, but also educate us along the way and save us sleepless nights. Calculating performance requirements as well as limitations can be a significant time expense, especially without automated tools. We need to take advantage of every opportunity to shrink the number of requirements that are placed upon our development process. Imagine trying to keep an up-to-date list of manual performance measuring tests that must be performed after each build or prior to each repository commit. A daunting task to say the least. The fact is, we all know that the list will never truly be up-to-date at all times, nor will the tasks on it be performed as regularly as we may wish them to be. Automation is the answer to this problem. Not only does it provide a higher level of accuracy, but it also conforms more easily to our theory of impacting our workflow the least. Another common performance and debugging tactic is the use, and quite frankly abuse, of code commenting. It's a pretty safe bet that I'm not the only developer who has found themselves commenting out large swaths of code in an attempt to identify the portion of an application that is performing poorly or even simply causing headaches. In a pinch or with a time constraint, this technique can be used to validate a theory or to confirm a particular deficiency; however, it is far from capable of accurately measuring wide-ranging inefficiencies in code. Measured on the scale of efficiency, this method of debugging may have significant benefits on simple classes or small applications, but as an application or code base grows in size and complexity, these benefits can wear thin quickly. Our experience tells us that if we remove enough code in the correct order, we can identify the areas in our application that need attention. As true as this may be, there are circumstances in which we can use automated tools and technology to gain more granular insight without chopping our code into little bits. I am definitely not implying that this practice is entirely bad or not applicable. It's a time-honored technique and is the natural response to a situation in which deduction and troubleshooting processes are required. However, as with most tools, there are limitations to its use and scenarios in which it may still work, but another tool might simply do a better job. Having a number of options for measurement and testing at our disposal only improves our workflow and results.
Static analyzer
As we touched upon in an earlier chapter, the clang static analyzer is responsible for performing Xcode's static analysis functionality. It was warmly welcomed into Xcode 3.2 on Mac OS X 10.6 (Snow Leopard) and more tightly integrated into the recently released Xcode 4. It is described by Apple as revolutionary, and in fact it is when compared to competing analysis engines available for other languages. Static analysis is the process of identifying logic bugs and memory leaks during the build process, prior to application execution. This process has the added benefit of looking for programmatic errors or oversights in our source code and providing line-by-line descriptions of the problems discovered. This deeper level of analysis means we can spend more time coding and less time worrying about simple mistakes, all the while learning as we develop by taking the guidance provided. The analyzer is capable of identifying a wide range of issues before the code is ever run and is a definite value-added option for those seeking a higher level of project performance. To demonstrate its usefulness, the following source code intentionally includes two distinct logic issues, for which the analyzer provides a description and a visual path of logic execution:
• myValue is not initialized with a value.
• An invalid return value is possible if argument is a negative number.

- (int)myPublicMethod:(int)argument
{
    int myValue;

    if (argument > 0)
    {
        myValue = 1;
    }
    else if (argument == 0)
    {
        myValue = 0;
    }

    return myValue;
}
When run, Xcode's static analyzer identifies both of these common problems and displays a visual representation of application logic flow to help us understand the adverse effects this might have on our application. Following the arrows and logic flow displayed by the analyzer can help us more deeply understand why values may be unreachable or why objects may be consuming and leaking large amounts of memory. Without deep inspection, these issues are more than likely going to be discovered during runtime and possibly well after the application has been released for production use. This is shown in the following screenshot:
Manually running the static analyzer is as simple as selecting Build and Analyze from Xcode's Build menu or using the hot-key combination Shift-CMD-B. Using either of these options builds the project and activates the static analyzer. My personal preference is to routinely build and analyze simultaneously, so I can continually keep an eye on the issues it detects throughout the progress of the project. Additionally, it helps to make me a better developer by pointing out my common coding mistakes. I can use these warnings as education and alter my development habits to prevent them entirely. If a tool or process is difficult to use, chances are it will be found gathering dust and its potential value never realized. For me, automation is the key to success. If I can't automate a task, I'll eventually choose not to use it.
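Returning to the earlier example, one possible correction (of several) simply gives myValue a default value so that every path, including a negative argument, returns something defined:

- (int)myPublicMethod:(int)argument
{
    // Initializing myValue removes both analyzer warnings:
    // it always has a value, and a negative argument now returns 0
    int myValue = 0;

    if (argument > 0)
    {
        myValue = 1;
    }

    return myValue;
}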
Including static analysis directly in the build routine throughout my development process allows me to focus on producing quality code with the comfort and confidence that an additional layer of bug finding and quality control is overseeing my work. Even though I have mentioned it several times in a half-dozen ways, our ultimate goal is to finely tune our applications to perform at their highest levels of efficiency. Automation is a piece of this puzzle. In Chapter 1, we introduced the idea that there are two primary thought processes on resolving identified problems in code: resolving now and resolving later. Choosing one of these strategies will help you decide how you integrate static analysis into your development process.
Instruments
Apple's Instruments is another excellent tool provided with Xcode and used to profile applications for bug finding and performance tuning. Instruments is an application performance analyzer and visualizer with modules to monitor, record, and analyze data for nearly every aspect of an application. Collected data is recorded for playback and analysis and displayed graphically over a timeline, allowing a developer to quickly spot areas of concern. Instruments records and tracks a significant amount of data, as shown in the following list:
• CPU activity
• Threads and processes
• Memory
• Garbage collection
• File level access
• Keyboard activity
• Mouse movement and clicks
• Network activity
• Graphics analysis
Instruments is based upon the DTrace tracing framework, which has been around since late 2003 and was initially included in Mac OS X 10.5 (Leopard).
Running an Instruments script on a project or application is as easy as selecting the Product menu within Xcode and choosing the Profile sub-menu. Within the following profiling selection window, we see a wide-ranging selection of available options that Instruments can use to analyze our application:
Instruments can be run on Mac OS X using the Simulator or attached to the application running on an iOS device. The available options under the Profile menu option will vary depending upon whether the target selected is the Simulator or actual iOS Device. Each performance profiling script produces a unique set of results; however, for this chapter we will take a closer look at the Leaks script by selecting the Memory category, followed by the Leaks option, and then clicking Profile. The Leaks script is designed to attach and profile the memory of a running application, giving deep insight into the overall usage and abuse of memory.
As the name Leaks implies, it is specifically designed to identify memory leaks within an application. More specifically, it looks for common situations like objects that are initialized and never properly released. Memory leaks are of course one of the more common problems found in iOS applications, a consequence of the manual retain / release model and the absence of garbage collection on iOS. Of course, as mentioned in earlier chapters, ARC ("Automatic Reference Counting"), introduced with the newer LLVM compiler, dramatically simplifies the memory management process by allowing the compiler to insert the release calls so objects are given up as soon as they are no longer needed, making for much smoother, automated memory management. Essentially, retaining and releasing objects is no longer a concern unless you prefer to manage memory yourself. Instruments aids in discovering memory leaks by providing a real-time graph of memory consumption, including a list of objects that Instruments believes to be responsible for memory leaks. To demonstrate the ability of Instruments to identify a memory leak within our test project, we will intentionally create an object within the didFinishLaunching method of our application and not release it, shown as follows:

NSDate *date = [[NSDate alloc] initWithTimeIntervalSinceNow:86400];
NSLog(@"24 hours from now: %@", date);
With the previous code included in our didFinishLaunching method, we can now save, build, and run our application, looking for leaks using the Profile option of Xcode. This is a very common coding oversight and any project with more than a few thousand lines of code can expect to have as many as a few dozen or even more of these issues.
When launched, Instruments attaches to our running application and immediately begins recording and analyzing. As the following screenshot shows, the Instruments memory leak module detects the unreleased NSDate object we created as an actual memory leak:
Once we stop recording, we can begin to review and analyze the results of the memory leak script. Instruments displays the offending object, memory address, and size of the detected memory leak. Clicking the arrow next to the memory address provides us with leak history and specific details of each memory leak, as shown in the following screenshot:
The details available in the History display include the Timestamp of each memory leak and the RefCt (reference count), and most importantly the Responsible Caller, which when double-clicked opens the actual line of source code that Instruments believes to be responsible for the leak, shown as follows:
On the right side of the source code display within Instruments, we can see a percentage indicating how confident Instruments is that this is the offending memory leak. In this particular case, Instruments is 100% positive that the detected memory is in fact the NSDate object we intentionally failed to release. Depending upon the complexity of the source code, the accuracy of Instruments in locating the offending lines of code may vary slightly; however, in my experience it has been near perfect. Regardless of accuracy, the value it adds to the development process is quite significant and it should be a regular go-to utility to enhance the stability and performance of any application or project. As we mentioned previously, this is simply a single aspect of the capabilities of the Instruments utility. Each profiling script will produce very different results; however, the interface and interaction with the data remains the same.
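Once the Responsible Caller has pointed us at the offending line, the remedy in this intentionally simple case is a single line; balancing the alloc with a release satisfies the Leaks instrument:

NSDate *date = [[NSDate alloc] initWithTimeIntervalSinceNow:86400];
NSLog(@"24 hours from now: %@", date);

// Release the object once it has served its purpose
[date release];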
Summary
Accurately measuring performance in most cases requires a greater level of control than manual techniques can provide. Automated tools provide a significant advantage over manual techniques in a variety of ways. Consistency is a primary factor when we profile an application, and automated tools can compensate for variables that may be outside of our immediate control while continuing to provide meaningful results. In this chapter, we took a closer look at two of Xcode's performance and analysis tools:
• Static analyzer
• Instruments
Both of these tools have been designed and integrated directly into the Xcode development workflow to help a developer locate potential performance bottlenecks. We demonstrated how these tools could be used to identify and resolve common coding faults like logic errors and memory leaks. We discussed the positive and negative effects that manual analysis and automated testing can have on the quality of an application as well as the impact on a project and its timeline. Integrating these tools into our development process ultimately produces a higher quality product and teaches us the necessary skills to avoid making these mistakes in the first place.
Syntax and Process Performance

It may go without saying that language syntax has the greatest impact on an application's overall performance level. There are as many ways to accomplish the same computational task as there are stars in our universe; however, each of these methods has its positives and negatives. Any developer with experience in more than a single language knows that once you understand the core principles of programming, they can be applied to any language. Without fear of sounding too elementary, let's take a closer look at the purpose of syntax to help understand why it has such a great impact on performance. In short, syntax is a set of structured rules that define the way we describe operations and the sequence in which we intend for them to be performed. Our implementation of this syntax is later parsed into tokens, which are interpreted and translated into a lower level language. In comparison, from as young as a few years old, children are taught the importance of syntactical composition; we learn the correct and most efficient ways to express our intentions as we continue to be exposed to language. Programming languages, like spoken languages, require order to be effective. The order in which we express our programmatic intentions has, without a doubt, a significant impact on the results we receive in return. Simply put, poorly formed grammar, like poorly formed code, produces equally unpredictable results. Understanding and correctly implementing the core principles of language syntax will ultimately produce a higher quality result. Just as a commanding grasp of spoken language can ease the task of expressing yourself, a deep understanding of programming language syntax will no doubt have the same effect.
With regard to syntax, I am not referring to where you place your semi-colons. The compiler will punish you for those mistakes. I am instead referring to how we use syntax to our advantage to write cleaner, more efficient code. In this chapter, we take a deeper look at common programming and syntax inefficiencies and how correct implementation can lead to greater levels of application performance. Additionally, we'll take an in-depth look at several important but commonly overlooked data sorting algorithms that can lead to significant gains in performance in nearly any iOS application. The following areas will be covered:
• Iteration loops
• Object reuse
• Bitmasks
• Sorting
• Run loops
• Timers
• Semaphores
Iteration loops
Improper use of iteration loops is quite possibly the most common performance inhibitor in applications industry wide. Loops, as we all know and use them, are a core feature of any current day development language. As they relate to performance, loops play a critical role in how well, or how poorly, an application performs. As simple as they might be, resources can be literally choked by loops that perform unnecessary steps within each iteration. Imagine a simple while loop that iterates as many as 1000 times. Now, imagine that this loop instantiates four objects and declares three variables within each iteration, along with calling two methods. Simple math tells us that 7000 unnecessary instantiations and declarations are taking place, not to mention the additional operations that take place within the two method calls included within the loop. As obvious as this might sound, it is one of the most common mistakes that developers make. Loops with limited iterations are of course less affected; however, more sizable loops can have a big impact on performance when large numbers of objects and variables are being instantiated and declared on each iteration.
Most modern day compilers including GCC and LLVM include performance features to help with loop performance problems. Most widely known as 'loop invariant code motion' as well as 'code hoisting', compilers will attempt to detect if variables and objects within a loop block are invariant and either hoist or sink the relevant calls to prevent them from affecting performance. Code motion is the compiler's attempt at protecting system performance and although it is a great convenience and safety net, it is important to be aware of what should and shouldn't be within a loop's code block. The following code snippet is an example of how not to use variables within a for loop:

for (int i = 0; i < 100; i++)
{
    // Variable
    NSNumber *bigValue = [[NSNumber alloc] initWithInt:1000];

    // Equation
    int calc = i * [bigValue intValue];

    NSLog(@"%d", calc);
}
This for loop calculates and logs the product of the variable bigValue and the dynamic iterator i. However, for each iteration, bigValue is being instantiated with a value of 1000 and would better serve performance if declared and instantiated outside of the loop, prior to its execution. The integer calc would also benefit performance by being declared before the for loop as well. The following is the adjusted and correct example of the same for loop:

// Variables
int calc;
NSNumber *bigValue = [[NSNumber alloc] initWithInt:1000];

for (int i = 0; i < 100; i++)
{
    // Equation
    calc = i * [bigValue intValue];

    NSLog(@"%d", calc);
}

// Release the object once the loop is finished with it
[bigValue release];
Looping should be closely monitored, never abused, and limited to as few lines as possible. Loops can be thought of as a necessary evil that should be managed closely to prevent unwieldy behavior. I am definitely not implying that loops are to be avoided, just making it as clear as possible that when we do use them, they should be used with discretion. Process waste and memory leaks are common in loops. In an earlier chapter, we covered several methods of profiling and analyzing code for these particular problems. Take note that many of the issues you might discover will in fact be found within loops.
Object reuse
One of the most expensive aspects of an application is the birth and death of an object within the object lifecycle. Reusing objects can save significant amounts of resources, especially in conjunction with frequent operations or lengthy loops. iOS devices, like most mobile devices, are by nature limited on resources, so take advantage of the ability to reuse an object and save those precious bits of performance until they are needed. If you require large amounts of frequently used objects, consider the object pool pattern. This pattern is designed to hold pre-initialized objects that are ready to be used rather than instantiated when needed. Objects pulled from an object pool are usually retrieved in a common state, used by the system, and then reset before being placed back in the pool for reuse. Object pools can provide significant performance benefits simply because instantiation is one of the most resource expensive tasks that can be performed within an application at runtime. Object pooling can increase complexity; however, if implementation is done carefully the negatives are overshadowed by the positives in every facet. Simply remember that instantiation has a cost and mobile devices have limited resources to draw upon. Be quick to create an object, but slow to let it go.
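A minimal object pool can be little more than a mutable array guarding a handful of pre-built objects. The following sketch is illustrative only; the class name is hypothetical and NSMutableData merely stands in for whatever expensive object your application actually reuses:

@interface ObjectPool : NSObject
{
    NSMutableArray *pool;
}
- (id)checkOut;
- (void)checkIn:(id)object;
@end

@implementation ObjectPool

- (id)init
{
    if ((self = [super init]))
    {
        // Pre-initialize a small number of reusable objects up front
        pool = [[NSMutableArray alloc] init];
        for (int i = 0; i < 10; i++)
        {
            [pool addObject:[[[NSMutableData alloc] init] autorelease]];
        }
    }
    return self;
}

- (id)checkOut
{
    if ([pool count] == 0)
    {
        return nil; // Pool exhausted; caller may create a new object or wait
    }

    // Hand back an existing object instead of allocating a new one
    id object = [[[pool lastObject] retain] autorelease];
    [pool removeLastObject];
    return object;
}

- (void)checkIn:(id)object
{
    // Reset the object to a common state before returning it to the pool
    [(NSMutableData *)object setLength:0];
    [pool addObject:object];
}

- (void)dealloc
{
    [pool release];
    [super dealloc];
}

@end

The important characteristics are that objects are created once, handed out in a known state, and reset on the way back in.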
Bitmasks
Programmatically, a bitmask refers to the representation of multiple switches and / or values by a single integer. Bitmasks are used primarily to conserve memory and / or disk storage, when compared to more traditional value storage mechanisms. Bitmasks are frequently employed in lower-level frameworks, and a healthy knowledge and understanding of how they work, as well as when to employ them, will only serve as beneficial. Another benefit of working directly with binary is the availability of bitwise operations, which are native to nearly every programming language in mainstream use today. These operations are highly efficient and provide powerful mechanisms to mathematically alter binary values, otherwise known as bit-twiddling. Let's take a look at a simple example to demonstrate their purpose and efficiency. The following is a short list of five pseudo-program options, which would, if stored traditionally, require five individual Boolean values as shown:

BOOL hourVisible = YES;
BOOL dayVisible = YES;
BOOL weekVisible = YES;
BOOL monthVisible = NO;
BOOL yearVisible = NO;
As you can imagine, using a Boolean value for each of these variables is overkill in terms of memory consumption and technically quite unnecessary. Although we see this quite frequently, think of your application as a finely tuned piece of machinery, where our intention is to waste as few resources as possible. With this concept in mind, a more concise and memory efficient storage method would be to store all five of these Boolean values within a single integer. We do this by representing each of our options with a unique bit value like the following:

int hourVisible = 1;
int dayVisible = 2;
int weekVisible = 4;
int monthVisible = 8;
int yearVisible = 16;
The binary representation of the previous values would be as follows:
• 0000 0001 hourVisible
• 0000 0010 dayVisible
• 0000 0100 weekVisible
• 0000 1000 monthVisible
• 0001 0000 yearVisible
We can create our bitmask by combining the bit values of the selected options. In this example, we'll assume hourVisible, dayVisible, and weekVisible are the enabled options in our bitmask. To do this we use the following code, which stores the binary value 7 as an integer in bitmask using the bitwise OR operator:

int bitmask = hourVisible | dayVisible | weekVisible;
7 represented in binary format within our bitmask is 0000 0111. Reading the bit positions from right to left shows that positions 0, 1, and 2 are ON, which corresponds to the three hypothetical options we want enabled. Usually, the remaining leading 0's would not be shown because they contain no meaningful data; however, I left them for the sake of clarity. This single integer value can then be stored to represent the state of five individual Boolean variables. As you can see, this is quite a significant saving in memory, processing performance, and disk storage requirements. Computing devices are already thinking in terms of bits, so it's natural that bit manipulation achieves higher levels of performance. Reversing or querying the Boolean values stored within the bitmask is just as simple. To test for the existence of a particular value, the bitwise AND operator can be used:

if (dayVisible & bitmask)
{
    NSLog(@"dayVisible enabled");
}
else
{
    NSLog(@"dayVisible disabled");
}
Adding an option to the bitmask can be done with the following code:

bitmask |= yearVisible;

Removing an option can be done just as easily with the following code:

bitmask &= ~yearVisible;

Bitwise operators are quite powerful and, although well documented, are not used nearly widely enough. There are a few things to be careful of when using bitmasks for value storage. An int is 32-bit, thus the maximum capacity is 32 flags. If you require more than 32 flags or values within a single mask, then a 64-bit integer type (such as long long or uint64_t) can be used. If you require more than 64 flags, then a single bitmask is no longer capable and alternatives must be selected.
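In practice, rather than a set of loose int variables, these flags are commonly declared as an enum built from bit shifts, which keeps each value unique and self-documenting. The names below simply mirror the earlier example and are otherwise arbitrary:

typedef enum
{
    VisibleHour  = 1 << 0,  // 0000 0001
    VisibleDay   = 1 << 1,  // 0000 0010
    VisibleWeek  = 1 << 2,  // 0000 0100
    VisibleMonth = 1 << 3,  // 0000 1000
    VisibleYear  = 1 << 4   // 0001 0000
} VisibleOptions;

// Combine, test, and clear flags exactly as before
int bitmask = VisibleHour | VisibleDay | VisibleWeek;

if (bitmask & VisibleDay)
{
    NSLog(@"dayVisible enabled");
}

bitmask &= ~VisibleYear;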
Sorting
Poor sorting implementation is another common performance inhibitor that unnecessarily plagues a good number of applications. Sorting is an essential component in product development, but poor selection of sorting algorithms can lead to a wide range of performance problems. In addition to the algorithm selected for sorting, the location of the sort is just as important and is in many cases more impactful. Knowing and understanding a selection of common sorting algorithms will make you a much more versatile developer. Not only does this knowledge help affect performance, it seeps into every other aspect of general application development. We would all like to trust that the core language is selecting the most appropriate sorting algorithm when we need it; however, this is rarely the case. Most languages choose middle-of-the-road sorting algorithms that work generally well under a good number of circumstances. The laws of "most" and "generally" never really lead to precision, and precision is what we look for when we seek performance. Sorting algorithms are far from simple; they form a complex and well-researched area of analysis with a long history. Relying upon the underlying language to fulfill every sorting requirement is in most circumstances good enough; however, when we seek higher levels of performance we need to be looking at every line of code as an opportunity to do just that.
Having a deeper understanding of sorting algorithms and when they are most effective will come as a great advantage in almost every application we write. Sorting and ordering is such a common practice that many developers overlook them, when in truth a good portion of application performance can be lost or gained within them. Of the hundreds of sorting algorithms and their variants available, I've selected a small handful to describe in more detail as well as included example Objective-C implementations for several of them.
Bubble sort
Simplistic in concept, the bubble sort received its name from the principle that bubbles of air contained within water will rise to the surface by displacing the heavier, surrounding water molecules on their way. The bubble sort algorithm is performed by starting at the beginning of a data set, comparing, and swapping adjacent elements as it proceeds through to the end. The bubble sort will continue making passes across the data set from the beginning until there are no more elements swapped. A bubble sort is quite inefficient for initial sorts and is mostly used to demonstrate the concepts behind sorting algorithms. It can, however, be used effectively to find small numbers of items that may be out of order in an otherwise previously ordered list. Inefficiencies in a bubble sort can be expressed with simple numbers. A bubble sort's worst case is on the order of n² swaps, meaning a data set with 10 elements would at a maximum require 100 element swaps: 10 elements swapped per 10 potential passes equates to 100 swaps. Increasing the size of our data set to 5000 elements or more makes the inefficiency of 25 million potential swaps more apparent. A bubble sort on an unordered list in theory progresses over each of the listed passes, bubbling the greater values to the right by swapping adjacent values, shown as follows:
• Pass 0: 2 5 9 8 7 3 1 6 4
• Pass 1: 2 5 8 7 3 1 6 4 9
• Pass 2: 2 5 7 3 1 6 4 8 9
• Pass 3: 2 5 3 1 6 4 7 8 9
• Pass 4: 2 3 1 5 4 6 7 8 9
• Pass 5: 2 1 3 4 5 6 7 8 9
• Pass 6: 1 2 3 4 5 6 7 8 9
Take a moment and read over each pass, identifying the bubbling (swapping) technique. Although this principle is not overly complicated, it is an excellent demonstration of the power within linear calculations. One particular aspect of a bubble sort that should stand out immediately is that a data set that is slightly out of order will be correctly ordered in a significantly smaller number of passes than a data set that is completely random. An Objective-C bubble sort example that sorts an NSMutableArray of NSNumbers is as follows:

- (NSMutableArray *)bubbleSort:(NSMutableArray *)a
{
    // Log the contents of the incoming array
    NSLog(@"%@", a);

    // Temporary variable for holding the swap value
    NSNumber *swapHold;

    // Outer loop for each element in array
    for (int i = 0; i < [a count]; i++)
    {
        // Inner loop for each element except the last
        for (int j = 0; j < [a count] - 1; j++)
        {
            // Evaluate current element against next element
            if ([[a objectAtIndex:j] intValue] > [[a objectAtIndex:j+1] intValue])
            {
                // Place lesser value in temporary variable
                // (retain / autorelease keeps it alive while the array swaps)
                swapHold = [[[a objectAtIndex:j+1] retain] autorelease];

                // Swap greater value with lesser value
                [a replaceObjectAtIndex:j+1 withObject:[a objectAtIndex:j]];
                [a replaceObjectAtIndex:j withObject:swapHold];
            }
        }
    }

    // Log the contents of the outgoing array
    NSLog(@"%@", a);

    // Return array
    return a;
}
Selection sort
A selection sort is another simple sorting algorithm, somewhat similar to the bubble sort. This algorithm works by finding the smallest value in the unsorted portion of the data set and swapping it into the first unsorted position on each pass. The selection sort algorithm, like the bubble sort algorithm, lacks efficiency for large data sets and shares the same n² worst-case behavior. Both the bubble sort and selection sort remain relevant regarding performance because of their innate simplicity. Performance is relatively positive when dealing with smaller data sets or when these algorithms are used in place of overly complicated sorts. An Objective-C selection sort example that sorts an NSMutableArray of NSNumbers is as follows:

- (NSMutableArray *)selectionSort:(NSMutableArray *)a
{
    // Log the contents of the incoming array
    NSLog(@"%@", a);

    // Temporary variable for holding the swap value
    NSNumber *swapHold;
    int minIndex;

    // Outer loop for each element in array
    for (int i = 0; i < [a count]; i++)
    {
        // Track index
        minIndex = i;

        // Inner loop for each element except previous
        for (int j = i + 1; j < [a count]; j++)
        {
            // Evaluate current element against current minimum
            if ([[a objectAtIndex:j] intValue] < [[a objectAtIndex:minIndex] intValue])
            {
                // Update index
                minIndex = j;
            }
        }

        // Place lesser value in temporary variable
        // (retain / autorelease keeps it alive while the array swaps)
        swapHold = [[[a objectAtIndex:minIndex] retain] autorelease];

        // Swap greater value with lesser value
        [a replaceObjectAtIndex:minIndex withObject:[a objectAtIndex:i]];
        [a replaceObjectAtIndex:i withObject:swapHold];
    }

    // Log the contents of the outgoing array
    NSLog(@"%@", a);

    // Return array
    return a;
}
Bucket sort
The bucket sorting algorithm receives its name from the use of logical buckets that data sets are divided up and placed into. In most cases, a finite number of buckets is selected prior to sorting; these buckets are then sorted individually using a wide variety of sorting algorithms depending upon purpose, complexity, size of data set, and performance requirements. The bucket sorting algorithm is classified as a divide and conquer style of sorting, in which smaller chunks of data are sorted independently of one another. An Objective-C bucket sort example that sorts an NSMutableArray of NSNumbers is as follows (in this simplified implementation each bucket holds the tally for a single value, so it assumes non-negative integers no larger than the number of elements):

- (NSMutableArray *)bucketSort:(NSMutableArray *)a
{
    // Log the contents of the incoming array
    NSLog(@"%@", a);

    // Create bucket
    NSMutableArray *bucket = [[[NSMutableArray alloc] initWithCapacity:[a count]] autorelease];

    // Fill bucket with a zero value for each incoming array element
    for (int i = 0; i < [a count] + 1; i++)
    {
        [bucket addObject:[[[NSNumber alloc] initWithInt:0] autorelease]];
    }

    // Loop over incoming array and tally values
    for (int i = 0; i < [a count]; i++)
    {
        int value = [[a objectAtIndex:i] intValue];
        int tally = [[bucket objectAtIndex:value] intValue] + 1;

        [bucket replaceObjectAtIndex:value
                          withObject:[[[NSNumber alloc] initWithInt:tally] autorelease]];
    }

    // Create new array to store ordered values
    NSMutableArray *newArray = [[[NSMutableArray alloc] init] autorelease];

    // Outer loop over bucket with tallied values
    for (int i = 0; i < [bucket count]; i++)
    {
        // Inner loop to add x elements for each value
        for (int j = 0; j < [[bucket objectAtIndex:i] intValue]; j++)
        {
            [newArray addObject:[[[NSNumber alloc] initWithInt:i] autorelease]];
        }
    }

    // Log the contents of the outgoing array
    NSLog(@"%@", newArray);

    // Return array
    return newArray;
}
Quicksort
The quicksort algorithm, like the bucket sort, is also a divide and conquer style sorting algorithm. Very popular, and with the potential for extreme complexity, the quicksort algorithm is commonly implemented in core programming language sort operations. A quicksort works by calculating a division of a data set, called a pivot point. Once selected, elements are evaluated and smaller values are moved to one side of the pivot point while greater values are moved to the other side. Each side of the pivot point is then recursively sorted independently of the other side before finally being merged back together at the point of pivot. To demonstrate how a quicksort works, we'll use the following 9-digit unordered list as our data set:

[9 2 6 3 4 8 1 5 7]
A pivot point is then calculated either intelligently or at random. For this demonstration we'll use the 5th digit, index 4, as our example pivot, shown as follows:

[9 2 6 3] [4] [8 1 5 7]

In order to create the two quicksort lists on either side of the pivot point, we iterate over each element in the data set and evaluate its value against our pivot point. The result is two lists ordered with values less than the selected pivot point in list 1 and greater values in list 2, shown as follows:

List 1 [2 3 1]
List 2 [9 6 8 5 7]

Each of these lists is then sorted independently of one another using any appropriate algorithm and merged back together on either side of the pivot point to form the final ordered result as follows:

[1 2 3] - 4 - [5 6 7 8 9]

In real-world usage scenarios, the quicksort is heavily dependent upon the selection of a good pivot point. Poor pivot selection can result in significant performance loss, which is the reason why this algorithm can become quite complex. The quicksort algorithm is very much everywhere around us. Major search engines, social networking sites, mobile platforms, games, and gaming devices take advantage of the power and complexity of this algorithm to sort and order massive amounts of data with relative ease. An Objective-C quicksort example that sorts an NSMutableArray of NSNumbers is as follows:

- (NSMutableArray *)quickSort:(NSMutableArray *)a
{
    // Log the contents of the incoming array
    NSLog(@"%@", a);

    // Create two temporary storage lists
    NSMutableArray *listOne = [[[NSMutableArray alloc] initWithCapacity:[a count]] autorelease];
    NSMutableArray *listTwo = [[[NSMutableArray alloc] initWithCapacity:[a count]] autorelease];

    // Fixed pivot value for this demonstration; assumes the pivot value
    // appears only once in the incoming array
    int pivot = 4;

    // Divide the incoming array at the pivot
    for (int i = 0; i < [a count]; i++)
    {
        if ([[a objectAtIndex:i] intValue] < pivot)
        {
            [listOne addObject:[a objectAtIndex:i]];
        }
        else if ([[a objectAtIndex:i] intValue] > pivot)
        {
            [listTwo addObject:[a objectAtIndex:i]];
        }
    }

    // Sort each of the lesser and greater lists using a bubble sort
    listOne = [self bubbleSort:listOne];
    listTwo = [self bubbleSort:listTwo];

    // Merge pivot onto lesser list
    [listOne addObject:[[[NSNumber alloc] initWithInt:pivot] autorelease]];

    // Merge greater list onto lesser list
    for (int i = 0; i < [listTwo count]; i++)
    {
        [listOne addObject:[listTwo objectAtIndex:i]];
    }

    // Log the contents of the outgoing array
    NSLog(@"%@", listOne);

    // Return array
    return listOne;
}
As you can see, there is greater power in knowing, building, and choosing the appropriate sorting algorithms for the specific data sets you might be manipulating. Relying upon the core language framework or libraries to choose the correct algorithm is in most cases quite naïve. The following is a short list of several additional and very common sorting algorithms that are seen frequently and worth researching:
• Counting sort
• Heapsort
• Shell sort
• Merge sort
• Cocktail sort
• Binary tree sort
Experiment with and optimize sort routines for your exact data requirements by eliminating irrelevant factors. For instance, creating a specific sort method for known data types or lengths of data can have a massive impact on performance compared to a generic sort algorithm that makes concessions based on worst-case scenarios. If you know you are sorting 100 integers from 0 to 99, build a sort method with this exact scenario in mind and save precious resources that would otherwise be wasted on counting and dividing at intelligent or dynamic positions. One last note regarding sort algorithms: take into account that sorting large data sets requires a great amount of memory. Profile your sorting methods and look for memory waste. As obvious as it might sound, unexpected stability issues are frequently traced to memory related trouble.
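As a concrete illustration of that last point, a sort written for the exact case of integers in the range 0 to 99 can skip comparisons entirely. The method below is a sketch and assumes the incoming values really do fall within that range:

- (NSMutableArray *)sortHundredInts:(NSMutableArray *)a
{
    // Tally table sized for the known value range 0..99
    int counts[100] = {0};

    for (int i = 0; i < [a count]; i++)
    {
        counts[[[a objectAtIndex:i] intValue]]++;
    }

    // Rebuild the array in order from the tallies
    NSMutableArray *sorted = [NSMutableArray arrayWithCapacity:[a count]];
    for (int value = 0; value < 100; value++)
    {
        for (int j = 0; j < counts[value]; j++)
        {
            [sorted addObject:[NSNumber numberWithInt:value]];
        }
    }

    return sorted;
}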
Run loops
A run loop is a processing loop that is used to manage and coordinate events within an application. By default, every application has an initial, single run loop that is responsible for the application's main thread. Additional run loops can be created as necessary; however, in most circumstances the default application run loop is quite sufficient.
The primary purpose of a run loop is to effectively manage application tasks at the most appropriate time. The loop waits for events to process and sleeps when there is no work to be done. The default application run loop is started automatically; however, any additional run loops must be started, managed, and shut down manually. As the name implies, the run loop is in fact a loop. By default, this loop is started automatically within the application's startup sequence by the run method of UIApplication in iOS. A common misconception with development in general, and especially as it relates to iOS devices, is that a developer must use multiple threads or run loops to accomplish common tasks. This is absolutely false; in fact, the majority of applications written for iOS devices are more than capable of performing every operation within the default run loop. The only true necessity for creating an additional run loop is when additional threads are absolutely required. A few select scenarios for having an additional run loop might be any of the following:
• Using a thread to perform timed tasks
• Inter-thread or process communication
• Using timers on threads
If you in fact find that you do need to operate with multiple run loops, ensure you take the appropriate measures of correctly shutting down additional threads rather than dealing with the unexpected results of forced termination.
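If a secondary run loop really is required, one common pattern is to spin the loop on its own thread and let a flag trigger a clean shutdown. The sketch below assumes shouldExit is a BOOL instance variable and checkForWork: is a hypothetical worker method:

- (void)startWorkerThread
{
    shouldExit = NO;

    [NSThread detachNewThreadSelector:@selector(workerThreadMain:)
                             toTarget:self
                           withObject:nil];
}

- (void)workerThreadMain:(id)unused
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    // Attach at least one source (a timer here) so the run loop has work to do
    [NSTimer scheduledTimerWithTimeInterval:1.0
                                     target:self
                                   selector:@selector(checkForWork:)
                                   userInfo:nil
                                    repeats:YES];

    // Run the loop in short increments so the exit flag is honored promptly
    while (!shouldExit &&
           [[NSRunLoop currentRunLoop] runMode:NSDefaultRunLoopMode
                                    beforeDate:[NSDate dateWithTimeIntervalSinceNow:1.0]])
    {
        // The loop spins until shouldExit is set or no sources remain
    }

    [pool release];
}

Setting shouldExit from another part of the application lets the worker thread wind down on its own terms rather than being terminated mid-operation.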
Timers
iOS timers are handled by the NSTimer class and, as the name implies, a timer is an object that waits until a specific amount of time has elapsed before firing an associated message. Any number of timers may be active at any given time, and they offer an alternative to multiple thread architectures under many circumstances. Timers are extremely useful in scenarios where one might think additional run loops and threads are necessary, when in fact timers are more than capable of handling significant workloads within an application's main run loop.
Those familiar with Objective-C timers will know that they are not real-time and not guaranteed to run at the specified intervals. This is due to the fact that the run loop checks on timers when thread time permits. The run loop may be busy performing other tasks and will process timers at the next opportunity. Within an iOS application under average load, the effective timer firing window can be as wide as 50 to 100 milliseconds. This window of variance is easily manageable if timers are used appropriately and not abused with costly time-consuming operations. Timers come in two flavors: repeating and non-repeating. Both are equally useful, and when implemented correctly can easily take the place of simple operations for which many developers might think multiple run loops and threads are required. An example usage is to update a display or remotely load a data source at a specific time interval. A secondary run loop and additional thread are not necessary, as we can simply create an NSTimer object and call a method to update or load the necessary data. All of this is performed behind the scenes within the current run loop. The following is an example of an NSTimer that repeatedly fires on 15-second intervals. When fired, the method repeatingTimerFireMethod: is called, as follows:

NSTimer *repeatingTimer = [NSTimer scheduledTimerWithTimeInterval:15
                                                            target:self
                                                          selector:@selector(repeatingTimerFireMethod:)
                                                          userInfo:[self userInfo]
                                                           repeats:YES];
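The selector named above simply needs a matching method that accepts the timer as its single argument; a minimal sketch might look like the following:

- (void)repeatingTimerFireMethod:(NSTimer *)timer
{
    // Perform the periodic work here, for example refreshing a display
    // or kicking off a remote data load
    NSLog(@"Timer fired at %@", [NSDate date]);
}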
Removing or invalidating an NSTimer object is quite simple; just call the invalidate method as follows:

[repeatingTimer invalidate];

When the invalidate method is called, the timer will no longer fire and it is removed from the run loop. A helpful tip when managing timers is to ensure they are invalidated when no longer necessary. As an example, if a timer is only valid within a particular view, then ensure it is removed within the viewWillDisappear method as follows:

- (void)viewWillDisappear:(BOOL)animated
{
    [super viewWillDisappear:animated];

    [repeatingTimer invalidate];
}
One important note to remember with repeating timers is that they are scheduled to fire based on the original firing time, not the actual time the previous timer was fired. This means that if the timer was delayed in firing for any reason, the following event will be fired at its scheduled time regardless of the delay, even if the delay is several seconds or more. There are several benefits here, one of which is that if the timer is delayed so long that it passes the next firing time, the timer is only fired once and then the next firing time will be scheduled as usual. Another benefit is that if an operation takes an inordinate amount of time, the individual timers will not back up or stack events on top of one another, which might otherwise cause serious problems.
Semaphores
Simply put, a semaphore is a flag: an abstraction that stores a trigger or state to be accessed by one or more processes. Semaphores are useful in a wide range of scenarios, from signaling state changes to facilitating inter-process and inter-view communication. An example scenario would be an application that monitors for the existence of a file or value in a database. When this file or value is detected, the application performs the operations associated with the condition. Essentially, this is performing functions when a signal is received, this signal being the semaphore. Semaphores are commonly used in systems in which access restrictions or race conditions are possible. The semaphore can indicate that a process or operation is in progress, allowing other processes or application functions to avoid operations that might cause a collision. There are near limitless use cases for semaphores and they are as powerful as they are simple. In fact, most developers are using semaphores and are unaware of the principle behind them.
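Although the flag described above can be as simple as a shared BOOL, iOS also provides a true counting semaphore through Grand Central Dispatch. The following sketch, which assumes the waiting happens somewhere other than the main thread, shows one block of work signaling another:

#import <dispatch/dispatch.h>

// Create a semaphore with an initial count of zero
dispatch_semaphore_t flag = dispatch_semaphore_create(0);

// Background work signals the semaphore when its result is ready
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // ... produce the file, record, or value being waited on ...
    dispatch_semaphore_signal(flag);
});

// Elsewhere, a worker (never the main thread) waits for the signal
dispatch_semaphore_wait(flag, DISPATCH_TIME_FOREVER);
dispatch_release(flag);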
Summary
As we covered in this chapter, having a solid command over syntax and process management can have a significant impact on an application's performance level. Understanding the basic principles behind some of the operations that many developers take for granted can lead to big performance gains when we take the time to select and customize them for our needs. Each line of code presents an opportunity to squeeze greater levels of efficiency from an application, and relying upon the language to make critical performance related decisions could be costly. Take advantage of the profiler to measure the success or failure of your optimizations and look for potential performance gains on every line of code. Specifically, the areas of syntax and process efficiency that we covered in this chapter were as follows:
• Iteration loops
• Object reuse
• Bitmasks
• Run loops
• Timers
• Semaphores
Additionally, we took a deeper look at the following sorting algorithms and included examples of writing specific methods to obtain greater levels of focused performance:
• Bubble sort
• Selection sort
• Bucket sort
• Quicksort
Data sorting is a fundamental development skill that every developer needs to have a healthy grasp of. Taking the time to analyze our data structures and choosing appropriate sorting and ordering algorithms will help us find and remediate performance inefficiencies that might otherwise go unnoticed if we left them to core language operations. We also covered the importance of using timers rather than opting to increase overhead and complexity with unnecessary additional run loops or added threads. [ 126 ]
Network Performance

One of the more prominent features of mobile devices today is their ability to integrate with the interconnected world we live in. Apple's entire range of iOS devices is designed with interconnectivity in mind. Wi-Fi, Bluetooth, and carrier network access are the featured networking mediums available on these iOS devices. A single poorly performing network task has the potential to bring an entire application to an instant halt. Network performance is not only a necessary component of a healthy application; it is an essential skill that is unfortunately not more widely honed in the iOS App development world. As development goes, when an aspect of an application is implemented and executed well, it goes unnoticed. Inversely, when a basic feature or function is implemented equally badly, it can unfortunately become the one single feature that users remember. More unfortunate is that a single application design flaw can be an enormous indicator that there are greater issues just ahead. We've all experienced iOS applications which download significant amounts of data immediately upon launch, causing us to stare at some uninteresting splash page wondering when we can move on with life. Developers are frequently choosing between pre-loading remote data upon initial launch and loading data upon need or request. The pros and cons of each can be quite lengthy and we'll discuss several of these as we proceed through this chapter. Another frequently made choice during the development process that has a great impact on an application's performance level is how much data is transmitted and received within each remote data operation. More often than it should, the wrong decision is made and single requests are abused with inappropriate datasets that are either too large or too small. Good design architecture will determine when network communications should take place and exactly how much data should be batched, transferred, or cached.
Of course, not every application requires device interconnectivity or, for that matter, even network and Internet communication. Even if your current iOS projects do not use or access remote network resources, chances are that if you continue to develop for mobile devices, this chapter will better prepare you if or when you do venture into this realm. Additionally, let us also keep in mind that not every iOS device will have network access at all times. From simple poor signal quality to intermittent carrier access in remote locations, poor connectivity is rather common. Large buildings, tunnels, mountains, and valleys as well as undeveloped service areas can easily interrupt normal network communications and have adverse effects on applications that are not prepared for these interruptions. As an over-simplified example, imagine a series of network operations to replace a contact record in a remote server database. The operations could be quite simple and perform the following tasks in order:
• Server deletes existing contact record from database
• iOS device sends new contact details to server
• Server inserts new contact record into database
With these operations in mind, let's continue the example and imagine that network access was dropped momentarily between the first and second steps. It is quite possible that in this scenario a record could be removed from the database and the following operations never complete. Although this example is quite obvious and highly unlikely, we must always be conscious of circumstances that might be outside of our immediate control and handle them appropriately. This type of error handling is never more important than when dealing with mobile network access and communications. The nature of mobility is that connectivity is not something that can be relied upon. Another consideration is a complete lack of network access, such as an iOS device in airplane mode or without network service of any kind. An application should be prepared for this scenario, logically handle it appropriately, and notify the user of the circumstances. In this chapter, we cover the various methods of iOS device connectivity and the role performance plays with each aspect.
More specifically, we cover the following areas of network performance:
• Sockets
• Streams
• Protocols
• Bandwidth
• Compression
• Façade pattern
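Before diving into these topics, it is worth noting that the no-network case described earlier can be detected up front. Apple's SystemConfiguration framework can report whether a host currently appears reachable; the following simplified check is a sketch (production code typically uses Apple's Reachability sample and its asynchronous notifications rather than a one-shot query):

#import <SystemConfiguration/SystemConfiguration.h>

- (BOOL)isHostReachable:(NSString *)hostName
{
    SCNetworkReachabilityRef reachability =
        SCNetworkReachabilityCreateWithName(NULL, [hostName UTF8String]);

    if (reachability == NULL)
    {
        return NO;
    }

    SCNetworkReachabilityFlags flags = 0;
    BOOL reachable = NO;

    if (SCNetworkReachabilityGetFlags(reachability, &flags))
    {
        // Reachable, and no further connection (such as VPN or dial-up) required
        reachable = (flags & kSCNetworkReachabilityFlagsReachable) &&
                    !(flags & kSCNetworkReachabilityFlagsConnectionRequired);
    }

    CFRelease(reachability);
    return reachable;
}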
Sockets
In regard to networking, when a developer thinks of sockets they are more specifically thinking of BSD or Berkeley sockets. BSD, which stands for Berkeley Software Distribution, is a UNIX operating system that dates back as far as 1977. BSD was the first UNIX operating system to include IP networking libraries; this complete network library is most commonly referred to as the Berkeley sockets API. Today this API continues to stand as the de facto standard for network socket abstraction. Almost every programming language in mainstream use today, including Objective-C, provides a socket interface similar to the BSD sockets API. When working with sockets, one of the first concepts we deal with is understanding the difference between two core socket-reading principles. These two principles are as follows:
• Blocking
• Non-blocking
When we read or write to any particular socket, the response we receive might not immediately be available, which of course is not necessarily what we want, especially if our application cannot continue or needs the response to facilitate other functions of our application. When the response to our operations is not immediately returned, causing our application to wait, this is referred to as blocking.
Non-blocking is also commonly known as asynchronous socket communication and is accomplished by our application being notified when the state of a socket is altered or when an event we are interested in takes place on a socket. Asynchronous socket communication is the preferred method of socket usage as it allows the application to continue functioning and be alerted when socket data is sent or received. When the alert arrives, our application can process the data as it might usually do. Non-blocking socket operations are achieved by using file handles or streams to manage the socket within the application's run loop, essentially pushing the network operations into the background and allowing the foreground to continue. If you have more than a few dozen apps on your iOS device that access network or Internet resources, you will most definitely be able to recognize the option they selected by the obvious symptoms. An application that waits / blocks for socket data causes the entire user interface of the application to freeze while it waits for a response. In some cases, this might even go unnoticed if the read / write is somewhat small and the network resource is more or less widely available. However, the chances of bandwidth and network resource availability always being in an application's best interest are quite rare, and more than likely your application will suffer frequent and frustrating pauses while blocking occurs. Obviously, an application's user interface that freezes while network transmissions take place is not ideal; however, it doesn't take too much thought to come up with a short list of existing applications on a few of my iOS devices that do just this. Not only are these applications quite frustrating, but the lack of attention to such a simple detail is a powerful indicator of poor application design. Applications which take advantage of asynchronous socket reading and writing will continue to send and receive data in the background, while the user continues to interact with the interface with little or no knowledge that data is being read or written at all. The following is an example of a simple socket connection that blocks while waiting for a connection. If the host or port is unavailable, the application will freeze / block and the user interface will remain unusable until the operation is canceled or it times out, shown as follows:

// BSD socket headers
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

- (void)socketConnectExample
{
    struct sockaddr_in sin;

    // Host the socket will connect to
    NSString *socketHost = [[[NSString alloc] initWithString:@"10.1.2.3"] autorelease];

    // Port the socket will connect to
    NSNumber *socketPort = [[[NSNumber alloc] initWithInt:80] autorelease];

    // Socket
    NSInteger sock = socket(AF_INET, SOCK_STREAM, 0);

    sin.sin_family = AF_INET;
    sin.sin_port = htons([socketPort intValue]);
    sin.sin_addr.s_addr = inet_addr([socketHost UTF8String]);

    NSLog(@"Attempting to connect to: %@ on port: %d", socketHost, [socketPort intValue]);

    if (connect(sock, (struct sockaddr *)(&sin), sizeof(struct sockaddr_in)) != 0)
    {
        NSLog(@"Connection could not be established");
    }
    else
    {
        NSLog(@"Connection established");
    }
}
For non-blocking socket communications, we will take a closer look at NSStream and how we can use the delegate design pattern to allow us to attach a socket stream to the application's run loop. Lastly, it is rather important to remember to close any sockets that you open as soon as you are finished with them. As with everything else we have demonstrated throughout this book up to this point, we want to be as precise and specific as possible to limit inconsistency.
Streams
One particular method of asynchronously working with sockets is by using streams, provided by NSStream. A stream, by definition in the programming arena, is the serialization of data transmitted between two points. For the purposes of this chapter, we will be discussing streams as they relate to socket programming and network performance rather than the full range of NSStream capabilities, such as reading and writing to files and memory and so on. Streams provide us with an abstraction layer to read and write data while taking advantage of the benefits of the delegate pattern that Objective-C developers are comfortable with. Essentially, this means wrapping the low-level socket development that might otherwise be needed in a nice little package that is simple and easy to use. There are dozens of both heavy-handed and lightweight socket libraries available for Objective-C, and under many circumstances these might be the go-to libraries of choice. However, when it comes to performance we look to choose a solution that performs the necessary operations we want to achieve, while keeping code bloat to a minimum. Heavy libraries that provide an overabundance of functionality can introduce a wide range of issues that we may not necessarily be interested in. Having a better grasp of the core functionality of a language, in this instance, keeps code optimized for our specific purposes as well as helps to educate us in those areas where we may not be so comfortable. More often than not, developers choose the mash-up style of development, in which they stack library upon library, creating enormously bloated and ridiculously oversized applications. Personally, my preference is to use libraries that are standard to the industry as well as being small and efficient. I prefer, when time permits, to build application functionality from start to finish for ultimate control over functionality and performance. For the sake of performance, we'll take a closer look at the classes involved with streams, and walk through an implementation of NSStream to read and write over a simple socket interface.
NSStream provides the following three classes that we use when working with streams:
• NSStream: This provides a uniform method of reading and writing data to and from various types of media including file, memory, and network sockets.
• NSOutputStream: As its name implies, it is used for managing the output or writing of data to a stream object.
• NSInputStream: This is the class designated for reading data from a stream object. Please note that NSOutputStream and NSInputStream are write-only and read-only, respectively.
Utilizing these classes will allow us to rapidly and uniformly build applications that communicate remotely using the underlying operating system's BSD sockets implementation. Of course, as mentioned earlier, the most significant benefit of using NSStream to work with sockets is the availability of the delegate pattern. This allows us to receive notifications when our socket is ready, data is available, and errors have occurred, all without requiring us to wait or poll the socket and cause blocking like we discussed earlier. The delegate pattern is a powerful approach to the model-view-controller paradigm, in which an object can assign operations or transfer control to another object (the delegate), freeing the object to continue fulfilling its original role. The following example method creates an NSStream socket connection to a remote host on port 25:

- (void)socketStreamExample
{
    // Host the socket will connect to
    NSString *socketHost = [[[NSString alloc] initWithString:@"10.1.2.3"] autorelease];

    // Port the socket will connect to
    NSNumber *socketPort = [[[NSNumber alloc] initWithInt:25] autorelease];

    // Create two CF stream references that will be cast to NS stream objects
    CFReadStreamRef readStream;
    CFWriteStreamRef writeStream;

    // Create the read / write socket stream to the host and port defined above
    CFStreamCreatePairWithSocketToHost(NULL, (CFStringRef)socketHost,
        [socketPort intValue], &readStream, &writeStream);

    // Cast the CF stream references to NSInputStream and NSOutputStream
    // (iStream and oStream are instance variables so the delegate method
    // can reference them later)
    iStream = (NSInputStream *)readStream;
    oStream = (NSOutputStream *)writeStream;

    // Set delegate for input and output streams to self
    [iStream setDelegate:self];
    [oStream setDelegate:self];

    // Schedule input and output stream operations on the current run loop
    [iStream scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
    [oStream scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];

    // Open both input and output streams
    [iStream open];
    [oStream open];
}
The socketStreamExample method sets self as the delegate for both the input and output streams. Once this is complete, we can implement our NSStream delegate method, which will be responsible for handling all messages related to this stream and socket combination, including all read and write operations.
The interface for this delegate method is as follows:

- (void)stream:(NSStream *)stream handleEvent:(NSStreamEvent)eventCode;
A functional implementation of the delegate method is as follows:

- (void)stream:(NSStream *)stream handleEvent:(NSStreamEvent)eventCode
{
    // NSStream delegate method for handling stream events / notifications
    switch (eventCode)
    {
        // Event when space in the stream is available for writing
        case NSStreamEventHasSpaceAvailable:
        {
            if (stream == oStream)
            {
                NSLog(@"Stream: space available");

                // Build a simple string for writing to the stream
                // (stringWithFormat: already returns an autoreleased object,
                // so no additional autorelease is needed)
                NSString *writeString = [NSString stringWithFormat:@"SEND THIS STRING TO SERVER\r\n"];

                // Encode the string for stream / socket writing
                const uint8_t *rawString = (const uint8_t *)[writeString UTF8String];

                // Write the raw encoded string to the output stream
                [oStream write:rawString maxLength:strlen((const char *)rawString)];
            }
            break;
        }

        // Event when data is available in the stream for reading
        case NSStreamEventHasBytesAvailable:
        {
            NSLog(@"Stream: bytes available");

            uint8_t buf[1024];
            NSInteger len = [(NSInputStream *)stream read:buf maxLength:1024];

            if (len > 0)
            {
                // NSMutableData to hold the incoming stream buffer
                NSMutableData *data = [[[NSMutableData alloc] initWithLength:0] autorelease];

                // Load data with the buffer contents
                [data appendBytes:(const void *)buf length:len];

                // Decode the read buffer
                NSString *buffer = [[[NSString alloc] initWithData:data
                                                          encoding:NSASCIIStringEncoding] autorelease];

                NSLog(@"Stream: content '%@'", buffer);
            }
            else
            {
                NSLog(@"Stream: buffer empty");
            }
            break;
        }

        case NSStreamEventOpenCompleted:
        {
            NSLog(@"Stream: open completed");
            break;
        }

        case NSStreamEventEndEncountered:
        {
            NSLog(@"Stream: end encountered");
            break;
        }

        case NSStreamEventErrorOccurred:
        {
            NSLog(@"Stream: ERROR!");
            break;
        }
    }
}
In the previous stream delegate method, we use a switch statement to process each of the following stream events that take place while the socket stream is in use:

• NSStreamEventHasSpaceAvailable: This is dispatched when the socket is available for writing.

• NSStreamEventHasBytesAvailable: This is dispatched when data is available to be read from the socket.

• NSStreamEventOpenCompleted: This is dispatched when the socket is open for reading and / or writing.

• NSStreamEventEndEncountered: This is dispatched when the socket is closed.

• NSStreamEventErrorOccurred: This is dispatched when an error occurs.
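When NSStreamEventEndEncountered or NSStreamEventErrorOccurred fires, it is good practice to tear the stream down explicitly rather than leaving it scheduled on the run loop. A minimal clean-up sketch, assuming the iStream instance variable from the earlier example (the output stream would be handled the same way):

// Close the stream and stop receiving events for it
[iStream close];
[iStream removeFromRunLoop:[NSRunLoop currentRunLoop]
                   forMode:NSDefaultRunLoopMode];
[iStream setDelegate:nil];

// We own the stream (it came from a CFStreamCreate... call), so release it
[iStream release];
iStream = nil;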
Keep in mind that sockets and streams are far more likely to come in handy when you are working with uncommon network transactions. Sending and receiving data over ports 80 and 443 (HTTP, HTTPS) is widely supported by various classes throughout the iOS SDK and is not covered in this chapter for that reason. However, a few small and helpful tips on using HTTP and HTTPS can't hurt.
Always remember that HTTP is insecure and that you should assume everything you send and receive over HTTP is being intercepted. HTTPS is a solution to this; however, it adds a layer of performance complexity, as data needs to be encrypted and decrypted at both ends for each chunk of data. Also, keep in mind that HTTP is essentially stateless: unless you maintain session state on the server (in a servlet container such as Tomcat, for example) or serialize data to disk on the server side, each connection to the server is completely unique and separate from the previous and next request. Session data can be used to bridge this gap, but do keep this in mind when designing architectures for HTTP communications, and limit the amount of resources spent building up and tearing down each of these connections.

An example of poor HTTP usage would be a game that requests user details or statistics throughout gameplay, where upon each connection the server builds up a database connection, performs several queries, returns the results, and shuts the database connection down. Imagine several hundred or thousand of these connections coming in, and the impact this can have on the server as well as your application as communications begin to slow down and become less timely. The solution to this problem is to have the server cache this data in a file on disk and allow the server-side code to determine when it is appropriate to update the disk cache and make the database connection. Several thousand connections every few seconds are much easier to handle when the HTTP server is returning cached or periodically refreshed static results.

Caching data within your iOS application should go without saying; don't abuse transport protocols for the sake of lazy coding on the iOS side. All too often I read source code that uses HTTP as a transport protocol when it is completely unnecessary and even when it negatively impacts performance. Choosing the correct transport mechanism for the data type and architecture of your application is very much necessary for achieving the highest levels of performance.
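On the client side, even the stock URL loading classes give us some control over how aggressively we re-download. The following sketch (the URL is hypothetical) issues a request that prefers any locally cached response before going back over the network; the synchronous call is for brevity only:

// Favor the local cache before re-downloading
NSURL *url = [NSURL URLWithString:@"http://example.com/widgets"];
NSURLRequest *request = [NSURLRequest requestWithURL:url
                                         cachePolicy:NSURLRequestReturnCacheDataElseLoad
                                     timeoutInterval:30.0];

NSURLResponse *response = nil;
NSError *error = nil;

// Synchronous for illustration; production code should use the
// asynchronous NSURLConnection delegate API instead
NSData *data = [NSURLConnection sendSynchronousRequest:request
                                     returningResponse:&response
                                                 error:&error];
// ...use data (it may be nil if the request failed)...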
Protocols
Understanding the most commonly used network protocols is the key to optimizing an application's network usage and performance. Selecting the wrong protocol, or misunderstanding and poorly implementing a chosen protocol, will have a severe impact on performance. When we think of network protocols, there are several things to keep in mind. A protocol is simply a set of guidelines or rules; a network protocol, therefore, is nothing more than an agreed-upon set of rules for how two endpoints exchange data.
Network protocols come in a variety of types and flavors for each layer of the communications stack. IP, or Internet Protocol, is the foundation protocol within which communication on the Internet takes place; it is responsible for routing messages through and across interconnected networks and the Internet as we know it today. Layered on top of IP are several common transport and control protocols that we use every day and that make up the well-known IP protocol suite. Of the literally thousands of protocols available, the most well known of the core IP suite protocols are as follows:

• TCP
• UDP
• ICMP
We are only lightly touching on network protocols as they relate to performance, so in-depth coverage of every protocol is not essential. Each protocol was, as you can imagine, designed with a distinct purpose, features, characteristics, and behaviors that make it ideal for specific situations. The protocols are explained as follows:

• TCP (Transmission Control Protocol): TCP is the most well known of the core IP suite protocols and is designed specifically for transport stability and reliability. TCP is of course the protocol used by various web and Internet related services such as HTTP (web browsing), SMTP (e-mail), and FTP (file transfers). TCP's stability comes from the protocol's ability to retransmit lost packets as well as correctly reassemble packets that may have been sent or received out of order.

• UDP (User Datagram Protocol): UDP is a fire-and-forget protocol, designed to be stateless; it is most commonly used in situations in which data reliability is not required but speed and efficiency are preferred. UDP takes advantage of a simpler transmission mechanism that forgoes the formal handshaking that TCP is most well known for. Applications often select UDP simply because they prefer packet delivery speed over the potential waiting that accompanies more reliable protocols that spend time ordering and retransmitting data.

• ICMP (Internet Control Message Protocol): ICMP is the maintenance protocol of the bunch, designed specifically to deliver status, error, and control messages. The most familiar and well-recognized operation utilizing ICMP is the echo-request and echo-reply exchange, known more familiarly as ping. Although valid ICMP implementations are limited to a defined list of message types, ICMP should not be forgotten; it should in fact be the first protocol considered when routing, messaging, and network diagnostic needs arise.
The operating system's implementation of these protocols is of course under the hood; however, it is important to understand the fundamentals of network protocols, especially if your application will rely heavily upon their usage.
Bandwidth
Internet and network bandwidth is a concern for any networked device; however, with mobile devices and limited network resources, the need to limit network usage and conserve bandwidth is far greater. With unlimited data plans on carrier networks all but gone, we not only need to think about how network communication affects performance, we also need to consider the effect of bandwidth usage on the pocketbook of the end user.

Let's take a look at a simple example of how quickly application data can add up over a month of moderate usage: 100 kilobytes every 10 seconds, for 45 minutes every day, for 30 days, works out to roughly 791 megabytes. 791 megabytes on a Wi-Fi connection doesn't sound so bad and in fact really isn't an issue at all. However, assume that one of your users spends an hour commuting by train each morning while your application consumes 26 megabytes of cellular bandwidth or more along the way.

Now, some might argue that it is ultimately the responsibility of the end user to be aware of overall data usage, and I can quite easily agree. However, as the developer, we are in the hot seat to control precisely how much network data our application is capable of consuming on average. While it may not necessarily be our job to police bandwidth usage by our users, we can do our part and optimize our applications as much as possible, so that we aren't the wasteful and guilty application that gets kicked to the curb when the wireless bill arrives.
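Spelled out, the arithmetic behind those figures is roughly as follows (treating 1,024 KB as 1 MB):

100 KB x 6 transfers per minute x 45 minutes = 27,000 KB, or about 26.4 MB per day
26.4 MB per day x 30 days = roughly 791 MB per month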
As you can see, a developer without a firm grasp on the effects of bandwidth usage can quite easily and somewhat unintentionally abuse resources and impact more than just the currently active application. Regarding bandwidth performance, we'll touch upon the following areas and how they directly impact an application, and detail several common bandwidth optimization concepts that can improve network performance:

• Data cost and value
• Bandwidth usage
• Bandwidth abuse
As developers, we have near limitless options for managing the consumption of network bandwidth. All data, regardless of where it originates, has an associated cost and value; data created locally or remotely requires processing power, memory, bandwidth, and time to generate. As with anything of value, consideration should be given before data is thrown away or destroyed.

To help convey this principle, let's imagine that every 30 seconds we download a list of 1,000 items from a remote server. This list may or may not have changed between requests; however, it is necessary to update our application in the event these items are altered on the server side. Basically, this means that every 30 seconds we pull down the same list of 1,000 items regardless of whether a single item, or the entire list, has changed. In terms of network utilization, this is simply a waste of bandwidth and server-side processing power. In this scenario, a developer could choose to transfer the list only when the data has changed, or send only new or altered items if applicable. Asking the server to query a database and build the same list of 1,000 items every 30 seconds, over and over again, multiplied by the number of active remote devices, can have a significant impact on the performance of the server as well as every remote device.

Remember, data has both a cost and a value, and we shouldn't be so quick to throw it away. If you are going to spend the digital resources to query and build data, then take the extra few lines of code to hold on to that data. Dozens of more intelligent methods exist to keep the device and server sides in sync with each other. Server-side data caching is always an option for maintaining performance on the server side, but it has little to no effect on overall network performance.
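One lightweight way to implement "only transfer when changed" on the client is a conditional HTTP request. The sketch below assumes the server sends a Last-Modified header; the URL and the user-defaults key are hypothetical:

// Only download the list again if the server reports it has changed
NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:
    [NSURL URLWithString:@"http://example.com/items"]];

// lastModified is the Last-Modified value saved from a previous response
NSString *lastModified = [[NSUserDefaults standardUserDefaults]
                             stringForKey:@"ItemsLastModified"];
if (lastModified)
{
    // The server can answer 304 Not Modified and send no body at all
    [request setValue:lastModified forHTTPHeaderField:@"If-Modified-Since"];
}

// ...send the request with NSURLConnection as usual...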
Client-side data caching is one of the ultimate weapons against tossing out valuable data with the proverbial bath water. Once you have the data, hold on to it and decrease the amount of server round-tripping. Caching data within a data model or in local storage will decrease the amount of data required from the server side, in turn reducing bandwidth consumption (a small disk-cache sketch appears at the end of this discussion). Additionally, data caching delivers an overall perceived performance boost as well: users who quickly navigate through an application that takes advantage of these caching principles will be much more satisfied than users who are constantly waiting for a loading popup to disappear.

As we touched upon lightly a little earlier in this chapter, another important architectural choice is where and when we load remote data, if necessary. Do we take a few extra seconds during the initialization and loading of our application to pull down remote data models and populate our application, or do we choose quicker initialization and request remote data based on user interaction? An argument for simplicity might be that an application should load all necessary data during initialization and not worry about additional network communications until absolutely necessary. The obvious trade-off is that a user must wait for data to be downloaded before they are allowed to continue. An additional negative is that during initialization we cannot know specifically what data our users may be interested in, so we must download, and consume bandwidth for, all data regardless of necessity. The downside of loading data on demand is that it requires a deeper communications interface for loading specific data sets as a user requests them. The benefit is that a user is free to move about the application and only consumes network resources that are relevant to their actions; if a user never visits a particular view, then any remote data associated with it is never unnecessarily transferred across the network or Internet.

The pros and cons of each approach are far greater than covered here; however, neither should be tossed aside, as each has its appropriate place in the development of an application. Take the time to decide how and when remote data will be loaded within your application at the earliest stages of the design and architecture phase. It is much more difficult to refactor data models and network communication architectures when an application is in full development.
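Returning to client-side caching, a minimal sketch of persisting a downloaded payload to the Caches directory and reusing it follows; the filename and the download helper are hypothetical, and real code would also decide when the cache is stale:

// Persist a downloaded payload to the Caches directory and reuse it
NSString *cachesDir = [NSSearchPathForDirectoriesInDomains(NSCachesDirectory,
                          NSUserDomainMask, YES) lastObject];
NSString *cachePath = [cachesDir stringByAppendingPathComponent:@"widgets.cache"];

NSData *cached = [NSData dataWithContentsOfFile:cachePath];
if (cached == nil)
{
    // Nothing cached yet: fetch from the network (hypothetical helper)
    // and store the result for next time
    NSData *fresh = [self downloadWidgetData];
    [fresh writeToFile:cachePath atomically:YES];
    cached = fresh;
}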
Important, and very much overlooked, is the ability of an application to detect available network resources and handle each circumstance gracefully. It is a safe bet that most iOS device users have experienced an application that doesn't have a clue whether Internet or network access is available. Let's not let that happen to us. Apple has graciously provided a Reachability class along with sample code to help developers easily implement network resource detection. It notifies your code when the network status changes, making it quick and easy to implement. The Reachability classes and sample code can be found at the following URL:

http://developer.apple.com/library/ios/#samplecode/Reachability/Introduction/Intro.html
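A minimal sketch of how the sample class is typically used follows. The method and constant names below follow Apple's Reachability sample as published at the time of writing; check the downloaded code for the exact API:

#import "Reachability.h"

// Check the current network status before attempting a remote operation
Reachability *reachability = [Reachability reachabilityForInternetConnection];
NetworkStatus status = [reachability currentReachabilityStatus];

if (status == NotReachable)
{
    // No network at all: fall back to cached data and inform the user
}
else if (status == ReachableViaWWAN)
{
    // Cellular connection: consider transferring less data
}
else if (status == ReachableViaWiFi)
{
    // Wi-Fi: full-size transfers are reasonable
}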
The classes themselves are somewhat simple, but this uniform method of network resource detection is invaluable for any application that requires remote network or Internet resources. As with each chapter up to this point, our focus should not be solely on identifying the low-hanging, performance-gaining fruit. Our ultimate focus is on increasing performance application-wide, limiting the amount of waste in both logic and implementation.
Compression
Compression is quite simply the process of encoding a specific portion of data so that it consumes fewer resources. From that simple description, it follows that if you are working to achieve a greater level of network performance, compressing transmitted data can help. Simple compression, also known as deflation, works by replacing frequent or commonly used symbols with shorter, more concise representations, while less common symbols may be replaced with longer representations. As an example, consider the following string of numbers:

1234512345123451234512345

Instantly, we can see that it is the string of numbers 12345 repeated five times. This particular string is extremely wasteful because of that repetition.
In pseudo compression language, it would make much more sense to store the previous lengthy string as the following, more concise string:

12345x5

In relative terms, transferring a string of seven characters across a network will be fundamentally quicker and consume less bandwidth than the uncompressed, 25-character string. Placed next to one another, it is quite easy to see how compression can and does affect network performance:

1234512345123451234512345
12345x5

Of course, the previous examples are quite elementary and are used only to represent the direct relationship between compression and performance. For applications which rely heavily on communications with a web application server, it is highly recommended that gzip compression be enabled; this algorithm is the industry standard for HTTP and HTTPS deflating, and the iOS SDK supports automatic decompression of gzip data when accessing web content through the URL loading system. For developers designing and building custom network communications, compression can be much more involved, but it is well worth the effort. Dozens of compression libraries exist to handle the complex inflating and deflating functions that are necessary for various data types.
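For the common HTTP case, taking advantage of gzip usually requires very little code. The sketch below (hypothetical URL) explicitly advertises gzip support; note that NSURLConnection typically negotiates and inflates gzip responses on its own, so the data handed to your delegate is already decompressed:

// Ask the server for gzip-compressed content; NSURLConnection inflates
// responses sent with Content-Encoding: gzip before handing them back
NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:
    [NSURL URLWithString:@"http://example.com/api/list"]];
[request setValue:@"gzip" forHTTPHeaderField:@"Accept-Encoding"];

[NSURLConnection connectionWithRequest:request delegate:self];
// ...the delegate methods receive the already inflated response data...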
Façade pattern
Let's revisit the car analogy for a moment with an example of the façade pattern in common, everyday use. Automobile manufacturers over the years have done an incredible job of separating the complex aspects of a vehicle's inner workings from drivers and passengers: unlocking doors remotely, starting the engine with as little as a button push or key turn, and even adjusting seats, pedals, and steering components with the flip of a switch. Most of this separation is done, of course, as a convenience for the users of the vehicle; however, none of this is possible if not for the theory, or design pattern, that lies quietly beneath the dashboard of the car.
Auto manufacturers have designed the components of these vehicles so that instructions and operations are grouped together and controlled by single, greater, master instructions. A good example is the unlocking of the car. When you press the unlock button or turn the door lock with the key, an unlock operation is executed that performs several tasks, including of course the unlocking of the doors, but also side operations such as turning on interior lights or even disabling a vehicle alarm. Similarly, when the lock operation is executed, doors are locked, the alarm is activated, interior lights are turned off, the horn beeps, and external lights briefly flash.

Auto manufacturers are using a design pattern that has, over the years, been given various names and described in possibly hundreds of different ways. Regardless, it all comes down to an interface that provides a single request to perform multiple operations. The façade pattern is a common design pattern that facilitates this theory. It is an additional layer that allows you to create lock, unlock, startup, and shutdown methods that in turn execute any number of underlying operations.

The façade pattern is important when it comes to network performance, mostly because network operations can be very resource intensive; if you are going to spend the time and resources to make the connection, you may as well bundle your operations together to reduce communication requests. To further illustrate the principle, imagine a pseudo application that, upon entering a username and password, needs to log in to a remote server, upload changes that may have occurred offline, download a new list of widgets, and finally update some general statistics. This application might perform as many as four or five separate network communications for these steps, in addition to logging the user out when finished. Each of these operations could be executed individually, and it is not all that uncommon to see this type of application behavior. The better and more performance-appropriate method would be to create a server-side interface that performs the steps in groups; this interface is the façade of the façade pattern. It might have a loginOperation method that performs the following pseudo code steps all at once:

function loginOperation() {
    authenticateUser();
    updateWidgetDatabase();
    retrieveWidgetList();
    generateUserStatistics();
}
As simple as this looks, the façade pattern plays a critical role in reducing network operations as they pertain to performance. Rather than four or five individual requests, each performing its own build-up and tear-down operations, implementing the façade pattern gives us a clean opportunity to perform all of these tasks within the same remote request. Lastly, the façade pattern is not a complicated design pattern but a simple theory that many developers are in fact using now, perhaps just unaware of its assigned name.
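On the client side, the façade can be as small as a single method that fires the grouped request. The following sketch is hypothetical (the endpoint, request body, and method name are invented for illustration), but it shows the shape of one round trip replacing four or five:

// Hypothetical façade: one method, one network round trip, many server-side steps
- (void)performLoginOperation
{
    NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:
        [NSURL URLWithString:@"https://example.com/api/loginOperation"]];
    [request setHTTPMethod:@"POST"];
    [request setHTTPBody:[@"user=tod&action=loginOperation"
                            dataUsingEncoding:NSUTF8StringEncoding]];

    // The server authenticates, updates the widget database, returns the
    // widget list, and generates statistics behind this single request
    [NSURLConnection connectionWithRequest:request delegate:self];
}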
Summary
If not the most important feature of mobile devices, network and Internet access is certainly near the top of the list. Applications which rely upon remote services are dependent on many factors that may be outside of their immediate control. Limited bandwidth, network latency, and even network connectivity are the direct enemies of mobile devices. As we covered in this chapter, network performance is achieved by understanding the fundamentals of network and Internet connectivity on iOS devices and building application architectures that protect their integrity. Remember that a single, poorly designed network operation can be responsible for a wide range of negative issues.

We covered the various aspects of network sockets as well as using streams to send and receive data using the familiar delegate design pattern. We discussed the importance of managing bandwidth consumption and remembering that all data, regardless of its source, has a relative cost and value and shouldn't be thrown away until it is no longer valid. Cache data when possible to decrease resource usage and improve user experience. Grouping operations together using the façade pattern can simplify complex operations and help to reduce network performance abuse. Lastly, we touched upon the use of compression to help with network performance and limit the amount of network resource usage whenever possible.
Specifically, the areas in this chapter that we covered were the following:

• Sockets and streams
• Protocols
• Bandwidth
• Compression
• Façade pattern
Memory Performance

In this chapter, we take a much more detailed look at performance as it relates to memory management within iOS applications. Of course, when we use the term memory throughout this chapter, we are referring to volatile memory, that is, memory that requires a power source in order to retain data. More commonly referred to as RAM (Random Access Memory) or SRAM (Static Random Access Memory) throughout the computing industry, this volatile memory is the workhorse of any computing device, in which nearly every operation's result will be stored at some point in time, even if only for a short while.

As we have covered in earlier chapters, iOS devices, like any other mobile device, are limited by available resources. Memory is one of these limitations, and proper management is essential for basic operation. Memory leaks are the most well-known memory-abusing issue that developers find themselves struggling with: orphaned objects, or objects that are no longer necessary, left hanging around consuming what little memory the device has available.

An application's level of performance is directly affected by how well it manages its memory consumption. Memory-related issues account for a significant number of application crashes within iOS, and these crashes are among the top reasons for App Store rejection. In simple terms, an application's efficiency and quality of code can be measured by how well it manages memory. Optimizing an application to use as little memory as possible demonstrates programmatic proficiency and lends significantly to performance and stability. Our goal should be to achieve each of these factors.
Memory management is the concept of allocating, using, and deallocating memory throughout the life of an application. This is done by creating objects as they are needed and destroying them when they are no longer relevant. Of course, it goes without saying that objects that may still be in use or necessary should not be destroyed prematurely, and therein lies the reason behind understanding and implementing proper memory management. More specifically, throughout this chapter we will cover the following areas of memory performance:

• Garbage collection
• Alloc
• Dealloc
• Copy
• Retain
• Release
• Autorelease
• didReceiveMemoryWarning
Garbage collection
The term 'garbage collection' in programming refers to the process of managing memory in an automated or semi-automated fashion. Memory management is a critical aspect of performance, and garbage collection is one of a developer's more powerful weapons against misused and abused memory allocation. Garbage collection can simply be explained as a process that a particular system uses to reclaim memory occupied by objects which are no longer in use by an application or system. In theory, objects that are orphaned and no longer accessible or necessary can be swept up and discarded by an automated garbage collection process, freeing valuable memory for use once again. Most languages available today have some form of garbage collection, ranging from fully automated processes that remove every bit of memory liability from the developer, to hybrid manual and automated processes that require a little more attention to memory consumption details.
However, most by definition does not mean all, and in this regard, most does not include Objective-C on iOS. iOS does not include traditional garbage collection, and memory management must be performed somewhat manually. This makes it an even more important task that cannot be overlooked or minimized in any way.

Objective-C on iOS is not the same Objective-C animal that some might be thinking of. It is quite common for developers to be confused when reading about the capabilities of Objective-C, then reading what appear to be conflicting statements on what Objective-C can and cannot do on the iPhone. The simple and most straightforward answer is that iOS uses a streamlined version of Objective-C that plays towards the strengths of the device. If you were under the impression that Objective-C did include garbage collection operations, you are most likely thinking of Objective-C 2.0, which includes a garbage collection facility, but that facility is confined to Mac OS X on the desktop. On iOS, Objective-C, C, and C++ unfortunately do not provide automated garbage collection functionality, due to battery and processing overhead; the process of routinely sweeping system resources looking for unused objects would be an undue burden on iOS devices, according to Apple.

The decision not to include automated garbage collection within iOS means developers must pay the highest level of attention to memory usage within their applications or pay the ultimate toll of application instability. Performance in general is the number one reason that Apple chose to omit garbage collection from its devices. At one point or another Apple was faced with a critical decision: either include garbage collection and sacrifice a small percentage of performance, and with it user experience, or simply require developers to be more diligent in their memory management efforts.

The argument to exclude garbage collection for performance reasons is quite valid, because garbage collection, as it relates to performance, is rather unpredictable. As a developer, you are traditionally not in control of when garbage collection takes place, and because of this, Murphy's Law, "Anything that can go wrong, will go wrong", will surely find a way to impact you and your application. This is mostly because garbage collection is a monster unto itself, operating like a rogue object assassin that never takes orders, but only accepts suggestions or hints on what you might want cleaned up. You will not know precisely when garbage collection is going to take place, and of course it will be at the most inopportune time for your users and will affect the perceived performance of your application.
To illustrate the unpredictable effect of garbage collection, imagine a list of 1,000 items in an iOS application that a user is in the middle of scrolling through when an exhaustive garbage collection process is kicked off in the background by the operating system; scrolling momentarily becomes jerky or unresponsive because the CPU gives priority to the collection operations. No doubt this is a scenario that Apple considered, and even experienced, when making the decision to exclude garbage collection from iOS. Without garbage collection, a user can flick through a lengthy list of items in iOS and object creation, destruction, and re-use has already been determined, so performance is maintained at the level of the rendering operations.

There are many arguments for and against garbage collection and they are most definitely outside the scope of this chapter; however, as it pertains to performance, we should not have a negative view of garbage collection's exclusion from iOS. In fact, as developers we pride ourselves on creations that perform as we intend them to, and palm our faces when they don't. As useful and powerful as garbage collection is, ultimately more control and greater levels of performance can be achieved and maintained when this responsibility is solely ours.

A common prediction was that iOS would eventually include garbage collection, much like OS X; however, when Apple announced LLVM 2, we were surprised to learn about ARC (Automatic Reference Counting), which at first glance appears to be an elegant solution for decreasing memory waste while not giving away precious CPU cycles. ARC is a form of automated memory management included in iOS 5 by way of LLVM 2. When enabled, it frees the programmer from writing retains and releases, as this is processed and taken care of by the compiler. ARC is performed at the compiler level and therefore has essentially no runtime cost on iOS. ARC can be enabled and disabled as necessary within Xcode when using LLVM 2.

Developers are cheering ARC and celebrating the demise of manual memory management; however, we shouldn't be too excited to wash our hands just yet. Memory management is a critical aspect of application development in general, and regardless of ARC, garbage collection, or the next iteration years from now, understanding how memory is managed is important. Increases in processing power and the eventual inclusion of multi-core processors in iOS devices mean that any form of garbage collection will be executed with a less negative impact.
With garbage collection in the rear-view mirror for the time being, we turn back to memory management in general. Managing specific object memory will be very much limited to the following class and instance methods, which we will cover individually in this chapter:

• alloc
• dealloc
• copy
• retain
• release
Before we dive into the specifics of each of these methods, we'll touch upon the object lifecycle and how object memory is managed from a language perspective. Objective-C uses a retain count system to identify when an object is no longer being referenced and is safe to be released. Retain counting is more commonly known as reference counting to the non-Apple world and can be used interchangeably to describe this same simple principle. An object's retain count is incremented whenever a reference to the object is created and decremented when a reference is removed. When an object's retain count reaches zero, it is marked for deallocation where memory is eventually reclaimed for use. NSObject is responsible for reference counting and as long as the retain count of an object is greater than zero, the object will remain in memory and available for use.
Reference counting is handled by two methods defined by NSObject and inherited by its subclasses:

• - (id)retain
• - (oneway void)release

retain increments the receiver's reference count, while, inversely and as expected, release decrements the receiver's reference count.
When an object's retain count reaches zero for any reason, the object's dealloc method will be automatically invoked, giving the object an opportunity to clean up resources before it is finally deallocated. Reference counting is a balanced system, meaning that for every retain (increment), there must be an opposite and equal number of release (decrement) method calls in order for an object's reference count to equal zero and be deallocated.
Objects left standing with a retain count greater than zero will of course remain in memory, regardless of whether or not this was the intention. Objects that prematurely reach a retain count of zero will be deallocated, and any future calls to the object will cause an application to crash. Once the deallocation process is started, there is no turning back and the object will be destroyed.
Alloc
When an object is allocated using alloc, its retain count is automatically set to 1. In fact, if a method's name begins with or includes alloc, copy, or new, the returned object's retain count is 1 and you are responsible for managing it. Every other method that returns an object will return an autoreleased object, and you should never call its release method unless you have specifically retained it. In the following example, myObject is allocated with the class method alloc, which creates the appropriate memory space for myObject and sets its retain count to 1:

MyObject *myObject = [[MyObject alloc] init];
// myObject's retain count equals 1 when allocated
Dealloc
Implementing a dealloc method provides an object with an opportunity to release any ownerships it might have accumulated throughout its existence. dealloc will be invoked automatically when an object's retain count reaches zero. It is nearly guaranteed that it will be invoked, but exactly when is not something that can be accurately determined. There are instances when the dealloc method may be delayed or even skipped entirely, because of bugs or application tear down.
The dealloc method should never be called directly unless you are specifically invoking the [super dealloc] implementation within a custom dealloc method.
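As a quick illustration, a custom dealloc override typically releases whatever the object owns and then hands control to the superclass; the instance variable below is hypothetical:

- (void)dealloc
{
    // Relinquish ownership of anything this object retained or allocated
    // (myRetainedObject is a hypothetical instance variable)
    [myRetainedObject release];

    // Always invoke the superclass implementation last
    [super dealloc];
}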
Copy
copy makes a copy of an object and returns it with a retain count of 1. As will be covered shortly, when you copy an object you become the owner and are responsible for relinquishing your ownership when you are finished with the object.
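For example, copying a string hands us an object we now own and must eventually release; a minimal sketch:

// stringWithString: returns an autoreleased object that we do not own
NSMutableString *original = [NSMutableString stringWithString:@"widgets"];

// copy returns an object with a retain count of 1 that we do own
NSString *snapshot = [original copy];

// ...use snapshot...

// We created it with copy, so we are responsible for releasing it
[snapshot release];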
Retain
An NSObject instance method, retain increments the reference count of the receiving object by 1. The following code demonstrates the simplicity of the reference / retain counting system and how an exact balance over retain and release method calls must be achieved:

MyObject *myObject = [[MyObject alloc] init];
// myObject's retain count equals 1 when allocated

[myObject retain];
// myObject's retain count is incremented by 1 and now equals 2

[myObject release];
// myObject's retain count is decremented by 1 and now equals 1
An additional call of the release method would decrement the retain count to 0, where the dealloc method would be automatically invoked and the object's memory released.
Release
An NSObject instance method, release decrements the reference count of the receiving object by 1. Sending a message to an object whose retain count has already reached zero, and which has therefore been deallocated, will cause an application to crash. In my experience, this is one of the more common mistakes that developers make in Objective-C: simply losing track of an object's retain count and over-calling the release method, which ultimately crashes the application. If you are not yet familiar with over-releasing an object, the result, as mentioned previously, is a sudden crash that at times can be difficult to locate, especially when dealing with autoreleased objects that live until the end of the event cycle, which we'll touch upon later. Knowing when and where to release an object is simple, once you understand the theory behind the principle.
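The crash pattern itself is easy to reproduce; a deliberately broken sketch:

// INCORRECT: deliberate over-release, shown only for illustration
MyObject *myObject = [[MyObject alloc] init]; // retain count 1

[myObject release]; // retain count 0, dealloc is invoked
[myObject release]; // message sent to a deallocated object: crash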
Apple uses the Object Ownership Policy concept to describe proper implementation of the object retain and release principle. Essentially, if you take ownership of an object by incrementing its retain count, then you are responsible for releasing this object when you are finished with it. Additionally, you should never release an object for which you do not have an ownership claim. Multiple owners may exist on the same object and, of course, without an owner the retain count of an object would be 0, in which case the runtime would deallocate and destroy the object automatically, as we covered earlier. Apple's Object Ownership Policy is as follows:

• You have ownership of every object you create with alloc, copy, or new, and permutations of these
• You take ownership of any object with the retain method
• An object can have multiple owners and regularly does
• You relinquish ownership of an object with the release method
• You are responsible for releasing ownership of objects when you are finished with them
Following Apple's own Object Ownership Policy is good practice and highly encouraged, as it creates a standard for how we interact with objects as well as how we fundamentally architect them and their behavior. To demonstrate this policy in action, we'll take a simple look at a social-context code example that creates two objects and performs some very basic retain and release operations. This example shows how adhering to the Object Ownership Policy can create sanity out of an otherwise wild, wild west scenario:

- (void)objectOwnershipPolicyExample
{
    /*
     Creates the tod object using alloc
     Retain count is 1, we are the sole owner
    */
    MyClass *tod = [[MyClass alloc] init];

    /*
     Creates the elena object using alloc
     Retain count is 1, we are the sole owner
    */
    MyClass *elena = [[MyClass alloc] init];

    /*
     Invokes the setSpouse method

     In this exercise the setSpouse method retains the passed object
     for its own purposes; in general we may never really know this
     is happening and should have no concern as long as we follow
     the Object Ownership Policy

     The tod object is now an owner of the elena object and the
     retain count is now 2
    */
    [tod setSpouse:elena];

    /*
     Release the elena object, decrementing the retain count and
     relinquishing our ownership and interest in the object

     The retain count of the elena object is now 1
    */
    [elena release];

    /*
     Releasing the tod object will decrement its retain count to 0

     A retain count of 0 will cause the object to be marked for
     deallocation, where the dealloc method will be invoked and it
     will clean up and release any ownerships that it might have

     The retain count of the elena object will then be 0, where it
     will be marked for deallocation and the dealloc method invoked
     appropriately
    */
    [tod release];
}

The setSpouse method of the tod user object simply retains the passed object for its own purposes, whatever they may be. In this example, let's assume that additional operations may need to take place within the tod user object, and the tod object cannot be sure that the passed object will still be around when it is needed, so it takes an ownership stake in the object for certainty. Essentially, specific retention is necessary to ensure the object is not released back to the system before the tod object is completely finished with it:

- (void)setSpouse:(MyClass *)value
{
    /*
     In this example the passed object is retained locally within
     the instance

     Retain count is now 2, we are a shared owner
    */
    if (spouse != value)
    {
        // Release any previously retained spouse before replacing it
        [spouse release];
        spouse = [value retain];
    }

    /*
     NSLog the retain count of the spouse object

     The retain count will be 2
    */
    NSLog(@"Spouse retain count: %d", [spouse retainCount]);
}
Within the dealloc method of the tod and elena objects, we perform the following necessary steps to clean up any ownerships that may have been taken before they are destroyed:

- (void)dealloc
{
    /*
     This method will be invoked when the retain count of this object
     reaches 0

     We perform cleanup operations by relinquishing ownership of all
     objects which we have retained, before we are deallocated
    */
    [spouse release];

    [super dealloc];
}
Objects that are created without using alloc, copy, or new are returned as autoreleased objects and therefore should not be manually released; otherwise the retain count will end up unbalanced and an application crash will likely occur. Releasing objects without ownership is unfortunately quite a common practice and leads to severe application instability, as the following example demonstrates:

NSNumber *myNumber = [NSNumber numberWithInteger:5];

// Perform various operations here

// INCORRECT: we never took ownership of myNumber
[myNumber release];
As you can see, myNumber was created without using alloc, copy, or new, so it is an autoreleased object; ownership is not implied, and the object's retain count will end up unbalanced when it is released and later deallocated by the pool, ultimately causing a crash. We can avoid this by simply paying closer attention to object ownership and adhering to Apple's suggested Object Ownership Policy guidelines.
Autorelease
The autorelease method adds the receiver to the current autorelease pool, where it will be released, and potentially deallocated, when the pool is drained. autorelease is more than a simple convenience method and is extremely useful in situations where a method returns a newly created object, and releasing it before returning would otherwise be problematic. Using autorelease in this manner allows the returning method to fulfill its release obligation, while allowing the object to stay alive long enough for the calling method to perform its actions before it is destroyed.
When an autorelease pool is released, the pool iterates over each object it contains and calls release to balance their retain counts. Autorelease is not garbage collection; it is simply a mechanism provided to allow a developer to keep an object alive until the end of the current event cycle or run loop. The following code example demonstrates the INCORRECT way of returning an object from a method. Methods that create objects for the sole purpose of returning them must have a way to properly release them without involving overly complicated patterns:

- (MyClass *)returnSomething
{
    // INCORRECT

    /*
     Creates the object myObject for return using alloc,
     giving it a retain count of 1
    */
    MyClass *myObject = [[MyClass alloc] init];

    /*
     Returning this object without properly releasing
     ownership will cause a memory leak
    */
    return myObject;
}
The myObject object will be returned, but will remain in memory with a retain count of 1. A very poor coding decision at this point would be to call release from the calling method after receiving the returned object; this is highly undesirable and very poor coding practice. This is precisely where autorelease comes into play: you might need an object to live long enough for a method to return it, but not so long that it becomes a memory management issue later on.

- (MyClass *)returnSomething
{
    // CORRECT

    /*
     Creates the object myObject for return using alloc,
     giving it a retain count of 1
    */
    MyClass *myObject = [[MyClass alloc] init];

    /*
     Sending myObject the autorelease message adds the object to the
     autorelease pool, where it will be released properly at the end
     of the current event cycle and deallocated when its retain count
     reaches 0
    */
    [myObject autorelease];

    /*
     Returning this object is now memory leak safe
    */
    return myObject;
}
An alternative to sending [myObject autorelease] on its own line would be to wrap the instantiation of the MyClass object in an autorelease message, such as the following:

MyClass *myObject = [[[MyClass alloc] init] autorelease];
This is simply user preference and has no effect on performance or efficiency. Personally, I prefer this form so that I can see at a glance which objects must be manually managed and which I can forget about completely. An additional alternative would be to simply return the object with the autorelease message on a single line, such as the following:

return [myObject autorelease];
Again, user preference.

The event cycle, as we've mentioned several times throughout this chapter, is the logical run loop that is performed within a running application. As an application waits for events, it is essentially looping through the event cycle, performing tasks as they occur, in order. The application and component lifecycle will be covered in more depth later on; however, as it pertains to memory performance, the following is important to understand here and now. The autorelease pool, which holds references to objects that have been sent the autorelease message, is processed at the end of each event cycle. When processed, the autorelease pool iterates over and calls release on each object within the pool, releasing, deallocating, and destroying objects as necessary. It's important to note that the autorelease pool is not an all-powerful pool of ultimate object destruction; it only decrements the retain count of an object once for each time the object exists in the pool, and an object can appear multiple times in the pool.
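When a tight loop creates many temporary autoreleased objects, it can also help to manage a pool of your own rather than letting everything pile up until the end of the event cycle; a minimal sketch:

// Drain temporary autoreleased objects ourselves instead of letting
// them accumulate until the end of the run loop's event cycle
NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

for (int i = 0; i < 10000; i++)
{
    // Each temporary string is added to our local pool
    NSString *temp = [NSString stringWithFormat:@"item %d", i];
    NSLog(@"%@", temp);
}

// Draining the pool sends release to every object it contains
[pool drain];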
didReceiveMemoryWarning
Knowing exactly when your application is about to run out of memory can be quite valuable. Of course, not running out of memory would be quite a bit more valuable, but we'll take what we can get. Implementing the didReceiveMemoryWarning method on a view controller can be an additional safety net, and although it's not a guarantee that your emergency memory cleanup measures will prevent a crash, it is one additional layer of stability that should not be overlooked.
- (void)didReceiveMemoryWarning is called prior to an out-of-memory crash and gives the developer an opportunity to free up memory to avoid the condition, as follows:

- (void)didReceiveMemoryWarning
{
    // The default implementation releases the view if it doesn't have a superview
    [super didReceiveMemoryWarning];
}
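A slightly fuller override might also discard anything the controller can cheaply rebuild later; the instance variable below is hypothetical:

- (void)didReceiveMemoryWarning
{
    // The default implementation releases the view if it has no superview
    [super didReceiveMemoryWarning];

    // Throw away anything we can rebuild later, such as a local data cache
    // (cachedWidgets is a hypothetical instance variable)
    [cachedWidgets release];
    cachedWidgets = nil;
}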
Within the iOS Simulator, you can manually trigger a memory warning from the Hardware menu (Hardware | Simulate Memory Warning).
Summary
Memory management, as we've covered, is far more important on mobile devices with limited resources than it is on their desktop cousins, which by comparison have memory resources dripping from every seam. An application's quality and success can be measured by how well it manages memory, and it wouldn't be too much of a stretch to say that a developer's ability can be measured in much the same way. A developer should strive to become as intimate as possible with every aspect of a programming language and the deployment environment. Understanding memory and the management process is critical for success.
iOS developers must pay special attention to memory allocation, use, and destruction due to iOS's lack of garbage collection. That being said, the lack of garbage collection should not be looked at negatively, but more as an opportunity for finer control over the memory management process. As was covered in this chapter, garbage collection is not an operation that can be routinely relied upon to work properly, without interfering with performance, in its current state on current device hardware. Apple chose performance and user experience over developer convenience, and after a good amount of exposure to the iOS memory management process, it is safe to assume that we will end up on the same page: finer memory controls equal stability and performance. We can cheer ARC in LLVM 2 along with other iOS developers worldwide, but we should keep our memory management skills sharp for maximum performance, for when we can't rely on automated systems.

We covered the proper procedures to alloc, retain, and release ownership of objects according to Apple's Object Ownership Policy guidelines, which effectively map out a standardized method for ensuring balanced usage of manual reference counting. In addition to object ownership and reference counting, we covered object destruction and the deallocation process, implementing proper dealloc methods to ensure objects clean up after themselves. Lastly, we took a deeper look into the autorelease method and how it balances simplicity and necessity, especially when returning newly created objects that must live long enough for the receiver to retain them if necessary.

Specifically, we covered garbage collection, automatic reference counting, and memory management with alloc, dealloc, copy, retain, and release. We detailed the purpose of autorelease, as well as when and how to properly take advantage of it. In the next chapter, we break down and detail the lifecycle of objects as well as application build-up and tear-down methods.
Application and Object Lifecycles

It's been a few chapters since we used an automotive analogy, and right about now seems like a great opportunity for another one. Let's imagine we're standing at the local automotive center, speaking with a performance specialist about increasing horsepower in our brand new classic American sports car. On top of that, the specialist confidently recommends installing a super-charger along with intake modifications and a new high performance shift knob. Puzzled, we might ask the specialist how exactly a 'high performance' shift knob will increase performance. The specialist hesitantly replies that he's not quite sure how it really works or why, but he just knows that when he screws one on everything gets quicker. At this point it wouldn't be a bad idea to get the keys back and research another auto shop.

An answer like that doesn't inspire too much confidence and isn't acceptable in any industry, including ours. When we ask the opinion of an assumed expert to explain the details of a particular subject, we anticipate they know every last detail. We expect that level of knowledge simply because without it we know they don't possess the necessary skills to finish the job or, for that matter, finish it correctly. If we don't feel comfortable with others performing complicated tasks without a deep understanding of the subject matter, then why should we feel comfortable taking on similarly complicated tasks with the same relative lack of understanding? The answer to that question should be quite obvious. We shouldn't.
Earlier we covered the concept that performance isn't an accident; it takes dedication to understand the intricacies of our programming language and how these affect the final application. We don't measure performance by what feels fast or what appears to be quicker; we measure performance with knowledge and detail. We learn and gain an understanding of the individual processes and components and build performance one piece at a time. A more intimate knowledge of programming lifecycles is very significant when working to achieve high levels of performance. Knowing each and every process that takes place during startup, allocation, initialization, usage, and destruction is the key to finding those otherwise hidden performance gains. We have a choice to either float along in the middle of the pack, or dig a bit deeper and increase our understanding to the benefit of every line of code we create hereafter.

Keeping with our earlier analogies, sports cars are designed and built with focused attention on the intricacies of every component. Performance loss and gain happens with nearly every decision made throughout the design and build process. Each component is fully understood individually, before eventually being combined into the greater product. Every attribute of a vehicle component is thoroughly analyzed and its weaknesses and strengths are tested and measured before being selected and assembled into the larger result. The automotive industry has specific reasons for their high levels of component understanding, a few of these being safety, reliability, performance, and cost.

Application development has its own relative reasons for adherence to standards. Reliability, performance, integrity, and security are a few that come to mind, yet there are dozens if not more additional reasons to increase our own understanding of every aspect of our chosen development languages. It's no surprise that the same programming task can be done in a multitude of ways; however, every programming language has a preferred vector for performing tasks as efficiently as possible. Objective-C objects and components were designed to be implemented in specific ways following standardized conventions. Learning and adhering to these conventions creates uniformity, as I have been preaching from my programming soapbox for a few chapters now. Adherence will result in cleaner code that will no doubt organically perform better.
In this chapter, we focus more on the technical aspects of our project application and its objects and components. More specifically, we dig deeper into each of their lifecycle processes and how we can take advantage of this knowledge to increase performance. Each of the following areas will be covered in detail:

• Mise en place
• Application lifecycle
• Application startup sequence
• Application execution
• Application termination sequence
• awakeFromNib
• application:didFinishLaunchingWithOptions:
• applicationDidBecomeActive
• applicationWillEnterForeground
• applicationWillResignActive
• applicationDidEnterBackground
• applicationWillTerminate
• Object lifecycle
• Object init
Mise en place
Literally meaning "putting in place", mise en place is the French term used widely throughout the culinary world as having everything in its place, or a place for everything. A professional chef prepares ahead of time for the scenarios that might present themselves. They know which ingredients to prepare and where they need to be in order to efficiently produce quality products at mind numbing speed. I won't be the first person to compare cooking to development and surely not the last, but there are direct correlations between almost every aspect of cooking and programming.
As developers, mise en place, or having everything in its place, is something that is commonly overlooked. Although a million or more ways exist to perform any given function, there are more right and more wrong ways of doing something. With cooking, it doesn't matter so much which side of the counter the scallions are prepared on, or whether the front or back burner is used for reductions. However, it does matter significantly whether the ingredients for the reduction are prepared and measured out ahead of time, and in what order they are added. Knowing where code belongs and the correct order in which it is executed is essential to maximizing efficiency and performance, while messy and convoluted source code is a cheap recipe for nasty little bugs.

Let me share a frequently occurring experience that demonstrates the point I am trying to make. Routinely, I am asked by colleagues to review source code for varying reasons, and unfortunately, more often than I'd like, I receive a single file or a small number of project files in which the line counts exceed 10,000 in each individual file. Of course, these projects are in some sort of working order apart from the specific bug or posed question; however, class names, method names, functions, and their placement are hardly predictable and extremely difficult to follow. This type of non-structural structure is chaotic and as far from order and efficiency as source code can get. Even simple maintenance tasks become cumbersome, even with the best source code management tools available. In basic terms, it is much easier to select a specific encapsulated class for review than to scroll through 10,000 lines of code looking for a familiar method signature.

Without making a blanket statement about a finite limit on the appropriate number of lines of source code in a single file, there are of course exceptions. What I am attempting to draw attention to is that source code is designed to be segmented; the core principle of every current programming language is to build structure within what would otherwise appear to be chaos. Including everything in a single file is a recipe for disaster and is a sign of poor project design. Assets, classes, libraries, and frameworks are intended to be modular and support this architecture of structure and order.
Single classes that do everything are a nightmare to debug and troubleshoot and have absolutely no place in a design or architecture that seeks performance. Application performance is achieved by following strict guidelines and development practices. Interface design and nib content are no different. Single nib files that contain every view of an interface are excessive and require massive amounts of resources to load and manipulate throughout the lifetime of an application. A single application-wide nib packed with views will need to be loaded at application initialization, and will of course cause loading delay and resource waste. As developers, we have more granular control over functionality and performance when we modularize our code, and interface design is much the same. Ideally, each view controller should have its own nib file, allowing for more control over application requirements and resource consumption. Being able to load and unload specific nib content at will is an almost critical requirement for any project. Be alert and don't trap yourself in monolithic resources that leave little maneuverability.

In addition to making me happy by putting every piece of code in its place, proper project organization has a great impact on a user's perception of performance, and that perception has a great impact on the success of an application. An application with a lengthy launch delay is already affecting the perceived quality of that application. As users wait, they become less enchanted, more impatient, and ever more likely to look for an alternative solution. Take a moment, grab the nearest iOS device you have, launch a dozen or more Apps, and take note of the launch times for each of them. When you do reach an App that takes more than an instant to become responsive, you might immediately feel a sense of urgency or anxiety. These feelings are not uncommon; we have been conditioned on iOS devices to see near immediate results with almost every action. This is not accidental; the device was intended from conception to perform extremely well in nearly every situation. Not to be overly dramatic, but this emotional response to the software is in fact very real; iOS users are presented with some of the most advanced applications and user interface designs our world has yet seen, and when they see something that is sub-par they know it and are quick to share their discontent. We should strive to take advantage of every opportunity that is presented to maintain or increase the perception of quality and performance of our application.
Work to impress your users and leave them scratching their heads asking, "How'd they do that?" In earlier chapters, we discussed the importance of having a strong architectural design for an application. Without this, an application will succumb to mess and disarray, and ultimately performance and stability will suffer. Take the time to decide early on in your project where your code should really be implemented. Determine which initialization, load, appear, and unload methods will serve your specific purposes best and stick with them. Take the time to appropriately organize your source code; separate classes, nibs, and the like into small, manageable components that can be read in, utilized, and disposed of properly, while limiting iOS resource waste.
Application lifecycle
Commonly referred to as the application lifecycle, this is the process that an application goes through from launch to termination. Execution of an application, in the sense we are interested in, begins with the user tapping or launching our application. Once this process begins, iOS passes foreground control to our application for the duration of our application's runtime. This startup process is defined by a series of steps and is monitored extremely closely by iOS to ensure that the integrity of the operating system and the device as a whole is not compromised. As an example, iOS imposes strict time limits within which applications must perform their operations or be forcefully terminated and removed from memory. This type of enforcement within the application lifecycle has a few benefits. It forces developers to pay much greater attention to the architecture of their application as well as how they consume and manage resources. Additionally, it creates a comfortable and predictable environment for iOS users who have become accustomed to applications launching and becoming responsive almost immediately.
After all, this is Apple's sandbox and if we want to build a castle, we have to play with their tools and by their rules. I don't want to venture into the debate on development freedom and closed versus open platforms, but I will say this: iOS is and has always been dedicated to the user experience. Users and more specifically their pocketbooks are what Apple is working towards, and Apple is not in the business of impressing developers or open / closed platform enthusiasts. The success of iOS devices isn't because of one single application or the device design itself. It is the combination of every factor including the software we build. Even creativity needs boundaries and some may disagree, but Apple has a difficult job of allowing developers to be as creative as possible while still protecting users from poorly developed applications and security risks. Regardless, as developers we need to understand as much as possible about the core operating system, the language, and its implementation to use this creative license as effectively and efficiently as we can. The application lifecycle can be broken down into the following three major phases:

• Startup sequence
• Execution
• Termination sequence
Each of these major phases has a set of ordered operations that take place and ultimately make up the body of the lifecycle. We cover both the startup and termination sequences in great detail and leave the execution phase to the other chapters in this book as this phase is less specific.
Application startup sequence
The following diagram shows the order in which the primary startup steps take place, both for a fresh launch and for an application resuming from the background:

[Diagram: Application startup sequence. A fresh launch proceeds through main(), UIApplicationMain, creation of the application object (singleton), loading of the main interface nib file, init, awakeFromNib, application: didFinishLaunchingWithOptions, and applicationDidBecomeActive, after which the application is running. An application already suspended in the background instead receives applicationWillEnterForeground followed by applicationDidBecomeActive.]
An iOS application that is launched by the user and is not currently suspended in the background by way of iOS multi-tasking will proceed through the following launch steps when started:

1. The iOS application is launched by the user.
2. The main() function is called.
3. The UIApplicationMain function is called.
4. The application object is created (singleton).
5. The application's main interface nib file is loaded.
6. The init method is called.
7. The awakeFromNib method is called on all objects that the nib initialized.
8. The application: didFinishLaunchingWithOptions method is called.
9. The applicationDidBecomeActive method is called.
10. The application is now running.

An iOS application that is launched by the user and exists in the suspended multi-tasking state will execute the following shorter list of launch steps when started:

1. The iOS application is launched by the user.
2. The applicationWillEnterForeground method is called.
3. The applicationDidBecomeActive method is called.
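As a point of reference, steps 2 and 3 of a fresh launch correspond to a small amount of boilerplate that the default Xcode templates of this era generate for every project. The following is a minimal sketch of a typical pre-ARC main.m, assuming the principal class and application delegate are defined in the main nib file; it is illustrative rather than anything specific to your project:

#import <UIKit/UIKit.h>

int main(int argc, char *argv[])
{
    // Create an autorelease pool for the lifetime of the application
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    // Hand control to UIKit; UIApplicationMain creates the application
    // object (singleton), loads the main nib, and starts the run loop.
    // Passing nil for the last two arguments tells UIKit to use the
    // principal class and delegate defined in the main nib file.
    int retVal = UIApplicationMain(argc, argv, nil, nil);

    [pool release];
    return retVal;
}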
Application execution
The second part of the three major application phases, application execution theoretically begins directly after the application startup sequence sends its final messages and the application is now free to proceed at will. In iOS applications, the final message we receive is not in fact application: didFinishLaunchingWithOptions as many think, but applicationDidBecomeActive because the state change from inactive to active is still quite relevant. At this point in the application progression, you are free to perform any and all tasks according to your application, project, and design needs. This phase is covered quite heavily throughout the other chapters of this book, thus there is no reason to duplicate our efforts. We will continue on to the last phase with the details of the termination sequence.
Application termination sequence
The termination process is just as important to understand. With the introduction of multi-tasking, performance is no longer limited to startup and execution. Our applications must terminate properly and perform the transition to background suspension with the same discipline and attention that our startup process demands.
iOS does not automate the process of slowing down animation frame rates or automatically pausing a running game. Neither does it save our application state or ensure that database or network connections are properly disconnected. It is the application's responsibility to have the appropriate delegate methods available and configured to tie up these loose ends when a user presses the home button to quit or switch applications.

The application termination process, like the startup sequence, depends upon whether the application supports multi-tasking or background suspension. Two logic paths are available, determined by whether or not the Boolean value UIApplicationExitsOnSuspend exists in the application's Info.plist file. The following diagram of the application termination process shows the difference in steps performed, depending upon whether the application is placed into a suspended state or terminated fully:

[Diagram: Application termination sequence. When the user requests an exit and multi-tasking is supported, applicationWillResignActive is called, then applicationDidEnterBackground, and the application is suspended. When multi-tasking is not supported, applicationDidEnterBackground is called, then applicationWillTerminate, and the application is terminated.]
An iOS application that is terminated while multi-tasking is supported will perform the following steps:

1. The user requests to terminate the iOS application.
2. The applicationWillResignActive method is called.
3. The applicationDidEnterBackground method is called.
By default, all iOS applications support the background suspend state, or multi-tasking feature. A developer can choose not to support multi-tasking and opt to have their application fully terminate when the user quits the application by pressing the home button on the iOS device. As mentioned previously, this is achieved with the optional UIApplicationExitsOnSuspend setting within the application's Info.plist. An iOS application that is terminated by the user and does not support multi-tasking will not enter the background state and will perform the following steps:

1. The user requests to terminate the iOS application.
2. The applicationDidEnterBackground method is called.
3. The applicationWillTerminate method is called.

When the application terminates, the delegate method applicationWillTerminate is called and shortly thereafter the application is removed from memory completely.
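Opting out of background suspension is a single Boolean entry in Info.plist, and an application can also inspect its own setting at runtime if that is ever useful. The following small sketch, not required by the termination sequence itself, simply reads the UIApplicationExitsOnSuspend key from the main bundle:

// Read the optional UIApplicationExitsOnSuspend flag from Info.plist
NSNumber *exitsOnSuspend = [[NSBundle mainBundle]
    objectForInfoDictionaryKey:@"UIApplicationExitsOnSuspend"];

if ([exitsOnSuspend boolValue])
{
    // The application will terminate fully when the user presses home
    NSLog(@"Multi-tasking is disabled; expect applicationWillTerminate");
}
else
{
    // The application will be suspended in the background instead
    NSLog(@"Multi-tasking is supported; expect applicationDidEnterBackground");
}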
Application init
init is the initialization instance method that sets initial values and performs other build-up operations specific to an object, component, or application.
The application's init method is called during the loading of the application's nib file, while init methods for specific objects and components are called in the more familiar and traditional way that we are accustomed to. When overriding the init method, always remember to call super to ensure the application is properly built up before any other operations are performed. An example of an application init override method is as follows:

- (id)init
{
    // Call super to initialize the application and prepare it for use
    if (self = [super init])
    {
        // init any values here
        x = 0;
        y = 10;
    }
    return self;
}
An important point of interest regarding the init method is its rampant abuse. More often than they should be, application init methods are overloaded with initialization operations that inappropriately delay the startup of an application. Do not abuse the init method; it is meant specifically for initialization needs, not for loading every asset your application requires. It should go without saying that during application launch we want to avoid the practice of overloading and slowing down launch; however, initialization methods are frequently packed with everything that an application might need throughout its runtime. A better practice is to use each of the initialization and loading delegates as they were intended, to modularize application startup. Many times the application as a whole is much better served when data is loaded within other, more appropriate buildup or access methods, like those we will discuss shortly. A common strategy to combat overloaded initializations is to use specific accessors to load data the first time we request it, as the following example demonstrates:

- (NSDictionary *)myDictionary
{
    // Good practice: load the dictionary only on first access
    if (myDictionary == nil)
    {
        myDictionary = [self loadGiantDictionary];
    }
    return myDictionary;
}
In the previous example, rather than loading myDictionary within the init method of the application, essentially delaying startup, we can load this giant amount of data when it is actually going to be needed and of course do that only once or whenever necessary. Please note that instance variables are by default initialized as nil, and that simple test in the previous code allows us to know if myDictionary has been previously loaded or not and prevents us from continuing to waste resources by repeatedly loading it if it isn't necessary.
The extremely common and poor development practice alluded to in the previous code is to load the giant dictionary in the init method of the application or object, as the following code demonstrates:
- (id)init
{
    // Bad practice
    if (self = [super init])
    {
        myDictionary = [self loadGiantDictionary];
    }
    return self;
}
I like to call this practice on demand loading, a principle that can help alleviate the confusion that surrounds when and where operations should take place. It is sometimes more generally known as lazy loading. It is the concept that data will be loaded at or near the time it is needed, rather than being loaded too early and squatting on valuable resources that could be put to better use. I prefer calling it on demand, simply because lazy loading sounds negative and seems to scare off the uninformed too quickly, before they realize what the actual benefits of the principle are.

To summarize application init, avoid the poor practice of overloading the init method and initialize only the essentials that are required for immediate startup. Give an application time to breathe upon startup and allow users to begin interacting with your application at the earliest opportunity. Applications that load significant amounts of data at application initialization are more than likely victims of poor architecture or data modeling, and we can do better than this.

Apple has spent significant time and effort building both performance and the perception of performance into its operating systems and devices, and iOS users have become accustomed to launching an application and seeing it materialize nearly immediately. Apple recommends that developers avoid initialization loading screens or splash screens that force a user to wait while an application loads. This very act goes directly against the core principles of iOS devices and will have an impact on application usage and adoption rate. In short, keep initialization methods short and use on demand loading techniques to keep users happy and limit launch time and resource waste.
awakeFromNib
The awakeFromNib method is called on every object that is initialized within a given nib file while that nib is loading. Do note, the actual awakeFromNib message is sent after every object has been initialized, thus they are guaranteed to have all outlet instance variables available. There are of course many uses relevant to the awakeFromNib method; however, it is generally used to make connections between interface objects and other objects within your application. An example implementation of this delegate method is as follows:

- (void)awakeFromNib
{
    NSLog(@"awakeFromNib");
}
In relation to performance, do be aware that overloading this method with unnecessary operations can significantly impact performance and stability, as you might expect.
application: didFinishLaunchingWithOptions
This delegate method is called when an application has completely finished launching but before it is in an active state. The main application nib will have been loaded at this point. Since iOS 3.0, Apple highly recommends that developers use application: didFinishLaunchingWithOptions rather than applicationDidFinishLaunching, even if no additional options are being passed. Options are passed within an NSDictionary and contain the specific reason the application was launched, which is shown as follows:

- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
    // Called when the application has completely finished launching.
    return YES;
}
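For example, the options dictionary can tell us whether the application was launched in order to open a URL. The following sketch checks for the UIApplicationLaunchOptionsURLKey entry (a UIKit constant); the handling shown is only a placeholder for whatever your application would actually do with the request:

- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
    // Inspect the launch options to learn why the application was started
    NSURL *launchURL = [launchOptions objectForKey:UIApplicationLaunchOptionsURLKey];
    if (launchURL != nil)
    {
        // The application was asked to open a URL; defer heavy work and
        // record the request so it can be handled once the UI is ready.
        NSLog(@"Launched to open URL: %@", launchURL);
    }
    return YES;
}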
An additional note to keep in mind is that an application that takes longer than approximately 5 seconds to initialize or reach the application: didFinishLaunchingWithOptions method will be killed automatically by iOS and the delegate method applicationWillTerminate is called just prior to termination taking place as its name implies. There are very few if any reasons at all for an application to legitimately require more than just a second or two to reach the application: didFinishLaunchingWithOptions delegate and those would be the exceptions, not the norm. Focus on using the on demand or lazy loading principle to prevent this type of scenario from occurring.
applicationDidBecomeActive
This method is used to communicate to your application that it has moved from the inactive state to the active state. An example implementation of applicationDidBecomeActive is as follows:

- (void)applicationDidBecomeActive:(UIApplication *)application
{
    NSLog(@"applicationDidBecomeActive");
}
applicationWillEnterForeground
This method became available in iOS 4 and specifically relates to applications that are transitioning from the background suspended state to the foreground. This delegate should be used to reverse any operations that took place during calls to applicationDidEnterBackground as the application entered the multi-tasking suspend state. Preparations for returning to the active state should be included here. applicationDidBecomeActive is the delegate that will be called when the application has completed the transition to the active state.
An example implementation of applicationWillEnterForeground is as follows:

- (void)applicationWillEnterForeground:(UIApplication *)application
{
    NSLog(@"applicationWillEnterForeground");
}
applicationWillResignActive
This delegate is the opposite of applicationDidBecomeActive and is used to communicate that the application is moving from the active state to the inactive state. Use this delegate as an opportunity to prepare for the state switch. An example implementation of this delegate method is as follows:

- (void)applicationWillResignActive:(UIApplication *)application
{
    NSLog(@"applicationWillResignActive");
}
applicationDidEnterBackground
Similar to applicationWillTerminate, save for actually exiting, this delegate method is the last method called when an application enters the background state in a multi-tasking application, or just prior to applicationWillTerminate in applications that do not support multi-tasking. Use this method to prepare for your application to enter the background state, regardless of multi-tasking support: save application state, disconnect from the network, clean up the disk, close database connections, and generally prepare for a proper shutdown of the application. As with previous buildup and teardown delegates, iOS provides approximately 5 seconds for this method to return or the process will be terminated and purged from memory. An example implementation of applicationDidEnterBackground is as follows:

- (void)applicationDidEnterBackground:(UIApplication *)application
{
    // Called when background execution is supported and the application
    // is about to enter the background state.
}
Where performance is concerned, be careful not to overload this method, or risk having iOS kill your application's process and severely impact operations that may be critical to your application.
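When the work genuinely cannot fit into the roughly five-second window, iOS 4 and later let an application request a limited amount of extra background time. The following sketch shows one possible, simplified approach using UIApplication's beginBackgroundTaskWithExpirationHandler:; the saveApplicationState method is a hypothetical placeholder for your own cleanup work:

- (void)applicationDidEnterBackground:(UIApplication *)application
{
    // Ask iOS for extra time to finish saving before suspension
    __block UIBackgroundTaskIdentifier taskId;
    taskId = [application beginBackgroundTaskWithExpirationHandler:^{
        // Time expired; end the task to avoid being killed
        [application endBackgroundTask:taskId];
        taskId = UIBackgroundTaskInvalid;
    }];

    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // Hypothetical cleanup: persist state, close connections, and so on
        [self saveApplicationState];

        // Always end the task when the work is complete
        [application endBackgroundTask:taskId];
        taskId = UIBackgroundTaskInvalid;
    });
}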
applicationWillTerminate
This delegate method will be called when an application is about to be terminated and removed completely from memory. This is the final method call when an application does not support multi-tasking. This method is specifically designated for cleanup and teardown operations, prior to an application being completely removed from memory. As with applicationDidEnterBackground, use this method to save application state, disconnect network resources, perform disk maintenance, close database connections, and generally prepare for eventual termination. An important note regarding the applicationWillTerminate delegate method is that iOS provides a window of about 5 seconds to complete teardown operations and return before all application processes are terminated immediately.
An example implementation of the applicationWillTerminate method is given as follows:

- (void)applicationWillTerminate:(UIApplication *)application
{
    // Called when the application is about to terminate.
}
An additional and equally important note regarding this method is that multi-tasking applications traditionally do not have their applicationWillTerminate delegate called, because they are never actually terminated and only placed in the suspend state. However, if the application is running in either the foreground or background and if iOS determines that it needs to be terminated for whatever reason, this method will be called regardless, so it is equally important to implement and maintain this delegate regardless of multi-tasking preferences.
Object lifecycle
Much like the lifecycle of an application, objects go through very much the same allocation and initialization steps followed by deallocation and eventual removal from memory. This is the process that objects step through from creation to destruction, or buildup and teardown, depending on your school of programming. In the previous chapter, we covered in depth the process of memory allocation and object retention; however, in this chapter we will look more at the individual processes that an object steps through during its life within an application. The primary steps of the object lifecycle are as follows:

• Allocation
• Initialization
• Usage
• Deallocation
Object creation happens in two simple steps: the allocation of memory and the initialization of the object itself. alloc is the allocation class method that makes the appropriate memory allocations in preparation for the initialization of the object. In Objective-C, we rarely ever see these two methods separately, but it is important to understand the difference between the two. alloc is a class method; in other words, it can be accessed without an instance of the class available, while init is an instance method and is accessible only after an object has been allocated.
Following usage, an object at some point or another has outlived its purpose and needs to be released. Its destruction is determined by its retain count, that is, the number of owners that an object has at any given time. Following convention, ownership is released when an object is no longer needed and thus the retain count should eventually fall to zero. An object remains alive as long as its retain count is greater than zero. When the retain count reaches zero the object has fulfilled its purpose, its lifecycle is coming to an end, and it is eventually deallocated and destroyed.
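To make the ownership rules concrete, the following minimal sketch walks one object through its lifecycle under manual reference counting; the particular class is unimportant and is used purely for illustration:

// Allocation and initialization: the new object has a retain count of 1
NSMutableArray *items = [[NSMutableArray alloc] init];

// A second owner takes a claim on the object: retain count becomes 2
[items retain];

// Usage phase
[items addObject:@"example"];

// Each owner relinquishes its claim when finished
[items release];   // retain count drops to 1
[items release];   // retain count drops to 0; dealloc runs and the object is destroyed
items = nil;       // avoid a dangling pointer to the deallocated object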
Object init
As discussed earlier, the init method of any object sets initial values and performs buildup operations for general usage.
It is the place where object variables should be set and essential object setup operations performed. However, it is important to note that it is not the only facility for building up object data and operations. In most current-day programming languages, creating an object is the process of allocating and initializing that object. In Objective-C, the init method traditionally follows the alloc method to bring an object to life. Within the init method it is always recommended to call the object's super init method, to ensure that any buildup necessary at this higher level is performed before we make any additional changes in data or behavior. As with the application's init method, the example init override is identical, shown as follows:

- (id)init
{
    // Call super to initialize the object and prepare it for use
    if (self = [super init])
    {
        // init any values here
        x = 0;
        y = 10;
    }
    return self;
}
The return value of any init method should be id, the object that you are initializing; the compiler will throw an error, as expected, should an invalid return type be provided. Within Objective-C, an object can have as many init methods as you prefer or require. Initialization methods of varying types, and for every specific situation, are a highly encouraged practice. An example of multiple init methods is as follows:

- (id)init;
- (id)initWithX:(NSNumber *)x;
- (id)initWithX:(NSNumber *)x y:(NSNumber *)y;
initWithX: and initWithX:y: are referred to as designated initializers and are alternatively used to initialize an object in a more specific fashion with injected variables. Less specific init methods are more than likely calling more specific designated initializers, but with default or base values. To help describe this practice, look at the following example of an object's default init method:

- (id)init
{
    return [self initWithX:[NSNumber numberWithInt:5]];
}
This default init method calls the instance init method initWithX: to set the specific instance variable x to 5. When creating classes that contain more than one initializer, a good rule to follow is that the simple initializers should be making calls into the more advanced designated initializers. Additionally, avoid the common mistake of init method abuse, as we discussed in the application init section of this chapter. Keep init methods as light as possible, or pay the performance toll each time an object is instantiated.
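Putting the pieces together, a designated initializer for the hypothetical class behind these declarations might look like the following sketch; it assumes, purely for illustration, that x and y are NSNumber instance variables owned by the object:

- (id)initWithX:(NSNumber *)newX y:(NSNumber *)newY
{
    // The designated initializer does the real work once
    if (self = [super init])
    {
        x = [newX retain];
        y = [newY retain];
    }
    return self;
}

- (id)initWithX:(NSNumber *)newX
{
    // Simpler initializers funnel into the designated one with a default y
    return [self initWithX:newX y:[NSNumber numberWithInt:0]];
}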
Summary
A powerful principle in performance-driven development is understanding the intricacies of the programming language and the operating system in which it runs. Without an in-depth knowledge of each step in the application and object lifecycles, it is virtually impossible to create an efficient and high-performing application.

We covered, and compared to programming, the principle known as mise en place, the French term used in the culinary world to describe that everything has a place. Similarly in development, every class, method, and asset has a preferred location, and preparation and knowledge are key to ensuring this higher level of order is achieved. Although the iOS startup and termination processes are relatively simple, if not known they might as well be a course requirement for rocket science. Without an understanding of these steps, the device would appear as a black box and it would be extremely difficult to locate errors in code placement or identify bottlenecks based upon improper use of startup and shutdown procedures.
This chapter provided startup and termination diagrams that show the precise delegate messaging that takes place as an application is started and terminated, as well as when it changes states between active and inactive for multi-tasking purposes. Our in-depth look into these processes identified how structure creates power and control, which ultimately leads to predictability and performance, while inversely chaos and messy code breed bugs. Apple has created an environment for creativity to thrive while balancing user experience and security at the same time. Arguments exist for and against the semi-closed Apple platform, but as developers we choose to provide solutions for users, not for Apple. And as mentioned earlier, this is Apple's sandbox and if we want to stay we have to play with their tools and by their rules.

Lastly, we covered the importance of avoiding init and other method overloading, where one might be tempted to load all data within a single method or begin loading everything required throughout the lifetime of the application at once. It is highly recommended to use on demand loading, or lazy loading, to achieve better performance results and have more granular control over iOS resources. The specific areas in this chapter that we covered in great detail are the following:

• Mise en place
• Application lifecycle
• Application startup sequence
• Application execution
• Application termination sequence
• awakeFromNib
• application: didFinishLaunchingWithOptions
• applicationDidBecomeActive
• applicationWillEnterForeground
• applicationWillResignActive
• applicationDidEnterBackground
• applicationWillTerminate
• Object lifecycle
• Object init
Animation, View, and Display Performance

A great factor in the overall success of iOS devices in the hands of consumers is the ability of the device and software to respond nearly instantaneously to user input as well as produce consistent visual stimulation. Animation, or the illusion of motion paired with tactile input, is the key architecture for a successful mobile, interactive device. Dozens of similar devices have appeared on the market since the first version of the iPhone was released back in 2007. All of them at some level attempt to reproduce not only the functionality but also the experience that iOS users have become so accustomed to. Touch with instantaneous response, quality graphics, smooth animation, and high video frame rates are only a few of the characteristics that iOS users have come to expect. None of this happened by accident, and it is the reason that the original Apple iPhone achieved the success that others had attempted and failed to achieve time and time again.

With mobile phones already so prevalent, how was Apple able to literally re-define the market? This question has no doubt been asked in dozens of corporate competitors' offices throughout the world, and there surely isn't one specific answer. Much like everything Apple creates, the iPhone was not simply a device loaded with an operating system; it was a cohesive solution that the industry was restlessly seeking. The iPhone altered the landscape of the mobile phone industry forever; by simply treating the mobile phone like a personal computer, the iPhone inherently had more value than any other device on the market. An operating system designed specifically for, and matched with, its hardware, both receiving regular updates and enhancements that add function and value, were the core features that made the iPhone a success.
Other vendors for years have constantly altered the form factor of their line of mobile devices and unimaginatively loaded it with the same uninspiring operating systems that were on previous models. The idea was that the hardware was the primary focus of consumers and that the software was less important. The hardware, as impressive as it was when the iPhone launched, and as impressive as it is today, is of course not the sole reason the iPhone is the success it is today. It is purely the combination of these two facets that users were not expecting.

Hardware aside, users have come to expect an extremely high level of refinement from iOS, and among the other obvious factors of a quality user experience, performance ranks near the top. Within the iOS interface, performance is achieved by understanding the implementation and capabilities of its view, display, and animation frameworks. The iOS SDK has been designed to lift much of the responsibility for interface performance off the shoulders of iOS developers; however, it's not something we can simply avoid or turn our attention away from. A lack of attention to display performance details will eventually pile up, pull an application down, and affect the perception of an application's quality, regardless of fact or function.

In this chapter, we cover the importance of managing animation, view, and display performance, as well as touch upon techniques that make these processes simpler, in the following areas:

• View performance
• Animated content
• Core Animation
• Item renderers
View performance
iOS applications are made up entirely of layered views, stacked one upon another. Each of these views is a rectangular area on the screen that is solely responsible for drawing content and handling touch and interaction events. Views are a subclass of UIResponder, the core class that handles touch events, and have been designed to be arranged hierarchically, with views atop others obscuring the views beneath.
As a hierarchy implies, each view has one superview and zero or more subviews. All views live within the application window UIWindow, which is actually just a view in itself and is the only view within iOS that does not have a parent superview. Managing views is very similar to manipulating the elements of an array as far as ordering is concerned; in fact, it is an array of views that we add, remove, and exchange views within. Adding a new subview to the stack is rather simple and is accomplished with the addSubview method:

- (void)addSubview:(UIView *)view;
Removing a subview is equally as easy and is achieved with the removeFromSuperview method. Note that when you remove a view from the stack, you are not actually telling the superview to remove the view, but telling the view to remove itself from its superview. Conceptually, both paths would lead to similar results; however, when you ask the view to remove itself from the superview you provide an opportunity for the view to clean up and prepare for removal.

- (void)removeFromSuperview;
The following is a listing of the methods that are used to manipulate the view hierarchy:

- (void)insertSubview:(UIView *)view atIndex:(int)index;
- (void)insertSubview:(UIView *)view belowSubview:(UIView *)view;
- (void)insertSubview:(UIView *)view aboveSubview:(UIView *)view;
- (void)exchangeSubviewAtIndex:(int)index withSubviewAtIndex:(int)otherIndex;
As performance is concerned, it is important to note that when you add a view to a superview, the superview automatically retains the newly added view. Its retain count is incremented by 1 and you are free to release this view and free its resources if it is no longer needed. All views owned by a particular superview will be released when the superview is destroyed; therefore it is important to ensure that you release any ownership on views you create, so that when the superview is destroyed, unreleased views are not leaked. Another important aspect of iOS views and displays is understanding the relationship of a view and its parent window or frame.
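A short sketch of this ownership pattern under manual reference counting follows; the label and its text are placeholders used only to illustrate the retain behavior of addSubview, and the code is assumed to run inside a view controller with a loaded view:

// Create a label; we own it with a retain count of 1
UILabel *label = [[UILabel alloc] initWithFrame:CGRectMake(0, 0, 200, 30)];
label.text = @"Hello";

// The superview retains the label, raising its retain count to 2
[self.view addSubview:label];

// Release our own claim; the superview's retain keeps the label alive
[label release];

// Later, removing the label from its superview releases that last claim
// and the label is deallocated:
// [label removeFromSuperview];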
A view is a relative and confined scope of the larger window or frame that it sits within. In other words, the view is the area that the user is seeing or interacting with, while the frame is the out-of-bounds area that the user knows is there but cannot yet see. This concept is more easily experienced and understood if we think of zooming in and scrolling around on a picture or map within iOS. Our focus (view) can be scrolled around within the bounds of the content (frame), or zoomed out, at which point the frame and view can be equal in size. The following image depicts this simple concept with a view placed within a frame. Conceptually, the view is the screen of the iOS device and the frame is the area within which the view can be manipulated, as shown here:

[Image: a smaller view region positioned within a larger frame]
Animated content
Animation is the illusion of motion, stimulating the brain with a sequence of still images that are presented in rapid succession. Each of these still images varies slightly from one to another, and when played at a constant rate they produce the illusion that the objects within these images are moving. In recent years, animation has graduated from mere eye-candy and visual entertainment to full blown interactive feedback that provides the user with context for what is happening before their eyes. When a user is first introduced to an iOS device they are presented with the lock screen, which requires a sliding mechanism to be manipulated before the phone can be used.
Without visual feedback, a user might swipe several times in different locations and at different speeds, attempting to reproduce a motion that may or may not unlock the device. However, animation directly connected to a user's sense of touch allows them to more fully interact with the unlocking experience. Animation tied to the tactile functions of iOS goes far beyond the unlocking of the device. These experiences have engraved themselves deep into our newly natural processes for manipulating two-dimensional content. Tapping, swiping, pinching, rotating, and flinging have all become so natural for iOS users and are almost always tied directly to some form of animation.

With the new craze of mobile touch devices sweeping the industry, every one of us has more than likely watched an online video demonstration or two of upcoming or newly released competition for an existing iOS device. While I myself enjoy the industry and am always interested in the latest technology announcements, one particular issue that I find frustrating to watch, or even experience for myself, is a device not immediately responding to my tactile input. We are now conditioned to expect software and hardware to respond instantaneously to every gesture and motion we make with little to no delay. Swiping once, then twice, and finally a third time before a user's input is recognized is something that is likely to prevent many competitors from achieving the high levels of market penetration they seek. As we previously covered, a user expects a touch or gesture to have a visual effect and possibly alter the context of the device they are using. This combination of visual stimulation and the sense of touch creates a connection between the virtual and physical worlds that we live in.

On iOS platforms, the primary source of animation functionality is Core Animation, Apple's high performance, two-dimensional animation framework. Every standard component animation or effect that is performed within iOS is done with Core Animation. Core Animation handles the deep and dirty bits of animation and allows the developer to simply start and stop animation operations without worrying about loops, timers, or synchronization. As with most two-dimensional animation frameworks, Core Animation uses layers to achieve its effects and performance. A layer is an extremely simple model object, very similar to a view within Objective-C. Layers encompass all that is necessary to draw, display, and animate objects within the Core Animation framework. Layers are created and manipulated with Objective-C's CALayer class, which handles positioning, size, transformations, and the layer's timing and coordinate properties.
Core Animation
Core Animation is a framework of classes designed solely for the purpose of animation, projection, and rendering. Core Animation classes have been designed to align themselves with other familiar Application Kit and Cocoa Touch architectures for ease of use. Object animation by nature has a rather high skill requirement, and Core Animation was designed to lower this requirement and increase adoption across the programming spectrum. Essentially, Core Animation has been designed to simplify many of the common but difficult animation-related tasks that we use on iOS devices every day. Apple defines the purpose of Core Animation as the following:

• High performance compositing with a simple approachable programming model.
• A familiar view-like abstraction that allows you to create complex user interfaces using a hierarchy of layer objects.
• A lightweight data structure. You can display and animate hundreds of layers simultaneously.
• An abstract animation interface that allows animations to run on a separate thread, independent of your application's run loop. Once an animation is configured and starts, Core Animation assumes full responsibility for running it at frame rate.
• Improved application performance. Applications need only redraw content when it changes. Minimal application interaction is required for resizing and providing layout services for layers. Core Animation also eliminates application code that runs at the animation frame-rate.
• A flexible layout manager model, including a manager that allows the position and size of a layer to be set relative to attributes of sibling layers.

Each Core Animation class falls into one of the following categories:

• Layer and display classes
• Animation and timing classes
• Layout and constraint classes
• Transaction and grouping classes
Layer and display classes, as detailed earlier, are the basis of Core Animation and provide the mechanism by which all animation is performed. Layers, like views, are arranged in a hierarchy called the layer tree and are drawn one atop another in order until reaching the highest foreground layer. Animation and timing classes define the properties of a layer that will be automatically animated by Core Animation. Layout and constraint classes, as the category suggests, include the necessary functionality to position layers within their superlayer. Transaction and grouping classes are responsible for defining the actions that take place as part of the animation effects, as well as grouping objects together to allow an animation to affect more than one object at a time.
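As a small illustration of how little code a basic animation requires, the following sketch uses CABasicAnimation to slide a layer's position over one second. The myView variable is a placeholder for any UIView in your hierarchy, the destination point is an arbitrary example, and QuartzCore must be linked and imported:

#import <QuartzCore/QuartzCore.h>

// Animate a view's backing layer from its current position to a new point
CABasicAnimation *slide = [CABasicAnimation animationWithKeyPath:@"position"];
slide.fromValue = [NSValue valueWithCGPoint:myView.layer.position];
slide.toValue = [NSValue valueWithCGPoint:CGPointMake(160.0, 240.0)];
slide.duration = 1.0;

// Core Animation runs the animation on its own thread at frame rate
[myView.layer addAnimation:slide forKey:@"slide"];

// Update the model value so the layer does not snap back when the animation ends
myView.layer.position = CGPointMake(160.0, 240.0);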
Item renderers
A good measurement of almost any mobile platform is how well it handles the sequential rendering of dozens, hundreds, or even thousands of objects at any given time. Within iOS, UITableViews are used quite frequently to display just about any type and amount of data. From short multi-section table views to tables with a few thousand or more rows, UITableViews represent a good portion of iOS application displays and account for a sizable amount of performance-related problems as well. The UITableView class was developed specifically for devices with small displays and provides features that help developers ensure high levels of performance almost regardless of row content. Table views are limited to a single column of cells, again due to device display limitations, and allow a developer to provide custom cell objects for each or all of a table view's cells. However, we often see table views that skip, jump, and appear anything but smooth as we scroll through the data contained within these views. To fully understand the impact that a poorly implemented UITableView can have on performance, we need to understand in greater detail how an iOS UITableView works.
Let's start by taking a look at a simple UITableView with 20 rows, numbered from 0 to 19, as shown in the following screenshot:

[Screenshot: a plain UITableView listing rows numbered 0 through 19]
In this particular screenshot, a user is only able to see a little more than 9 cells at any given time within this table view. Obviously, it would be a waste of device resources to have already rendered cells which are not yet visible to the user. This may not necessarily be an issue with 20 rows, but imagine a few hundred or even a few thousand rows of data and we can begin to see how resources might be significantly impacted. UITableView takes advantage of this obvious perspective and, in fact, only renders the cells that are visible to the user, plus a few extra for performance when a user begins to scroll the table view. The following depiction of a table view and its cells demonstrates a purpose and performance-based architecture for creating, displaying, and recycling table view cells:

[Diagram: table view cells scrolling into view from one end, scrolling out of view at the other, and re-entering a continuous recycle loop for reuse]
As a user moves through a table view, new cells are scrolling into view and existing cells are scrolling out of view. UITableView performance is determined by how efficiently this process can be performed. As the table view depiction shows, cells that scroll out of view are not destroyed but enter a queue for cell reuse. If a user has a table view containing 1,000 items and quickly scrolls through this content, theoretically, as each cell comes into view an object is created, while inversely each cell that scrolls out of view would require an object to be destroyed. As we have covered previously, object creation has an associated cost and value to an application, and we shouldn't be so quick to assume that throwing away or destroying an object is the most efficient way of handling common situations. UITableView was designed specifically with this issue in mind and provides a mechanism in which we can re-use cell objects for big performance gains.
In reality, using the same user scenario as the previous one, cells that are scrolling out of view are recycled for reuse and used for cells that are scrolling into view. Essentially, if nine rows are visible at any given time while a user scrolls through a UITableView, as few as 10, 12, or 13 cell row objects will be in use throughout the entire scrolling process, because cells are shuffled off one end, recycled, and brought in on the other end in what could be described as an infinite loop.

An analogy would be to think of UITableView cell rendering as the window in a house, which faces a set of train tracks. Imagine standing in this house, looking out of the window as train cars pass by. The window restricts the number of train cars that we can see at any given time; however, train cars are continuously entering from one side of the window and exiting the opposite side. For the purposes of our application it doesn't help performance or system resources to have every single train car rendered if only a small selection of them are visible to the user at one time. Continuing with the train analogy, it would be a significant waste of resources to destroy each train car as it passed out of view, only to spend resources rebuilding a new train car as it enters the view. It would make much more sense and increase efficiency if we could recycle train cars as they passed out of our window's view and use them for those that are entering our view.

UITableView cell rendering methods follow this on-demand or lazy loading principle, in which items are loaded only when necessary and are re-used to prevent undue burden on the processing and display systems. In the following example code, each cell that is viewable to a user is processed through logic to determine if a new cell object needs to be instantiated or if a cell object is available for reuse:

- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
    // Unique cell identifier string
    static NSString *CellIdentifier = @"Cell";

    // Acquire a reusable cell
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];

    // If a cell is not available, a new cell object will be instantiated,
    // otherwise the cell object is reused
    if (cell == nil)
    {
        cell = [[[UITableViewCell alloc] initWithFrame:CGRectZero reuseIdentifier:CellIdentifier] autorelease];
    }

    // Set cell label
    [[cell textLabel] setText:[NSString stringWithFormat:@"%d", indexPath.row]];

    // Return cell
    return cell;
}
As we discussed in earlier chapters, most things worth the resources to create have some sort of value. With item renderers this is especially true as rendering cells can be a rather expensive task when a user scrolls quickly through a long list of UITableView items. A helpful tip is to NSLog the item rendering process and monitor the creation and recycling of table view cells to ensure proper reuse is taking place. Additionally, pay close attention to object creation and releasing within table view cells as memory and resource leaks can quickly mount and become serious performance issues as users quickly scroll through table view lists.
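One simple way to act on that tip is to log the two branches of the cell lookup. A possible placement of the log statements, assuming the cellForRowAtIndexPath: implementation shown above, is sketched below:

UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
if (cell == nil)
{
    cell = [[[UITableViewCell alloc] initWithFrame:CGRectZero reuseIdentifier:CellIdentifier] autorelease];
    NSLog(@"Created new cell for row %d", indexPath.row);
}
else
{
    NSLog(@"Reused an existing cell for row %d", indexPath.row);
}

During a fast scroll, the console should show only a handful of "Created" lines followed by a steady stream of "Reused" lines; a long run of "Created" lines is a sign that reuse is not taking place.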
Summary
iOS users have come to expect highly responsive user interfaces. Interfaces that not only respond instantaneously to user input, but interfaces that respond contextually to this same user input through view changes, animation, and effects. We covered the architecture of iOS and display views and how to effectively and properly manipulate the view stack to ensure views are added, removed, and destroyed correctly without leaking system resources. Core Animation was introduced as the high performance two-dimensional animation and transformation framework that is responsible for all iOS animation.
We covered the importance of understanding UITableViews and how table view cell recycling increases performance by reducing the amount of system resources that are needlessly wasted. The specific areas in this chapter that we covered in great detail are the following:

• View performance
• Animated content
• Core Animation
• Item renderers
Database and Storage Performance

Almost every application today, regardless of platform, takes advantage of at least some type of storage mechanism. Whether it is user and application preferences stored locally, or remote disk and database storage facilities, applications frequently require data to persist across usage instances or, in many cases, from device to device. Storing data is, in theory, routinely simple; however, storing data correctly and efficiently is a completely different animal, with both optimizations and pitfalls scattered about.

Over the years I've witnessed and worked with large, complicated storage architectures engineered and deployed with little or no knowledge of the technologies employed and, as you can imagine, these solutions leave little to be proud of, are saturated with bugs, and of course performance is very much non-existent. Several of these systems that I have had the extreme displeasure of working with remain in this same unforgiving state even years after the issues have been identified and the creators made aware of the problems. The reason: change isn't always possible. Once storage architectures are in use, the fluid ability to dynamically alter the way in which data is stored and retrieved is no longer simple, or even possible. Versioning of storage mechanisms, backwards compatibility, complicated polymorphic solutions, integrity, and even data loss are only a small selection of the issues that arise when storage architectures change mid-stream.

As a matter of fact, while writing this particular chapter I opened up an extremely common word-tile game that my wife and I regularly play on our iPhones, and experienced a data migration that required me to wait, with a warning not to exit the application while the process completed. Although this migration was quite successful and required only a handful of seconds to complete, the risks of data loss because of failed migration logic, or users not heeding warnings, are an almost sure bet.
The key is to avoid these situations as completely as we can: to design and lay out a storage and retrieval architecture that will serve for the lifetime of the application, while at the same time allowing for feature addition. The ideal storage solution would be designed, tested, and deployed once, allowing an application to focus on features, content, and performance, and leave behind the concern for persisting data. However, in reality this is rarely ever the case. The world of software is plagued with migration and conversion operations that either work as they should or fail miserably, destroying data, coloring hair gray, and shortening our lives one incompatibility at a time. Choosing the correct technology for data storage is critical, as you might already well know. Be warned that no matter what you have heard, there is no single solution for storing every type of data for every application under every circumstance.

An additional aspect of data storage that may well be outside the scope of this book, but is worth mentioning, is liability. As developers, we are responsible for the integration and possibly even the design of data storage architectures, and as such we have a duty to ensure the solutions we create are capable of protecting the data we are trusted with. In addition to the predictable wrath of a user whose data has been lost or destroyed, we must take into account that we may be affected by civil or even criminal actions for improperly storing or losing user data. Local, state, and even federal regulations exist to protect various types of private and sensitive data within various sectors including, but not limited to, finance, health care, education, and government. At the risk of sounding overly dramatic, data storage is an awesome responsibility that rarely receives kudos when done well, but will surely bring trouble should it function improperly.

In this chapter, we discuss each of the database and storage options available to iOS applications and their respective performance gains and losses. More specifically, we cover the following areas:

• Disk
• Cache
• Compression
• SQLite
• Core Data
Disk
Disk storage within iOS is important to understand, as it is the basic storage mechanism that nearly every application will use at one point or another. In fact, in most situations, iOS applications will take more advantage of simple disk storage mechanisms and forgo more technical, native storage solutions such as Core Data and even server-side storage. Within iOS there are various locations that can be used for the storage of application data, whether permanent, temporary, or even semi-temporary cache, and so on. A selected few of these storage locations are detailed as follows, while a full listing of these locations is available within Apple's iOS Reference Library under Files and File System:

• Documents directory: NSDocumentDirectory
• Cache directory: NSCachesDirectory
• Temporary directory: NSTemporaryDirectory
• Application support directory: NSApplicationSupportDirectory
Each of these directories has specific purposes and use cases that a developer should take advantage of when disk storage is necessary.

The documents directory is stored within NSDocumentDirectory and is the location for all permanent application data. Additionally, this directory is automatically backed up when backup and syncing operations on the iOS device are performed.

The cache directory is stored within NSCachesDirectory and is the location for all cache data, as the name implies. Data within this disk directory is not backed up or synced but persists across application launches, meaning it is the preferred location for an application to store data that may be relevant the next time an application launches, such as state, status, or temporary details.

The temporary directory is stored within NSTemporaryDirectory and is the location for all data that is only necessary for the current running instance of the application. This data will not be backed up, nor will it persist across application launches, as it is cleared when an application exits.

The application support directory is stored within NSApplicationSupportDirectory and is the location for all data and resources that support an application but are not required for it to run. An example of support data would be statistics, templates, or other details that can be shared amongst launches and users of the application.
The preferred way that Apple recommends to retrieve paths within iOS is to use the NSSearchPathForDirectoriesInDomains function. Designed specifically to be used with the previously listed directory constants, the results returned by NSSearchPathForDirectoriesInDomains can be manipulated with all of the path-related methods of the NSString class for the alteration or creation of new paths. When storing local disk data, keep in mind that the facilities you use can impact not only application performance, but also the amount of time required to back up and sync an iOS device to its tethered computer, if applicable. Just as memory and processor utilization must be cautiously managed, the disk is no different and should not be abused by storing unnecessary or wasteful amounts of data that might clutter up the application's home directory. Keep things clean, tidy, and organized, or else expect confusion with unexpected consequences.
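A minimal sketch of this lookup, assuming we simply want to write a small plist of preferences into the Documents directory (the file name and dictionary contents are arbitrary examples), might look like this:

// Locate the application's Documents directory
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];

// Build a full path using NSString's path-handling methods
NSString *filePath = [documentsDirectory stringByAppendingPathComponent:@"Preferences.plist"];

// Persist a small dictionary of example values to disk
NSDictionary *preferences = [NSDictionary dictionaryWithObjectsAndKeys:
                             @"metric", @"units",
                             [NSNumber numberWithBool:YES], @"soundEnabled",
                             nil];
[preferences writeToFile:filePath atomically:YES];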
Cache
It's nearly impossible to have a discussion relating to application performance and optimization without including cache at some point or another. The basic idea behind cache is to increase resource access efficiency by loading and temporarily storing necessary data. Cache provides an opportunity to stretch network and system resources much further than traditional means, by limiting retrieval resources that might otherwise be tied up in traditional operations.

The premise that every operation performed has an associated cost and value is something I have mentioned several times throughout this book, and caching may in fact be the perfect use-case example for this principle. Loading or accessing data has an associated cost on system resources, and this cost is a significant portion of its value. Data caching operations provide an opportunity for developers to squeeze every last ounce of value out of application data. Data can be used again and again until it has been deemed 'expired', whereupon it is discarded and replaced with current, more relevant data.

One of the more common cache scenarios the average user experiences is with web-related content. For example, an HTML table listing of statistics might be generated from a server-side database and require that upon each load of the HTML page, a database connection be made, queries be performed, and the display results returned to the user's web browser. Obviously, this solution can become quite expensive as far as system resources are concerned if there are dozens, hundreds, or even thousands of simultaneous requests for this database-generated data. Caching the database results to a temporary file and serving these results from the cache file
limits resource waste by preventing unnecessary and potentially overwhelming requests to the backend database.

However simple and obvious cache solutions appear to be, they are not used nearly as often as they should be. For example, a very well-known server-side blogging application at the time of this writing did not include cache functionality in its base installation. Imagine for a moment the unbelievable amount of system resources that could be saved around the world if the default installation of this blogging software included even a moderate or lightweight caching solution.

In fact, in almost every scenario, loading both local and remote data affects overall application performance and can be significantly improved by taking advantage of simple and easy-to-use cache and data modeling principles. Unless it is purely throwaway data, if we are going to spend the resources to retrieve application data, then we should keep it around long enough to benefit from its value. In other words, as I have said several times up to this point, don't be so quick to waste resources by throwing data away. Data is power, and it almost always has a significant amount of value. Throwing data away and wasting resources is a poor practice that many application developers aren't even aware they are guilty of. Regardless of operating system and platform, this poor practice is spread generously throughout every sector of development, including mobile, desktop, web, and the like.

Imagine a simple iOS application in which, upon launch, required preferences are remotely loaded and used to alter the initial view and state of the running application. With this scenario in mind, imagine how unnecessary it would be to do this upon each application launch, or worse, upon each navigation change within the application. As it should, this scenario seems silly and wasteful; however, it is extremely common, and in the perfect environment, without ever measuring performance requirements, a developer may never experience the negative effects of this type of resource waste. A better implementation would be to load the remote application preferences once and store them locally for each successive application launch, refreshing and synchronizing this data only as necessary. As obvious and seemingly simple as the previous scenario is, it is in fact equally common and unfortunate.
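A minimal sketch of that better implementation might look like the following; the staleness check and the remote fetch (preferencesAreStale: and fetchRemotePreferences) are hypothetical helpers standing in for whatever logic an application actually requires:

- (NSDictionary *)applicationPreferences {
    NSString *cacheDirectory = [NSSearchPathForDirectoriesInDomains(NSCachesDirectory,
                                    NSUserDomainMask, YES) objectAtIndex:0];
    NSString *cachePath = [cacheDirectory stringByAppendingPathComponent:@"preferences.plist"];

    NSDictionary *cached = [NSDictionary dictionaryWithContentsOfFile:cachePath];
    if (cached != nil && ![self preferencesAreStale:cached]) {
        return cached;  // serve the local copy and skip the network entirely
    }

    NSDictionary *remote = [self fetchRemotePreferences];  // the expensive remote call
    if (remote != nil) {
        [remote writeToFile:cachePath atomically:YES];      // refresh the local cache
        return remote;
    }
    return cached;  // the network failed; stale data is better than none
}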
Caching operations are not limited to those involving simple data storage, but can include object caching, which, as the name implies, is the local storage and management of general objects. The benefits of object caching are the same as those of any other type of cache solution: system resources are conserved simply because the expensive task of regenerating an object and loading it with data is bypassed, ultimately increasing performance.

Cache solutions, as mentioned earlier, range from simple object and value storage to extremely complicated data integrity, synchronization, and modeling architectures. Every implementation of cache will be different and should be designed specifically for the scenario at hand.

To further demonstrate, an iOS application I wrote in 2010 required client-side caching along with custom server-side data manipulation to ultimately conserve user bandwidth. One of the application's views contains a UITableView that continuously loads a listing of security events from a remote server using a repeating timer. Upon each timer firing, only new security events are retrieved from the remote server and injected into the table view. If no new events exist, no remote data is transmitted and bandwidth is conserved significantly.

In contrast, the wasteful approach would have been to simply load N events upon each timed interval and replace the contents of the table view with the remote results: simple in execution, but highly wasteful of system resources. Choosing the more appropriate method prevents resource abuse on the server side and conserves bandwidth, which could be a significant issue for users on metered connections.
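A rough sketch of that incremental approach follows; the polling interval, the events array, lastEventID, and fetchEventsSinceID: are all hypothetical, standing in for the application's real data layer:

- (void)startEventPolling {
    self.refreshTimer = [NSTimer scheduledTimerWithTimeInterval:30.0
                                                         target:self
                                                       selector:@selector(refreshEvents:)
                                                       userInfo:nil
                                                        repeats:YES];
}

- (void)refreshEvents:(NSTimer *)timer {
    // Ask the server only for events newer than the newest one we already hold
    NSArray *newEvents = [self fetchEventsSinceID:self.lastEventID];
    if ([newEvents count] == 0) {
        return;  // nothing new: no payload transferred, no table rows touched
    }
    [self.events addObjectsFromArray:newEvents];
    self.lastEventID = [[newEvents lastObject] valueForKey:@"eventID"];
    [self.tableView reloadData];
}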
Compression
When you think of compression, optimization naturally comes to mind. However, compression does not necessarily improve performance, simply because it requires an additional layer of resources to both compress and decompress data. The obvious benefits of compression are reduced network bandwidth and disk usage; however, it is important to note that there is a trade-off in processing cost that should be measured when considering compression operations.

As a related example of this kind of trade-off, this time involving encryption, I frequently see individuals copying or moving large amounts of system data between hosts on a local network using SSH, when this action would be much better served through a more efficient channel such as FTP or HTTP. The fact is that SSH is encrypting the bit stream, transferring it, and then decrypting it on the receiving host's system, wasting
unnecessary resources and simply slowing down the data transfer process entirely. Of course, with iOS devices this read/write performance problem isn't as much of an issue, but the processor utilization required to perform the cipher operations is, and we need to be aware of and keep these seemingly small performance bottlenecks under control.
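When compression is worth its processing cost, an iOS application can use the zlib library included with the SDK (link against libz). The following is only a minimal sketch, not a drop-in utility; measure the trade-off on real data and real devices before adopting it:

#include <zlib.h>

NSData *CompressedDataFromData(NSData *input) {
    uLong sourceLength = (uLong)[input length];
    uLong capacity = compressBound(sourceLength);            // worst-case compressed size
    NSMutableData *output = [NSMutableData dataWithLength:capacity];

    uLongf destinationLength = (uLongf)capacity;
    int status = compress((Bytef *)[output mutableBytes], &destinationLength,
                          (const Bytef *)[input bytes], sourceLength);
    if (status != Z_OK) {
        return nil;  // compression failed; the caller can fall back to the raw payload
    }
    [output setLength:destinationLength];
    return output;
}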
SQLite
SQLite was originally released in 2000 as a lightweight, self-contained database library that does not require a running server process for client interaction. Its database is in fact a single cross-platform file that contains an entire relational database, including tables, indexes, data, and more. Because SQLite is a library included in the base SDK, reading and writing are performed natively and do not require inter-process communication channels, which of course increases performance over traditional process-oriented database client/server systems.

SQLite conforms closely to current SQL standards; its syntax is not identical to standard SQL, but it is similar enough that any experience with SQL can be easily translated to SQLite. An SQLite database (the file) allows multiple clients to read data simultaneously, while allowing write access to only one instance at a time. It does this by locking access to the database file while write operations are taking place.

SQLite was fully integrated into the core of Mac OS X 10.4 in 2005 and into iOS 3.0 in 2009 by way of Core Data, Apple's data storage and serialization API. An interesting note is that SQLite is among the most widely deployed SQL database engines in the world. Since its creation, it has been integrated into nearly every web browser, mobile device, and operating system platform on the market.

Within both OS X and iOS, SQLite's intricacies are hidden from the developer behind the Core Data API, making it trivial to integrate and comfortable to use for those with experience in Cocoa development. The performance of SQLite, whether used directly or via Core Data, is a great baseline for everything from small amounts of user preferences to larger data sets containing several million rows or more, depending upon indexing and query skill.
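SQLite can also be used directly through its C API when Core Data is more than a project needs. The following minimal sketch assumes a databasePath string and an events table that exist only for illustration (link against libsqlite3):

#import <sqlite3.h>

sqlite3 *database = NULL;
if (sqlite3_open([databasePath UTF8String], &database) == SQLITE_OK) {
    sqlite3_stmt *statement = NULL;
    const char *query = "SELECT id, title FROM events ORDER BY id DESC LIMIT 50;";

    if (sqlite3_prepare_v2(database, query, -1, &statement, NULL) == SQLITE_OK) {
        while (sqlite3_step(statement) == SQLITE_ROW) {
            int eventID = sqlite3_column_int(statement, 0);
            const unsigned char *title = sqlite3_column_text(statement, 1);
            NSLog(@"Event %d: %s", eventID, title);
        }
        sqlite3_finalize(statement);  // release the prepared statement
    }
    sqlite3_close(database);
}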
Core Data
For those unfamiliar with Core Data, it is a Cocoa data storage and serialization API available within both the Mac OS X and iOS SDKs, as of OS X 10.4 and iOS 3.0 respectively. Core Data's basic purpose is to abstract the serialization and persistence of objects and data. Core Data supports persistent stores backed by XML (on Mac OS X only), binary files, and SQLite, and even allows for the implementation of custom store types.

As we covered earlier in this chapter, designing and deploying static and stable storage architectures is extremely important. Modifying Core Data storage schemas after they have been deployed and are in use is a difficult task that, in many cases, requires complicated migration and conversion code.
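As a brief illustration of why Core Data makes SQLite-backed storage comfortable, the following sketch fetches recent objects from an assumed Event entity; managedObjectContext and lastSyncDate are assumed to exist elsewhere in the application:

NSFetchRequest *request = [[NSFetchRequest alloc] init];
[request setEntity:[NSEntityDescription entityForName:@"Event"
                               inManagedObjectContext:managedObjectContext]];
[request setPredicate:[NSPredicate predicateWithFormat:@"timestamp > %@", lastSyncDate]];
[request setFetchBatchSize:20];  // fault rows in batches instead of loading everything at once

NSError *error = nil;
NSArray *results = [managedObjectContext executeFetchRequest:request error:&error];
[request release];  // manual reference counting, consistent with the rest of this book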
Summary
In this chapter, we covered the importance of selecting, designing, and deploying storage architectures that specifically complement the requirements of individual applications. Not every application has the same data storage requirements, and an all-in-one data storage system does not exist. Regardless of what you hear or read, there is no silver bullet that is the perfect solution for all data storage needs. Storage solutions exist that may well work in most situations; however, we are not interested in just 'working', we are focused on squeezing as much performance out of an application's architecture as possible.

Additionally, we discussed the importance of deploying a storage solution once, as it can be quite difficult, if not nearly impossible, to update or alter data storage architectures after they have been deployed and are in use by end users. Not only is this a technical challenge, but we also risk destroying data and causing almost limitless, unforeseen issues that can significantly impact the future success of a project.

As we discussed, cache is an important aspect of application development that can be quite easily overlooked, yet can produce huge gains in performance and conserve great amounts of system resources. Cache operations can be implemented nearly everywhere, from object caches and local storage to remote service and database caching layers. Take the opportunity to map out areas within an application that can benefit from caching by identifying where objects or data are frequently or repeatedly loaded or retrieved, and build custom solutions that are specific enough to take advantage of these local performance potentials.

We also covered the benefits of Core Data and its integration of the SQLite relational database, which makes storing and retrieving data extremely simple.
The specific areas of database and storage performance we covered in depth were the following:

• Disk
• Cache
• Compression
• SQLite
• Core Data
Common Cocoa Design Patterns

A design pattern, in its simplest description as it relates to programming, is a blueprint for solving a common problem. With most design patterns, the solution to the specific problem is provided by laying out an architecture of classes, objects, and logic that has been time and situation tested.

In addition to solving common problems, design patterns can be quite effective in guiding a developer through unfamiliar territory. For instance, a developer might be unfamiliar with the industry-standard methods necessary to create a class that exposes accessors for an indexed list or array. Frequently, situations arise when classes need limited knowledge of one another, or when tight coupling becomes problematic; fortunately, design patterns are available to educate and increase programming fortitude.

Design patterns create a sense of order where chaos might otherwise reign. Order and structure are the foundations upon which efficiency and performance are built. A project's future can be limited by how well its foundation blocks are interconnected. A well-planned and solid application structure will live far beyond the initial feature timeline and be capable of near limitless iterations. Design patterns play a critical role in this foundation, as they are designed with uniformity in mind. They are designed to solve common programming problems while conforming to leading practice development standards.

Additionally, design patterns allow knowledgeable professionals to communicate complex architectures with ease. Simply mentioning that a particular class is a singleton, or that a particular implementation relies upon the Mediator pattern, is enough for a knowledgeable developer to grasp a particular development situation.
One might think of design patterns in much the same way as a framework. Developers use common frameworks like pieces of a puzzle. Design patterns have been time-tested to solve these common problems while remaining extensible and reusable. They are influenced by industry-leading programming principles and bring a powerful level of consistency and structure to the applications that take advantage of them.

One might also think of design patterns as building blocks that can be assembled and linked together to create larger, more complex designs. Like building blocks, the majority of design patterns are created with integration in mind, designed to interoperate with leading methodologies such as interface programming and encapsulation. It goes without saying that applications built upon a foundation of good design patterns allow for greater levels of control, efficiency, and expandability. Design patterns inherently create the flexibility and freedom to alter and change code, simply because components developed with these patterns will be less tightly coupled, limiting dependency and intimacy with one another. Object abstraction is a powerful ally in object-oriented application development, and most current-day design patterns and combinations of patterns reflect this.

Design patterns allow teams of developers to focus on features and functions of the overall application rather than being bogged down with the intricacies and debate of problems that were solved 20 years ago. Expressing complex and complicated programming techniques in as little as a few words enables efficiency far beyond lines of code. Design patterns allow development to follow an efficient and predictable path that can be plotted out on paper from design and development to testing and deployment.

Without design patterns, many developers are doomed to repeat history. Figuratively re-inventing the wheel, developers continuously stumble into the same programming traps that we've all experienced and overcome. These traps, although valuable from an educational perspective, are not required curriculum and are just as easily understood by reading and understanding the design pattern for the problem they were designed to solve.
Cocoa developers who have design pattern experience rooted in other languages may find that, occasionally, Cocoa implements patterns in unique and distinctive ways. This is not a flaw in Cocoa or in any specific design pattern, but demonstrates an important point: design patterns are flexible, contextual patterns that solve a particular problem with theory and concept, not always with a series of finite operations. Many design patterns can be altered to adapt to a specific programming language and its needs without compromising the integrity of the design pattern itself.

Design patterns are not limited to singular instances and are frequently combined to form deeper, more complex structures that enhance existing patterns or even create completely new design patterns with their own problem-solving purpose. A compound pattern is a design pattern that comprises two or more basic or elemental patterns. The most common compound pattern in today's object-oriented development world is the Model-View-Controller ("MVC") pattern, which defines the overall architecture and development style of an application. Compound patterns are found all throughout Objective-C, and many developers will find that they are more familiar with these patterns than they initially thought.

Regarding performance, it's almost a guarantee that an end user will not be directly impacted by a developer's choice of design patterns, and in a few circumstances an application may operate just as well without any design pattern knowledge or implementation. However, tuning and performance, as mentioned in earlier chapters, are not solely measured by how quickly an application executes. Development and project efficiency have just as much to do with performance as any other aspect we've covered in this book, and design patterns go a long way towards architectures that breed consistency and efficiency, ultimately resulting in increased performance.

In this chapter, we will discuss the most common design patterns available to iOS developers, their implementation, and how they can be used to increase not only performance, but also project and development efficiency. More specifically, the following areas regarding design patterns will be covered:
• Why design patterns are critical
• Singleton
• Mediator
• Delegate
• Adaptor
• Decorator
• Model-View-Controller
Why design patterns are critical
Imagine for a moment that you are a construction contractor or homebuilder, tasked with architecting and building a large two-story family home from bare earth to completion. Now, imagine that your contracting and homebuilding experience is rather limited, that you've only dabbled in small structures and are very much unfamiliar with the intricacies of building a family dwelling or the requirements of such. Where do you begin?

Chances are, if you lack the skills to design this structure, you would first seek assistance from an architect or possibly purchase an off-the-shelf or pre-approved architectural drawing or blueprint of a home that fits your needs. Within the blueprint for any home, there are combinations of components that solve recurring problems or dangers. For instance, building codes throughout most of the world require a specific structure above doors and entryways, commonly referred to as a header. This header structure was designed to solve several problems that no doubt arose from safety and operational concerns.

Any specific design pattern is very much the same as this header structure. The pattern solves a common problem, and allows a developer to quickly and efficiently produce consistent results while minimizing trial, error, and the bumps and bruises that come along with them. In construction, if you fail to use history as your guide, you are sure to have a giant mess on your hands when complications arise or a catastrophe takes place. Application development is no different; design patterns are the building codes and regulations of the programming world.

Developers should be encouraged to leverage design patterns not only to solve the problems they come across, but also as viable components in the design and planning stages of development. At the very least, every developer should take the time to become familiar with the design patterns relevant to their particular language. Design patterns are a critical aspect of performance-driven development.
Singleton
The Singleton design pattern, as its name implies, has been designed to ensure that a class can have only a single instance of itself at any given time.
The singleton class is self-aware; upon instantiation the class performs a simple check to verify whether it has already been instantiated. If the object already exists in memory, the class returns the existing instance of itself; otherwise it creates the singleton instance, as you would normally expect. This simple logic path is the basis of the Singleton design pattern and, as you might have concluded, a Singleton class can be used when an application requires that no other instances of the same class exist.

Singleton objects are usually accessible in some form of global scope; otherwise their full power is never truly realized. It is quite common for database objects, network queues, data models, and the like to be Singleton designs. Many of these objects would have adverse consequences should another instance of the object be created, thus the reason for the Singleton pattern. Within Cocoa, several core classes of the framework are in fact Singletons. These include classes such as UIApplication, which provides the globally centralized point of access for applications running within iOS.

It is worth mentioning that an ongoing debate exists over the use, or over-use, of Singleton classes. As with all things in life, moderation and purpose are key ingredients in success and failure. The Singleton is, in fact, a time-tested and quite necessary design pattern that can be abused. Avoid the over-usage of globally scoped Singletons; however, do not let this hamper your adoption of one of the most powerful design patterns in modern-day object-oriented programming.
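A minimal sketch of a conventional shared-instance accessor follows; the class name SettingsManager is hypothetical, and dispatch_once is used here simply as one common way to guarantee a single, thread-safe creation:

@implementation SettingsManager

+ (SettingsManager *)sharedManager {
    static SettingsManager *sharedInstance = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        // Created exactly once and kept alive for the lifetime of the application
        sharedInstance = [[SettingsManager alloc] init];
    });
    return sharedInstance;
}

@end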
Mediator
The Mediator design pattern is a simple logic-controlling pattern that dictates how two or more objects interact. The intent of the Mediator pattern is to support loose coupling by preventing objects from having intimate knowledge of one another, and additionally by adding the ability to control their level of interaction.

Within modern object-oriented languages, the Model-View-Controller pattern relies heavily upon the Mediator pattern to support the controller portion of the design architecture. Within the MVC architecture, controller or mediator objects are responsible for all communications between views and models. Under most circumstances, the controller informs views of data model changes and updates the data model when view interaction results require it.
The Mediator pattern protects code reusability and extensibility by creating an extra layer of separation between objects, allowing such things as static view and data components to remain unmodified while the Mediator class continues to be developed.
Delegate
The Delegate design pattern is a direct descendant of the Decorator design pattern. The primary purpose of both of these patterns is the extension of functionality without the need for subclassing. The Delegate pattern allows a developer to optionally alter the behavior of a class dynamically, choosing whether or not to implement methods that may alter an object or component's behavior. Any class that implements the Delegate design pattern allows an object to be assigned as its delegate to perform work operations on its behalf. Unlike a traditional interface with required methods, delegate methods are optional.

The Delegate pattern is used abundantly throughout Objective-C and is a powerful means of implementing optional extensions to core framework classes. iOS developers will likely have recognized the pattern name immediately, if they were not already aware of the pattern and its implementation within Objective-C. The Delegate pattern is an incredibly powerful and versatile design that can be implemented within new custom classes as well as used to extend the functionality of older or even migrated code.
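A minimal sketch of a custom delegate follows; DownloadManager and its delegate method are hypothetical names chosen only to illustrate the optional-method check that delegation relies upon:

@protocol DownloadManagerDelegate <NSObject>
@optional
- (void)downloadManager:(id)manager didFinishWithData:(NSData *)data;
@end

@interface DownloadManager : NSObject
@property (nonatomic, assign) id<DownloadManagerDelegate> delegate;  // assigned, not retained, to avoid a retain cycle
@end

@implementation DownloadManager
@synthesize delegate;

- (void)notifyFinished:(NSData *)data {
    // Optional delegate methods must be checked before they are called
    if ([self.delegate respondsToSelector:@selector(downloadManager:didFinishWithData:)]) {
        [self.delegate downloadManager:self didFinishWithData:data];
    }
}

@end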
Adaptor
The Adaptor design pattern provides a mechanism that allows objects with otherwise incompatible interfaces to communicate with one another. The pattern does this by exposing an alternative interface to the original class, while translating and adapting operations between the adaptor and the original object. The Adaptor pattern supports the loose coupling design principle by creating an additional, and sometimes required, level of abstraction between two classes.

In real-world terms, adaptors are used just about everywhere. The concept is a fundamental principle that stretches far beyond software development. Power receptacles, audio/video cables, and even mechanical tools take advantage of adaptors to bridge interfaces for functionality.
Adaptors within Objective-C can be implemented with protocols. A protocol is an Objective-C feature that allows incompatible interfaces to work together by using the definitions within the protocol as the blueprint for commonality. Protocol definitions are made up of required and optional method declarations that a class can adopt or conform to in order to communicate between incompatible objects. If the implementation of optional protocol methods sounds familiar, it should, as it is the basis for Objective-C's Delegate design pattern.

The Adaptor pattern, as with any other design pattern, has appropriate implementations as well as inappropriate ones. As with real-world usage, software adaptors should be small and efficient, designed with specific translations in mind, and not overly complicated or bloated. Outside of software development, adaptors are more often than not to blame for performance losses, and avoiding them where possible is recommended. Chaining audio and video adaptors in lengthy strings of this-to-that degrades power, signal, and efficiency, and of course is not recommended for these rather obvious reasons. Performance losses and gains are specific to each adaptor implementation, and guidelines for achieving performance or avoiding performance losses cannot really be nailed down to a list of do's and don'ts. In effect, keep adaptors simple and specific.
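A minimal sketch of a protocol-based adaptor follows; the LegacyStore class, its fetchEverythingAsArray method, and the DataSource protocol are hypothetical, standing in for whatever incompatible interfaces an application actually needs to bridge:

@protocol DataSource <NSObject>
- (NSArray *)allRecords;
@end

@interface LegacyStoreAdaptor : NSObject <DataSource> {
    LegacyStore *store;  // the incompatible object being wrapped
}
- (id)initWithStore:(LegacyStore *)aStore;
@end

@implementation LegacyStoreAdaptor

- (id)initWithStore:(LegacyStore *)aStore {
    if ((self = [super init])) {
        store = [aStore retain];
    }
    return self;
}

- (void)dealloc {
    [store release];
    [super dealloc];
}

- (NSArray *)allRecords {
    // Translate the protocol call into whatever the legacy interface actually provides
    return [store fetchEverythingAsArray];
}

@end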
Decorator
The Decorator design pattern is designed to wrap existing objects with enhanced functionality without modifying the original object. The Decorator pattern implements the same interface as the wrapped object, but can inject custom or dynamic operations before or after the wrapped object's operations are performed. The original intent of the Decorator pattern was to implement the core object-oriented design principle that classes be open to extension but not modification. The Decorator pattern, as with most other patterns, promotes reusability along with loose coupling.
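In Cocoa, decorator-style extension is often approximated with categories, which add behavior to an existing class without subclassing or modifying it. The following minimal sketch adds a hypothetical truncation helper to NSString:

@interface NSString (Truncation)
- (NSString *)stringByTruncatingToLength:(NSUInteger)length;
@end

@implementation NSString (Truncation)

- (NSString *)stringByTruncatingToLength:(NSUInteger)length {
    if ([self length] <= length) {
        return self;  // nothing to decorate; the original string passes through untouched
    }
    return [[self substringToIndex:length] stringByAppendingString:@"..."];
}

@end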
Model-View-Controller
As mentioned earlier, the Model-View-Controller design pattern is a compound design pattern that dictates the overall architecture of an application. MVC, in short, classifies each underlying application component into one of the three core MVC roles:
• Model objects
• View objects
• Controller objects
Dividing classes into these core object roles allows an application to separate its basic components from one another. As an example, classes and components specific to the user interface can be completely separated from application logic, increasing productivity and the ease of code management. Additionally, when view, model, and logic code is separated, it becomes more reusable and allows interface design and manipulation to have no effect on application logic, while the inverse remains true as well.

Model objects contain an application's data as well as any logic that dictates its access or control. It can be essential to an application's health to have a data model that is logically separated from the rest of the application. The abstraction of data modeling objects creates an important layer between data that may or may not change frequently and the interface views and application logic that could be updated often.

View objects are responsible for presenting the interface elements of an application. Unlike traditional development outside of MVC, view objects do not contain application data, nor do they contain the logic that may be used to access or manipulate application data. The view role was designed to fully separate user interface design from application logic. This separation allows developers to change application logic as needed without adversely affecting the overall design of the application. In other words, interfaces are free to be altered and even redesigned, while the logic and data behind the scenes remain unaffected. In most cases, application views are designed to be reusable, and the MVC design pattern was created with this concept in mind.

Controller objects are the glue that binds views to models. Controller objects act as the conduit for views and models to communicate with one another. Controllers contain the methods that a view uses to access or manipulate application data models.
Controller objects are based upon the Mediator design pattern, acting as a proxy between two classes and allowing them to remain loosely coupled and independent of one another. Controllers are responsible for making the view aware of available data models as well as ensuring a view is notified when a model change takes place. Controller objects are powerful proxies designed to alleviate the complexity of an application's components and to support code reusability. It is not uncommon to find a controller object that is reusable; however, the preference for reusability usually lies in view and data modeling objects. The overall separation of programmatic components creates a deeper level of order that will have a significant impact on the application as a whole.

Another important aspect of MVC is the optional combining of roles within an application. As an example, a view object can include methods that might otherwise be separated into a controller object. This combination would be referred to as a view controller and should sound quite familiar to anyone who is currently working with Objective-C. Furthermore, you can include controller logic within a data model and effectively have a model controller. A model controller would be dedicated to the logic of access and manipulation of its own data rather than delegating these responsibilities to a dedicated controller object.

Combinations of core MVC objects are quite common; however, it is important to remember that combinations should only be used in situations where development simplicity is necessary over order. Combining views and data models is never a good practice and should be avoided completely. As mentioned earlier, it breaks the principles of loose coupling and reusability by merging user interface designs with application data.

The MVC pattern creates an extremely powerful design architecture for applications and allows for greater levels of adaptation and extensibility. It creates a structure or framework for developers to work within, while not limiting creativity and development freedom. With continued MVC experience, developers will begin to think more readily in terms of separation, loosely coupled classes, and reusable components, and about taking advantage of access controls to effectively increase application and project stability.
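A minimal sketch of a controller in this mediating role follows; the Account model class, its notification name, and the balance label are hypothetical, but the shape is typical of a Cocoa view controller:

- (void)viewDidLoad {
    [super viewDidLoad];
    // The controller, not the view, listens for model changes
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(accountDidChange:)
                                                 name:@"AccountDidChangeNotification"
                                               object:nil];
}

- (void)accountDidChange:(NSNotification *)notification {
    Account *account = [notification object];              // the model object
    self.balanceLabel.text = [account formattedBalance];   // the controller pushes the change into the view
}

- (void)dealloc {
    [[NSNotificationCenter defaultCenter] removeObserver:self];
    [super dealloc];
}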
Summary
This chapter focused on the importance of design patterns in modern object-oriented development and how their use increases flexibility and reusability, which ultimately has a significant impact on both development and runtime efficiency. Objective-C makes heavy use of design patterns, and all developers are encouraged to take advantage of these built-in patterns as well as implement other industry-standard design principles as necessary. Design patterns are a critical component of modern object-oriented programming, and their usage creates cohesive solutions that take advantage of time-tested designs and have a greater chance at success.

We covered the Model-View-Controller design architecture and how its use defines the structure of an application as well as increases efficiency, reusability, extensibility, and overall code maintainability. We touched upon the importance of loose coupling and code reusability through the use of design patterns. Abstraction is a key component of extendable code, and almost all modern design patterns have been designed with these important principles in mind.

Additionally, we covered the core design patterns that make up many of the common class and object architectures found within Objective-C. Having a greater familiarity with design patterns in general goes a long way toward a deeper understanding of Objective-C. This same understanding extends past lines of code and can create even greater levels of communication efficiency between developers. Expressing complicated concepts in just a few words means less time discussing or debating the unimportant intricacies of a frequent situation and more time producing real results.

We detailed and discussed each of the following design patterns and how they affect the efficiency and performance of Objective-C projects and applications:
• Why design patterns are critical
• Singleton
• Mediator
• Delegate
• Adaptor
• Decorator
• Model-View-Controller
The Xcode Advantage

Xcode is Apple's well-known integrated development environment ("IDE") that all OS X and iOS developers are very much familiar with. If you have successfully deployed an application to Apple's App Store for either OS X or iOS, you may be quite familiar with the basic concepts and details we lay out in this chapter. However, in addition to the basics of Xcode, we will visit a few key principles that can affect project performance.

Like other common IDEs, Xcode is a suite of tools and utilities that allow a developer to create, develop, compile, and deploy an application for execution on Apple-related products. Xcode has always been a full-featured and well-rounded development environment and has always included a wide range of advanced programming and debugging features. From the early versions of the original Xcode release through the final versions of Xcode 3, developers were exposed to a multiple-window approach to development that fit well with most other OS X application usage concepts.

It will come as no surprise that there is not a Microsoft Windows-compatible version of Xcode available from Apple. This lack of availability is no doubt intentional, requiring anyone who has even the slightest interest in iOS development to break down and buy into the Apple program.
For those of us who have made the conversion to Apple, this makes good sense, and in the end it's all for the better.
Xcode 4, with its first production release in March of 2011, is a complete overhaul of the Xcode environment that we had all become accustomed to. This new iteration of Xcode brings a unified approach to programming on OS X in which all major programming aspects are provided in a single interface with a high level of integration. Xcode 4 is a completely redesigned development environment that now integrates Interface Builder into the core Xcode application. Source code and interface editing are now seamless and interconnected with one another, creating a more cohesive development and project management experience. According to Apple representatives, the primary goals of the Xcode 4 redesign were the following:
• Better project navigation
• Provide more contextual information
• Increase programming efficiency
Prior to March of 2011, Xcode's Interface Builder was a separate application with limited ties to Xcode's source code editor. This loose separation between tools was for many years uncomfortable and awkward for developers. Changes in either
interface were not always fully recognized by the other tool, creating a sense of fragmentation and frustration.

Within the newly redesigned Xcode 4, an entirely new user interface along with an improved project workflow has been engineered. This tighter integration between every facet of Xcode makes for a dramatically improved development workflow that we are all extremely happy to see. The redesign and workflow changes may include a minor learning curve that could cause developers to spend a few more minutes familiarizing themselves with the new interface; however, in nearly every respect the changes are very much welcome. Even attempting to be as unbiased as possible, Xcode 4 now ranks among the most advanced IDEs available in the industry.

Apple had an opportunity with Xcode 4 to completely redesign the way in which applications are developed and to affect the entire process from start to finish. Taking advantage of this opportunity, it appears Apple has borrowed the lessons learned from the industry over the past 10 years and applied them to the newly redesigned system. Xcode 4 feels more polished, stable, and purposeful than any previous release of Xcode.

Xcode has never truly been a one-language development environment and supports a wide range of compiled and interpreted languages. In addition to Objective-C, Xcode supports C and C++ as you might expect, as well as Java, C#, AppleScript, PHP, Python, Ruby, Perl, and more. Xcode 4 maintains the same project structure as Xcode 3 and is fully backwards compatible, meaning the structure is similar enough that you can open an Xcode 3 project within Xcode 4 and, if desired, take that project back to Xcode 3 without adverse effects or compatibility issues.

In this chapter, we touch upon the efficiencies and effectiveness of the Xcode integrated development environment. We look at a few selected areas of the new Xcode 4 interface itself, as well as a handpicked selection of features that increase programming effectiveness. More specifically, the following topics will be covered:
• Distributed builds
• Dead code stripping
• Compiler
• Debugger
• Source code management
Distributed builds
Apple, Mac OS X, and more specifically Xcode are well known for their built-in distributed computing capabilities. Within Xcode, Apple has provided technology to distribute the building of source code across multiple computers. Any computer that has a compatible version of Xcode installed is able to participate in the distributed building of Xcode projects. Distributed building within OS X relies upon the Bonjour protocol to automatically discover network computers providing distributed compiling services. OS X natively supports these distributed services, and other operating systems can take advantage of the same options with the open-source GNU project distcc.
Enabling distributed building is as simple as most other tasks within Xcode. Once enabled, distributed services are provided at the OS level, meaning that Xcode is not required to be running, nor does a user need to be logged into the system, for the services to operate.
To enable distributed builds, open the Xcode preference panel and select the Distributed Builds tab to display distributed build options and settings. Before any changes can be made in this preference pane, select the lock in the lower left and authenticate. Once you have successfully authenticated, you can view and select machines that are capable of participating in your distributed build process. As mentioned earlier, local network machines discoverable by Bonjour will also be listed. Once you have made your changes, simply close the preference window, change tabs, or select the lock once again to ensure that no unauthorized changes can be made to this feature.
Dead code stripping
Quite often, projects contain code that is never or rarely used, or include extra or alternative methods and functions that we may not necessarily want to remove from our source code. By default, this code remains included in our final project builds, where it ultimately wastes resources.

Dead code stripping is a feature of Xcode that slices unused methods and functions out of project source files, including libraries. As you can imagine, libraries include numerous functions and methods that are available for completeness, but we rarely use every single one in a project. This stripping of dead code can significantly reduce the size and complexity of the resulting project executables. Dead code stripping is of great value where libraries and numerous unused functions exist; outside of this, a project will likely be unaffected by this option.

Within Xcode, dead code stripping can be enabled and disabled within the project's build settings. In Xcode 4, simply selecting the root of the project and searching all build settings for the term "dead" will return options for configuring 'Dead Code Stripping' and 'Don't Dead-Strip Inits and Terms'.

Dead code stripping is most advantageous when used for final or release builds of applications. Stripping dead code from large projects can have a negative impact on compiling efficiency, and unless you have plenty of time on your hands to wait each time you build your project, it is recommended that it be enabled only on final or release targets.
Compiler
Xcode 4 now includes Apple's next-generation LLVM 2 compiler, which will ultimately replace GCC. GCC is aged, but by no means is it in need of being tossed out with the bath water. GCC is the world's most widely used compiler and will surely be around for many more years, which is why Apple is providing both options to developers within Xcode 4.

LLVM 2's performance is where it truly shines: it is up to twice as fast at compiling as GCC and, according to Apple, produces code that executes up to 60% faster than the same code compiled by GCC. LLVM 2 is capable of identifying deeper and greater numbers of code efficiencies during compilation, which have a direct effect on the performance of the final executable. Developers are now able to simply select the new LLVM 2 compiler preference within Xcode 4 projects to take advantage of the increased performance and efficiency benefits it provides.

Along with performance and efficiency, LLVM 2 error and debug messages are more precise and detailed than the traditional GCC messages we are familiar with. LLVM 2 is capable of displaying the exact piece of code that caused a violation or warning, meaning less time researching cryptic messages when something fails.

LLVM 2 is not just a basic compiler option, but is fully integrated into Xcode 4 and is now the primary engine behind source code completion, syntax highlighting, and the new Xcode Assistant. LLVM 2 is fully context aware and includes powerful new fix-it features that provide real-time analysis of source code to highlight, and even provide automated solutions to, common programming mistakes, much like the word processor spell-check features we are used to. LLVM highlights and provides solutions for misspelled objects, accidental assignments, missing semicolons, mismatched parentheses, and more. Lastly, LLVM 2 fully supports C, Objective-C, and C++ as first-class citizens of the compiler and environment.
Debugger
Within the newly released Xcode 4, general application debugging has been made a significant priority, with a brand new debugger and debugging experience capable of debugging multiple threads, in addition to all that you would expect from any modern debugger.
LLDB is the name of this new, faster, and more efficient debugging engine. LLDB is extremely efficient, with a much smaller memory footprint than previous debuggers; it parses only what is necessary and entirely skips data that is not required for debugging, which has a positive impact on debugging efficiency. One of the most valuable features of the new LLDB is its support for multithreading.

LLDB is integrated directly with LLVM. This coupling allows LLDB to view and interpret source code in the exact same way the compiler does. Ultimately, this means more accurate error reporting and better core language support. Additionally, Apple notes that because of the integration between the two systems, any new features introduced into the LLVM compiler are automatically available to LLDB, which is of course an excellent benefit. Lastly, one additional feature that will prove useful is the new context-aware filter that displays only the variables within the current debugging scope.
Source code management
In addition to Subversion support, which has been in Xcode for some time, Xcode 4 now fully supports and is integrated with Git, the distributed version control system behind the recently popular cloud-based source code hosting service GitHub. Xcode 4 supports branching and merging along with all of the standard operations you would expect, such as checking out, committing, and updating repositories.

Adding a Git repository to Xcode 4 for cloning is rather simple. For this example, we will be using the iOctocat GitHub project, which can be found at the following URL: git://github.com/dbloete/ioctocat.git
iOctocat is an open-source GitHub application that provides access to GitHub project details and information. Its Xcode project source is available for download directly from GitHub, and the application is also available on Apple's App Store.
To add any Git repository to Xcode, use the following steps:

1. Open the Xcode 4 Organizer and select the Repositories tab. Next, click the plus (+) symbol in the lower-left corner to Add a repository.
2. Once the Add a Repository dialog window has appeared, give the repository a name, enter the Git repository address in the Location field, and click Add to continue.
3. Once the project repository has been successfully added, you can clone the project with the Clone icon in the lower middle portion of the Organizer:
4. Once you have cloned the Git project completely, switch to Xcode where you can open the Xcode project file:
Xcode 4 also brings an incredible new timeline feature to Xcode's version management system. Very similar to OS X's Time Machine function, Xcode's version management timeline feature allows a developer to go back in time to view, compare, and even retrieve committed code from any time period within the project's history.
Summary
Xcode is an extremely powerful and versatile IDE for iOS and Mac OS X application development. Xcode 4 is a completely redesigned interface that focuses on single-window development with integrated source code and interface design. This single-window design is a large break from previous major versions of Xcode, in which the majority of tools within the Xcode suite were provided in separate applications that were not directly tied to one another. Apple's Interface Builder is now fully integrated into the core Xcode IDE and allows developers to edit classes and code right alongside the user interfaces and views that they relate to.

Xcode 4 introduces LLVM 2, Apple's next-generation compiler, which vastly improves efficiency and performance and will ultimately replace GCC. Additionally, Xcode 4 brings the new, more advanced debugger, LLDB, which adds support for multi-threaded debugging in addition to greater performance throughout the entire debug process. Xcode 4 also brings full support for Git, allowing developers to integrate with the widely popular cloud-based GitHub service.

This chapter focused on the efficiencies and effectiveness of the Xcode integrated development environment and the selected features that can affect project and development performance. We detailed and discussed each of the following areas and how they affect the efficiency and performance of Objective-C projects and applications:
• Distributed builds
• Dead code stripping
• Compiler
• Debugger
• Source code management
Index Symbols - (void)didReceiveMemoryWarning 163 @ 60 @catch directive 77, 78 @finally directive 77 @implementation class structure 47 @implementation source code 47 @throw directive 77, 78 @try directive 77
A abstraction variable 125 Adaptor design pattern 214, 215 addSubview method 189 algorithm sorting 113 alloc method 154, 182 animation 190 animation and timing classes, Core Animation 193 Apple about 26, 156 Object Ownership Policy 156 Apple's App Store 11 application: didFinishLaunchingWith Options 173, 178 application architecture about 28, 29 design patterns 29 application design 28, 29 application designing about 35, 36 code structure 47-51 file structure, creating 42-46 project, organizing 38, 39
project, preparing 36-38 project structure 39-41 application development lifecycle 30 applicationDidBecomeActive 173, 179 applicationDidEnterBackground method 174, 180, 181 application execution 173 application init 177 application leaking memory 17 application lifecycle about 170 application: didFinishLaunchingWith Options 178 applicationDidBecomeActive 179 applicationDidEnterBackground 180 application execution 173 application init 175-177 application startup sequence 172, 173 application termination sequence 173-175 applicationWillEnterForeground 179 applicationWillResignActive 180 applicationWillTerminate 181 awakeFromNib method 178 didFinishLaunchingWithOptions 179 phases 171 startup process 170 application maintainability about 53, 54 camel case 57-59 comments 68, 70 documentation 71-74 dot syntax 63 library bloat 66 Lipo 67, 68 method naming conventions 56, 57 re-factoring 64, 66
readability, versus compactness 61 software maintainability 53 syntax efficiency 60, 61 variable naming conventions 54-56 application performance about 30, 31 achieving 31 areas, identifying 36 performance issues, resolving 31 resolve later tool 30 resolve now tool 30 application project. See project application reliability about 75 design 76 error checking 79 exception handling 76, 77 quality 76 testing 76 unit testing 79-81 application startup sequence about 172 flowchart 172 application support directory, disk storage 201 application termination sequence 173, 175 application thread 14 application unit testing project, preparing for 87-94 application unit tests about 79 project, preparing for 87-94 applicationWillEnterForeground 179 applicationWillResignActive 180 applicationWillResignActive method 174 applicationWillTerminate method 175, 181 ARC 18, 152 ARM 67 asynchronous socket communications. See non-blocking socket operations autogsdoc 72 Automatic Reference Counting. See ARC automation 98 autorelease method 160-162 awakeFromNib method 178
B balancing performance 30 Berkeley sockets API 129 Berkeley Software Distribution. See BSD beta testing 81 binary tree sort 122 bit-twiddling 111 bitmasks 111 bitwise AND operator 112 bitwise OR operator 112 block comment 69 blocking socket operations 129 Bluetooth 127 Bonjour 223 BSD 129 bubble sort about 114 worst-case scenario 114 bubble sort algorithm 114 bucket sorting algorithm 118 bug reports 81 Build and Analyze feature 20
C C 72 C# 72 C++ 72 cache 202-204 cache directory, disk storage 201 cache solutions 204 caching operation 204 CALayer class 191 camel case about 57, 59 DB, demonstrating 59 FTP, demonstrating 59 XML, demonstrating 59 camel case, maintainability 57 carrier network access 127 case 57 case-crazy pseudo code 58 CISC (Complex Instruction Set Computing) 15 Clang Static Analyzer 20
[ 232 ]
client-side data caching 142 cocktail sort 122 Cocoa developers 211 code commenting 98 code hoisting 109 code motion 109 code structure organizing 47-51 comments about 68 block comment 69 line comment 69 compact code about 61 example 62 compilers about 109, 224 GCC 109 LLVM 109 compression about 25, 143, 204 benefits 204 content animation 190, 191 Controller objects 216 copy method 154 Core Animation about 26, 192 animation and timing classes 193 categories 192 layer and display classes 193 layout and constraint classes 193 transaction and grouping classes 193 Core Data 21, 206 counting sort 122
D database and storage performance about 199 storage mechanisms, versioning 199 storage options 200 data caching 25 data caching operations 202 data sorting algorithms 113 data sorting techniques bubble sort 114, 115 bucket sort 118, 119
quick sort 119-122 selection sort 116, 117 dead code stripping 223 dealloc method 154 debugger about 224 LLDB 225 Decorator design pattern 215 default project directory about 42 App 42 Controllers 42 Helpers 42 Models 42 Resources 42 deflation 143 Delegate design pattern 133, 214 design pattern about 209, 212 Adaptor design pattern 214, 215 benefits 209, 210 compound pattern 211 Decorator design pattern 215 Delegate design pattern 214 Mediator design pattern 213 Model-View-Controller design pattern 216, 217 performance 211 Singleton design pattern 212, 213 design phase 35 didFinishLaunching method 103 didReceiveMemoryWarning method 162, 163 disk storage about 201 application support directory 201 cache directory 201 documents directory 201 temporary directory 201 display performance 188 distributed builds, Xcode about 222, 223 enabling 222, 223 documentation about 71 consistent documentation 74 maintainable documentation 74 [ 233 ]
technical documentation 71 documents directory, disk storage 201 dot syntax about 63, 64 example 63, 64 Doxygen about 72 reference link 72 running 73 DTrace tracing framework 101
E error checking about 76, 79 example 79 exception 76 exception handling about 76, 77 example 77, 78
F façade pattern 145 favoriteCarName 56 flash memory 22 for loop 109
G garbage collection about 18, 150-152 arguments 152 GCC 109, 224 Github project 225 Git repository adding, to Xcode 226-228 adding, to Xcode4 225 GNU GPL 72 good neighbor theory / concept 27, 28 Groups & Files panel 42
H headerdoc 72 heap sort 122 HTTP 137 HTTPS 138
I ICMP 140 if / then conditions 79 Info.plist file 174 init method 175, 176 init override method 175 instruments ability, demonstrating 103-106 about 101, 102 available options 102 Instruments script running 102 running, on Mac OS X 102 int 113 Internet Control Message Protocol. See ICMP iOctocat 225 iOctocat Github project URL 225 iOS 4 16 iOS 5 19 iOS application about 172 bitmasks 111, 112 data sorting algorithms 113 iteration loops 109, 110 memory management 150 network performance 127, 128 objects, reusing 110 run loops 122, 123 semaphores 125 timers 123, 125 iOS applications storage options 200 iOS devices memory configurations 19 storage configurations 22, 23 technical display specifications 27 with integrated processor architecture 16 iOS SDK 21, 144, 188 iOS simulator 67 iOS timers about 123 benefits 123, 125 IP protocol suite 139 item renderers 193-197 [ 234 ]
iteration loops 109, 110
J Java 72
L layer and display classes, Core Animation 193 layout and constraint classes, Core Animation 193 Leaks script 102 library bloat 66 library heavy mash-ups issues 66 line comment 69 Lipo about 67 command-line utilities basic functions 67 LLDB about 225 features 225 LLVM 109 LLVM 2 152 LLVM2 compiler about 103, 224 error and debug messages 224 features 224 performance 224 LLVM compiler 18 localization 44 logic unit tests about 80 project, preparing for 81-86 loginOperation method 145 long-handing code 63 loop invariant code motion 109 loops 108
M maintainability theory 54 maintainable software 53 manual techniques 97 manual testing techniques 97 Mediator design pattern 213 memory 149
memory leaks 103, 110, 149 memory management, iOS application about 150 alloc method 154 autorelease method 160-162 copy method 154 dealloc method 154 didReceiveMemoryWarning method 162, 163 garbage collection 150-152 methods 153 release method 155-159 retain method 155 memory performance about 17-20 memory leak 17 merge sort 122 method naming conventions, maintainability about 56, 57 class methods, naming 57 potential method names 57 verbNoun convention, for naming methods 57 mise en place 167-169 mobile processing architecture 15 Model-View-Controller design pattern 21, 216, 217 Model objects 216 MVC roles Controller objects 216 Model objects 216 View objects 216 myDictionary 176 myFavoriteCar 56 myString object 60
N
network bandwidth 140-142
network communication 23
networking 23
network performance 23, 24
network performance, iOS application
  about 127, 128
  bandwidth 140-142
  compression 143
  façade pattern 144, 145
  protocols 138, 140
  sockets 129-131
  streams 132-138
network protocols
  about 139
  ICMP 140
  TCP 139
  UDP 139
non-blocking socket operations 129, 130
NSApplicationSupportDirectory 201
NSCachesDirectory 201
NSDate object 104
NSDictionary 178
NSDocumentDirectory 201
NSInputStream class 133
NSObject 63, 153
NSOutputStream class 133
NSSearchPathForDirectoriesInDomains function 202
NSStream class 133
NSStreamEventEndEncountered stream event 137
NSStreamEventErrorOccurred stream event 137
NSStreamEventHasBytesAvailable stream event 137
NSStreamEventHasSpaceAvailable stream event 137
NSStreamEventOpenCompleted stream event 137
NSStream socket connection 133
NSString class 202
NSString object 60
NSTemporaryDirectory 201
NSTimer class 123
O
object animation 192
object init
  about 182, 183
  init method 183
  init override method 183
  initWithX 184
  initWithY 184
Objective-C 72
  about 129, 151
  compiler directives 77
Objective-C 2.0 18
Objective-C bubble sort example 115
Objective-C bucket sort example 118, 119
Objective-C code
  about 63
  handling 63
Objective-C quicksort example 120, 122
Objective-C selection sort example 116, 117
Objective-C square bracket notation 63
Objective-C timers 124
Objective-C variable names 55
object lifecycle
  about 182
  allocation 182
  deallocation 182
  initialization 182
  steps 182
  usage 182
Object Ownership Policy, Apple 156
object pools 110
object reuse 110
objects
  reusing 110
orphaned objects 149, 150
P
perception of performance 9, 11
performance
  approaching 12
  design pattern 212
  perception 9, 10
  poor performance symptoms 9
  success 7, 8
performance-driven development
  fundamentals 7
performance bottlenecks
  about 13, 36
  application architecture 28, 29
  application design 28, 29
  application performance 30, 31
  good neighbor theory / concept 27, 28
  memory performance 17-20
  network performance 23, 24
  performance, approaching 12, 13
  process management performance 14-17
  storage performance 20-23
  user interface performance 25-27
performance fundamentals 11
performance measurement 97, 98
performance requirements
  calculating 98
performance threshold 8
PHP 72
pivot point 120
poor HTTP usage example 138
poor network communications design example 24
poor network data management 24
poor performance 13
poor performance symptoms 9
process management performance
  about 14-17
  thread management 15
process waste 110
project
  preparing 36-38
  preparing, for logic unit testing 81-86
project organization 38, 39
project structure 39-41
pseudo compression language 144
Python 72
Q
quicksort algorithm
  about 119
  demonstrating 119, 120
R
RAM 17, 149
Random Access Memory. See RAM
re-factoring 64, 65
re-factoring code
  about 64
  benefits 64
Reachability class 143
readability
  versus compactness 61
readable code 61
readable working example 62
Reduced Instruction Set Computing. See RISC
reference counting 153
release method 155-159
reliability 75
Resources directory
  about 42
  Animations 43
  Audio 43
  Databases 43
  Images 43
  NIBs 43
retain counting 153
retain method 155
RISC 15
run loop
  about 122
  need for 123
S
SDK 205
SDK updates 65
selection sort 116
selection sort algorithm 116
semaphore
  about 125
  example 125
  uses 125
SenTestingKit framework 80
server-side data caching 141
setSpouse method 158
shell sort 122
Singleton design pattern 212, 213
Snow Leopard 151
socket-reading principles
  about 129
  blocking 129
  non-blocking 129, 130
sockets
  about 129-131
  reading principles 129, 130
socketStreamExample method 134
sorting 113
source code management
  about 225
  Git repository, adding to Xcode 226-228
  Git repository, adding to Xcode 4 225
source code structure
  creating 42-46
SQLite 21
  about 205
  features 205
SQLite database 205
SQL standards 205
SRAM 149
static analysis 99
static analyzation 20
static analyzer
  about 99
  usefulness, demonstrating 99, 100
Static Random Access Memory. See SRAM
storage options, iOS applications
  cache storage 202, 203
  compression 204
  Core Data 206
  disk storage 201
  SQLite 205
storage performance
  about 20-23
  benefits 21, 22
streams
  about 132
  working 132-138
syntax 107
syntax efficiency
  about 60, 61
  examples 60
T
table view cells
  creating 195
  displaying 195
  recycling 195
TCP 139
TDD 81
temporary directory, disk storage 201
test-driven development. See TDD
testAppDelegate method 93
threading 15
tokens 107
transaction and grouping classes, Core Animation 193
Transmission Control Protocol. See TCP
tryCatchMeIfYouCan method 78
U
UDP 139
UIApplication 213
UIApplicationExitsOnSuspend 174
UITableView 204
UITableView handling 29
UITableViews 193
UIWindow 189
unit 79
unit testing
  about 79-81
  application unit tests 79
  benefits 80
  drawbacks 81
  logic unit tests 80
universal binary 67
User Datagram Protocol. See UDP
user interface 25
user interface performance 25-27
V
variable naming conventions, maintainability
  about 54-56
  descriptive and well-formed variable names 56
  non-descriptive and poorly formed variable names 56
  rules 55
version management system, Xcode 228
View objects 216
view performance 188, 189
views
  about 188, 190
  managing 189
  placed, in frame 190
  subview, removing 189
volatile memory 149
W
while loop 108
Wi-Fi 127
X
Xcode 20, 152
  about 20, 36, 152, 219, 221
  advantages 219
  compiler 224
  dead code stripping 223
  debugger 224, 225
  distributed builds 222, 223
  distributed builds, enabling 222, 223
  source code management 225-228
  unit testing 79
  version management system 228
Xcode 3 219
Xcode 3.2 20, 99
Xcode 4
  about 37, 220, 221
  overview 220
Xcode interface 40
Xcode project
  organizing 39-41
Xcode project source 225
Thank you for buying
iPhone Applications Tune-Up
About Packt Publishing
Packt, pronounced 'packed', published its first book "Mastering phpMyAdmin for Effective MySQL Management" in April 2004 and subsequently continued to specialize in publishing highly focused books on specific technologies and solutions.

Our books and publications share the experiences of your fellow IT professionals in adapting and customizing today's systems, applications, and frameworks. Our solution-based books give you the knowledge and power to customize the software and technologies you're using to get the job done. Packt books are more specific and less general than the IT books you have seen in the past. Our unique business model allows us to bring you more focused information, giving you more of what you need to know, and less of what you don't.

Packt is a modern yet unique publishing company, which focuses on producing quality, cutting-edge books for communities of developers, administrators, and newbies alike. For more information, please visit our website: www.packtpub.com.
Writing for Packt
We welcome all inquiries from people who are interested in authoring. Book proposals should be sent to [email protected]. If your book idea is still at an early stage and you would like to discuss it first before writing a formal book proposal, contact us; one of our commissioning editors will get in touch with you. We're not just looking for published authors; if you have strong technical skills but no writing experience, our experienced editors can help you develop a writing career, or simply get some additional reward for your expertise.
iPhone JavaScript Cookbook ISBN: 978-1-84969-108-6
Paperback: 328 pages
Clear and practical recipes for building web applications using JavaScript and AJAX without having to learn Objective-C or Cocoa
1. Build web applications for iPhone with a native look and feel using only JavaScript, CSS, and XHTML
2. Develop applications faster using frameworks
3. Integrate videos, sound, and images into your iPhone applications
4. Work with data using SQL and AJAX
Xcode 4 iOS Development ISBN: 978-1-84969-130-7
Paperback: 432 pages
Use the powerful Xcode 4 suite of tools to build applications for the iPhone and iPad from scratch
1. Learn how to use Xcode 4 to build simple, yet powerful applications with ease
2. Each chapter builds on what you have learned already
3. Learn to add audio and video playback to your applications
4. Plentiful step-by-step examples, images, and diagrams to get you up to speed in no time with helpful hints along the way
Please check www.PacktPub.com for information on our titles
Core Data iOS Essentials ISBN: 978-1-84969-094-2
Paperback: 340 pages
A fast-paced, example-driven guide to data-driven iPhone, iPad, and iPod Touch applications
1. Covers the essential skills you need for working with Core Data in your applications.
2. Particularly focused on developing fast, lightweight data-driven iOS applications.
3. Builds a complete example application. Every technique is shown in context.
iPhone User Interface Cookbook ISBN: 978-1-84969-114-7
Paperback: 500 pages
A concise dissection of Apple's iOS user interface design principles
1. Learn how to build an intuitive interface for your future iOS application
2. Avoid app rejection with detailed insight into how to best abide by Apple's interface guidelines
3. Written for designers new to iOS, who may be unfamiliar with Objective-C or coding an interface
Please check www.PacktPub.com for information on our titles