Apple iOS Security Defeated By Sneaky App
After slipping a malicious app past Apple's App Store reviewers, security researchers say Apple should strengthen its defenses.
Five computer security researchers from the Georgia Institute of Technology have demonstrated that they can create malicious apps capable of evading detection by Apple's app review process.
In "Jekyll on iOS: When Benign Apps Become Evil", a paper presented at the Usenix Security '13 conference, Tielei Wang, Kangjie Lu, Long Lu, Simon Chung, and Wenke Lee describe how they were able to create apps that can be exploited remotely through program paths that did not exist during the app review process. The researchers call these "Jekyll apps," because they conceal their malicious side.
Apple takes justifiable pride in its iOS security regime. Though the company's scrutiny of third-party apps often forces developers to do extra work to satisfy its rules, its oversight has kept malware at bay more effectively than its competitors' efforts have. A 2011 research paper, "A Survey of Mobile Malware in the Wild", identified all known Android, iOS, and Symbian malware that spread between January 2009 and June 2011. Of the 46 instances of mobile malware during this period, only four affected iOS, compared with 24 for Symbian and 18 for Android.
Nonetheless, iOS, like any operating system, has flaws that can be identified and exploited. While Apple tends to address such flaws quickly once it becomes aware of them, it can't fix problems it can't identify. Wang and his colleagues show that exploitation can be accomplished without relying on a pre-existing flaw in iOS, by concealing the malicious attack logic within the app itself.
"Jekyll apps do not hinge on specific implementation flaws in iOS," the paper explains. "They present an incomplete view of their logic (i.e., control flows) to app reviewers, and obtain the signatures on the code gadgets that remote attackers can freely assemble at runtime by exploiting the planted vulnerabilities to carry out new (malicious) logic."
Assembling malicious logic at runtime lets the attack evade human reviewers as well as automated static analysis, which examines program code without actually executing it.
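The heart of the trick is that the dangerous control flow is decided by data that only exists at runtime. The researchers do this with deliberately planted memory-corruption vulnerabilities and gadget chaining launched from a remote server; the toy C sketch below is not their exploit and only illustrates the simpler underlying principle. All of the names in it (show_weather, hidden_gadget, the dispatcher record) are invented for illustration: the only handler assignment a reviewer would see in the source is the benign one, yet bytes received at runtime rewrite the call target, so the malicious call edge never appears in the reviewed code.

```c
/*
 * Toy sketch only -- not the researchers' actual exploit, and deliberately
 * simplified. It shows the principle: when a call target is decided by
 * bytes that arrive at runtime, the dangerous call edge never appears in
 * the source code or call graph that a reviewer inspects.
 */
#include <stddef.h>
#include <stdio.h>
#include <string.h>

static void show_weather(void)  { puts("benign: showing the weather"); }
static void hidden_gadget(void) { puts("hidden: behavior the reviewer never saw run"); }

/* A record the app uses to handle incoming "messages". */
struct dispatcher {
    char note[16];            /* planted, unchecked input buffer            */
    void (*handler)(void);    /* call target sitting right after the buffer */
};

int main(void) {
    struct dispatcher d;
    memset(&d, 0, sizeof d);
    d.handler = show_weather;          /* the only assignment visible in review */

    /* Simulated remote payload: filler bytes followed by a code address.
     * In a real attack the address would be supplied remotely; here it is
     * built locally to keep the demo self-contained and runnable. */
    unsigned char payload[sizeof d];
    memset(payload, 'A', sizeof payload);
    void (*target)(void) = hidden_gadget;
    memcpy(payload + offsetof(struct dispatcher, handler), &target, sizeof target);

    /* The planted flaw: the app copies the whole message over its record
     * without validating it, letting the sender rewrite the call target. */
    memcpy(&d, payload, sizeof payload);

    d.handler();   /* now jumps to hidden_gadget, a path static analysis never saw */
    return 0;
}
```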
To prove the point, the researchers submitted a malicious "Jekyll" app, had it approved by Apple, and downloaded it before voluntarily removing it from the iTunes App Store.
The construction of "Jekyll apps" may be more elaborate than necessary to sneak code that violates Apple's rules past app reviewers. Last year, for instance, the iOS app iRandomizer Numbers was found to have an undocumented tethering feature that violated Apple's review guidelines. The app was pulled from the iTunes App Store and AT&T's mobile business did not collapse as a result of unexpected network data traffic. But the incident demonstrates that Apple does not catch every app with undocumented features.
Asked whether it might not be simpler to create an app that acts maliciously only for a single, targeted victim or only after several months of use, Wang responded by email that he and his colleagues assume Apple has complete insight into unexecuted code branches that would lead to malicious behavior when certain conditions are met, which is why such time bombs or victim-specific triggers would not escape a thorough review.
The paper argues that it is theoretically difficult and economically prohibitive for Apple to keep its App Store free of vulnerabilities. Nevertheless, it does offer a few suggestions about how to mitigate the risk of "Jekyll" apps through runtime security monitoring mechanisms.
The researchers propose a stricter execution environment, along the lines of the restrictions Google places on Native Client code, even as they express doubts about how easily Apple could accomplish this, given the tightly coupled nature of public and private frameworks in iOS. They also advocate finer-grained use of security techniques such as address space layout randomization (ASLR), permission models, and control-flow integrity (CFI). Finally, they suggest that Apple adopt a type-safe programming language like Java to protect against low-level memory errors such as buffer overflows.
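To make the control-flow integrity suggestion concrete, here is a minimal sketch of the kind of check CFI enforces, assuming a simple allowlist of legitimate call targets. The function names are invented for illustration, and real CFI is inserted by the compiler and enforced at every indirect branch rather than written by hand like this; the sketch only shows the policy being enforced: an indirect call may proceed only if its target is one the program is known to use.

```c
/*
 * Minimal sketch of the idea behind control-flow integrity (CFI): before an
 * indirect call, check that the target is one the program legitimately uses.
 * Real CFI is enforced by compiler instrumentation and the runtime, not by
 * hand-written checks like this one.
 */
#include <stdio.h>
#include <stdlib.h>

static void show_weather(void)  { puts("showing the weather"); }
static void show_calendar(void) { puts("showing the calendar"); }

/* The set of call targets the app's visible logic is allowed to reach. */
static void (*const allowed_targets[])(void) = { show_weather, show_calendar };

static void checked_call(void (*target)(void)) {
    for (size_t i = 0; i < sizeof allowed_targets / sizeof allowed_targets[0]; i++) {
        if (target == allowed_targets[i]) {
            target();          /* target is on the allowlist: let it run */
            return;
        }
    }
    /* A rewritten or attacker-supplied pointer lands here instead of executing. */
    fputs("CFI violation: unexpected indirect call target\n", stderr);
    abort();
}

int main(void) {
    checked_call(show_weather);    /* allowed */
    checked_call(show_calendar);   /* allowed */
    return 0;
}
```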