IT is increasingly aware of how critical it is to keep applications secure. One problem with keeping apps secure, though, is making sure they started out secure in the first place, which is harder than it should be.
Consider this scary piece from Engadget, which found that security in the mobile Android world—specifically, apps in Google's Play Store—is rather amorphous. "Apple's App Store at least tries to curate product security, while Google's Play Store is like playing appsec Russian roulette," noted the story, which tellingly ran under a hed and deck that said: "'Secure' apps in Google's Play Store are a crapshoot. If you're trying to hide photos or videos on your Android, just give up now."
"Hiding" may have been a bad word choice, as the goal is truly to protect those files, allowing access only to authorized users. Then again, if global cyberthieves are on your authorized user list, all is good.
I'd push back and say the situation is even worse than that. It's true that Apple's App Store makes developers jump through a few more hoops than Google does, but even Apple doesn't have the resources to cost-effectively perform meaningful penetration testing of every app before it's approved.
Apple's app security philosophy is closer to the one virus-detection systems have used for decades. In effect, it relies on crowdsourcing. Even the best anti-virus programs today won't protect you against the newest nasty viruses. The approach depends on having a large customer base and assumes that the first customer victimized will reach out to the anti-virus vendor for help. The vendor thereby learns of the new virus, adds the attacker to its definitions list and protects everyone else. The first victim is pretty much out of luck.
Apple operates similarly. It assumes that if there's a major security hole, one of its millions of customers will get burned by it and Apple will then learn of it and can patch it for everyone else.
Apologies for the cliché, but this is no way to run a railroad. This is why you need your own dedicated app security system. You simply can't rely on app vendors—or app enablers, such as Apple and Google—to do this for you.
Here's another frightening news item: ThreatPost pointed to a "study of 45 million transactions during a three-month period" that "identified privacy leakage as the most serious problem, with too many apps sending metadata, location and personal identifiable information to the developer's server or an ad server."
This shouldn't come as much of a surprise; we have noted many times that apps are found to leak data in ways that catch both the end user and the developer unaware. Even an app that has been rigorously tested for data leakage can become a security problem once it moves out of the developer's sandbox and into a real-world mobile environment, where it must interact with other apps (many written after the app in question was deployed), helper apps (crash detection, backup and so on) and a constantly changing mobile operating system.
That, again, makes an independent app security strategy essential.