March brought with it yet more news of app security headaches. The latest is the discovery of "132 Android apps on Google Play infected with tiny hidden IFrames that link to malicious domains in their local HTML pages," according to the security firm that made the discovery.
But before you dismiss this latest security hole with a yawn and a "So what else is new?", consider this further comment from the same report.
"Our investigation indicates that the developers of these infected apps are not to blame, but are more likely victims themselves. We believe it is most likely that the app developers’ development platforms were infected with malware that searches for HTML pages and injects malicious content at the end of the HTML pages it finds," the Palo Alto Networks blog post said. "If this is the case, this is another situation where mobile malware originated from infected development platforms without developers’ awareness."
What this means is that, even if you and your colleagues wanted to trust the developers who create and post apps to Google Play—a stunningly dangerous security position to take—it wouldn't help. If the coders themselves are not aware of the security issues, their integrity is irrelevant.
This forces the next logical question: If one mid-sized security firm could find these infections—not one but 132 of them—why didn't Google? Google certainly has the talent on payroll and enough people to do it. The sad fact is that it doesn't have the business model to do it.
Google has no more profit motive to check every app on Google Play than eBay does to subject every auctioned product to lab durability testing. This means you simply cannot trust Google or Apple to make sure that any app your people download from those stores is secure.
The only company that has a business need to put an app security system in place for your company is your company. If you don't do this directly, no other firm can justify doing it for you.
What exactly did these Android apps do? "What all the apps have in common is that they employ Android WebView to display static HTML pages. At first glance, each page does nothing more than load locally stored pictures and show hard-coded text. However, a deep analysis of the actual HTML code reveals a tiny hidden IFrame that links to well-known malicious domains," the blog post said. "We have observed two techniques used to hide this IFrame. One is to make the IFrame tiny by setting its height and width to be 1 pixel. The other one is to set the display attribute in the IFrame specification to None. Finally, to evade detection based on simple string matching, the source URLs are obfuscated using HTML number codes. We also identified a sample that didn’t contain an infected IFrame, but an entire VBScript was injected into the HTML. The script contained a Base64 encoded Windows executable that, on a Windows system, the script would decode, write to the file system, and execute."
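To see why the obfuscation defeats "simple string matching," consider how HTML number codes work: the browser decodes them transparently, but a scanner grepping for a literal URL never sees it. Here is a minimal Python sketch of that gap; the 1x1 IFrame and the encoding mirror the techniques described above, but `example.com` is a harmless placeholder, not one of the actual malicious domains.

```python
import html

# A hidden IFrame of the kind described in the report: 1x1 pixels, with its
# source URL obfuscated as HTML numeric character references.
# The decoded URL here is http://example.com, a harmless placeholder.
obfuscated = (
    '<iframe width="1" height="1" '
    'src="&#104;&#116;&#116;&#112;&#58;&#47;&#47;'
    '&#101;&#120;&#97;&#109;&#112;&#108;&#101;&#46;&#99;&#111;&#109;">'
    '</iframe>'
)

# Simple string matching fails: the literal URL never appears in the markup.
print("http://example.com" in obfuscated)   # False

# Decoding the character references first reveals the real destination.
decoded = html.unescape(obfuscated)
print("http://example.com" in decoded)      # True
```

A browser performs that same decoding automatically when it renders the page, which is why the IFrame works for the attacker while staying invisible to a naive scan.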
In short, these issues are highly unlikely to be detected by your users. Nor are members of your security team likely to find them in the course of their routine duties, unless they are specifically looking for them.
This is a big part of the reason why specialized app security teams and tools are needed. This doesn't involve merely time or running a script. It involves dealing with the constantly changing world of app security, so that your app security group has the help it needs to know what to look for.