California’s Attorney General issued a warning to mobile developers this week: come clean about what kinds of user data you collect – or else! It was a laudable act – especially in the face of federal government indifference. But more daylight may not make users any safer.
After hammering out an agreement in February with platform providers Amazon, Apple, HP and RIM to improve privacy protections for users of mobile applications, California’s trailblazing Attorney General, Kamala Harris, was back again this week, calling out mobile application developers who were violating the state’s sweeping data privacy law.
“Protecting the privacy of online consumers is a serious law enforcement matter,” said the Attorney General in a published statement. “We have worked hard to ensure that app developers are aware of their legal obligations to respect the privacy of Californians, but it is critical that we take all necessary steps to enforce California’s privacy laws.”
Harris’s actions come after increased media attention to the issue of online privacy. In December 2010, for example, The Wall Street Journal published the results of an extensive investigation of 101 popular mobile applications for Apple’s iPhone and Google Android devices, finding that a majority – 56 – transmitted the phone’s unique device identifier (UDID) to other firms without the consent of the phone’s owner, while a smaller number (around 5 percent) sent personal information such as the owner’s age, gender and other personal details.
That report was followed by others that exposed how mobile applications and devices were harvesting reams of personal information and even users’ locations – often without the explicit consent of the phone’s owner. In April 2011, for example, Apple was forced to acknowledge, and publicly apologize for, a bug that caused iPhones to continue transmitting location data even when users had disabled Location Services on their phones. State laws like California’s Online Privacy Protection Act (CalOPPA), passed in 2003 and enforced with new vigor in the wake of those stories and others like them, required explicit disclosure of data collection and privacy protection policies. In short: software publishers and platform vendors needed to say what kinds of data they collected, which of it was personally identifiable, and what they were doing with it once it was collected.
But informing consumers is one thing, and actually protecting privacy is another, much bigger problem. It’s dangerous to mistake one for the other.
For one thing: laws or no laws, data harvesting by mobile devices, mobile applications and web-based applications is as popular as ever. Juniper Networks this week released a survey of 1.7 million mobile apps and found that both anonymous and individual data harvesting remained common. In particular: about a quarter of all free mobile applications for the Android platform request permission to track the user’s location, and around seven percent of free apps request access to the user’s address book. An April survey from the information security group ISACA found that close to 60 percent of smartphone users took advantage of location services, despite concerns that the information might be misused by advertisers.
And this doesn’t even address the problem of poorly coded or poorly configured mobile applications that unwittingly share information. In just the last year, users of mobile applications from the professional social network LinkedIn and from Path discovered that their personal information was being harvested without their consent. In LinkedIn’s case, security researchers showed that personal calendar entries on iPhones and iPads were being transmitted back to LinkedIn without users’ knowledge. Path, another social network, was caught uploading its members’ entire address books to its servers for processing in an attempt to expand the reach of its network.
The idea embodied by laws like California’s Online Privacy Protection Act is that more information will liberate consumers to make informed and smart decisions. But it just doesn’t pan out. Why? For one thing, it badly discounts the complexity of our fast-evolving mobile marketplace. It also ignores much of what we know about human psychology and the findings of independent studies. In one related Carnegie Mellon University study on the effectiveness of browser warnings about unsafe web sites, for example, researchers found that visitors frequently exhibited “dangerous” browsing behavior: clicking past warnings about insecure web connections to get to pages they needed to access. Users, the researchers found, put more trust in the “reputation” of the site they were connecting to than in what their browser was telling them about how secure that site was. A similar phenomenon is likely at work when users on Apple’s App Store or Google Play blow past privacy agreements to get their copy of Bad Piggies or Instagram.
So what’s to be done? Pressure on mobile app developers has improved conditions somewhat. The Future of Privacy Forum released a report in June finding that more mobile application developers were shipping explicit privacy policies with their applications. But stronger measures are needed to protect the most sensitive user information. The researchers at CMU concluded that actually securing users by blocking access to dangerous sites was one way to thwart unsafe behavior. A similar approach might work in the context of mobile application privacy. Namely: protect privacy by protecting privacy – make it illegal to collect certain kinds of user information without explicit consent, and do away with blanket permission statements, as the EU is doing. Beyond that, users should see in real time which of their personal information will be shared as the result of a particular online action (downloading or running an application, signing up for a new service, and so on). Only with that information, presented in context, can users make informed decisions about their online behaviors.
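To make the proposal concrete, here is a minimal sketch – in Python, with entirely hypothetical names (ConsentRegistry, DataCategory, upload_address_book are inventions for illustration, not any real platform API) – of what consent-gated data access could look like: no blanket grants, consent recorded per category at the moment of use, and the sensitive action simply unable to run without it.

```python
# Hypothetical sketch of per-category, explicit-consent data access.
# None of these names correspond to a real mobile platform API.
from enum import Enum, auto


class DataCategory(Enum):
    LOCATION = auto()
    CONTACTS = auto()
    CALENDAR = auto()


class ConsentDenied(Exception):
    """Raised when an app touches data the user never explicitly approved."""


class ConsentRegistry:
    """Tracks per-category consent; no blanket, install-time grants."""

    def __init__(self):
        self._granted = set()

    def grant(self, category: DataCategory):
        # Consent is recorded one category at a time, in context,
        # at the moment the data would actually be used.
        self._granted.add(category)

    def require(self, category: DataCategory):
        if category not in self._granted:
            raise ConsentDenied(f"No explicit consent for {category.name}")


def upload_address_book(registry: ConsentRegistry, contacts: list) -> int:
    # The sensitive action is gated: "protect privacy by protecting
    # privacy" means it cannot run without a prior, specific grant.
    registry.require(DataCategory.CONTACTS)
    return len(contacts)  # stand-in for the actual upload


registry = ConsentRegistry()
try:
    upload_address_book(registry, ["alice", "bob"])
except ConsentDenied as e:
    blocked = str(e)  # the Path-style bulk upload is refused outright

registry.grant(DataCategory.CONTACTS)
uploaded = upload_address_book(registry, ["alice", "bob"])
```

The point of the sketch is the inversion it embodies: instead of disclosure after the fact, the default is denial, and each grant names exactly one kind of data at the moment it would be shared.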