“There are two ways you can do application security” sounds like the setup line for a joke, doesn't it?

The first is what the majority says you're supposed to do, or at least what "best practices" tell you to do. Let's call that Method 1.

Method 1 is the popular way to do network, system and application security these days. Follow the usual recipes, checklists and scripts: firewalls, antivirus, IDS, patch patch patch. Chase the latest vulnerability and remove the latest malware. I used to consider that fun, until the number of users I supported and the security demands of their networks both ballooned. Then it was not so fun. Then it was murder.

That led me to look for another way to do things, and eventually to the OSSTMM and some provocative discussions with senior security gods. Cor Rosielle in particular has offered examples that made me stop and re-read, twice, which is something I rarely do. But reduced to the simplest level, what he was saying made sense: stop chasing vulnerabilities with your patching net. You will never catch them all. Audit interactions instead. It's all about the interactions. That's what we call Method 2.

This is the kind of concept I had to learn through experience and examples. After a while it became second nature to think about interactions, but initially it was like seeing a new color.

Let's use Google as an example, and Chrome in particular, though I'll start with an Android setting you may be familiar with: the Backup & Reset option. Ever since I learned that Google is glad to cache things like the wifi password for every network you connect to, all for "user convenience," I've made sure this setting is unchecked on every Android device I use. On most Android phones it's on by default, and you're given the chance to change it during initial phone setup. And it sounds great, right? All that user stuff like passwords and the apps I've bought can follow me to a new phone!

Yes, and all that information will also be available to creative crackers, and any government agency with the right compulsory powers. Regardless of your feelings about governments, agencies or crackers, the simple fact of exposure creates a potential interaction that must be analyzed.
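If you manage more than a couple of devices, you don't have to take the settings screen's word for it. Here's a minimal check from a shell, using adb and Android's bmgr tool, assuming a device with USB debugging enabled:

    # Ask the Backup Manager whether it's currently enabled
    adb shell bmgr enabled

    # Disable the Backup Manager outright
    adb shell bmgr enable false

The first command reports something like "Backup Manager currently enabled"; the second turns backup off at the same layer the Backup & Reset screen controls.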

First, are confidentiality controls in place? To most people that means encryption in transit and at rest. But it also means no sharing or release of encryption keys or encrypted data. Google passes on the first score, but necessarily fails on the second, because it's already well established that Google releases information to many governments. Even if I grant full trust to governments, they sometimes force people and organizations to do things I can't trust. As I write, for instance, there's debate in the UK about prohibiting "unbreakable" encryption. A prohibition like that requires some kind of back door, which in turn extends an irresistible invitation to crackers everywhere. I mean, who doesn't like finding secret doors?
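To make "no release of keys" concrete, here's a minimal sketch in Python, using the cryptography package (my example, not anything Google actually does), of client-side encryption where the key is generated and held locally, so whoever stores the ciphertext can neither read nor release the plaintext:

    # pip install cryptography
    from cryptography.fernet import Fernet

    # The key is generated and kept on the client; it never leaves the device.
    key = Fernet.generate_key()

    # Encrypt locally before anything is synced or backed up.
    token = Fernet(key).encrypt(b"my wifi password")

    # Only 'token' should ever reach a server. Anyone holding just the
    # ciphertext, including a subpoenaed provider, cannot read it.
    print(token)

    # Decryption requires the locally held key.
    assert Fernet(key).decrypt(token) == b"my wifi password"

A provider who never sees the key has nothing to hand over, no matter who asks.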


The Integrity and Availability legs of the C-I-A triad superficially seem solid in this case, mostly because Google's whole motivation is to keep our information intact and available everywhere. But there are actually four legs to that stool, because Privacy is as important as Confidentiality. The Ashley Madison leak proved that the confidentiality of information doesn't count for much if user identities are revealed and activities are at least strongly implied.

For example, a couple entering the hotel room next door means next to nothing, and certainly doesn't imply someone is cheating. But if the couple consists of your boss and a woman who definitely isn't his wife, suddenly that disclosure of identity becomes a significant violation of privacy (and a vulnerability for your boss, who may soon learn about a class of exploits called “Extortion”).

If you're an OSSTMM user then you know that Confidentiality is a completely different control from Privacy. And most of you, like me, also needed time to understand the difference, since for so much of our best-practice lives they were one and the same.

The way I finally got it, before we had examples as big as Ashley Madison, is this: Confidentiality is protecting the message, whether in transit, on a server or in a briefcase handcuffed to an arm, so that nobody can read it if it isn't intended for them.

Privacy as a control, on the other hand, is about protecting the interactions, which for a message means protecting how it was delivered, who sent it, and who read it.

Seen in this light, any disclosure of my identification or authentication details is a similar privacy problem, though the parallel (divorce) may happen between me and my employer (who is too much like a spouse anyway). While I may uncheck the Backup & Reset box on my Android devices, every friend who comes to my house and every visitor to my company who gets the wifi password becomes a possible leak: did they uncheck that box? In the enterprise you'd address this kind of issue with effective network segmentation, but the people running many smaller networks, and most home networks, won't even be aware of this involuntary interaction.
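Segmentation doesn't have to be exotic. Here's a bare-minimum sketch in iptables, with hypothetical interface names, that lets a guest wifi segment reach the internet but never the internal LAN:

    # Guest wifi (wlan1) may reach the internet (eth0)...
    iptables -A FORWARD -i wlan1 -o eth0 -j ACCEPT

    # ...but guest traffic never crosses into the internal LAN (eth1),
    # and the LAN can't reach guests either.
    iptables -A FORWARD -i wlan1 -o eth1 -j DROP
    iptables -A FORWARD -i eth1 -o wlan1 -j DROP

Now a visitor who leaks the guest password has leaked access to an isolated segment, not to your file servers.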

Chrome presents a similar problem. Google has worked hard to respond to security concerns with the Chrome browser from the start, and from the software-vulnerability viewpoint they've done a pretty good job (which is where you'd be putting your attention too, if you're doing security by Method 1 above). But from an interactions perspective, Method 2, Chrome is keeping, moving, storing and disclosing information in ways I can't control, and to parties I haven't specifically trusted. And that's a problem.


You can, in fact, use Chrome without logging in to Google. Unless you clicked Sign in to Chrome, that is. Or signed in to Gmail or YouTube on the same browser or device. Or signed in to any other Google service. Often you're not even aware you've enabled automatic login for those sites. The simple truth is, it's hard not to log in to Google when you're using Chrome. Do your users know that even when they close that Gmail tab, they haven't logged out of Google until they literally go through the log-out process? Or that they've essentially licensed Google to track their every click and remember every password while they're logged in?

Chrome likely remembers far more user accounts and passwords than Android's wifi-password caching operation. And like Android, it will cache those credentials not just locally but on Google's servers. So if your users access a secure corporate website using Chrome, then unless you're managing their sync settings, Google has their credentials. Did you choose that interaction, or consciously grant that trust? If you didn't, you've got an uncontrolled interaction, one that occurs every time a user opens Chrome with Gmail sitting in a default tab.
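If your users must run Chrome, at least make the sync decision yourself. Here's a minimal sketch of a managed-policy file using two long-standing Chrome policies, SyncDisabled and PasswordManagerEnabled; on Linux it would live under /etc/opt/chrome/policies/managed/ (Windows admins push the same policies through Group Policy), and you should verify both against your Chrome version before rolling anything out:

    {
      "SyncDisabled": true,
      "PasswordManagerEnabled": false
    }

With that in place Chrome still runs, but it stops syncing data to Google's servers and stops offering to remember passwords at all.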

What I'm most concerned about is the model itself, what ISECOM calls the Witness Protection Model, because that's the model Method 1 is using.

When you go into witness protection, you get a new name, home, job, life. As long as you do everything perfectly, and manage never to reveal any connection between your old and new identities, you're probably going to be okay. But slip up just once, and horse heads start showing up in your bed. Even if you don't slip up, if someone recognizes you: horse heads. It's a tough game to play, because witness-protection security only works with the cooperation and diligence of all the parties involved.

ISECOM advocates the Prison Model instead.

That sounds grim and oppressive, but actually it's not. Personally, I like the fact that because it's impossible for me to mess with corporate finances, I can't easily be accused of embezzling. Essentially, anything I'm not required to do, I'm forbidden to do. We talk about this as the Principle of Least Privilege and Separation of Duties, among other terms. Under this model, Chrome's caching and syncing would require a conscious decision and repeated authorization to function, would provide fine-grained, per-item configuration, and would be completely manageable through enterprise directory controls.
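In code, the Prison Model boils down to default deny. Here's a minimal Python sketch (the roles and actions are mine, purely illustrative) where anything not explicitly granted is forbidden, including roles and actions nobody thought to list:

    # Default-deny authorization: anything not explicitly allowed is forbidden.
    ALLOWED = {
        "accountant": {"view_ledger", "enter_invoice"},
        "auditor": {"view_ledger"},  # separation of duties: auditors read, never write
    }

    def is_allowed(role, action):
        # Unknown roles and unlisted actions all fall through to deny.
        return action in ALLOWED.get(role, set())

    assert is_allowed("auditor", "view_ledger")
    assert not is_allowed("auditor", "enter_invoice")  # not required, so forbidden
    assert not is_allowed("intern", "view_ledger")     # unknown role: denied

The Witness Protection Model is the opposite: everything is allowed unless someone remembers to forbid it, and one forgotten rule is a horse head.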


But as things currently stand, I'd be enormously reluctant to sign off on a client allowing employees to use Chrome within the enterprise, and I'll likely never be happy about them using Chrome to access secure corporate sites. It's easy to write me off as just another paranoid security guy ranting about Google. But once you're looking at interactions as something about as welcome as a leech, you're going to want to limit how many of them are hanging off your skin, and you're definitely not going to be happy with anyone surreptitiously sticking them onto you.

Now for an exercise: go to the Google Privacy Page and scroll down to “Information Google receives when you use Chrome.” For each of the 20+ bullet points, give some thought to the interactions involved: what's being cached, shared or disclosed, what is valuable and vulnerable, and how much control do you have?

ISECOM continues to develop the OSSTMM specifically to encourage the practice of interaction-based security analysis, and we offer the Hacker Highschool curriculum to train teens in security awareness for all the interactions our wireless world brings them. Both are free to use, so you're welcome to try them and see just how differently you can practice security: you can stay in a patch-and-fix cycle that never ends and can never be "won," or you can focus on your assets and careful analysis of their interactions, both inside and outside your enterprise. You'll see two major differences: in the price of your application security solutions, and in your eventual workload.

Personally, I've always been lazy about unnecessary work; that's why I started writing shell scripts, and that's why I've moved to Method 2 in my security practice. And that kind of laziness is not a sin.

About Glenn Norman

Glenn is a freelance security professional with long and varied experience. He is the ISECOM Hacker Highschool v.2 Project Manager, coordinating almost 100 contributors, reviewers and translators. His specialties include teaching, training and curriculum development in cyber security; electronic medical records (EMR); managing highly secure networks; and open-source software development.
