The hardest part of growing up is that everything you’re allowed to do is communicated in a general sense and everything that you’re not allowed to do is enumerated specifically and in detail AFTER you’ve gotten in trouble for doing it. So you’re told things like, “Go play in the yard.” Yet you get chewed out for very specifically flooding the yard to play mud football. Apparently the lawn, the water, and the clothes all cost money. Yet you played in the yard. Crazy, I know. Wouldn’t it have been better had you been told from the start, “Go play in the yard but do not flood it or damage it or your clothes in any way and do not waste water.” Sure, they tell you that NOW but it would have been much smarter to do it before you got in trouble.
This is the 3rd article in a pragmatic series to help you understand security in new and practical ways that you can apply immediately to improve software. So check back regularly and get a new story or learn something about software security, whichever, and be sure to take the little quiz at the end. I made that quiz just for you, and it’s a personal offense if you don’t at least try it.
But don’t blame the parents. The truth is it’s really difficult to imagine all the things a person might do other than what you told them to do. The bad things, the things not to do, seem so obvious. It’s never more obvious than it is in cybersecurity, where since the beginning of networked computers there have been the Admins who make the rules and the Users who are supposed to follow them.
Users are rarely considered capable of following directions, and when they are, they’re called Power Users. Which is also why we call children who do what they’re told Power Children (yes, we do, so if you’ve never heard the term, maybe you weren’t a child who followed directions).
Which is why we rarely tell people what to do and then follow it up with what not to do. Mostly because, like parents, we just can’t imagine we need to. But also because, if we had to, we’d spend the better part of the day detailing all the things that shouldn’t be done. So we don’t.
And that’s just one reason why it’s hard to program securely. We call it the “What not to do” problem. It’s actually the first of four barriers to secure development that you need to be aware of, but all four are really things not to do, and here they are specifically listed so you can’t say nobody told you:
1. What Not To Do
It is very difficult to build for "what must not be done"; protecting something according to what not to do first requires that you know exactly how it operates as a whole, with all connected resources. Creating sanitizing filters for input validation this way produces a blacklist, which enumerates all the things that can’t be done. However, except for code with extremely few interactions, it’s not realistically possible to list them all, because besides input and output there are many other types of interactions: error outputs, special character sets for international keyboards, and OS-specific commands or shortcuts.
The environment you develop in includes the operating system, programming language, and shared libraries, all of which make it more and more difficult to enumerate what not to do and secure against those things. Each of those at minimum affects input and output, which can often require two security controls, one for each. At that pace, don’t think you can ever get ahead. So work from a whitelist instead: strictly allow what is specifically permitted and disallow everything else.
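To make the difference concrete, here’s a minimal Python sketch (the field and its rules are hypothetical) contrasting a blacklist that tries to enumerate bad input with a whitelist that admits only known-good input:

```python
import re

# Blacklist: tries to enumerate what is NOT allowed. Every entry you
# forget to list is a hole. (Deliberately incomplete, like all blacklists.)
BLACKLIST = [";", "|", "&", "`", "$(", "../"]

def blacklist_ok(value: str) -> bool:
    return not any(bad in value for bad in BLACKLIST)

# Whitelist: states exactly what IS allowed and rejects everything else.
# Here a hypothetical username may only be 3-16 ASCII letters, digits,
# or underscores.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,16}$")

def whitelist_ok(value: str) -> bool:
    return USERNAME_RE.fullmatch(value) is not None

# A newline-injection attempt slips past the blacklist (no newline was
# ever enumerated) but cannot pass the whitelist.
print(blacklist_ok("bob\nrm -rf /"))   # True  -- the blacklist missed it
print(whitelist_ok("bob\nrm -rf /"))   # False
print(whitelist_ok("bob_42"))          # True
```

Notice that the whitelist never had to anticipate the attack; it only had to know what a valid username looks like.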
2. Unnecessary Complexity

Building without unnecessary complexity requires a lot of upfront planning. A lot. Because while hacking a fix sounds cool (because it is cool), it is not always the most efficient solution. And each bug fix has the possibility of adding more complexity, which leads to inefficiency and security issues. I know that sounds weird because some bug fixes are security related. Furthermore, complexity often requires adding more controls to manage changes in interactions. Those additions increase the attack surface of that system.
The attack surface, in simplest terms, is the set of all the points where a user or another system interacts with your program; each of these interactions creates a new way for the program to be attacked. When you minimize the attack surface, you minimize how much of the program can be attacked. This reduces your risk of attack and improves your security because you have less area to protect. Which is why, when complexity creates more interactions, you also have more doors to guard.
And it’s not just systems. Complexity also makes developers perform worse and skip steps in a process. On the human level, complex tasks require more concentration and leave gaps in secure behavior, which then affects the quality and security audits of the code.
3. Security Imbalance

It’s not just interactions with users and systems that increase the attack surface. Security implementations themselves can increase the attack surface of the system as a whole. The more security you add, the more attack surface you can create. That requires balance, because security implementations that protect against active threats do so through interaction. Each interaction opens a new way for a threat to attack. And each opening requires security. This can be a recursive nightmare to solve. We call this a security imbalance.
For example, think of a simple program that takes input and provides an output. I could show you some Hello world code here but I won’t insult you. So let’s do this abstractly instead. Now what happens when you sanitize that input to output the Hello world?
By adding security to sanitize unruly input possibilities, we went from two interactions (each represented by an arrow):
User inputs → Processes Happen → Output sent
to five interactions:
User inputs → Processes Happen → Filter receives → Filter outputs → Processes Happen → Output sent
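If you’d rather see those arrows as code, here is a hypothetical sketch (the function names are illustrative, not a prescription): each function boundary is one hand-off, and inserting the filter turns two interactions into five, each of which now also has to be trusted and protected:

```python
def process(data: str) -> str:
    # "Processes Happen" from the diagram.
    return f"Hello {data}"

# Before: user input -> process -> output  (2 interactions)
def respond_unfiltered(user_input: str) -> str:
    return process(user_input)

def sanitize(data: str) -> str:
    # The added control: whitelist letters, digits, and spaces only.
    return "".join(ch for ch in data if ch.isalnum() or ch == " ")

# After: input -> process -> filter receives -> filter outputs
#        -> process -> output  (5 interactions)
def respond_filtered(user_input: str) -> str:
    return process(sanitize(user_input))

print(respond_filtered("world; rm"))   # Hello world rm
```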
Controls can quickly build up and become redundant as you fight to gain security over all the newly created interactions. Maintaining global protections to be called throughout the program helps create balance since no new interactions are created if you reuse the same controls. This also lets you use a variety of different types of controls to protect against many more types of threats without increasing the attack surface beyond your control.
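One way to sketch that reuse in Python (function and variable names here are illustrative): a single, centrally maintained escaping control that every feature calls, so adding call sites does not add new filter code paths that would each need securing:

```python
import html

# One shared, centrally maintained control: every place that emits
# user-supplied text into HTML calls this same function. Tightening it
# once protects every call site, and no new filtering interactions are
# created as the program grows.
def render_untrusted(text: str) -> str:
    return html.escape(text, quote=True)

# Three different features, one control -- the attack surface of the
# filtering logic itself stays constant.
comment  = render_untrusted('<script>alert("x")</script>')
username = render_untrusted("bob & alice")
search   = render_untrusted('say "hi"')

print(comment)   # &lt;script&gt;alert(&quot;x&quot;)&lt;/script&gt;
```

The design choice here is the point: the control is global and reused, not copy-pasted per feature, so fixing a bypass fixes it everywhere at once.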
4. Shared Resources
Each system has limited resources and many applications need to share those resources so this probably sounds crazy. Which is why it’s a pretty big barrier. You see, the problem is that security should not share resources with the asset it is protecting. If the security implementation needs memory, hard disk, or other resources then it should not be the same as that used by the program. Ideally all resources between security and assets should be separated and it requires your best efforts to reach that ideal separation. It won’t always be possible but you should consider what it means.
In abstract terms, imagine you have an army that needs to protect a town. If the army moves into the town and shares its food and water then an attacker can burn the food and poison the water to defeat the army AND take the town. If the army has its own food and water different from the town then the attacker cannot displace the army by attacking the town’s resources. Additionally, if you move the army outside of the town to protect it, then there’s no attack against the town that can affect the army and no attack against the army will affect the town as long as the army stands.
Now in a computer system this can be difficult to achieve, but when it’s not regarded at all, things can go very wrong. The Heartbleed vulnerability is a good example worth studying: a missing length check let an attacker request a heartbeat reply larger than the data actually sent, so the reply was padded with whatever happened to sit in adjacent memory, eventually including login and password data still in memory. Why was the authentication a shared resource with the application underlying the service?
You will struggle with the trade-off between the efficiency that comes from shared resources and the security that requires you not to share them. At this point it’s still unclear whether different security controls can share resources with each other, because we just don’t have enough research to say, but what is clear is that the application and the application’s security should not.
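Here is a minimal sketch of that separation, under assumptions of my own (the names, the fixed salt, and the toy credential are all hypothetical and not production practice): the credential check runs in its own process with its own memory, so a Heartbleed-style memory-disclosure bug in the application process cannot leak the stored secret, because the application only ever receives a yes/no answer over a pipe.

```python
import hashlib
import hmac
from multiprocessing import Pipe, Process

def auth_worker(conn):
    # The stored credential lives only in THIS process's memory --
    # the army keeps its own food and water.
    stored = hashlib.pbkdf2_hmac("sha256", b"correct horse", b"salt", 100_000)
    while True:
        attempt = conn.recv()
        if attempt is None:          # shutdown signal
            break
        digest = hashlib.pbkdf2_hmac("sha256", attempt, b"salt", 100_000)
        conn.send(hmac.compare_digest(stored, digest))
    conn.close()

if __name__ == "__main__":
    parent, child = Pipe()
    worker = Process(target=auth_worker, args=(child,))
    worker.start()
    # The application never holds the stored secret in its own
    # address space; it only sees the verdict.
    parent.send(b"correct horse")
    print(parent.recv())   # True
    parent.send(b"wrong")
    print(parent.recv())   # False
    parent.send(None)
    worker.join()
```

A real deployment would reach for an OS-level separation (a dedicated authentication daemon, distinct users, separate storage quotas), but the shape is the same: the protector and the protected asset do not draw from the same pool.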
How do you choose? An easy rule of thumb to follow is: “The more confidential the information is within the application, the less exposure it should have to the environment.”
So while you may not have been a Power Child, there’s no reason you can’t be a Power Developer now, do what you’re told, and specifically avoid these four things you shouldn’t do.
Quiz – answer in the comments section and gain the respect and envy of your peers!
1. What are the four things you shouldn’t do to assure more secure development?
2. Why does adding security controls increase your attack surface and lead to control recursion?
3. Name another well-known vulnerability that occurred because an application shared resources with its security processes.