Mar 28, 2018

It’s Complicated - Operational Security for Developers

By Pete Herzog

The life of a commercial software developer is a difficult one. Or at least we have to assume it is because of how many of them half-ass it when code starts to get complicated.

Okay, maybe that’s unfair. Maybe it’s not all half-assing. It’s complicated. Literally.

There are many functions that are overly complex, with so many variables and interactions that they are effectively untestable.

This is the fourth article in a pragmatic series to help you understand security in new and practical ways that you can apply immediately to improve software. So check back regularly and get a new story or learn about software security, whichever, and be sure to take the little quiz at the end. Somebody once forgot the quiz and a bad thing happened as an indirect result. Don’t let that happen to you.

Furthermore, coding security into these complex applications can also prevent testing of that security, especially if part of the security is to encrypt, obfuscate, or otherwise protect the app code from reverse engineering. This creates code that may be more secure but is untestable and unmaintainably secure. That means any changes to the software will be untestable and the security functions unverifiable.

And when code is unmaintainably secure, you run into the problem where it becomes really difficult to know what certain parts of the code even do. Besides being a bugfix issue, it’s a continuity issue, since you can’t pass the code off to other developers should the original developer leave. Because of that you end up seeing blocks of code separated into little cages by comments that say, “Don’t know what this does but if you delete it then everything will stop working. DON’T DELETE!!!”

Which brings us to the first point of security. Security is a function of separation. Either the separation between an asset and any threats exists or it does not. There are only 3 logical and proactive ways to create this separation:

  1. Move the asset to create a physical or logical barrier between it and the threats.
  2. Change the threat to a harmless state.
  3. Destroy the threat.

There is a fourth, which is to destroy the asset, but that’s not really logical for business, so let’s put that one aside.

In creating software, the concentration is often on the first and second means of separation. First, we choose environments where certain threats cannot exist, and can then classify the software as “internal” or “not for high-risk environments”. This way its use outside of the classification is supposedly the choice of the user. However, this concept has fallen out of favor over the years as increased inter-connectivity (intended or not) has dropped perimeter security all the way back to the application itself, and applications have become gateways. For example, a browser is an interactive gateway to many untrusted systems with no perimeter security, so no classification as “internal only” can help protect it.

In the second, software designers look to filter interactions so that anything which can harm the environment or the application is removed. This is a means of changing the threat to a harmless state. However, it requires untrusted interaction with the filter, itself part of the program, and therefore increases the attack surface of the application. This is one of those cases where you need to assure the filter doesn’t share resources with the application it’s protecting. Additionally, it’s not possible to know what all the threats are or all the possible types of attacks, because we can’t know all possible motivations. Therefore applications need to focus on what they can accept, and this is known as whitelisting.

In whitelisting, we choose what we want or can work with from an interaction. So instead of filtering out the bad, we select the useful (good and bad intentions are currently difficult to discern before an attack, so they have no place in a passive filter) from an interaction and change or ignore the rest. So if any elements of the interaction don’t match those in the whitelist, the proper reaction is to change them to what is accepted (sanitize) or drop the whole interaction (fail safely).
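Here’s a minimal sketch of that sanitize-or-fail-safely rule in Python. The field names and patterns are hypothetical, not from any particular application; the point is that only whitelisted fields survive and anything unacceptable drops the whole interaction:

```python
import re

# Hypothetical allow-list: only these fields are accepted, each with a
# pattern describing what a valid value looks like.
ALLOWED_FIELDS = {
    "username": re.compile(r"^[a-z0-9_]{3,32}$"),
    "age": re.compile(r"^[0-9]{1,3}$"),
}

def sanitize(raw: dict) -> dict:
    """Select only whitelisted fields; drop the interaction if any fail."""
    clean = {}
    for name, pattern in ALLOWED_FIELDS.items():
        value = str(raw.get(name, ""))
        if not pattern.fullmatch(value):
            # Fail safely: reject the whole interaction rather than
            # guessing at the sender's intent.
            raise ValueError(f"rejected: field {name!r} is not acceptable")
        clean[name] = value  # anything not in the allow-list is ignored
    return clean

print(sanitize({"username": "pete_h", "age": "42", "is_admin": "true"}))
# {'username': 'pete_h', 'age': '42'}  -- 'is_admin' silently dropped
```

Notice the example never enumerates bad input; it only describes what it will accept, which is exactly what makes a whitelist cheaper to maintain than a blacklist.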

But maybe you realize you’re making a whole lot more filters than you want to. Or maybe you realize that whitelist filters just aren’t possible. So maybe you should think about what untrusted interactions should be allowed from the start. Doing this is how you begin to simplify securing complex applications.

Application Porosity

Separation is a powerful security tactic but only if it’s applied correctly. It’s strongest when used as a preventative rather than a control. That means it’s better to not have an interaction than have it and need to control it.

Therefore when applying security to applications we need to see where there is the possibility for interaction and where there is not. We know some, all, or even none of these interactions may be required for operations. Like doors into a building, some of the doors are needed for customers and others for workers. However, each door is an interactive point which can increase both necessary operations and unwanted ones, like theft. In software, interactions can occur with users and systems, both trusted and untrusted, but also between the application and system components such as memory, keyboard, peripherals like printers and USB devices, and the hard drive.

All these interactive points together are known as the porosity, and it’s what operational security is all about. I’m sure you’ve heard of opsec, right? It’s securing the stuff in motion, like a compiled or running application, to assure a separation between a threat and an asset.

The porosity consists of 3 elements: Visibility, Access, and Trust, which further describe its function in these interactions so that the appropriate controls can be put in place. This is extremely important because security controls each match to specific protections of interactions. For example, the Confidentiality control, which includes Encryption and Obfuscation among other things, cannot prevent the message from getting stolen, changed, or destroyed. It can only delay its being read. Therefore, if the goal is to protect the message, adding encryption will only protect it from one type of threat: the getting-read part.
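A small illustration of that limit, using the third-party Python cryptography package’s Fernet (my choice of example, not something prescribed by the article). Encryption keeps the message from being read without the key, but the ciphertext can still be copied, changed, or destroyed:

```python
# pip install cryptography
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()
f = Fernet(key)
token = f.encrypt(b"wire $10,000 to account 12345")

# Confidentiality holds: without the key, the token is unreadable.
print(token[:20], b"...")

# But encryption doesn't stop theft: the ciphertext can still be copied.
stolen_copy = bytes(token)

# And it doesn't stop change or destruction: alter one byte and the
# message is gone for the legitimate recipient too.
pos = 10
original = token[pos:pos + 1]
replacement = b"A" if original != b"A" else b"B"
tampered = token[:pos] + replacement + token[pos + 1:]
try:
    f.decrypt(tampered)
except InvalidToken:
    print("tampering detected, but the message is destroyed either way")
```

Fernet detects the change (it’s authenticated encryption), yet detection doesn’t restore the message: confidentiality is the only property encryption bought you here.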

To apply the concept of porosity to coding, address the following:

  • What input do you trust? Do you take data directly from a user, hard disk, memory, or network, or do you select only the data you want from the input to act on? Trusting even indirect input, such as what the program placed in memory and on the hard disk, is to ignore that resources can be replaced or snooped on in an environment outside the application (see the sketch after this list).
  • Do you expect global limits within your environment? Environments can be changed from outside the application, and if protecting the whole environment is not in the application’s scope, then the environment needs to be consistently defined, along with a means to constantly measure its state. Buffer overruns are the common result of overloading a strict environment. These overflows need not be a mismanagement of the buffer itself; they could be the result of attacks made to shrink or limit the buffer environment outside the application, so that when input is written to the buffer, it overflows, possibly performing malicious operations.
  • Address what is Visible, where direct Access is allowed, and what can be Trusted. Consider the environment. In a shared environment, such as a desktop, there is no trust possible, as the application is just one of many residing on a foreign system. If the environment is a server, more trust is allowed, as users have less opportunity to insert or interact with other applications, the hard drive, or memory. However, there are exploits which take advantage of one known application on a server with an input weakness to attack another application on that same server. Stay constantly aware of the environment.
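To make the first point concrete, here’s a minimal sketch of re-validating the application’s own saved state. The file name, fields, and limits are hypothetical; the idea is that even data the program wrote itself lives outside the application and gets the same whitelist treatment as any other input:

```python
import json
from pathlib import Path

STATE_FILE = Path("app_state.json")  # hypothetical file the app wrote earlier

def load_window_size() -> tuple[int, int]:
    """Re-validate our own saved state: the file sits outside the
    application and could have been replaced while we weren't running."""
    try:
        state = json.loads(STATE_FILE.read_text())
        w, h = int(state["width"]), int(state["height"])
    except (OSError, ValueError, KeyError, TypeError):
        return (800, 600)  # fail safely to known-good defaults
    # Whitelist the range we can actually work with, not just the type.
    if not (100 <= w <= 10_000 and 100 <= h <= 10_000):
        return (800, 600)
    return (w, h)

print(load_window_size())  # (800, 600) if the file is missing or tampered
```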

On a final note, we didn’t cover the “destroy the threat” part of logical separation. It wasn’t because we ran out of time. It’s because even if you can detect and respond to a threat, it’s an active defense mechanism that can go badly if you’re not careful. Things like this are not easy to automate, and unless you’re developing a security product, they’re better left out of the application entirely. For example, IP jailing and account lock-outs are commonly used; however, when applied to untrusted users over the Internet, you need to be sure the mechanism can’t be used to cause a denial-of-service attack against a legitimate user. In one case, the developer didn’t allow the mechanism to whitelist specific IPs that could never be blocked; an attacker forged the IP address of the organization’s gateway router, and the organization blocked itself from all traffic to the application.
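A minimal sketch of a lock-out mechanism with a never-block whitelist, so a forged source address can’t turn the defense into a denial of service. The addresses and threshold are hypothetical (both are from reserved documentation ranges):

```python
import ipaddress
from collections import defaultdict

# Hypothetical addresses that must never be auto-blocked (e.g. the
# organization's own gateway), so the lockout can't be weaponized
# into a denial of service against ourselves.
NEVER_BLOCK = {ipaddress.ip_address("203.0.113.1")}
MAX_FAILURES = 5

failures = defaultdict(int)
blocked = set()

def record_failed_login(source_ip: str) -> None:
    ip = ipaddress.ip_address(source_ip)
    if ip in NEVER_BLOCK:
        return  # log and alert instead of blocking critical infrastructure
    failures[ip] += 1
    if failures[ip] >= MAX_FAILURES:
        blocked.add(ip)

for _ in range(10):
    record_failed_login("198.51.100.7")  # attacker's address gets jailed
    record_failed_login("203.0.113.1")   # forged gateway address does not
print(blocked)  # {IPv4Address('198.51.100.7')}
```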

Application development will get complicated at times. To assure a good level of security, especially when an application is getting so large and complex as to be untestable or unmaintainable, address it from a porosity viewpoint. This greatly simplifies building security into the application by looking at where the application interacts with the outside world. That’s the porosity. In the immortal words of me, which I just made up right now to prove my point by throwing down a wise saying that applies to porosity:

“It’s how we are on the inside that matters to us but it’s how we are on the outside that matters to everyone else.”


Quiz – answer in the comments section and gain the respect and envy of your peers!

1. If your application is intended to be used in internal environments only, do you still need to sanitize interactions over the network and why?

2. You create a sanitizing whitelist for your application but the list itself is the current list of users. How can you utilize this list from the user database without sharing it as a resource?

3. Your web application allows logins from users over the Internet so how do you prevent brute-force and dictionary password attacks of your users?


By Pete Herzog

Pete knows how to solve very complex security problems. He's co-founder of the Institute for Security and Open Methodologies (ISECOM). He created the international standard on security testing and analysis and Hacker Highschool.