Oct 21, 2022

Why Flaw Mitigation Is Crucial To Manage Risks

By Jim Jastrzebski

Documenting flaws that you don't prioritize today will save you time should they become high-severity flaws in the future. Here's the best way to approach them.

The topic of mitigations is a frequent source of questions and discussion for our Application Security Consulting group. It is a complicated topic, and I hope the following provides some understanding and guidance on how to think about the role mitigations play in the security posture of an application and in your security program.

What is a mitigation? 

In the Veracode system, a mitigation is an annotation on a finding (or flaw), typically one detected through static analysis, that explains why the finding does not require a code change to remediate the risk. In other words, it is an explanation of how the risk reported by the finding is already being addressed effectively.

It is important to differentiate mitigated findings from false positives. False positives are findings that are reported in error, meaning the issue being reported does not actually exist. A mitigated finding is a valid, true-positive finding that does not pose a current risk due to the presence of some compensating control.

For people whose security experience is rooted in interactive testing, such as Manual Penetration Testing, this semantic nuance can be particularly frustrating. In interactive testing, findings are confirmed by exploiting them, so every reported finding is exploitable by definition. With static analysis, the reverse does not hold: a finding that has been detected is not necessarily exploitable.

A finding that is not apparently exploitable still poses a very real risk to the application until it can be shown not to be exploitable. It is often the case that findings reported by static analysis are not exploitable precisely because some compensating control mitigates the risk.

How do mitigations relate to flaws? 

It is good to think of flaws as existing separately from their mitigations: if the compensating control that mitigates the risk fails, is defeated, or changes in a way that reduces its efficacy, the flaw it was protecting can become exposed without the code containing the flaw changing at all. An example is mitigating XSS or SQL Injection risk by placing the application behind a Web Application Firewall (WAF). If everything is working correctly, the flaws in the application cannot be provoked and exploited; but if the WAF is misconfigured through a maintenance error, or shut down, the flaws in the application are exposed and open to exploitation.

For any given flaw, there are a variety of ways in which the risk it poses may be managed. In a great many cases, there are definitive solutions that can be implemented in code to fully eliminate the risk, rendering the flaw undetectable. Once such code changes are applied, we can say the risk has been remediated.

However, risk may also be addressed in ways that do not fully remediate it by eliminating the flaw, but that reduce or remove the risk through the addition of compensating controls elsewhere in the code. Being separate from the flaw, these compensating controls must be documented, so that future maintainers of the code, or in the worst case forensic analysts reviewing the code after a breach, can understand how the risk posed by flaws in the code was compensated.

What are the different kinds of mitigations? 

The Veracode system allows for the documentation of several different kinds of mitigations. We encourage our clients to keep defense-in-depth in mind when using the mitigation feature to document when a detected flaw does not require additional action to further reduce the risk it poses. Different kinds of mitigations may be categorized as follows: 

  • Mitigation by Network Environment – for security features provided by controls in the network on which the application is deployed, such as a WAF, 

  • Mitigation by Operating System – for security features provided by the OS of the machine on which the application resides, such as the permissions the application is given to execute or restrictions on access to the file system, and  

  • Mitigation by Design – for specifying how the design and implementation of the application itself mitigates the risk posed by a given flaw.

Developers are most likely to use the Mitigation by Design category, because they have the most direct control over, and awareness of, this layer of security. Developers may know how the production server OS and network are configured, and they may rely on that configuration to provide the security capabilities their application depends upon. Note that those more distant security features are likely maintained by different teams and may change independently of the application code itself. Having all known mitigations documented enables the Security and Risk Management functions of the organization to fully understand the security dependencies of the application, and to verify and periodically reassert that these mitigations are present, correctly implemented, and effective.

Is runtime considered when addressing mitigations? 

Static analysis is called "static" rather than "dynamic" because the application being tested is not in an actively running, interactive state during analysis. This is important to keep in mind from the mitigation perspective because static analysis does not have the benefit of interacting with the application at runtime. As such, it does not take into consideration the runtime values of the data being processed by the application; instead, it focuses on what code is reachable under any circumstances, and on the pathways data can take through the application in all possible permutations.

Because runtime data values are absent, data validation techniques that conditionally filter on specific values often remain ambiguous in their effectiveness, and a developer must articulate the purpose and location of such checks in the form of a documented mitigation. As a result, "guard conditions" implemented as allow/deny lists or regular expressions cannot be evaluated as fully effective; they often require a mitigation to be entered into the system to consider the flaw resolved and remove it from policy consideration. A related situation concerns the size of the data a C/C++ application operates on: the length of a string may exceed that of the storage buffer into which it is written.
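As a hypothetical illustration (the identifier names and pattern here are invented for the example), a guard condition implemented as a regular-expression allowlist might look like this. Static analysis cannot evaluate the pattern against runtime values, so a check like this would typically be explained in a documented mitigation:

```python
import re

# Hypothetical allowlist guard: permit only short alphanumeric account IDs.
# The analyzer sees that a check occurs but cannot prove which values pass,
# so the developer documents the intent of this guard as a mitigation.
ACCOUNT_ID_PATTERN = re.compile(r"^[A-Za-z0-9]{1,16}$")

def is_valid_account_id(value: str) -> bool:
    """Return True only if value matches the allowlist pattern exactly."""
    return ACCOUNT_ID_PATTERN.fullmatch(value) is not None
```

A value such as `"abc123"` passes this guard, while a traversal payload like `"../etc/passwd"` is rejected before it can reach a sensitive sink.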

Most taint-based flaw categories, the ones that identify situations where specially crafted data entering the application at runtime could cause security problems, can be remediated using techniques detectable by static analysis. In the case of XSS, for example, using one of the many HTML encoding libraries will eliminate the flaw (just be sure to use the correct encoder for the context into which the potentially tainted data is being emitted). Similarly, sanitization methods are likely available for many of the injection risk types.
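As a minimal sketch of context-appropriate encoding (the function name is invented for the example), Python's standard `html.escape` is a suitable encoder for the HTML body context; attribute, JavaScript, or URL contexts would each need a different encoder:

```python
import html

def render_comment(untrusted: str) -> str:
    """Emit untrusted text into an HTML body context.

    html.escape neutralizes &, <, >, and (with quote=True) quote
    characters, so injected markup is rendered inert as text.
    """
    return "<p>" + html.escape(untrusted, quote=True) + "</p>"
```

For example, `render_comment('<script>alert(1)</script>')` yields `<p>&lt;script&gt;alert(1)&lt;/script&gt;</p>`, which a browser displays as text rather than executing.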

Some types of injection cannot be effectively resolved using an encoder. In these cases, developers must implement functionality to ensure that the data they are handling is not only made up of appropriate characters but is also semantically appropriate for the way it is used in their application.

Example: CWE-73 

One prominent example of such a case is file and path manipulation. CWE-73 poses a risk not only because certain special characters injected into values that later become part of a path or filename can cause the OS to redirect that path to an inappropriate location, where sensitive information could be altered or destroyed, or a scripted backdoor could be planted, but also because perfectly valid characters can result in data being overwritten or data belonging to a different user being accessed. Simply encoding a path and filename, or canonicalizing one to ensure the result is permissible, will make the path technically valid, but it may still be logically or semantically inappropriate. A developer seeking to resolve a CWE-73 flaw must ensure not only that the variables used are not tainted with special characters, but also that the path and filename are validated prior to use, so that user access controls and permissions, authentication, and authorization concerns are addressed. Doing this is very application-specific, so CWE-73 flaws almost always require a mitigation to be entered into the system. That mitigation enables the Security and Risk Management staff responsible for ensuring the organization meets its security requirements to understand the nature of the risk and the steps the developer took to preserve the confidentiality, availability, and integrity of the system and its data.
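A minimal sketch of such validation on a POSIX system (the base directory and function name are assumptions for the example): canonicalize the candidate path, then enforce the semantic rule that it must stay inside a permitted directory. A real application would also apply per-user authorization checks at this point:

```python
import os

BASE_DIR = "/srv/app/uploads"  # hypothetical permitted directory

def resolve_user_path(filename: str) -> str:
    """Canonicalize filename under BASE_DIR and confirm containment.

    Canonicalization alone only makes the path technically valid;
    the containment check adds the semantic rule that users may
    touch files in BASE_DIR and nowhere else.
    """
    base = os.path.realpath(BASE_DIR)
    candidate = os.path.realpath(os.path.join(BASE_DIR, filename))
    # commonpath collapses to base only when candidate lies inside it.
    if os.path.commonpath([candidate, base]) != base:
        raise ValueError("path escapes the permitted directory")
    return candidate
```

Here `resolve_user_path("report.txt")` succeeds, while `resolve_user_path("../../etc/passwd")` canonicalizes to a location outside `BASE_DIR` and is rejected.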

What’s the best way to think about mitigations? 

We encourage developers to view mitigations not as a way to resolve findings without having to change and retest their code, but as a means of demonstrating diligence and security-mindedness in how their product operates. We ask that developers see mitigations as documentation for the benefit of the future: for when you are promoted, or win the lottery, and someone else becomes responsible for maintaining and reusing your code. A well-written mitigation will help others understand how the code works and what safety and security features it entails. Mitigations are a great way to take credit for the good work you’ve done to produce secure, reliable, and useful code.

Veracode’s findings are not a to-do list of items which need to be dismissed or challenged. They are a source of information about risks. Some are very clearly exploitable; others are more subtle, and some are likely not exploitable at present but may become so as the code is revised and evolved over time.  

It is important to consider each finding from the perspective of both immediate risk and potential future impact. If code containing flaws is reused in another part of the application, or refactored and reused in ways not intended when it was written, significant risks could result. We urge developers to take all flaws seriously: prioritize remediation of those that can be decisively remediated, and duly mitigate those that require bespoke means to eliminate their risk.

To learn more about the what, how, and why of mitigations, please visit the Veracode Community and check out the related articles and videos there.


By Jim Jastrzebski

Jim has been an application security practitioner for about 10 years and now manages the Application Security Consulting group at Veracode. He holds a postgraduate degree in computer science from RPI, with a specialization in software engineering. Prior to joining Veracode, Jim developed software for consumer broadband, nuclear power generation SCADA systems, and multimedia content delivery for mobile devices.