You didn’t change anything in your code, yet the scan is different this time. Here’s advice from an Application Security Consultant on why that may be.
Have you ever wondered why you scan code one day and get one result, and then scan the same code a month later and get different results – even though you never changed anything? As Application Security Consultants at Veracode, we often receive questions from developers about unanticipated differences in findings between one static scan and another. Typically, developers will make some changes in their application between scans, but it is also common to encounter result changes that seem unrelated to any developer activity. In this article, we’ll explore four of the reasons why this may occur.
1. Changes in Seemingly Unrelated Code
Obviously, changes in code that remediate findings effectively will result in those findings no longer being reported. Changes that add new functionality may include security defects, and these new findings will be reported in the next analysis. The thing is, it’s also possible for changes in one area of code to result in findings in seemingly unrelated areas of the application. Let’s look at examples of how this may occur.
Consider a library that executes a variety of SQL statements that are constructed during runtime from a selection of hard-coded values. There is no security risk reported here because the set of possible statements is predefined by the developer. If this library is changed to allow some of these values to be provided by a user or to be retrieved from some external data source, then the scanner will recognize this runtime data as being potentially tainted. The resulting SQL statements will therefore become prone to SQL Injection, even if the code where the statements are actually executed hasn’t changed. In simple situations, the developer receiving the results will recognize how the code changes have introduced new risk, but as this pattern scales up with the complexity of the application, it quickly becomes far from obvious how seemingly distant areas of code can affect one another.
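This pattern can be sketched in a few lines. The following Python example is purely illustrative (the function and table names are hypothetical, not Veracode's analysis code): the two risky-looking functions build SQL the same way, but only the one whose input arrives from outside exposes a taint source a scanner would flag.

```python
import sqlite3

def fetch_report_fixed(conn, report_key):
    # Safe pattern: the column name comes from a hard-coded allow-list,
    # so the set of possible statements is predefined by the developer.
    columns = {"daily": "daily_total", "weekly": "weekly_total"}
    column = columns[report_key]  # raises KeyError for anything else
    return conn.execute(f"SELECT {column} FROM reports").fetchall()

def fetch_report_tainted(conn, column):
    # Risky pattern: `column` now arrives from outside (a user, a file,
    # an API response), so a scanner treats it as tainted and reports
    # SQL Injection here, even though this line itself never changed.
    return conn.execute(f"SELECT {column} FROM reports").fetchall()

def fetch_report_parameterized(conn, threshold):
    # Remediation for *values*: bind parameters keep user data out of
    # the SQL text entirely. (Identifiers such as column names cannot be
    # bound and still need an allow-list, as in the first function.)
    return conn.execute(
        "SELECT daily_total FROM reports WHERE daily_total > ?", (threshold,)
    ).fetchall()
```

Note that `fetch_report_fixed` and `fetch_report_tainted` are textually almost identical; what changed is where the data flows from, which is exactly why a change elsewhere in the codebase can surface a finding here.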
The Veracode scanner is a very complex thing. It is not boasting to say it is as complex as the software used for mapping and routing commuters to their destinations – except in our case, we need to discover the map of the world first by modeling your application, and then mapping every taint sink back to a taint source.
Consider what closing or adding just one road can do to the traffic flow in your city. If it’s some one-way alley or a cul-de-sac, the change will barely be noticed. However, if you add a bridge or shut down a tunnel, all sorts of pathways start to play very significant roles in how everyone navigates the map. Similarly, even small or distant code changes can have implications on how your application is modeled and subsequently scanned.
2. Changes in Scope
The results of static analysis reflect the security posture of the code provided to the scanner. If the set of files uploaded or selected for analysis changes, we should expect the results to reflect these changes. Consider a scan that unintentionally includes test code, which exercises functionality deep in the application using test data read from a test properties file. While the application code scanned previously performs correct sanitization of the relevant input values, the test code does not, so the places where these test inputs are used would now be reported as unsafe.
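A minimal sketch of this scenario, with hypothetical names: the application path sanitizes input before it reaches an HTML sink, while the accidentally uploaded test path feeds the same kind of sink directly.

```python
import html

def render_comment(raw):
    # Application path: input is escaped before reaching the HTML sink,
    # so no Cross-Site Scripting finding is reported for this sink.
    return "<p>" + html.escape(raw) + "</p>"

def render_comment_for_test(raw_from_properties_file):
    # Test path: data read from a test properties file reaches the sink
    # unsanitized. If this file ends up in the upload, the scanner will
    # report the sink as unsafe, changing results between scans even
    # though the application code is unchanged.
    return "<p>" + raw_from_properties_file + "</p>"
```

Removing the test code from the upload, rather than changing the application, is the right fix in a case like this.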
Uploading the intended and correct set of files for analysis is also very important from the perspective of policy compliance. Uploading fewer than all the needed files may under-report the detected risks, creating a false perception of policy compliance by excluding code that contains defects. Veracode cannot confirm that all the necessary code is being provided for analysis; we can only indicate when dependencies of the uploaded code appear to be missing. It is therefore important to verify that all the components intended to be in the scope of the analysis are indeed provided, and that the correct “top-level modules” are selected.
3. Module Selection
Module selection is your way of informing the scanner where to begin modeling your application. This is done manually by selecting checkboxes in the Veracode Service Platform (our web UI), by indicating the relevant analysis entry points in our IDE plugins, or by specifying these components in our various integration plugins – please consult the Veracode Documentation Center for your specific use case.
The selectable components are often referred to as “top-level modules”. What makes them “top-level” is that they are not themselves dependencies of something else in the set of uploaded files. Veracode will attempt to make an educated guess at which of the detected top-level modules should be selected by trying to recognize which of them are not identifiable as third-party or should be ruled out for other reasons. It is important to know that this decision is made by a heuristic and may not be correct, because our system does not understand the architecture and composition of your application. On your first scan, and occasionally afterward, it is a good idea to review the set of selected modules to ensure it is still appropriate, especially if your application has evolved, been restructured, or refactored.
All this to say, the set of modules selected for analysis is critical to the scope of the scan that follows, and if the selection of modules changes the results will reflect this.
A rule of thumb to keep in mind is that in almost all cases the selected set of modules should consist of those components of your application that expose interfaces to users or other systems during runtime. This means components that provide a user interface, APIs, exposed service endpoints, etc. Any part of the application that sits on your attack surface and accepts inputs during runtime is a good candidate for selection.
What selecting a module means from the scanner’s perspective is this: any selected entry point (this means the main method, UI, or indeed any method declared as “public”) will be inspected and all other uploaded functionality that is reachable during execution in all possible scenarios will be represented in the Semantic Object Model (SOM) the scanner constructs during analysis. This includes not only the functionality inside the modules you select, but also the functionality in the dependencies you provide by uploading them.
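As a toy illustration of this reachability idea (this is not the Veracode engine, and every module and function name below is made up): model the application as a call graph, then collect everything reachable from the selected entry points.

```python
# Hypothetical call graph: each function maps to the functions it calls.
CALL_GRAPH = {
    "web_ui.handle_request": ["app.process", "logging.write"],
    "app.process": ["db_helper.query"],
    "batch_tool.main": ["app.process"],
    "db_helper.query": [],
    "logging.write": [],
}

def reachable(entry_points, graph):
    """Return every function reachable from the selected entry points."""
    seen, stack = set(), list(entry_points)
    while stack:
        fn = stack.pop()
        if fn not in seen:
            seen.add(fn)
            stack.extend(graph.get(fn, []))
    return seen
```

Selecting only the web module pulls its dependencies (`app.process`, `db_helper.query`, `logging.write`) into the model, while `batch_tool.main`, a separate top-level module, stays out entirely. Change the selection and the modeled application, and therefore the findings, change with it.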
Hopefully, this explanation of the complexities of module selection makes clear that changing the inputs into the scanner by changing which modules are selected or uploaded will necessarily change the outputs the scanner produces.
4. Changes to the Veracode Scanner
Veracode is constantly working to improve the accuracy and performance of our analysis technologies. In an ever-changing threat landscape, where new frameworks and third-party components proliferate, new attacks are discovered, and new applications are created at an ever faster pace, providing you with consistently best-of-breed capabilities requires constant, ongoing improvement. The pace of software development has accelerated as well: many of our clients scan applications not just annually or quarterly for compliance, but daily or even more often, to ensure that every single release they deploy to production undergoes a security assessment and that vulnerable code is not shipped. The pace of change can be staggering.
We are always adding new capabilities to our scanner, optimizing the existing capabilities for speed as well as the ability to detect ever-deeper risks to your application. As a result, even if your application is not undergoing active development and has not been changed in years, it is possible that an improvement we made to our scanner will enable us to detect a “new” flaw in your legacy code.
As mentioned before, the Veracode scanner is a very complex thing. It is not boasting to say it is as complex as the software used for weather prediction, where small variations in initial input conditions can lead to considerably different outputs. As in weather prediction, the important thing is accuracy, not consistency. Our goal is to provide you with the most accurate results possible in the least time necessary to get you those results.
We understand that sometimes improvements to our scanner may lead to unexpected findings that challenge pre-defined goals and timelines. The alternative to unearthing new findings, however, is NOT finding these risks, and we’re sure you’ll agree that shipping code with unknown defects is worse than either holding a release for known defects or shipping them with remediation work already planned and underway.
When only one of the factors discussed here changes, whether the source code, the set of files uploaded, the set of modules selected, or the Veracode scanning engine, the reason results differ is usually apparent. However, often more than one of these factors changes from one scan to the next. New functions are added, code is refactored, formerly missing dependencies are located and added to the upload. This is all normal progress. As code is refactored and services that expand the application’s attack surface are added to the list of selected modules, the model created during analysis grows more complex. This is also normal progress. As Veracode engineers add support for new frameworks and new risk vectors and optimize our performance and analysis logic, we deploy updated versions of our engine to better serve your needs. And this is also normal progress. These changes should be deliberate and intentional. Sometimes they happen by accident, and we do not spot the error until an unexpected change in the results draws our attention and we investigate which of the above factors may have played a role in the discrepancy.
Ultimately, security isn’t a destination, it is a journey. It is not a state to attain but a process to enact and maintain. And so, it is good to keep in mind that results will change over time, sometimes in ways we don’t anticipate, budget, or prepare for, just as does the weather from one day to the next, and as do traffic patterns in a city undergoing maintenance and development. It’s best to allow for such changes rather than stop everything until a root cause is identified. In the end, there is definitely a root cause, but it may be so obscured by the many variables of the complicated system that determining what it was, exactly, is not a worthwhile investment of time.
“Why is it raining?” is a less consequential and productive question than “Where is my umbrella?”. Is the finding valid? Is the risk represented by the finding in need of resolving? These are the questions that support ongoing security improvements. Of course, wildly different results should be looked into. If the code or the scope of the scan has changed for no good reason, sorting that out is important and necessary. If the Veracode engine is suddenly reporting significantly different results for no reason documented in the latest Release Notes, this too requires an investigation – and if this happens, we definitely want to hear from you.