Veracode vs. On-premise tools

On-Premise vs. In the Cloud: What You Need to Know

Organizations are under increasing pressure to secure their applications quickly. Veracode makes that possible, and our cloud platform makes implementation a breeze: there's no hardware to buy, no software to install, no disruption to current systems, and no product training; you can be up and running in minutes.

Veracode combines the power of SAST and DAST with the benefits of cloud computing to provide a massively scalable, cost-effective vulnerability detection service. Veracode can scale its infrastructure as needed; no on-premise tool can achieve this level of scalability with its core technologies. As a continuously learning, cloud-based service, Veracode learns from each of the thousands of web and non-web applications it analyzes in their fully integrated form, and it continually updates its service to achieve the highest rates of true positive security flaw detection and the lowest false positive rates. Updates are always immediately available to everyone.

We created an easy-to-use research tool to help information security professionals perform due diligence on on-premise scanning tools. It is intended for enterprise IT managers, risk management professionals, CISOs, or anyone responsible for application security within an enterprise. We list the questions to consider and the decisions to make when implementing an on-premise solution. Use this tool to evaluate the true costs of deploying an on-premise scanning tool.

Each deployment consideration below is paired with an explanation of why it is important.

Scope of Applications

Looking at your entire software supply chain (internally developed, outsourced, COTS, FOSS, etc.), how many applications do you realistically intend to cover while still maintaining an acceptable level of risk? The rest of your deployment considerations will directly affect your ability to scale.

Number of Developers

Because on-premise tools are geared toward developers, this metric often has a direct bearing on cost structure. If your budget only allows for licensing a portion of your developers, the number of applications in scope will likely suffer. Also, have you considered whether it is even feasible to roll out such a disruptive technology to developers without impacting their productivity?

Size of the Security Team

It's commonly the security team that drives the purchase, initial deployment, and ongoing maintenance of the tool. The security team (usually a fraction of the size of the development team) also bears the burden of the unforeseen tasks associated with ensuring the success of an application security program. Depending on the size of the tool deployment, additional FTEs should be strongly considered.

Tuning the tool

Configuring the tool for each application it scans is the largest time sink for on-premise static analysis tools. Consider your application scope along with your intended deployment model, and you can see how scale becomes an issue, especially since the work typically falls on the shoulders of the security team.
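To make the tuning burden concrete, here is a hypothetical sketch (in Python, with made-up keys and values rather than any vendor's real schema) of the per-application details a security team typically has to discover and maintain before an on-premise static analyzer produces useful results:

```python
# Hypothetical per-application scan configuration. Every field is something
# the security team usually has to research and keep up to date per app;
# the keys and values are illustrative, not a real vendor's schema.
SCAN_CONFIG = {
    "application": "online-banking-portal",
    "language": "java",
    "build_command": "mvn -DskipTests package",        # how to produce analyzable artifacts
    "source_roots": ["src/main/java"],                  # where the code actually lives
    "dependency_paths": ["lib/", "~/.m2/repository"],   # libraries the engine must resolve
    "frameworks": ["spring-mvc", "hibernate"],           # framework models to enable
    "entry_points": ["com.example.web.*Controller"],     # where untrusted data enters
    "exclusions": ["**/generated/**", "**/test/**"],      # code to skip to cut noise
    "rule_overrides": {"sql-injection": "high", "log-forging": "ignore"},
}
```

Multiply that by every application in scope, and by every release that changes the build, and the time sink becomes clear.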

Centralized vs. decentralized deployment? Or a combination of both?

Do you have the infrastructure and personnel to support a centralized deployment, or is it more appropriate to delegate the security testing effort to the development teams? This is one area where a "we'll figure it out as we go" response should be met with extreme caution.

Deployment footprint: installing, configuring, upgrading, and maintaining enterprise software becomes more complex as the install base grows.

Availability of source code: a static source code analysis tool obviously requires source code. Is source code even available for all the applications in scope? For a centralized, service-bureau-style deployment model, obtaining the source code is not always feasible, and for third-party or COTS software, it almost never is.

Is "Buildable" code always doable?

If there's one detail that static analysis tool vendors agree on, it's that an application should be "buildable", or capable of being fully compiled, on the machine where the scan is to take place. And depending on your intended deployment model, this is much easier said than done. When the code doesn’t build properly, comprehensiveness and accuracy suffer.
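As a minimal sketch of what "buildable" means in practice, the script below checks whether an application compiles cleanly on the scan machine before any scan is attempted. It assumes a Maven-based Java project; the command is illustrative, not a requirement of any particular tool.

```python
import subprocess

def is_buildable(project_dir: str) -> bool:
    """Return True if the project compiles cleanly in this environment.

    Assumes a Maven-based Java project; substitute your real build command.
    """
    result = subprocess.run(
        ["mvn", "clean", "compile", "-DskipTests"],
        cwd=project_dir,
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        # Missing dependencies, absent generated sources, or environment
        # differences all surface here, and every failure means a scan that
        # would be incomplete or inaccurate.
        print(result.stdout[-2000:])
        return False
    return True
```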

Developer roll-out

We all know that developers are under pressure to deliver functionality on time and on budget, and any outside friction is unwelcome. Without a top-down mandate, many development organizations push back on adding yet more disruptive activities. The disruption grows once it becomes apparent that developers cannot use their IDE for the duration of a hardware-intensive scan, which can take hours to days. Also, what assurance does the security team have that a developer scanned all application components, that the components scanned did not contain build errors, and that all results were uploaded to the management console for wider distribution? A decentralized approach brings flexibility, but at the cost of governance.

Bring the scan to the build, or the build to the scan?

For on-premise tools, the application must be "buildable" because the analysis engine requires all dependencies to be available so that method calls can be properly resolved as the code is compiled. And if there's one place where this is feasible, it's on the machine where the code is actually built. This "natural habitat" is usually a developer's IDE or a centralized build server. If source code is going to be scanned, it should be scanned as close to its "natural habitat" as possible ("bringing the scan to the build"). If you force the code to compile outside its natural habitat ("bringing the build to the scan"), be prepared to invest in making the new build environment as accommodating as possible to yield acceptable results.

Build integration

Centralized build integration may be less disruptive, more efficient, and more process-driven than the alternative of scanning at the developer desktop. However, by adding a resource-intensive scan to the build process (often on shared build machines), you introduce both organizational ("not in my backyard") and operational risk, not to mention the significant effort required by a cross-functional team to deploy and support the initiative. One could choose to avoid the organizational and operational risk by replicating the build environment on a dedicated scanning server, but the work involved in such a deployment grows dramatically. Once again, the ability to scale becomes an issue.
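A rough sketch of what "bringing the build to the scan" looks like on a dedicated scanning server is shown below; the build command is an assumption (Maven) and "sast-scanner" is a placeholder, not any real product's CLI. The code is the easy part; replicating JDK versions, dependency repositories, and generated sources on the scanning server is where the effort actually goes.

```python
import subprocess
import sys

def run(cmd, cwd):
    """Run a command, echoing it, and fail loudly on error."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, cwd=cwd, check=True)

def build_and_scan(project_dir: str) -> None:
    # 1. Reproduce the team's real build as faithfully as possible
    #    (Maven is an assumption; use whatever the application actually builds with).
    run(["mvn", "clean", "package", "-DskipTests"], cwd=project_dir)

    # 2. Hand the compiled artifacts to the analysis engine.
    #    'sast-scanner' is a placeholder command, not a real product's CLI.
    run(["sast-scanner", "analyze", "--input", "target/", "--config", "scan.cfg"],
        cwd=project_dir)

if __name__ == "__main__":
    build_and_scan(sys.argv[1] if len(sys.argv) > 1 else ".")
```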

Continuous Integration

In the spirit of automation, development teams often request that the security tool leverage the existing continuous integration (CI) systems. You must be prepared not only to ensure a successful build integration, but also to architect the integration so that it does not interfere with existing CI efforts. With intensive scans competing for shared hardware resources, do the benefits of leveraging CI systems outweigh the level of effort required to ensure their ongoing success?
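One common compromise, sketched below under assumptions, is to keep the pipeline fast by submitting the build output for analysis asynchronously and collecting results later (for example, from a nightly job) rather than letting an hours-long scan block a shared CI agent. The "sast-scanner submit/status" commands are placeholders for whatever interface your tool actually exposes.

```python
import subprocess
import time

SCAN_TIMEOUT_SECONDS = 4 * 60 * 60   # scans can run for hours; don't hold the CI agent

def submit_scan(artifact: str) -> str:
    """Kick off a scan of the built artifact and return a scan ID (placeholder CLI)."""
    out = subprocess.run(
        ["sast-scanner", "submit", "--artifact", artifact],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

def wait_for_results(scan_id: str) -> bool:
    """Poll for completion outside the main pipeline, e.g. from a nightly job."""
    deadline = time.time() + SCAN_TIMEOUT_SECONDS
    while time.time() < deadline:
        status = subprocess.run(
            ["sast-scanner", "status", "--id", scan_id],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        if status in ("complete", "failed"):
            return status == "complete"
        time.sleep(300)   # check every five minutes
    return False
```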

Effectiveness of reporting

Are the tool's sample reports actionable? Will a developer be able to interpret the results and fix code accordingly? If the reports are not sufficient, you will have to consider alternatives such as installing IDE plug-ins or using a separate vendor-specific interface; both alternatives increase the deployment footprint and the complexity of roll-outs and upgrades.

Defect tracking system integration

How do you plan to communicate security flaws to developers? Some development organizations resist using "yet another interface" and require pushing flaws into a defect tracking system. While this deployment consideration may seem like a no-brainer, are you prepared to invest the time and resources it takes to carry out this custom integration? Because of that effort, it is often requested, yet rarely implemented.
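As a minimal sketch of what "pushing flaws into a defect tracking system" involves, the example below files one issue per finding through a Jira-style REST endpoint; the URL, credentials, project key, and finding fields are all hypothetical. In a real deployment the custom effort goes into deduplication, field mapping, and keeping issue status in sync with rescans, none of which is shown here.

```python
import base64
import json
import urllib.request

JIRA_URL = "https://jira.example.internal"      # hypothetical internal Jira instance
USER, TOKEN = "scan-service", "api-token"        # placeholder credentials

def file_defect(finding: dict) -> None:
    """Create one defect per security finding via Jira's REST API."""
    issue = {
        "fields": {
            "project": {"key": "APPSEC"},        # hypothetical project key
            "issuetype": {"name": "Bug"},
            "summary": f"[{finding['severity']}] {finding['cwe']} in {finding['file']}",
            "description": (
                f"Line {finding['line']}: {finding['description']}\n"
                f"Recommended fix: {finding['remediation']}"
            ),
        }
    }
    auth = base64.b64encode(f"{USER}:{TOKEN}".encode()).decode()
    req = urllib.request.Request(
        f"{JIRA_URL}/rest/api/2/issue",
        data=json.dumps(issue).encode(),
        headers={"Content-Type": "application/json", "Authorization": f"Basic {auth}"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print("Created", json.loads(resp.read())["key"])
```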

Hosting the management/reporting/trending console internally

Be ready to involve your infrastructure and operations teams, because on-premise tools require you to host your own dashboard internally in order to have a results repository and reporting capability. The "owner" of this system is responsible for installing, maintaining, and upgrading all of its components (e.g., app server, database, SSL certificates) while preserving data confidentiality, integrity, and system availability, and for keeping current on user account provisioning. As you increase the number of applications covered, you must ensure your hosted infrastructure can support the increasing load.
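To give a flavor of the routine operational work that lands on the console's "owner", here is a small sketch (the hostname is hypothetical) that checks how long the console's TLS certificate has left before expiry; similar checks end up being written for database connectivity, disk usage, and backup health.

```python
import socket
import ssl
from datetime import datetime, timezone

def days_until_cert_expiry(host: str, port: int = 443) -> int:
    """Return the number of days remaining on the TLS certificate served by host:port."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # 'notAfter' looks like 'Jun  1 12:00:00 2026 GMT'
    expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    return (expires.replace(tzinfo=timezone.utc) - datetime.now(timezone.utc)).days

if __name__ == "__main__":
    # Hypothetical internal hostname for the self-hosted reporting console.
    print("Console TLS certificate expires in",
          days_until_cert_expiry("appsec-console.example.internal"), "days")
```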

GRC integration

Does the tool vendor produce a data feed that a GRC system such as Archer can consume? What data does the vendor recommend you export into the GRC system, and is the vendor prepared to help you implement it as part of its included service offering?
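As an illustration of the data-feed question, the sketch below flattens hypothetical scan results into a CSV file that a GRC platform could import. The field names, the JSON input, and the assumption that your GRC system ingests flat files are all things to confirm with both vendors; this is not a description of Archer's actual import mechanism.

```python
import csv
import json

# Hypothetical export schema; confirm the fields your GRC system actually needs.
FIELDS = ["application", "scan_date", "policy", "severity", "cwe", "status"]

def export_grc_feed(results_path: str, feed_path: str) -> None:
    """Flatten scan results (JSON) into a CSV feed a GRC platform could ingest."""
    with open(results_path) as f:
        findings = json.load(f)
    with open(feed_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS, extrasaction="ignore")
        writer.writeheader()
        for finding in findings:
            writer.writerow(finding)

if __name__ == "__main__":
    export_grc_feed("scan_results.json", "grc_feed.csv")
```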