Software makes the world go round these days, and it’s also causing a lot of problems. The U.S. Department of Homeland Security recently found that 90 percent of security incidents result from exploits against defects in software. It sometimes seems like we’re just rolling out the red carpet for cyberattackers with our applications. Why is software so riddled with security defects? Are developers to blame? Is it just the nature of software?
We’ve found that there are four primary ways that vulnerabilities end up in your software. Understanding these sources and how to prevent them is a good first step in answering the questions above and making your apps less like a red carpet and more like a moat.
Software has become a primary source of innovation for many organizations, putting pressure on developers to get code out the door quickly. But this emphasis on speed often leaves security behind. Intense deadlines can leave developers with no choice but to skip security assessments, or to tack them onto the end of the development process, when the results are likely to be overlooked in favor of getting the code delivered. Developers aren’t intentionally ignoring security; more often, the problem is the absence of a security culture or a simple lack of knowledge of secure coding best practices.
Shut this open door to cyberattackers by making sure your application security program requires security assessments at each stage of development. But don’t just dictate these requirements to developers; work with them on the plan and get their feedback and buy-in. That way, the security assessments become integrated with the development process, and developers will be far more receptive to a plan they had a hand in creating than to a security mandate. In addition, make sure developers are trained in secure coding. Veracode research recently found that development organizations that leverage eLearning see a 30 percent improvement in fix rate.
It’s hard to keep up with cyberattackers; they spend all their time and resources looking for holes in your code, and you don’t. And software itself isn’t static either – code is constantly being changed and updated. You can’t stop code from changing or hackers from plotting, but you can make it harder for them by assessing the security of your code multiple times, at different stages of development – including when changes are made after release.
It turns out that developers are not only skipping security assessments in favor of speed; they’re also unwittingly introducing risk in their pursuit of deadlines. It’s a common development practice to build on pre-built open source components and code. But, as we learned from Heartbleed and Shellshock, these components often contain serious vulnerabilities that expose organizations to significant risk.
Components aren’t going away. But you can reduce their risk with technology that tracks which applications use each component and which versions they use. With that insight, it’s easy to update a component to a fixed version when a vulnerability is discovered.
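To make the idea concrete, here is a minimal sketch of that kind of component inventory in Python. The application names, component names, and version numbers are all invented for illustration; a real program would pull this data from build manifests or a software composition analysis tool rather than a hard-coded dictionary.

```python
# Hypothetical sketch: map each open source component to the
# applications and versions that use it, so that when a vulnerability
# is announced, the affected apps can be located quickly.
from collections import defaultdict

# Invented example data: each app's pinned dependencies.
APP_DEPENDENCIES = {
    "billing-service": {"openssl-wrapper": "1.0.1", "jsonlib": "2.4"},
    "web-portal": {"openssl-wrapper": "1.0.2", "templater": "3.1"},
    "batch-jobs": {"jsonlib": "2.4"},
}

def build_inventory(app_deps):
    """Build a map of component -> version -> set of apps using it."""
    inventory = defaultdict(lambda: defaultdict(set))
    for app, deps in app_deps.items():
        for component, version in deps.items():
            inventory[component][version].add(app)
    return inventory

def apps_affected(inventory, component, vulnerable_versions):
    """Return the apps pinned to a vulnerable version of a component."""
    affected = set()
    for version, apps in inventory[component].items():
        if version in vulnerable_versions:
            affected |= apps
    return affected

inventory = build_inventory(APP_DEPENDENCIES)
# Suppose an advisory is published for openssl-wrapper 1.0.1:
print(sorted(apps_affected(inventory, "openssl-wrapper", {"1.0.1"})))
```

The payoff is the reverse lookup: instead of auditing every application when an advisory lands, you query the inventory for the one component and update only the apps it names.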
Development teams often don’t realize that their choice of programming language affects security. In fact, each programming language is prone to different types of vulnerabilities. For instance, our research has found that applications written in web scripting languages are more susceptible to SQL injection and Cross-Site Scripting than those written in .NET or Java. Your security efforts will be more effective and more efficient if you understand the security strengths and weaknesses of the languages dev teams are using – and prioritize testing methods accordingly.
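To show what a language-level weakness like SQL injection actually looks like, here is a small illustrative sketch using Python’s built-in sqlite3 module. The table and queries are invented for demonstration; the point is the contrast between splicing user input into a SQL string and using a parameterized query.

```python
# Illustrative SQL injection sketch (invented schema and data).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

def find_user_unsafe(name):
    # Vulnerable: attacker-controlled input is spliced into the SQL text,
    # so the input can change the meaning of the query.
    query = "SELECT name FROM users WHERE name = '%s'" % name
    return [row[0] for row in conn.execute(query)]

def find_user_safe(name):
    # Safer: a parameterized query keeps data separate from SQL code.
    return [row[0] for row in
            conn.execute("SELECT name FROM users WHERE name = ?", (name,))]

# A classic injection payload makes the unsafe query match every row:
payload = "x' OR '1'='1"
print(find_user_unsafe(payload))  # leaks both users: ['alice', 'bob']
print(find_user_safe(payload))    # no user has that literal name: []
```

The same query-building mistake is easy to make in any web scripting language, which is one reason those applications show up so often in SQL injection findings; parameterized queries close the hole regardless of language.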
Don’t welcome cyberattackers with open arms. Understand the ways vulnerabilities end up in your software, and work to prevent them and thwart the attacks. Start by getting more details on shoring up your software with our new guide How Do Vulnerabilities Get Into Software?