Police in the Netherlands recently contacted more than 20,000 people who they suspect had their personal data stolen by a malicious web developer. This developer had built “backdoors” into applications he created for various businesses as a contractor. With the information he stole, it is alleged that he made online purchases, opened gambling accounts and impersonated victims' family members.
Outsourcing application development allows organizations to realize cost savings and provides the flexibility necessary to scale. However, as the recent Netherlands incident illustrates, it also introduces significant risk. How do you know if a contractor is well-versed in secure coding best practices that avoid introducing vulnerabilities? And, as in the Netherlands case, are you confident this contractor won’t add malicious backdoors to your code? How well do you know this contractor? Has the organization or individual been vetted by a third-party security firm?
There’s a lot of talk about shifting security “left” (earlier in the development lifecycle) in the age of Agile and DevOps. But it’s not enough to add security to the development lifecycle – you need to secure the entire lifecycle through deployment. As this case in the Netherlands illustrates, if you only rely on early testing and neglect security later in the process, you are setting yourself up for failure. By shifting security right as well, you are securing both the code you develop internally, and the code you outsource or purchase and don’t have a hand in developing.
Before you turn to outside development, consider the following recommendations for assessing the security of outsourced code, both before and after you implement it.
Understand the impact: Before outsourcing an application’s development, clearly understand the application’s impact on the business. For instance, consider whether a breach would damage the organization’s reputation or lead to substantial financial loss. Does the application handle sensitive data that a breach would expose? Are there personal safety implications in the case of a breach?
Validate with a third party: Make application security expertise a key element in evaluating outsourced application partners. Work only with partners whose practices have been formally validated by an independent third party and who use secure development tools throughout their development lifecycle.
Put it in the contract: Include security metrics and SLAs in contracts with outsourcing providers. These requirements should be in alignment with your security policy for internally developed applications of a similar risk rank.
Test: Conduct independent application security testing with a third party with no vested interest in the findings. Leverage software security ratings to decide which apps are secure enough to be accepted or deployed.
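A ratings-based acceptance decision like this can be reduced to a simple policy gate. The sketch below assumes a hypothetical letter-grade rating scale and threshold; the actual scale and field names will depend on the testing provider you use.

```python
# Hypothetical deploy gate: accept an app only if its third-party security
# rating meets a policy minimum. The letter-grade scale is an assumption,
# not tied to any specific testing provider.
RATING_ORDER = ["F", "D", "C", "B", "A"]  # worst to best

def meets_policy(app_rating: str, minimum: str = "B") -> bool:
    """Return True if the app's rating is at or above the policy minimum."""
    return RATING_ORDER.index(app_rating) >= RATING_ORDER.index(minimum)

# An app rated "A" passes a "B or better" policy; one rated "C" does not.
print(meets_policy("A"))  # True
print(meets_policy("C"))  # False
```

Wiring a check like this into a CI/CD pipeline turns the acceptance criterion into an automatic gate rather than a manual review step.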
Create time limits: Establish a timeline for addressing those security findings that are unacceptable. Remediation timeframes can be as simple as “all Very High severity findings must be addressed within 14 days,” or as granular as prescribing timeframes for specific CWEs, such as “CWE-89 (SQL injection) must be remediated within five business days.”
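The SLA scheme above can be expressed as a small lookup: a CWE-specific deadline overrides the general severity deadline. This is an illustrative sketch; the severity labels, day counts, and fallback window are assumptions, and the five-business-day CWE-89 SLA is approximated as calendar days for simplicity.

```python
from datetime import date, timedelta

# Illustrative SLA tables; adjust to match your own security policy.
SEVERITY_DEADLINES = {"Very High": 14, "High": 30}  # calendar days
CWE_DEADLINES = {"CWE-89": 5}  # "five business days" approximated as calendar days

def remediation_due(found_on: date, severity: str, cwe: str) -> date:
    """Deadline for a finding: a CWE-specific SLA wins over the severity SLA.

    Findings with no matching SLA fall back to an assumed 90-day window.
    """
    days = CWE_DEADLINES.get(cwe, SEVERITY_DEADLINES.get(severity, 90))
    return found_on + timedelta(days=days)

def is_overdue(found_on: date, severity: str, cwe: str, today: date) -> bool:
    """True if the finding has passed its remediation deadline."""
    return today > remediation_due(found_on, severity, cwe)
```

For example, a Very High finding logged January 1 is due January 15, while a CWE-89 finding logged the same day is already overdue by January 10.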
Implement runtime protection: Even after thoroughly vetting the outsourcer and its code, be sure to add a layer of protection to every app you deploy. The threat from externally developed applications reinforces the need to assess security throughout the development lifecycle – from development to QA to production. It’s important to use technologies like runtime protection, which assess applications already running on your network, including those you had no hand in coding.
Need more information? Get all our best tips and advice on securing third-party software – in one place – in our new Third-Party Software Security Toolkit.