April 13, 2023

What Are the Security Implications of AI Coding?

By Natalie Tischler

AI coding is here, and it’s transforming the way we create software. AI-assisted coding is already reshaping the industry, with some studies reporting developer productivity gains of as much as 55%. However, just because we can use AI in coding doesn’t mean we should adopt it blindly, without considering the potential risks and unintended consequences. It’s worth taking a moment to ask: what are the security implications of AI-assisted coding, and what role should AI play in how we both create and secure our software?

Exploring Two Security Implications of AI Coding 

Truth be told, the full implications of generative AI and AI-assisted coding, often called companion coding, are unknown and unfolding by the week. However, here are two key areas we can explore today around the security implications of AI coding. 

1. How AI Coding Affects the Security and Integrity of the Software It Is Used to Create

Let’s start with the security of AI-generated code suggestions. Over 70% of software applications scanned in the last twelve months contain security flaws. It follows that machine learning models trained, without curation, on the average codebase will learn insecure coding practices and reproduce those flaws in their suggestions. Many teams already struggle to keep pace with security flaws in their applications, so as the rate of code creation (and therefore flaw creation) increases, there’s the potential to widen the security gap even further. This creates significant risk and cost.
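
Purely as an illustrative sketch (not the output of any particular assistant), here is the kind of pattern a model trained on average code might suggest, next to the safer alternative a reviewer should insist on:

    import sqlite3

    def find_user_insecure(conn: sqlite3.Connection, username: str):
        # Pattern common in real-world training data: building SQL with
        # string interpolation, which is vulnerable to SQL injection.
        query = f"SELECT id, email FROM users WHERE name = '{username}'"
        return conn.execute(query).fetchone()

    def find_user_secure(conn: sqlite3.Connection, username: str):
        # Safer alternative: a parameterized query keeps untrusted input
        # out of the SQL text entirely.
        query = "SELECT id, email FROM users WHERE name = ?"
        return conn.execute(query, (username,)).fetchone()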

Machine learning models are only as good as the data and training they learn from. It is important to critique their outputs, not only for code quality and functionality, but also for security. If the critique reveals cause for concern, we need to take a step back from the intoxication of the technology and take a sober look at how this impacts security posture. That is especially true given the second concern: an intensifying threat landscape in which attackers are also enabled by generative AI.

2. How AI Coding Affects the Threat Landscape 

The second security implication of AI coding is the potential for it to be used to make cybersecurity attacks faster and more severe. Consider both the speed at which malicious scripts can now be written and how much lower the barrier to entry is for creating a script.  

It’s also worth considering the space this creates for new types of attacks. Hacking has a long tradition of manipulating inputs and outputs to make systems behave in unintended (and often malicious) ways. The inputs and outputs evolve – and so does the hacking of them.

There’s a reason many have a healthy concern about the integrity of AI tools, how they handle user data, and how that data can train and influence the model. Development teams and security professionals need to be critical of the tools they introduce and the code those tools suggest. 

Exploring the Role of AI in Securing Software 

If AI tools are contributing to software development, it follows that they should also contribute to how software is secured. One use of AI in securing software is to augment development teams so they can meet both functional and security requirements. To deliver this, we need complementary solutions for creating and securing software. And if machine learning models are only as good as the data and training they learn from, then excelling at cybersecurity tasks, such as reliably suggesting flaw remediations, calls for GPT-based models with supervised training on a curated dataset. Finally, any tool that handles your source code, especially for security use cases, needs to handle that data with the highest integrity and security.

Building a Cybersecurity Skillset into Your Emerging AI Tech Stack 

As we look at our emerging AI tech stacks, it’s worth considering how to build cybersecurity skillsets into them. In 2023, there is a 3.4-million-person cybersecurity talent gap. We need specifically trained tools that bring automation to areas like flaw remediation, which will significantly help scale security champions and close the growing security gap.

Generative AI can also be a key tool to build security competency and educate developers on how to write secure code. Think of how using spellcheck can improve your grammar or spelling over time; the same is true for using an intelligent remediation tool. These tools apply general security knowledge to a specific codebase, and developers can observe coding best practices in the context of their own code. This can accelerate understanding of secure coding practices and help developers write more secure software in the first place.
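
To make the spellcheck analogy concrete, here is a hypothetical before-and-after of the kind of fix an intelligent remediation tool might propose; the function names and the specific fix are illustrative assumptions, not any particular product’s output:

    import hashlib
    import os

    def hash_password_before(password: str) -> str:
        # Flawed original: unsalted MD5 is unsuitable for storing passwords.
        return hashlib.md5(password.encode()).hexdigest()

    def hash_password_after(password: str) -> bytes:
        # Proposed remediation: a salted, deliberately slow key-derivation
        # function, shown to the developer in the context of their own code.
        salt = os.urandom(16)
        return salt + hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)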

The Future of Intelligent Software Security 

It's essential to remember that with great power comes great responsibility. AI coding is here to stay, and it's transforming the way we create and secure software. It takes automation to keep pace with automation, and as you build an AI toolkit, you need to include skillsets that complement the full team – from full-stack generalists and front-end and back-end specialists to security champions.

The value potential of doing this successfully is tremendous. In the future, developers will likely leverage multiple generative AI technologies in concert to deliver software better, faster, and cheaper. That applies to the security of software as well – more secure software, delivered faster, with less manual effort and fewer development resources. That future is happening now at Veracode – join us on April 18th for a first look.

By Natalie Tischler

Natalie Tischler believes in a world where software is built secure from the start. She writes content for Veracode that focuses on empowering harmony between Security and Development teams.