November 1, 2023

How the Executive Order on Artificial Intelligence Addresses Cybersecurity Risk

By Chris Wysopal

Unlike in the 1800s, when a safety brake increased the public’s acceptance of elevators, artificial intelligence (AI) was embraced by the public well before guardrails existed. “ChatGPT had 1 million users within the first five days of being available,” shares Forbes. Almost a year later, on October 30, 2023, President Biden issued an Executive Order “to ensure that America leads the way in seizing the promise and managing the risks of artificial intelligence (AI).” Here’s what the Executive Order gets right about addressing the cybersecurity risk, and the promise, posed by AI.

Overview of Key Points in the Executive Order on Artificial Intelligence 

Before diving more deeply into a few cyber-specific aspects of the Executive Order on Artificial Intelligence, let’s look at some of the key points and goals included in this far-reaching order.  

From requiring “developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government” to “[protecting] against the risks of using AI to engineer dangerous biological materials,” the Executive Order covers a host of potential AI risks. 

The section on protecting Americans’ privacy is filled with direct actions and is followed by a section on advancing equity and civil rights. The order concludes with “the responsible government deployment of AI,” followed by a pledge to “modernize federal AI infrastructure” and work with “allies and partners abroad on a strong international framework to govern the development and use of AI.”

The Importance of Cybersecurity in the Era of Artificial Intelligence 

AI is a double-edged sword. It offers many cybersecurity benefits, but it poses many cybersecurity challenges as well. One such challenge is its exploitation by hackers to carry out sophisticated and targeted cyberattacks. The Link11 Security Operations Center (LSOC) DDoS Report for the first half of 2023 states: “AI-based attacks are increasing in number and threaten critical infrastructures.”

While many challenges remain theoretical, we know that hackers can use AI to automate attacks, carry out data theft, evade detection systems, and engage in social engineering. Evolving AI-based cyber threats make the use of, and regulation around, AI critical.

As someone closely involved in the development of cybersecurity regulations over the years, I see the rapid dissemination of AI as no different from past revolutionary innovations. The difference I see today is how much faster this Executive Order arrived than other cyber regulations, like the Executive Order on Cybersecurity in 2021, which responded to warnings like the one I gave to the Senate as part of L0pht in 1998. I will go into more detail below, after discussing a fundamental cybersecurity standard this order gets right.

Fundamental Cybersecurity Standard the Artificial Intelligence Executive Order Gets Right 

The order’s New Standards for AI Safety and Security section states: “Establish an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software.” Securing software is the meat and potatoes of reducing risk from cyberattacks. It is through exploiting vulnerabilities in software that hackers can take control of a device, install malware, steal or manipulate data, and more. 
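To make that concrete, here is a minimal, hypothetical sketch of the kind of flaw such find-and-fix tooling targets; the code is illustrative and is not drawn from the order or any specific tool. It shows an OS command injection and a safer rewrite.

```python
import subprocess

# Vulnerable: untrusted input is interpolated into a shell command, so a
# filename like "report.txt; rm -rf /" would execute arbitrary commands.
def archive_file_unsafe(filename: str) -> None:
    subprocess.run(f"tar -czf backup.tar.gz {filename}", shell=True, check=True)

# Safer rewrite: arguments are passed as a list, so no shell ever parses the
# untrusted input, and "--" stops tar from treating it as an option.
def archive_file(filename: str) -> None:
    subprocess.run(["tar", "-czf", "backup.tar.gz", "--", filename], check=True)
```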

This element of the Executive Order builds on the AI Cyber Challenge. This challenge is “a major two-year competition that will use artificial intelligence (AI) to protect the United States’ most important software, such as code that helps run the internet and our critical infrastructure... by finding and fixing vulnerabilities in an automated and scalable way.” 

As someone with decades of experience inventing ways to find and fix software vulnerabilities, here’s a word to the wise. Risk reduction requires burning down the security debt in large, legacy applications as well as preventing and fixing flaws progressively in new builds.  

Data tells us that one in four vulnerabilities remains open well over a year after first discovery. Due to limitations in time, resources, and security training, developers aren’t fixing what’s being found. The result is mounting security debt: the accumulation of known but unremediated flaws in an application over time.
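As a rough illustration of how that debt can be measured, here is a minimal sketch that counts open flaws older than a year in a simplified findings export; the field names and data are hypothetical, not any particular scanner’s format.

```python
from datetime import date

# Hypothetical findings: (CWE id, severity, date first discovered, still open?)
findings = [
    ("CWE-89",  "high",   date(2022, 3, 1),  True),
    ("CWE-79",  "medium", date(2023, 6, 15), True),
    ("CWE-327", "high",   date(2021, 11, 2), False),
    ("CWE-22",  "high",   date(2022, 1, 20), True),
]

AGE_THRESHOLD_DAYS = 365  # flaws open longer than a year count toward debt

def security_debt(findings, today):
    """Return the open flaws whose age exceeds the threshold."""
    return [
        (cwe, severity)
        for cwe, severity, discovered, is_open in findings
        if is_open and (today - discovered).days > AGE_THRESHOLD_DAYS
    ]

print(security_debt(findings, today=date(2023, 11, 1)))
# -> [('CWE-89', 'high'), ('CWE-22', 'high')]
```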

Think of how many applications were built in the 1990s and 2000s that hardly had any security built into the development process, and then think of how much security debt has accrued since their creation. How can it be burned down when developers are busy creating profitable (and hopefully secure) new innovations? 

To tackle this challenge, Veracode created Veracode Fix, which uses our curated dataset and “master patches” created by our security experts to train a Generative Pre-trained Transformer (GPT) that suggests fixes developers can review and approve with ease.

In the following short video, you’ll see how a developer can generate insecure code with ChatGPT, find the flaw with static analysis, and secure it with Veracode Fix to quickly develop a function without writing any code. 

[Video: a developer generates insecure code with ChatGPT, finds the flaw with static analysis, and fixes it with Veracode Fix]
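If you would rather read than watch, here is a hedged sketch of the kind of flaw and remediation the video walks through: a SQL injection that static analysis would flag as CWE-89, and the parameterized rewrite a fix suggestion would propose. The code is hypothetical and is not Veracode Fix’s actual output.

```python
import sqlite3

# Insecure version (the kind of code a chatbot might generate): user input is
# concatenated into the SQL text, which static analysis flags as SQL injection.
def find_user_unsafe(conn: sqlite3.Connection, username: str):
    return conn.execute(
        "SELECT id, email FROM users WHERE name = '" + username + "'"
    ).fetchall()

# Remediated version (the kind of change a fix suggestion proposes): a
# parameterized query keeps untrusted input out of the SQL statement.
def find_user(conn: sqlite3.Connection, username: str):
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()
```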

Future Implications of Artificial Intelligence Standards and Regulations 

As mentioned above, I’ve been advocating for software security regulations for a long time. There’s potential for regulation of artificial intelligence to mean increased regulation of software security as a by-product.  

The order mentions the development of “standards, tools, and tests,” which will help create a safe harbor for what constitutes a company doing its due diligence to mitigate AI-related risks. This kind of clearly standardized legislation is still needed for certain aspects of the order to become tangible.

The U.S. isn’t alone in regulating AI. As of June 2023, the use of AI in the European Union “will be regulated by the AI Act, the world’s first comprehensive AI law.” Just this week, there is an AI summit happening in the UK with the goal of finding "some level of international coordination when it comes to agreeing some principles on the ethical and responsible development of AI models.”

The world is watching as these landmark decisions are being made proactively. As I shared in a comment to The CyberWire, “Collaboration between the tech industry and the government is key for instilling a secure space for innovation and safety to thrive.” 

Learn more in our whitepaper, AI and the Future of Application Security Testing


By Chris Wysopal

Chris Wysopal, co-founder and Chief Security Evangelist of Veracode, is recognized as an expert and a well-known speaker in the information security field. He has given keynotes at computer security events and has testified on Capitol Hill on the subjects of government computer security and how vulnerabilities are discovered in software. His opinions on Internet security are highly sought after and most major print and media outlets have featured stories on Mr. Wysopal and his work. At Veracode, Mr. Wysopal is responsible for the security analysis capabilities of Veracode technology.