
Discover how over 100 large language models perform on real-world secure coding tasks, and what their limitations mean for developers, security teams, and businesses.
As generative AI becomes a mainstream tool for software development, one question is becoming increasingly urgent: Can we trust AI to write secure code?
This webinar presents key findings from the 2025 GenAI Code Security Report, one of the most comprehensive evaluations of code security to date, covering over 100 large language models. Spanning Java, Python, C#, and JavaScript, our research reveals troubling trends, including high failure rates on critical security tasks and no measurable improvement in security performance over time, even as models grow more powerful.
Join us to learn:
- How often AI-generated code introduces vulnerabilities, and in which languages
- What types of security issues are most common
- Why newer, bigger models aren’t necessarily safer
- The hidden risks facing your software supply chain
- What developers and security teams must do to stay ahead
Whether you’re a developer, security lead, or business decision-maker, this session will help you navigate the real-world security implications of GenAI in your development workflow.
Speakers:

Natalie Tischler
Content Marketing Manager
Veracode

Chris Wysopal
Chief Security Evangelist
Veracode

Jens Wessling
Chief Innovation Officer
Veracode

Samuel Guyer
Lead Architect, AI Innovation
Veracode

Humza Tahir
Principal Software Engineer (ML)
Veracode