In episode 111 of the Cybersecurity Minute, Chris Hughes explores a new report on cybersecurity and artificial intelligence (AI) code generation from Snyk, which is on the Acceleration Economy Top 10 Shortlist of Cybersecurity Providers.
00:21 — Snyk, a cloud-native security company focused on areas including application security and software supply chain security, has published a report on AI-generated code.
01:05 — It found that AI code generation tools routinely recommend vulnerable open-source software libraries. This means that they’re not necessarily recommending the best library from a security perspective. Additionally, it found that developers have a false sense of security when it comes to AI code generation tools.
02:19 — The report also said that 75% of developers believe that AI-generated code is more secure than human-written code. However, code scanning tools tell a different story. Another finding is that 80% of the surveyed developers admitted to bypassing security policies in order to use these tools.
03:02 — Fewer than 25% of those surveyed said they were using tools like software composition analysis to identify vulnerabilities in AI-generated code, meaning most of that code is never checked against known vulnerabilities.
04:04 — These AI code generation tools are being adopted and used widely, but security is not a key consideration. They may help teams produce code and ship products faster, but they are likely producing vulnerable code faster as well.