In episode 105 of the Cybersecurity Minute, Chris Hughes gives an overview of an Endor Labs report on the intersection of artificial intelligence (AI) and cybersecurity. Endor Labs is on the Acceleration Economy Cybersecurity Top 10 Shortlist.
This episode is sponsored by “Selling to the New Executive Buying Committee,” an Acceleration Economy Course designed to help vendors, partners, and buyers understand the shifting sands of how mid-market and enterprise CXOs are making purchase decisions to modernize technology.
00:39 — Endor Labs recently used a large language model (LLM), ChatGPT-3.5, to review software artifacts for malware. There’s a lot of interest in how malicious actors could use AI for criminal purposes, but Endor is looking at the flip side, asking: “How can we leverage AI to augment human activity from a defensive perspective to mitigate risks?”
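The workflow described here can be sketched in a few lines. This is a hypothetical illustration, not Endor Labs’ actual prompt or pipeline: the prompt wording, the `review_snippet` helper, and the injected `ask_model` callable are all assumptions made for the sketch.

```python
# Hypothetical sketch of LLM-assisted malware review (not Endor Labs' real
# prompt or code): ask a model whether a snippet looks malicious and parse a
# one-word verdict. `ask_model` is passed in so the same logic works with any
# chat-completion API, or with a stub for offline testing.

PROMPT_TEMPLATE = (
    "You are a security reviewer. Classify the following code snippet as "
    "MALICIOUS or BENIGN. Answer with a single word.\n\n{snippet}"
)

def review_snippet(snippet: str, ask_model) -> bool:
    """Return True if the model flags the snippet as malicious."""
    answer = ask_model(PROMPT_TEMPLATE.format(snippet=snippet))
    return answer.strip().upper().startswith("MALICIOUS")

# Stand-in for a real chat API call, using a naive keyword heuristic.
def stub_model(prompt: str) -> str:
    return "MALICIOUS" if "eval(" in prompt else "BENIGN"

print(review_snippet("eval(base64.b64decode(payload))", stub_model))  # True
print(review_snippet("print('hello world')", stub_model))             # False
```

In a real run, `ask_model` would wrap a call to a hosted chat model; keeping it injectable is just a convenience for swapping models and testing the parsing logic.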
01:03 — Endor used ChatGPT-3.5 to perform malware reviews on various software packages and code snippets. Of the 1,800 artifacts examined in the study, ChatGPT classified 34 as malicious. Of those, 13 were true positives (actual malware), while 15 were false positives: benign artifacts wrongly flagged as malicious.
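The reported counts translate directly into simple evaluation metrics. A minimal sketch, using only the numbers stated above (note that 13 + 15 = 28, so 6 of the 34 flagged artifacts are not broken down in this summary):

```python
# Metrics from the counts reported in the episode.
artifacts = 1800   # total artifacts examined
flagged = 34       # artifacts ChatGPT classified as malicious
true_pos = 13      # flags confirmed as real malware
false_pos = 15     # benign artifacts wrongly flagged

# Precision over the confirmed flags: share that were actually malicious.
precision = true_pos / (true_pos + false_pos)
# Flag rate: share of all artifacts the model flagged at all.
flag_rate = flagged / artifacts

print(f"precision = {precision:.2f}")   # 0.46
print(f"flag rate = {flag_rate:.3f}")   # 0.019
```

A precision of roughly 46% on the confirmed flags lines up with the point made next: useful for triage, but not something to trust unreviewed.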
02:00 — These findings emphasize that AI and LLMs are not impervious to evasion or manipulation, but the same is true of humans and the other tools we use in cybersecurity. The study also shows that defenders can find practical use cases for AI and LLMs: a tool like this can reduce the time it takes to identify and analyze suspicious packages.
02:39 — AI and LLMs are not perfect, and there are obvious challenges. But it’s great to see vendors in the ecosystem starting to apply these technologies to their own activities and publishing research that shows how it can be done.