In episode 110 of the Cybersecurity Minute, Chris Hughes discusses some new cybersecurity guidelines for AI system development.
This episode is sponsored by Acceleration Economy’s AI Ecosystem Course, available on demand. Discover how AI has created a new ecosystem of partnerships with a fresh spirit of customer-centric cocreation and renewed focus on reimagining what is possible.
00:17 —The Cybersecurity Infrastructure and Security Agency (CISA), along with the National Cybersecurity Center in the UK, released guidelines for secure AI system development. This builds on previous publications that advocate for baking security into the software development lifecycle rather than bolting on.
01:43 — Let’s start with the system design phase. Some of the secure AI practices include raising staff awareness of threats and risks and threat modeling your system to understand potential attack vectors and risks to our system.
02:33 — Moving on from that, let’s move to the system development phase. It mentions some practices for secure AI system development, including securing the software supply chain, and third-party proprietary products or software. It also talks about identifying, tracking, and protecting your assets. It talks about documenting your data models and prompts.
03:36 —We see prompt injection often being cited as one of the top malicious types of attacks that people can use against AI systems. It also talks about managing your technical debt.
04:12 — Moving on from there, let’s talk about the practices for secure AI deployment. This includes securing your infrastructure, whether it’s a cloud-native environment, hosting environments, such as Azure and AWS or GCP, virtual machines, Kubernetes clusters, the underlying infrastructure that’s hosting your models and your AI system, as well as if you’re using an on-premise data center.
05:14 — It also talks about releasing AI responsibly. Last but not least, it talks about practices for secure AI operations and maintenance, listing things like monitoring systems’ behavior and looking for nefarious changes in the behavior.
06:41 — It talks about following a secure-by-design approach for updates. As you push new updates out, it’s important you’re doing it securely, ensuring you’re not going to disrupt business operations, and that you’re pushing out software that’s been thoroughly vetted and tested from a security perspective.
07:09 — It talks about collecting and sharing lessons learned. A lot of people are just starting to play with AI, starting to understand how these things work. Just be open and transparent with the community and share these lessons learned.