From ChatGPT to Midjourney, generative artificial intelligence (AI) has taken the world by storm. Every week, new generative AI tools are released, taking on tasks such as image creation, writing, video editing, social media management, and summarizing other content. Many industries have started to adapt, building tailored use cases around a human-plus-AI workflow.
While many of these tools are benign, their use in some business contexts could prove problematic from a security and privacy standpoint. Just recently, Samsung made security headlines when some of its proprietary source code was leaked by employees who had pasted it into ChatGPT.
In this analysis, I’ll explore ways that organizations should think about generative AI models from a risk perspective, with a special focus on the conversations security leaders should be having with senior leadership about the new technology.
Identifying the Risks of Generative AI
You can only manage risk by first identifying it. Unfortunately, the range of generative AI tools available today introduces a potentially expansive risk surface. Three broad scenarios stand out:
- Misinformation used against your organization: deepfake video, audio, or written content
- Misinformation generated by your organization: content generated using generative AI models but not fact-checked
- Privacy and data misuse: generative AI used for a task requiring sensitive data to be uploaded or fed into the tool/service
Each scenario should be approached differently, with its own unique plans and mitigation tactics from a risk management perspective, though all will include conversations with your organization’s PR, communications, and legal teams.
Additionally, each organization will have its own unique generative AI risks to consider. In the Samsung case mentioned earlier, the exposure involved proprietary source code. A healthcare organization such as a hospital would instead worry about patient privacy obligations under the Health Insurance Portability and Accountability Act (HIPAA). Security leaders therefore need to look at the spectrum of generative AI tools and use cases through the lens of their specific organization.
Below, we’ll expand on what to do in the third scenario: privacy and data misuse.
Approaching Risk Management for Data and Privacy Misuse
On an elemental level, managing the risk of emerging AI tools is similar to addressing shadow IT. Monitoring network traffic, controlling access to sensitive data, and deploying edge controls such as SASE (secure access service edge) capabilities can all help keep sensitive data from flowing into unmanaged and unauthorized tools.
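As an illustration, one quick way to surface shadow AI usage is to scan web proxy or DNS logs for traffic to known generative AI endpoints. The sketch below is a minimal, hypothetical example: it assumes a CSV proxy export with `user` and `host` columns and uses an illustrative domain watchlist, both of which you would replace with your own proxy's log format and a list maintained by your security team.

```python
# Minimal sketch: flag outbound requests to known generative AI endpoints
# in a web proxy log. The domain list and CSV column names below are
# illustrative assumptions, not a definitive detection rule.
import csv
from collections import Counter

# Hypothetical watchlist of generative AI service domains (assumption).
GENAI_DOMAINS = {
    "chat.openai.com",
    "api.openai.com",
    "bard.google.com",
    "www.midjourney.com",
}

def flag_genai_traffic(log_path: str) -> Counter:
    """Count requests per (user, domain) for domains on the watchlist.

    Assumes a CSV proxy log with 'user' and 'host' columns (assumption).
    """
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row.get("host", "").lower()
            if host in GENAI_DOMAINS:
                hits[(row.get("user", "unknown"), host)] += 1
    return hits

if __name__ == "__main__":
    # Print the heaviest users of generative AI services first.
    for (user, host), count in flag_genai_traffic("proxy_log.csv").most_common():
        print(f"{user} -> {host}: {count} requests")
```

Output like this is a starting point for a conversation, not a verdict; heavy usage may simply reveal a legitimate use case that needs a sanctioned path.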
Beyond these controls, security leaders must have more strategic conversations within their organizations regarding generative AI. They have a tremendous opportunity to engage their peers in senior leadership in a risk-reward conversation about the technology. One of the reasons it has garnered so much interest is its potential to be disruptive. If we are to serve as business enablers within our organizations, we have a responsibility to engage proactively, finding safe ways to use these tools while continuing to protect the organization's data.
Engage your peers to identify the types of tools they want to use, their use cases, the data they may need, and the intended outcomes. Marketing may want to produce website and social media copy more efficiently. Developers may want to write and review code faster. The security team may want to analyze code for potential vulnerabilities. The legal team might want to analyze complex contractual terms to augment their limited capacity to review and process contracts across the supply chain.
Once you've identified the use cases, the data involved, and the candidate tools, you have a goldmine of context for creating more proactive training, safe-use guidelines, and potentially even an authorized list of approved tools (sketched below). Find a way to keep these conversations alive over time: use cases will evolve quickly, and people usually want to do the right thing. The more security teams can position themselves in a servant-leadership and enabler role, the better.
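To make the idea of an authorized tool list concrete, here is a minimal sketch of what such a register might look like if expressed in code. The tool names, use cases, and data-classification tiers are illustrative assumptions, not recommendations; in practice this would live in policy documentation or a GRC platform rather than a script.

```python
# Minimal sketch of an authorized-tool register, assuming a simple
# data-classification scheme of "public" < "internal" < "restricted".
# All tool names, use cases, and tiers are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class ApprovedTool:
    name: str
    approved_use_cases: set[str]
    max_data_classification: str  # highest data tier the tool may receive

TIER_ORDER = ["public", "internal", "restricted"]

REGISTER = {
    "copy-assistant": ApprovedTool("copy-assistant", {"marketing copy"}, "public"),
    "code-helper": ApprovedTool("code-helper", {"code review"}, "internal"),
}

def is_use_permitted(tool: str, use_case: str, data_tier: str) -> bool:
    """Check a proposed use against the register and the classification tiers."""
    entry = REGISTER.get(tool)
    if entry is None or use_case not in entry.approved_use_cases:
        return False
    return TIER_ORDER.index(data_tier) <= TIER_ORDER.index(entry.max_data_classification)

if __name__ == "__main__":
    print(is_use_permitted("code-helper", "code review", "restricted"))   # False
    print(is_use_permitted("copy-assistant", "marketing copy", "public")) # True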
Concluding Thoughts
There's a lot to analyze around the security implications of generative AI. We don't know everything today, and the pace of development in tools and capabilities means things are changing extremely rapidly. One of the best things we can do right now is lean into and lead the conversation within our organizations, identifying how this new technology could be used in service of the mission. That shouldn't be a static, one-time exercise; it should be an ongoing, fluid dialogue between security and the myriad stakeholders across our organizations.