In episode 47 of the Data Modernization Minute, Wayne Sadin discusses how generative artificial intelligence (AI) increases data security risk and offers steps to mitigate it.
This episode is sponsored by Acceleration Economy’s Digital CIO Summit, taking place April 4-6. Register for the free event here. Tune in to hear CIO practitioners discuss their modernization and growth strategies.
00:44 — As part of its training processes, generative AI ingests large amounts of data, explains Wayne, with proposals on deck for models that “will grab all of the information on the internet.” This type of data consumption has consequences for security.
01:10 — Wayne warns that, with the rise of generative AI, companies with weak data exfiltration controls are at even more risk of inadvertent disclosure of data.
01:32 — Before generative AI, if people took data they shouldn’t have and posted it somewhere, it was likely that only “a couple of people” would see it.
01:52 — That is not the case with generative AI, which captures data and puts it in a data repository. Generative AI never forgets anything and is a “queryable” tool. With generative AI, there’s a heightened chance of that data being broadcast far more widely to an audience that includes “your competitors, your customers, your regulators, your board members.”
02:28 — The bottom line is that generative AI heightens security risk and the possibility of data disclosure with the potential not only for embarrassment but for intellectual property loss.
02:51 — Wayne concludes with advice on how to guard against this risk: make sure your chief data officer (CDO), CISO, information protection experts, general counsel, compliance and risk people, and insurance carrier are communicating with your CIO, “who tends to control the physical placement of data,” and that your controls “are locked down tight.”
Looking for more insights into all things data? Subscribe to the Data Modernization channel: