In episode 88 of the AI/Hyperautomation Minute, Aaron Back discusses the importance of data-centric artificial intelligence (AI) frameworks after the release of the “DC-Check” framework.
This episode is sponsored by Acceleration Economy’s Digital CIO Summit, taking place April 4-6. Register for the free event here. Tune in to the event to hear CIO practitioners discuss their modernization and growth strategies.
00:44 — UCLA and the University of Cambridge recently released a new data-centric AI framework, for which the researchers have coined the term “DC-Check.”
00:58 — The goal behind this framework is to address a few things. The first aspect is that the researchers were aiming to shift current AI approaches from making the machine learning (ML) model work to making real-world ML systems work.
02:00 — The second aspect focuses on three areas:
- Serves as a data-centric AI guide, providing an actionable checklist for each stage of the ML pipeline, which reduces the risk of missing something
- Built for both practitioners and researchers, suggesting data-centric tools, modeling approaches, and research opportunities
- Goes beyond being a documentation tool by unlocking greater transparency and accountability regarding ML pipelines, enabling companies to maintain compliance
03:53 — The third aspect comprises four components:
- Data — the data flowing into and out of the AI and ML models; the framework was built with considerations for improving data quality, taking a proactive approach to data selection, curation, and cleaning
- Training — the models are trained as new data is fed into them; new data is continually ingested to improve the models’ outcomes and training
- Testing — data-centric testing considers aspects such as data splits, targeted metrics, stress tests, and evaluations on subgroups to assess how the model would perform across various conditions
- Deployment — this focuses on post-deployment concerns, specifically data and model monitoring, adaptation, and retraining
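To make the testing component above concrete, here is a minimal sketch of subgroup evaluation — checking a model’s accuracy not just overall but on each subgroup of the test data. This is an illustrative example only; the function name and sample data are hypothetical and are not part of the DC-Check framework itself:

```python
from collections import defaultdict

def subgroup_accuracy(labels, preds, groups):
    """Compute accuracy overall and per subgroup.

    labels, preds: sequences of true and predicted labels
    groups: sequence of subgroup identifiers (e.g., a demographic attribute)
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for y, p, g in zip(labels, preds, groups):
        total[g] += 1
        correct[g] += int(y == p)
        total["overall"] += 1
        correct["overall"] += int(y == p)
    return {g: correct[g] / total[g] for g in total}

# Hypothetical data: the model looks fine overall (75% accuracy)
# but underperforms on subgroup "b" (50%) — exactly the kind of gap
# a subgroup evaluation is meant to surface.
labels = [1, 1, 0, 0, 1, 1, 0, 0]
preds  = [1, 1, 0, 0, 0, 1, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(subgroup_accuracy(labels, preds, groups))
# → {'a': 1.0, 'overall': 0.75, 'b': 0.5}
```

A checklist-driven pipeline would run a report like this at the testing stage and again after deployment, feeding the results back into monitoring and retraining.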
06:13 — Aaron explains, “It’s like a loop that goes back in so you deploy it out, but then it goes back in as new data emerges…and is fed back into that framework again.”
06:26 — AI has come a long way. However, it is not yet fully standardized, and ethical practices are still maturing. Implementing frameworks is becoming more of a true standard around AI and ML. Bias, whether intentional or unintentional, is still a major concern, and big strides are being made to mitigate it.
07:13 — Aaron foresees the DC-Check framework becoming an extension of the AI Bill of Rights that the White House released. He’s hopeful that stronger AI/ML standards will be put in place, specifically built to integrate with the cybersecurity and data standards that already exist.
07:40 — The lines between AI, security, and data continue to blur as cloud platforms see greater use, and multi-cloud adds further complexity. Aaron highlights that standardization and frameworks are becoming increasingly important across AI, security, and data as these areas increasingly converge.
Looking for real-world insights into artificial intelligence and hyperautomation? Subscribe to the AI and Hyperautomation channel: