In episode 114 of the AI/Hyperautomation Minute, Aaron Back reviews a recent press release from VERSES AI announcing a breakthrough in explainable AI research.
This episode is sponsored by “Selling to the New Executive Buying Committee,” an Acceleration Economy Course designed to help vendors, partners, and buyers understand the shifting sands of how mid-market and enterprise CXOs are making purchase decisions to modernize technology.
01:16 — After reviewing the press release and VERSES AI’s research on explainable AI, Aaron noted a few key points:
- Fully transparent explainable AI is not here yet
- Human influence is still a major factor
- There’s still a lot of work to do
01:32 — This surfaces critical points that are outlined in the Acceleration Economy guidebook “The Ethical & Workforce Impacts of Generative AI.” In the guidebook, Aaron posited that explainable AI should provide visibility for both technology teams and business decision-makers.
01:57 — For technology teams, explainable AI should provide visibility into data sources, data usage, data influence, how the AI model can be improved, and AI data security.
02:11 — For business decision-makers, explainable AI should provide visibility into competitive AI opportunities, AI data compliance, AI upskilling opportunities, and AI security.
02:34 — The VERSES AI research paper raised the topic of audits and the complexity of fully understanding an AI model's output. Could that output pass a stringent audit, especially in areas such as compliance or within regulated industries?
Looking for real-world insights into artificial intelligence and hyperautomation? Subscribe to the AI and Hyperautomation channel: