AI/Hyperautomation

How Frameworks Ensure AI Systems Uphold Customer Trust — and Corporate Values

By Toni Witt | April 4, 2023 | 7 Mins Read

As I discussed in a recent AI/Hyperautomation Minute, I consider customer trust to be an up-and-coming factor for success when it comes to providing AI-powered products and services. While performance has dominated the conversation around AI in the past, with demonstrations of algorithms beating human chess players and talk of ever-increasing parameter counts, I think the tide is turning in favor of trust as AI moves into consumer markets and gains mainstream awareness.

Buying decisions will be made not just on the raw performance of an AI system but increasingly based on other factors — and, in particular, trust. This makes sense, given that more vital processes — from mortgage lending to healthcare checkups to hiring — are being automated with AI. Going forward, there has to be zero tolerance for bias, errors, or leaks in these critical systems.

However, most organizations still aren’t assigning top priority to trust-building activities like bias mitigation, ethical development, safety, and security. While competitors keep their sights set on maximizing performance, your organization has an opportunity to focus on these factors for competitive differentiation. This analysis dives into one way of doing that: using AI frameworks.

The Role of Frameworks in AI Development

Building an AI system or embedding the technology into existing operations isn’t just a box that your PR or innovation team is urging you to check. AI is here to stay, and organizations that don’t use automation technologies will struggle to maintain a competitive edge in the Acceleration Economy.

As such, it’s necessary to have a set of guiding principles that help you along the way. A good framework leans on the unique values and aspirations of your company while also providing a clear lens through which to make day-to-day decisions. It keeps everyone on the same page and ensures AI integrates well into the entire organization, not just certain departments. It also helps you maintain compliance and be transparent with the public, which builds trust!

Now I’d like to share three examples of organizations using AI frameworks and how they’re helping those firms.


Pfizer Implements ‘Responsible AI Principles’

There are plenty of opportunities for automation in the healthcare industry. Since the start of the pandemic, there has been an increased need to reduce workload for healthcare workers. Recognizing this, Pfizer has been leading the industry forward in its application of AI.

Part of Pfizer’s effort was creating its own framework for responsible AI development — the Responsible AI Principles — based on fairness, empowering humans, privacy, transparency, and ensuring accountability of its AI systems.

These principles are written in inspiring language that can help other companies clarify their own responsible AI objectives. That is partly because Pfizer has grown up in an industry where real people — patients — are the top priority. Further, building trust is paramount to delivering quality healthcare services.

In order to stay true to these principles, Pfizer worked with Dataiku, a company on the Acceleration Economy Top 10 Shortlist of AI/Hyperautomation enablers. Together, the partners built and deployed an AI Ethics Toolkit that supports bias detection, bias mitigation, and model transparency.

Pfizer’s AI team leveraged Dataiku’s Interactive Statistics for EDA (Exploratory Data Analysis) tool to explore data sets, detect biases, and improve the data before even training a model.

Figure: Pfizer’s approach to bias detection.
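To make that kind of pre-training check concrete, here is a minimal Python sketch that compares positive-outcome rates across demographic subgroups and flags the data when the gap is large. It is my own illustration rather than Dataiku’s tooling, and the column names and the 0.2 threshold are assumptions.

```python
# Minimal illustration of a pre-training bias check (not Dataiku's tooling).
# Column names and the 0.2 threshold below are assumptions for the example.
import pandas as pd

def subgroup_outcome_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Rate of positive outcomes for each demographic subgroup."""
    return df.groupby(group_col)[outcome_col].mean()

def demographic_parity_gap(rates: pd.Series) -> float:
    """Largest gap in positive-outcome rate between any two subgroups."""
    return float(rates.max() - rates.min())

# Made-up data: did patients in each age band receive the treatment?
df = pd.DataFrame({
    "age_band": ["18-34", "18-34", "35-54", "35-54", "55+", "55+"],
    "received_treatment": [1, 0, 1, 1, 0, 0],
})
rates = subgroup_outcome_rates(df, "age_band", "received_treatment")
if demographic_parity_gap(rates) > 0.2:
    print("Potential bias detected; review the data before training:\n", rates)
```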

Using Dataiku’s visual dashboards and plug-ins, Pfizer was able to mitigate bias after it was detected. The company relied on a technique drawn from traditional social science known as ‘matching’: minimizing differences in how subsegments of a population receive treatments so that the data mimics a randomized controlled experiment.

Figure: Pfizer’s approach to bias mitigation.
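For readers unfamiliar with matching, the sketch below shows one common way the idea is implemented: each treated record is paired with its most similar untreated record on the observed covariates, so the two groups resemble a randomized experiment. This is my own illustration built on scikit-learn, not Pfizer’s or Dataiku’s actual code, and the data is synthetic.

```python
# Illustrative nearest-neighbor matching (not Pfizer's or Dataiku's code).
import numpy as np
from sklearn.neighbors import NearestNeighbors

def nearest_neighbor_match(covariates: np.ndarray, treated: np.ndarray) -> list[tuple[int, int]]:
    """Pair each treated unit with its closest control unit (1-to-1, with replacement)."""
    treated_idx = np.where(treated == 1)[0]
    control_idx = np.where(treated == 0)[0]
    nn = NearestNeighbors(n_neighbors=1).fit(covariates[control_idx])
    _, matches = nn.kneighbors(covariates[treated_idx])
    return [(int(t), int(control_idx[m[0]])) for t, m in zip(treated_idx, matches)]

# Synthetic example: 3 covariates per patient, treatment flag of 0 or 1.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
treatment = rng.integers(0, 2, size=100)
pairs = nearest_neighbor_match(X, treatment)
print(f"Matched {len(pairs)} treated patients to similar controls.")
```

In practice, matching is often done on propensity scores or with calipers that discard poor matches, but the principle is the same.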

To ensure model transparency and explainability, the Pfizer team developed a list of questions to ask before and after developing a model. For more on this approach, check out Dataiku’s overview of the partnership.

Ultimately, Pfizer was able to answer questions posed about its models based on data made visible through Dataiku’s dashboards and plug-ins. The guidance provided by Dataiku’s team helped Pfizer’s AI team deploy AI systems in line with its responsible AI principles.

Which companies are the most important vendors in AI and hyperautomation? Check out the Acceleration Economy AI/Hyperautomation Top 10 Shortlist.

Partnering with one of the Acceleration Economy AI/Hyperautomation Top 10 companies is a great way to stay true to whichever framework you adopt, regardless of your industry. As a startup founder, I’ve learned that you will not reach lofty goals alone. Outsourcing and bringing in capable partners is critical, especially if your company doesn’t have experience operating in the AI world. There could be many variables you don’t see, leading to missed opportunities to recognize and work toward your company’s mission and values. The end cost of these misses isn’t just monetary; faulty AI systems can be harmful to the lives of others.

Rolls-Royce’s Aletheia Framework

Motivated by its own need to increase trust in the outcomes of its AI algorithms, Rolls-Royce released the Aletheia Framework to the public so that other organizations can leverage it to their benefit and hopefully ensure a better future for everyone. According to then-CEO Warren East, the intention was “[moving] the AI ethics conversation forwards from discussing concepts and guidelines to accelerating the process of applying it ethically.”

The framework contains a list of 32 actionable steps, all based on three core pillars:

  1. Positive social impact on all stakeholders
  2. Accuracy and trust through mitigating bias and increasing validity
  3. Proper governance through protocols and checks

These pillars are useful for guiding your framework because they can help you avoid the common pitfall of focusing only on certain trust-building processes, like bias mitigation, that have had extensive media coverage in recent years. Rather, they push you to consider the broader impact on all stakeholders. Plus, the final pillar — proper governance — is necessary for putting ideals into practice. A framework is useless without this component.
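As a hypothetical illustration of that governance pillar, and not part of the Aletheia Framework itself, the checks a framework prescribes can be codified as an automated release gate so that a model cannot ship until every check passes. The pillar names and example checks below are assumptions.

```python
# Hypothetical release gate: codify a framework's checks so deployment is
# blocked until every one passes. Pillars and checks here are illustrative.
from dataclasses import dataclass
from typing import Callable

@dataclass
class GovernanceCheck:
    pillar: str                 # e.g. "social impact", "accuracy & trust", "governance"
    name: str
    passed: Callable[[], bool]  # hook into your own tests, audits, or sign-offs

def release_gate(checks: list[GovernanceCheck]) -> bool:
    failures = [c for c in checks if not c.passed()]
    for c in failures:
        print(f"BLOCKED by {c.pillar}: {c.name}")
    return not failures

checks = [
    GovernanceCheck("accuracy & trust", "subgroup parity gap below threshold", lambda: True),
    GovernanceCheck("social impact", "stakeholder impact review signed off", lambda: False),
    GovernanceCheck("governance", "model card published and approved", lambda: True),
]
if release_gate(checks):
    print("All checks passed; the model may be deployed.")
```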

Deloitte’s ‘Trustworthy AI Framework’

Deloitte developed a framework that outlines the core values that lead to trust in an AI system. Of course, regulatory compliance is at its heart, but buyers will also care about elements including fairness, reliability and robustness, privacy, safety and security, accountability, and transparency.

Figure: Components of Deloitte’s Trustworthy AI framework.
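One lightweight way to operationalize a framework like this, sketched below purely as an illustration rather than Deloitte’s methodology, is to record evidence against each dimension in a structured trust assessment that can be shared with customers and regulators.

```python
# Hypothetical trust assessment: record evidence against each dimension so
# claims made to customers and regulators are documented. Only the dimension
# names come from the framework; everything else is an assumption.
from dataclasses import dataclass, field

DIMENSIONS = [
    "fairness", "reliability and robustness", "privacy",
    "safety and security", "accountability", "transparency",
]

@dataclass
class TrustAssessment:
    model_name: str
    evidence: dict[str, str] = field(default_factory=dict)  # dimension -> notes

    def missing_dimensions(self) -> list[str]:
        return [d for d in DIMENSIONS if d not in self.evidence]

assessment = TrustAssessment("loan_approval_v2")  # hypothetical model name
assessment.evidence["fairness"] = "Subgroup parity gap 0.04 on held-out data."
assessment.evidence["transparency"] = "Model card published and reviewed."
print("Dimensions still lacking evidence:", assessment.missing_dimensions())
```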

This framework enables Deloitte to maintain compliance and be transparent with customers. Other companies aiming to build trust in their AI systems can apply the elements of this framework to meet customer expectations and regulatory compliance requirements.

Government’s Role

I believe governments will need to play a role in the regulation of AI in the coming years. Left alone, companies might not hold themselves fully accountable. By nature, they could prioritize profits and go-to-market speed over safety and long-term societal impact. Regulators must defend the latter, yet it is becoming increasingly difficult to do so with the accelerating pace of AI development.

When AI systems contain bias against certain demographics, they produce harmful outputs. Left unregulated, AI also runs the risk of eluding human control in what is sometimes called “runaway AI.”

For instance, OpenAI has already started trials to make GPT-4 learn recursively and act on its text outputs by giving it compute resources and access to capital. This is the breeding ground for superintelligence — and there are no regulatory boundaries for protection. There is still significant ambiguity in many cases involving ethics and AI.

If you disagree with the view that government must be involved in AI, consider this example: a financial advisory firm with fiduciary responsibility to its clients builds an AI tool that advises those clients on certain investments. If a recommended security declines in value and a client sues, who is held responsible? The AI developers? The client? The CEO of the financial firm? This is just one of many possible scenarios that will need to be addressed in a regulatory context.

Governments can get some inspiration from the EU and its AI Ethics Guidelines, which, among many other important principles, continue the emphasis on consumer protection that the EU established with the GDPR.

Conclusion

Trust in your AI systems is not just a nice-to-have; it’s a competitive advantage. To build trust, organizations must detect bias in their models early on, mitigate it where possible, and take factors like security, reliability, and ethics seriously. AI frameworks can help keep your organization aligned on these values throughout the development pipeline and move you proactively down the path of building and maintaining trust in your AI models.


About the Analyst

Toni Witt

Founder/CEO, CosmoLabs
Acceleration Economy Hyperautomation Host

Areas of Expertise
  • AI/ML
  • Virtual/Augmented Reality
  • Web 3.0

Besides keeping up with the latest in corporate innovation and emerging tech, Toni Witt runs CosmoLabs, a creative agency that builds custom web-based augmented reality experiences for brands. He offers a free consultation session to get the conversation going about how your organization can leverage AR.
