Artificial intelligence (AI) tops the agenda for many organizations, but the anticipated opportunities come with a host of challenges, many of which relate to ethics and trust. For Ritu Jyoti, Program Vice President, AI Research with IDC’s software market research and advisory practice, there is a business imperative to build trustworthy and ethical AI in order to realize AI/ML at scale and enjoy a sustainable competitive advantage.
In a recent interview with ITWC President Fawn Annan, conducted as part of the CMO Talks podcast series, Jyoti explored the foundational elements of building and deploying trustworthy and ethical AI and machine learning (ML). The key to reducing exposure and potential negative impact, she said, is a holistic governance framework spanning the AI lifecycle, from design and development through deployment and production.
A trusted advisor to some of the world’s largest technology firms and end users, Jyoti shared some of the findings from her January 2020 IDC Perspective paper, Mitigate AI/ML Risks with Trustworthy, Ethical, and Governed AI. “AI has vast potential,” she said. “Responsible implementation is up to us.”
As AI and ML technologies become part of everyday life, and data and big data insights become accessible to everyone, business decisions, said Jyoti, should be based not only on information and knowledge but also on company values and national standards and regulations. “The whole C-suite,” she said, “including the CMOs, the CDO, and the data teams, must take on a very important role as the conscience of the corporation.”
According to Jyoti, there are five foundational elements to an AI governance framework: fairness, explainability, robustness, lineage, and transparency. After walking Fawn Annan and podcast listeners through a high-level explanation of each element, she compared them to critical checks and balances that prevent adversarial attacks, protect brand reputation, and ensure best practices.
When asked by Annan to talk more about the role of the CMO, Jyoti referenced a popular misconception that driving AI is the sole responsibility of the data science team. Not true, she said. “Based on our research and talking to a lot of the early adopters, AI is actually a team sport that requires both the business side and the data scientists.”
Another misconception is that any one group is responsible if something goes wrong. From Jyoti’s perspective, mistakes are a company responsibility. Blaming one particular group within the organization erodes a culture of risk-taking and prevents innovation from flourishing. Jyoti believes the CMO has to partner with the CIO, the CEO, the chief data officer, or the trust officer to assess business-specific ethics and define solutions on a case-by-case basis. “The CMO is actually becoming a very, very strategic role in organizations these days,” she said. “They’re the brand cheerleaders for the company.”
After discussing the impact of AI on customer experience and stressing the importance of top-down leadership when it comes to implementation, Jyoti concluded the podcast with her favourite sentence: Every company will be an AI company.
Adopting AI is no longer a choice, she added. Organizations that embrace it will thrive; those that don’t will be left behind. With so much at stake, the onus is on organizations to design and maintain trustworthy and ethical AI.