Microsoft details five ways to govern AI in new report

Yesterday, Microsoft released a new report, “Governing AI: A Blueprint for the Future,” detailing five guidelines that governments should consider when developing policies, laws, and regulations around AI. The report also outlines how Microsoft governs AI internally.

The company has been at the forefront of the AI frenzy: it has rained AI updates across its products, backed OpenAI’s viral ChatGPT, and juiced up its own Bing chatbot with features like images and videos while ditching the waitlist for the bot’s full release, despite its infamous tendency to “hallucinate,” or make up false information.

Microsoft sees even bigger promise in AI: new cures for cancer, new insights into proteins and climate change, stronger defences against cyberattacks, and even protection for human rights in nations afflicted by civil war or foreign invasion.

There is no slowing the advances, but the push to regulate AI is growing louder as regulators worldwide begin investigating and clamping down on the technology.

Microsoft’s president, Brad Smith, said in the report that “it’s not enough to focus only on the many opportunities to use AI to improve people’s lives,” adding that social media, for instance, was a tool that technologists and political commentators once gushed about but that, five years later, became “both a weapon and a tool – in this case aimed at democracy itself.”

Deepfakes, the alteration of existing content or the generation of entirely new content that is almost indistinguishable from reality, are the greatest threat posed by AI, Smith noted. A couple of months ago, for example, a synthetic video of U.S. President Joe Biden spewing transphobic remarks circulated widely.

But countering these emerging ills of AI should not be the sole responsibility of tech companies, Smith stated, going on to ask, “How do governments best ensure that AI is subject to the rule of law?” and “What form should new law, regulation, and policy take?”

Here are the five guidelines to govern AI, according to Microsoft:

  1. Implement and build upon the successes of existing and new government-led AI safety frameworks, notably the AI Risk Management Framework completed by the U.S. National Institute of Standards and Technology (NIST). The four suggestions that follow build on that framework.
  2. Create safety brakes for AI systems that control the operation of designated critical infrastructure, similar to the braking systems engineers have built into elevators, buses, and trains. Under this approach, the government would classify the high-risk AI systems that control critical infrastructure and warrant such brakes, mandate that operators implement them, and ensure they are in place before the infrastructure is deployed (a minimal sketch of the idea appears after this list).
  3. Create a new legal framework that reflects the technology architecture of AI itself. To that end, Microsoft detailed the critical pieces that go into building generative AI models and proposed specific responsibilities at the three layers of the technology stack: the applications layer, the model layer, and the infrastructure layer. At the applications layer, the safety and rights of people would be the priority; the model layer would require regulation, including the licensing of these models; and the infrastructure layer would carry obligations for the operators of the AI infrastructure on which the models are developed and deployed (a sketch of this layering also follows the list).
  4. Publish an annual AI transparency report and expand access to AI resources for academic researchers and the nonprofit community. Scientific and technological inquiry will suffer unless academic researchers can access more computing resources, the report warns.
  5. Pursue public-private partnerships to use AI to help address societal challenges.
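
To make the “safety brake” in point 2 concrete, here is a minimal, hypothetical sketch of the pattern: a simple, deterministic supervisor that sits between an AI controller and a piece of critical infrastructure, with a human-operated halt. The names, limits, and fallback values are illustrative assumptions, not details from Microsoft’s report.

```python
# Hypothetical sketch only: a deterministic "safety brake" with final
# authority over an AI controller's commands to critical infrastructure.
from dataclasses import dataclass


@dataclass
class Limits:
    min_output_mw: float  # hard floor the AI's command may never cross
    max_output_mw: float  # hard ceiling the AI's command may never cross


class SafetyBrake:
    """Sits between the AI and the infrastructure; the AI never acts directly."""

    def __init__(self, limits: Limits, safe_output_mw: float):
        self.limits = limits
        self.safe_output_mw = safe_output_mw  # known-safe fallback setpoint
        self.engaged = False                  # flipped by a human operator

    def engage(self) -> None:
        # Human override: every subsequent command falls back to the safe setpoint.
        self.engaged = True

    def apply(self, proposed_mw: float) -> float:
        if self.engaged:
            # Brake engaged: ignore the AI entirely.
            return self.safe_output_mw
        # Clamp out-of-bounds commands instead of passing them through.
        return min(max(proposed_mw, self.limits.min_output_mw),
                   self.limits.max_output_mw)


brake = SafetyBrake(Limits(min_output_mw=100.0, max_output_mw=800.0),
                    safe_output_mw=400.0)
print(brake.apply(950.0))  # the AI overshoots; the brake clamps to 800.0
brake.engage()
print(brake.apply(950.0))  # brake engaged; holds the safe 400.0
```

The point of the pattern is that the brake itself is short, deterministic code that a regulator or operator can audit, and its behaviour does not depend on the model it supervises.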
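
The layering in point 3 can be sketched the same way. The hypothetical example below shows where each layer’s proposed obligations could sit on a single request path; every function name and check is an assumption made for illustration, not a proposal from the report.

```python
# Hypothetical sketch of the three-layer stack: each function stands in for
# one layer and the obligations that could attach to it.
MODEL_LICENSE_ID = "demo-license-001"  # assumed licensing scheme, for illustration


def infrastructure_layer(prompt: str) -> str:
    # Infrastructure layer: the operator hosting the model logs usage and
    # runs the actual computation (a stand-in string here).
    print(f"[infra] logged request of {len(prompt)} characters")
    return f"model output for: {prompt}"


def model_layer(prompt: str) -> str:
    # Model layer: a licensing obligation gates whether the model may serve.
    if MODEL_LICENSE_ID is None:
        raise RuntimeError("model is not licensed for deployment")
    return infrastructure_layer(prompt)


def is_safe_for_users(text: str) -> bool:
    # Placeholder for an application-level safety review.
    return "unsafe" not in text


def application_layer(user_prompt: str) -> str:
    # Applications layer: the safety and rights of people come first, so the
    # application screens what ultimately reaches the user.
    raw = model_layer(user_prompt)
    return raw if is_safe_for_users(raw) else "[blocked by application policy]"


print(application_layer("summarize today's grid load"))
```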

Microsoft also touted what it is doing internally to govern AI, noting that it has almost 350 employees working on responsible AI. Over the past six years, it added, the company has developed ethical principles that have been translated into specific corporate policies spanning the training, tooling, and testing of systems. The company also said it has completed roughly 600 sensitive use case reviews since 2019.
