MIT delivers database containing 700+ risks associated with AI

by Paul Barker

News | 15 Aug 2024 | 6 mins | Risk Management

Called the AI Risk Repository, the database is intended, its creators say, to provide an accessible and updatable overview of the AI risk landscape.

A group of Massachusetts Institute of Technology (MIT) researchers has opted not just to discuss all of the ways artificial intelligence (AI) can go wrong, but to create what they described in an abstract released Wednesday as “a living database” of 777 risks extracted from 43 taxonomies.

According to an article in MIT Technology Review outlining the initiative, “adopting AI can be fraught with danger. Systems could be biased or parrot falsehoods, or even become addictive. And that’s before you consider the possibility AI could be used to create new biological or chemical weapons, or even one day somehow spin out of control. To manage these potential risks, we first need to know what they are.”

The AI Risk Repository

To answer that question, and others, researchers with the FutureTech Group at MIT’s Computer Science & Artificial Intelligence Laboratory (CSAIL), assisted by a team of collaborators, embarked on the development of the AI Risk Repository.

A news alert on the CSAIL site about the launch stated that a review by the researchers “uncovered critical gaps in existing AI risk frameworks. Their analysis reveals that even the most thorough individual framework overlooks approximately 30% of the risks identified across all reviewed frameworks.”

In the alert, current project lead Dr. Peter Slattery said, “since the AI risk literature is scattered across peer-reviewed journals, preprints, and industry reports, and quite varied, I worry that decision-makers may unwittingly consult incomplete overviews, miss important concerns, and develop collective blind spots.”

The abstract noted, “the risks posed by AI are of considerable concern to academics, auditors, policymakers, AI companies, and the public. However, a lack of shared understanding of AI risks can impede our ability to comprehensively discuss, research and react to them.”

The Repository itself, according to an FAQ from MIT, was created by using a “systematic search strategy, forwards and backwards searching, and expert consultation to identify 43 AI risk classifications, frameworks, and taxonomies. We extracted 700+ risks from these documents into a living AI risk database.”

MIT researchers said it provides an accessible overview of the AI risk landscape, a regularly updated source of information, and a common frame of reference for researchers, developers, businesses, evaluators, auditors, policymakers, and regulators.

It has, they added, three parts:

  • The AI Risk Database, which captures the 700+ risks extracted from the 43 existing frameworks, with quotes and page numbers.
  • The Causal Taxonomy of AI Risks, which classifies how, when, and why the risks occur.
  • The Domain Taxonomy of AI Risks, which classifies the risks into seven domains and 23 subdomains.

The seven domains are Discrimination & Toxicity, Privacy & Security, Misinformation, Malicious actors & misuse, Human-computer interaction, Socioeconomic & environmental harms, and AI system safety, failures and limitations.
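To make the three-part structure concrete, here is a minimal sketch in Python of what a single database entry might look like. The field names, and the entity/timing/intent breakdown of the “how, when, and why” causal classification, are illustrative assumptions, not the repository’s actual schema.

```python
from dataclasses import dataclass

# Illustrative sketch only: field names are assumptions, not the
# repository's real schema. Each entry pairs an extracted risk with
# its source citation and its two classifications.
@dataclass
class RiskEntry:
    risk_description: str  # quote extracted from the source framework
    source: str            # which of the 43 frameworks it came from
    page_number: int       # page where the quote appears
    causal_entity: str     # who or what causes the risk (the "how")
    causal_timing: str     # pre- or post-deployment (the "when")
    causal_intent: str     # intentional or unintentional (the "why")
    domain: str            # one of the seven domains
    subdomain: str         # one of the 23 subdomains

# A hypothetical entry, for illustration only.
example = RiskEntry(
    risk_description="AI systems may produce discriminatory outputs.",
    source="Hypothetical framework",
    page_number=12,
    causal_entity="AI",
    causal_timing="Post-deployment",
    causal_intent="Unintentional",
    domain="Discrimination & Toxicity",
    subdomain="Unfair discrimination",
)
```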

A tool to assist in AI governance

Brian Jackson, principal research director at Info-Tech Research Group, described the repository as being “incredibly helpful for leaders who are working to establish their AI governance at their organizations. AI poses a lot of new risks to organizations and also exacerbates some existing risks. Cataloging all of those would require an enterprise risk expert, but now MIT has done all that hard work for organizations.”

Not only that, he said, “it is available in a convenient Google Sheet that you can copy and then customize for your own use. The database categorizes AI risks into causation and into seven different domains. It’s an indispensable knowledge base to work from for anyone working in AI governance, and it’s also a killer tool that they’ll use to create their own specific organizational catalogs.”
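Since the repository ships as a spreadsheet, customizing it can be as simple as exporting a copy to CSV and filtering it. The snippet below is a minimal sketch of that workflow; the file name and the “Domain” column header are assumptions, so check the actual sheet for its exact column names.

```python
import pandas as pd

# Load a CSV export of your copied Google Sheet. The file name and the
# "Domain" column header are assumptions about the sheet's layout.
risks = pd.read_csv("ai_risk_repository.csv")

# Keep only the entries relevant to, say, a privacy review.
privacy = risks[risks["Domain"] == "Privacy & Security"]
print(f"{len(privacy)} of {len(risks)} risks fall under Privacy & Security")
```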

However, the researchers noted in the FAQ that the Repository has several limitations. Because it draws only on the 43 taxonomies, it “may be missing emerging, domain-specific risks, and unpublished risks, and has potential for errors and subject bias; we used a single expert reviewer for extraction and coding.”

Despite those shortcomings, the MIT Technology Review article stated that the findings “may have implications for how we evaluate AI,” and quoted Neil Thompson, director of MIT FutureTech and one of the creators of the database: “What (it) is saying is, the range of risks is substantial, not all of which can be checked ahead of time.”

A living work

In the abstract, Thompson and others involved in the project wrote that the Repository “is, to our knowledge, the first attempt to rigorously curate, analyze, and extract AI risk frameworks into a publicly accessible, comprehensive, extensible, and categorized risk database. This creates a foundation for a more coordinated, coherent, and complete approach to defining, auditing and managing the risks posed by AI systems.”

Bart Willemsen, VP analyst at Gartner, said of the initiative, “First, it is great to see these efforts, and we believe it imperative that this type of work continues to grow. Earlier initiatives, such as Plot4AI, may have had less official standing and breadth, but have long informed the many who have concerns about using AI. We have been fielding client questions about AI risks from all over the world for several years.”

The informative work of MIT, he said, “provides a much more comprehensive understanding of AI technology risks, helps anyone prepare to use it responsibly, and exercise control over the technology we decide to deploy.”

Willemsen added, “As the repository is expected to grow over time, it being a living work, it would be great to also see it flanked with potential mitigating measures that lay the groundwork for minimum best practices to be applied. The time for ‘running fast and ignorantly breaking whatever stands in the way’ ought to be over.”

It also allows, he said, “for a more proactive approach to using AI responsibly and maintaining control over our operations, in terms of what data we use and how, as well as granular control over the technology functions we decide to deploy and the context in which we decide to deploy it.”
