
NIST releases AI Framework

Snapshot:

  • The Framework is a voluntary, flexible guide designed to assist organizations in implementing trustworthy and responsible AI systems.
  • Part I helps organizations assess AI-related impacts and risks; Part II outlines functions that will allow AI actors to address those risks in practice.
    • Part I sets out the characteristics of trustworthy AI systems: they are valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair (harmful bias managed).
    • Part II sets out the four “functions” that organizations need to have in place to manage the risks posed by an AI system: Govern, Map, Measure, and Manage.
  • Concepts from the Framework will provide a roadmap for jurisdictions seeking to regulate the AI space – as is currently underway in Canada. Organizations should consider the Framework when developing or acquiring AI systems, and consider building AI governance programs that align with its concepts.

Background

The U.S. National Institute of Standards and Technology (“NIST”) recently released version 1.0 of its Artificial Intelligence Risk Management Framework (“AI RMF” or “Framework”), which amounts to a practical, flexible, and adaptable set of guidelines for AI actors across the AI lifecycle to use when they design, develop, deploy or use AI systems. The goal of the AI RMF is to provide a voluntary, rights-preserving, sector- and use-case agnostic guide for AI actors to implement in order to promote trustworthy and responsible AI systems.

This is a notable development in an area that has had few voluntary or obligatory requirements imposed on such actors to date, and is particularly relevant in the Canadian context as Bill C-27, which includes a proposed Artificial Intelligence and Data Act (“AIDA”), makes its way through second reading at the federal legislature.

AI RMF

The Framework is divided into two parts: the first equips AI actors, including individuals and organizations, with the tools needed to assess AI-related impacts and risks, and the second outlines functions that will allow AI actors to address those risks in practice.

Part 1: Foundational Information – Framing risk, defining the audience, and articulating key characteristics of “trustworthy” AI

The AI RMF defines “risk” as a combination of likelihood and impact of an event: “the composite measure of an event’s probability of occurring and the magnitude or degree of the consequences of the corresponding event.” The Framework explains that such consequences, or impacts, of AI systems may be both positive and negative, and can be experienced by people, organizations, and ecosystems. Notably, the Framework provides tools for actors across the AI lifecycle to collaborate in order to minimize anticipated negative impacts, while also identifying opportunities to maximize positive ones.

Part 1 acknowledges…

Read The Full Article at Dentons
