The United States National Institute of Standards and Technology (NIST), operating under the Department of Commerce, has taken a significant step in promoting a safe and trustworthy environment for Artificial Intelligence (AI) by establishing the Artificial Intelligence Safety Institute Consortium (“Consortium”). This Consortium aims to develop a new measurement science that identifies scalable and proven techniques and metrics to advance the responsible use and development of AI.

Objective and Collaboration of the Consortium

The primary objective of the Consortium is to address the potential risks associated with AI technologies and protect the public while encouraging innovative advancements in AI. NIST aims to leverage the expertise and capabilities of the broader community to identify reliable and interoperable measurements and methodologies for the responsible use and development of trustworthy AI.

Collaborative Research and Development (R&D), shared projects, and the evaluation of test systems and prototypes are among the key activities outlined for the Consortium. These efforts are in response to the Executive Order titled “The Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” issued on October 30, 2023. The Executive Order emphasizes a comprehensive set of priorities related to AI safety and trust.

Call for Participation and Cooperation

To achieve these objectives, NIST is inviting interested organizations to contribute their technical expertise, products, data, and models in support of the AI Risk Management Framework (AI RMF). This invitation applies to nonprofit organizations, universities, government agencies, and technology companies. The Consortium’s collaborative activities are expected to begin no earlier than December 4, 2023, once a sufficient number of completed and signed letters of interest have been received. Participation is open to all organizations that can contribute to the Consortium’s activities, and selected participants will be required to enter into a Consortium Cooperative Research and Development Agreement (CRADA) with NIST.


Addressing AI Safety Challenges

The formation of the Consortium reflects the United States’ effort to keep pace with other developed nations in establishing rules for AI development, particularly regarding user and citizen privacy, security, and unintended consequences. The initiative marks a significant milestone under President Joe Biden’s administration, demonstrating the adoption of specific policies to govern AI in the country.

The Consortium will play a crucial role in developing guidelines, tools, methods, and best practices to facilitate the evolution of industry standards for safe and trustworthy AI development and deployment. It is positioned to contribute at a critical moment, not only for AI technologists but also for society, ensuring that AI aligns with societal norms and values while promoting innovation.




News source: blockchain.news
