U.S. Government to Introduce First AI Development Regulations
The U.S. government plans to implement its first regulations for the development of artificial intelligence (AI) through an executive order from the Biden administration. The new standards will require AI developers to report their progress to the federal government and to meet testing criteria before public release, with the stated aim of keeping the U.S. at the forefront of AI technology.
The executive order, available on the White House website, emphasizes the importance of safety, security, and trustworthiness in AI systems. However, it does not specifically address transparency or oversight, leaving uncertainty about public access to information regarding how large corporations build AI systems and the datasets used to train them. Given the societal impact of AI, it is worth considering recording and timestamping training data on a public blockchain to provide outsiders with a clearer understanding of AI behavior.
President Joe Biden has acknowledged the need to govern AI technology, highlighting the risks it presents in the wrong hands. He warns that AI could make it easier for hackers to exploit vulnerabilities in the software that powers society.
AI Executive Order: First of Its Kind in the United States
The Biden administration describes this executive order as its most comprehensive effort to date to protect Americans from the potential risks of AI systems. It specifically addresses risks related to national security, health, individual privacy and safety, as well as the potential for fraud and deception in AI-generated content.
The order also directs the National Security Council and White House Chief of Staff to develop a National Security Memorandum for further AI-related actions. This memorandum aims to ensure the safe, ethical, and effective use of AI by the U.S. military and intelligence community, while also countering adversarial military applications of AI in other countries.
The executive order addresses two categories of risks: those posed by domestic AI development, and those posed by AI systems developed outside the U.S. that could be weaponized against U.S. systems or public health, or used for fraud. The Biden administration also intends to establish relationships with ideologically aligned foreign countries to promote similar standards and closely monitor their AI development efforts.
In line with other recent regulations, equity and the protection of civil rights for vulnerable minorities are key considerations. The order urges landlords, the criminal justice system, employers, and federal contractors to avoid using AI profiling algorithms that could result in discrimination.
On a positive note, the new rules aim to leverage AI’s benefits in healthcare, workplace training, and education. These include the development of affordable and life-saving drugs and the implementation of AI-based personal tutoring to assist educators.
To foster innovation and maintain the U.S.’s leading role in AI, a pilot National AI Research Resource will provide students and researchers with increased access to AI resources and introduce a grants program. Furthermore, small developers and entrepreneurs will have greater access to technical assistance and resources.
Regulation, Pause, or Monitoring?
While the concept of artificial intelligence has been part of computer science for decades, the public has only recently witnessed impressive, and sometimes alarming, demonstrations of AI capabilities. The technology's capacity for rapid, compounding improvement has sparked calls for various degrees of regulation, ranging from increased oversight to outright bans.
In March 2023, the Future of Life Institute issued an open letter calling for a six-month pause in the development of AI systems more advanced than OpenAI’s GPT-4 (note: GPT-4 was just added to the public version of OpenAI’s ChatGPT this week). The letter questions the need to develop AI systems that are “human-competitive at general tasks,” arguing that doing so introduces risks that even expert developers may not fully comprehend and could ultimately render humans obsolete.
More than 1,000 technology and AI research experts, including Elon Musk, signed the institute’s letter.
Auditing AI Datasets as a Potential Solution
In light of these concerns and the extensive list of known and unknown risks associated with AI progress, governmental regulation is likely necessary. However, there is currently no global governing body capable of regulating AI development worldwide. As has occurred with biological research, any research labeled “edgy” or “dangerous” will likely shift to jurisdictions where such regulations do not apply.
An alternative solution, though not foolproof, would involve storing training datasets on a trusted global ledger that anyone can audit. This approach would help monitor dataset usage and track data sources. A fast, scalable, and open blockchain, widely recognized as a source of truth, would be well suited to this purpose.
Konstantinos Sgantzos, an AI researcher and lecturer, advocates for maintaining training data on such a ledger so that it can be audited by any AI system. Sgantzos emphasized that dataset auditability is essential, even more so than analyzing the code underlying neural networks and machine-learning systems.
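The ledger approach described above can be illustrated with a minimal sketch in Python. This is a hypothetical illustration, not any specific project's design: `dataset_digest` and `AuditLedger` are invented names, the "ledger" here is a simulated in-memory hash chain, and a real deployment would anchor these digests in actual blockchain transactions. It shows the core idea, that timestamped fingerprints of training data let outside auditors verify what a model was trained on and detect after-the-fact tampering.

```python
import hashlib
import json
import time

def dataset_digest(records):
    """Hash a list of training records into one SHA-256 fingerprint.
    Records are serialized deterministically (sorted keys), so the
    same dataset always yields the same digest."""
    h = hashlib.sha256()
    for record in records:
        h.update(json.dumps(record, sort_keys=True).encode("utf-8"))
    return h.hexdigest()

class AuditLedger:
    """Hypothetical append-only log of (digest, timestamp) entries.
    Each entry's chain hash commits to all prior entries, so altering
    any earlier record invalidates everything after it."""

    def __init__(self):
        self.entries = []

    def record(self, digest, timestamp=None):
        prev = self.entries[-1]["chain"] if self.entries else "0" * 64
        ts = timestamp if timestamp is not None else int(time.time())
        chain = hashlib.sha256(f"{prev}{digest}{ts}".encode()).hexdigest()
        self.entries.append({"digest": digest, "timestamp": ts, "chain": chain})
        return chain

    def verify(self):
        """Recompute the chain from the start; True iff nothing was altered."""
        prev = "0" * 64
        for e in self.entries:
            expected = hashlib.sha256(
                f"{prev}{e['digest']}{e['timestamp']}".encode()
            ).hexdigest()
            if expected != e["chain"]:
                return False
            prev = e["chain"]
        return True
```

An auditor who obtains a published dataset can recompute its digest and check it against the ledger entry; a developer who later swaps or edits training data cannot produce a matching fingerprint without breaking the chain.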
In a recent Twitter Spaces session with fellow researcher Ian Grigg, Sgantzos expressed his preference for the term “analysis of information” instead of “artificial intelligence” to distinguish between different levels of AI algorithms found in consumer products like Alexa and YouTube compared to those with more advanced capabilities.