Generative AI is a rapidly developing field with the potential to revolutionize many industries. However, the AI boom has also raised fears about risks to user privacy, harm to the economy, potential misuse by criminals, and more. The White House has joined countries across the globe in taking action to regulate generative AI. On May 5, 2023, the White House issued a press release outlining its efforts to promote responsible AI innovation and protect American businesses and individuals.
Here is an overview of these actions and what they might mean for the future of AI.
Investment to Power Responsible American AI Research and Development
The National Science Foundation is investing $140 million to launch seven new AI research institutes. The institutes will focus on ethical, trustworthy, and responsible AI advances that serve the public good. They will also bolster America’s AI R&D infrastructure and support the development of a diverse AI workforce.
Public Assessments of Existing Generative AI Systems
The White House announced a new independent public evaluation of AI systems from leading AI developers, to be held at the AI Village at DEF CON 31. This will allow current models to be thoroughly evaluated by community partners and AI experts, and will provide critical information to researchers and the public about the impacts of these models.
Policies to Ensure the U.S. Government is Leading by Example
The Office of Management and Budget (OMB) will release draft policy guidance on the use of AI systems by the U.S. government for public comment this summer. The guidance will establish specific policies for federal departments and agencies to follow, and will empower agencies to responsibly leverage AI to advance their missions.
Creation of the AI Bill of Rights
The White House has created a Blueprint for an AI Bill of Rights, which outlines five principles meant to guide the design, use, and deployment of automated systems so that they protect the public. These principles are:
- Safe and Effective Systems
- Algorithmic Discrimination Protections
- Data Privacy
- Notice and Explanation
- Human Alternatives, Consideration, and Fallback
Creation of the AI Risk Management Framework
The AI Risk Management Framework (AI RMF) is a voluntary framework developed by the National Institute of Standards and Technology (NIST) to help organizations manage the risks associated with artificial intelligence (AI). The AI RMF provides a structured and repeatable process for identifying, assessing, and mitigating AI risks. It is designed to be flexible and adaptable to the specific needs of each organization.
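To make the identify-assess-mitigate cycle concrete, here is a minimal, hypothetical sketch of an AI risk register in Python. The risk names, scoring scale, and likelihood-times-impact heuristic are illustrative assumptions, not part of the AI RMF itself; organizations adopting the framework would define their own categories and assessment criteria.

```python
from dataclasses import dataclass


@dataclass
class Risk:
    # One entry in a hypothetical AI risk register (names and scales are illustrative).
    name: str
    likelihood: int  # 1 (rare) to 5 (almost certain)
    impact: int      # 1 (negligible) to 5 (severe)
    mitigation: str

    @property
    def score(self) -> int:
        # A common assessment heuristic: likelihood multiplied by impact.
        return self.likelihood * self.impact


def prioritize(risks: list[Risk]) -> list[Risk]:
    # Rank identified risks so the highest-scoring ones are mitigated first.
    return sorted(risks, key=lambda r: r.score, reverse=True)


register = [
    Risk("Training-data privacy leakage", likelihood=3, impact=5,
         mitigation="Minimize and anonymize training data"),
    Risk("Biased model outputs", likelihood=4, impact=4,
         mitigation="Bias testing before and after deployment"),
    Risk("Model drift in production", likelihood=2, impact=3,
         mitigation="Scheduled re-evaluation against fresh data"),
]

for risk in prioritize(register):
    print(f"{risk.score:>2}  {risk.name}: {risk.mitigation}")
```

The point of a register like this is repeatability: the same scoring rule is applied to every risk, so reviews can be rerun as systems and threats evolve.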
AI is constantly advancing, which means more actions and legislation will continue to emerge. We'll continue to monitor developments and bring you the latest updates relevant to businesses.
Have questions about how these updates affect your business? Contact us.