The White House announced on Friday that several major US companies developing artificial intelligence have agreed to cooperate with the US government and adhere to a set of principles intended to ensure public trust in AI.
Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI have all signed a commitment to make AI safe, secure and reliable. In May, the Biden administration announced that it would meet with top AI developers to ensure they are aligned with US policy.
The commitments are not binding, and there are no penalties for non-compliance. Nor do the policies apply retroactively to AI systems already deployed: one provision, for instance, commits companies to internal and external security testing of an AI system before its release. Still, the new commitments aim to reassure the public (and, to some extent, lawmakers) that AI can be deployed responsibly. The Biden administration has already proposed using artificial intelligence to streamline government tasks.
Perhaps the most immediate effects will be felt in AI art: all parties have agreed to use digital watermarking to identify AI-generated artwork. Some services, such as Bing Image Creator, already do this. All signatories have also committed to using AI for the public good, such as cancer research, and to identifying appropriate and inappropriate areas of use. The agreement does not spell out what that means, but it could include existing safeguards that prevent ChatGPT from helping to plan a terrorist attack, for example. The AI companies also promised to maintain data privacy, something Microsoft has upheld in the business versions of Bing Chat and Microsoft 365 Copilot.
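The commitments do not specify how such watermarking should work, and Bing Image Creator's scheme is not public. As a purely illustrative sketch, the Python snippet below tags a generated PNG with machine-readable provenance metadata using the Pillow library; the "ai_generated" and "generator" keys are hypothetical, not any vendor's actual format.

# Illustrative only: embeds a provenance note in PNG text metadata.
# This is NOT any signatory's actual watermarking scheme; the metadata
# keys below ("ai_generated", "generator") are hypothetical.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def tag_as_ai_generated(src_path: str, dst_path: str) -> None:
    """Save a copy of the image carrying an 'AI-generated' provenance note."""
    image = Image.open(src_path)
    info = PngInfo()
    info.add_text("ai_generated", "true")
    info.add_text("generator", "example-model-v1")  # hypothetical model name
    image.save(dst_path, pnginfo=info)

def is_tagged(path: str) -> bool:
    """Return True if the image carries the provenance note."""
    return Image.open(path).info.get("ai_generated") == "true"

Metadata of this kind is trivially stripped by re-encoding the file, which is why production watermarks tend to be embedded in the pixels themselves; the metadata approach is shown here only because it is the simplest way to demonstrate the idea.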
All of the companies have committed to internal and external security testing of their AI systems before release, and to sharing information on AI risk management with industry, governments, the public and academia. They also promised to give third-party researchers access to find and report vulnerabilities. Microsoft president Brad Smith endorsed the new commitments, noting that Microsoft supports the creation of a national registry of high-risk AI systems. (One California congressman has called for a federal agency to monitor AI.) Google, for its part, described its "red team" of hackers who try to break AI using attacks such as prompt attacks, data poisoning and more. "As part of our mission to build safe and beneficial AGI, we will continue to pilot and refine governance practices tailored specifically to highly capable foundation models like the ones we produce," OpenAI said in a statement. "We will also continue to invest in research in areas that can help inform regulation, such as techniques for assessing potentially dangerous capabilities in AI models."
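Data poisoning, one of the attack classes Google's red team probes for, involves corrupting a model's training data so it learns the wrong behavior. The toy Python sketch below shows the simplest variant, label flipping; it is a didactic illustration, not Google's actual methodology.

# Toy illustration of label-flipping data poisoning, one of the attack
# classes an AI red team might test for. Didactic only.
import random

def poison_labels(dataset, flip_fraction=0.05, num_classes=10, seed=0):
    """Return a copy of (features, label) pairs with a fraction of labels flipped.

    dataset: iterable of (features, label) pairs with integer class labels.
    """
    rng = random.Random(seed)
    poisoned = []
    for features, label in dataset:
        if rng.random() < flip_fraction:
            # Swap the true label for a random incorrect one.
            label = rng.choice([c for c in range(num_classes) if c != label])
        poisoned.append((features, label))
    return poisoned

A red team would train a model on data tampered with in this way and measure how far the attack degrades accuracy or plants targeted misbehavior, the kind of finding the third-party researcher access promised above is meant to surface.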