On October 30, 2023, US President Joe Biden signed an executive order to regulate the use of generative AI. Software publishers will have to report to the White House the results of security tests carried out during model development, if the models "pose a serious risk to national security, national economic security, or national public health and safety."
Aside from this measure, the order is largely a declaration of intent. It calls for standardized testing to assess the risks of AI models, as well as a content-marking system to identify which AI generated a given piece of content.
The Biden administration had aimed to pass a broader bipartisan bill, equivalent to the European Union's AI Act. However, it was unable to secure a majority in Congress on the issue, particularly in the House of Representatives, where Democrats are in the minority. To make progress on regulating AI, the US executive must therefore fall back on executive orders such as this one, with their limited scope, and on non-binding agreements negotiated with industry players.
In the summer of 2023, fifteen leading US generative AI companies accordingly signed a White House agreement providing for a content-tracking system and independent audits. The signatories also committed to upholding their users' privacy rights. However, the agreement includes no measures on illegal content or on the transparency of training data.