On October 30, 2023, President Biden issued an Executive Order directing actions to establish new AI standards, recognizing that Artificial Intelligence (AI) is the most consequential technology of our time and anticipating that it will accelerate more technological change in the next five to ten years than witnessed in the past fifty. These directives, which the White House presents as the most significant action any government has taken to address AI safety, security and trust, cover a variety of issues for private and public entities domestically and internationally, including safety and security, privacy, equity and civil rights, healthcare, employment and education, in addition to promoting innovation, developing international standards and ensuring responsible government AI use.

Safety and security

Some of the Order’s most sweeping directives pertain to AI safety and security, including mandated testing, federal reporting and screening for certain models. Specifically, before making their AI systems public, developers of foundation models posing “a serious risk to national security, national economic security, or national public health and safety” must notify the federal government when training the model and share the results of all red-team safety tests, the standards for which will be established by the National Institute of Standards and Technology (NIST). The Department of Homeland Security (DHS) will establish the AI Safety and Security Board and apply these testing standards to critical infrastructure sectors, and the Departments of Energy and Homeland Security will address AI systems’ threats to critical infrastructure as well as chemical, biological, radiological, nuclear and cybersecurity risks. Moreover, to protect against AI being used to engineer dangerous biological materials, the Order directs agencies that fund life-science projects to establish standards for biological synthesis screening as a condition of federal funding.

Also pertaining to security, the Order introduces several safety-related measures addressing deepfakes, cybersecurity and government AI use. First, to prevent fraud and deception, the Department of Commerce will develop guidance for authenticating official content and for watermarking to clearly label AI-generated materials. Second, the Order calls for the establishment of an advanced cybersecurity program to develop AI tools that identify and fix vulnerabilities in critical software. Third, the Order directs the National Security Council (NSC) and White House Chief of Staff to develop a National Security Memorandum ensuring the US military and intelligence community use AI safely, ethically and effectively and counter adversarial AI use.

Privacy

Regarding privacy, the Order calls on Congress to pass data privacy legislation and directs federal support for privacy-preserving techniques and technologies, such as cryptography, including through the establishment of a Research Coordination Network. The National Science Foundation will work with the network to promote federal agencies’ adoption of these technologies. Furthermore, the Order directs an evaluation of federal agencies’ collection and use of commercially available information containing personally identifiable data, including information obtained from data brokers, along with strengthened privacy guidance for those agencies.

Discrimination, bias and more

As for efforts to combat AI discrimination, bias and other abuses, guidance will be issued to landlords, federal benefits programs and federal contractors to ensure AI algorithms are not used to exacerbate discrimination. Algorithmic discrimination will also be addressed through training, technical assistance and coordination between the Department of Justice (DOJ) and federal civil rights offices on best practices for investigating and prosecuting civil rights violations related to AI. Lastly, the Order addresses a multitude of other areas such as employment, healthcare, education, innovation, foreign affairs and government, with directives including the following:

  • Advancing AI in healthcare and drug development, with the Department of Health and Human Services establishing a safety program to receive reports of and remedy harms or unsafe healthcare practices involving AI.
  • Developing principles and practices to address job displacements; labor standards; workplace equity, health and safety; and workers’ data collection.
  • Producing a report on AI’s potential labor-market impacts.
  • Studying and identifying options for strengthening federal support for workers facing labor disruptions from AI.
  • Creating a National AI Research Resource, a tool providing AI researchers and students access to key AI resources and data.
  • Encouraging the Federal Trade Commission (FTC) to exercise its authorities for promoting a fair, open and competitive AI ecosystem.
  • Promoting safe development of AI abroad to mitigate dangers to critical infrastructure.
  • Facilitating rapid, efficient contracting for agency acquisition of specific AI products and services.

Our take

The March 2023 open letter from the likes of Elon Musk, Max Tegmark and Steve Wozniak, which called for at least a six-month moratorium on the training of AI systems more powerful than GPT-4 so that AI research and development could be refocused on principles such as safety, interpretability, transparency and trust, seems to have been answered in part by a raft of initiatives, not only in the US but globally, that will give regulators better insight into systems’ risks and capabilities. Recently, the US and UK announced a close collaboration on AI safety, which is expected to combine the Executive Order’s protections on AI development with existing work by the UK’s Frontier AI Taskforce, along with the establishment of AI safety institutes in their respective countries. Moreover, weeks ago, EU lawmakers made progress toward the world’s first comprehensive legal framework for AI, the AI Act, by agreeing to most parts of Article 6 of the draft Act, which outlines the types of systems that will be designated “high risk” and consequently subject to greater regulatory scrutiny. Similar to the Executive Order’s requirements for models posing serious risks to areas such as national security and public health and safety, the EU AI Act will require that high-risk systems undergo testing prior to market launch.

The Order, a broad and ambitious move, is more forward-looking than immediately impactful. For example, while the set of technical conditions for models subject to the reporting requirements for systems posing serious risks to areas such as national security is yet to be defined and will be updated regularly, the Order states that in the interim, federal safety and security reporting requirements apply to any model trained using either i) a quantity of computing power greater than 10^26 integer or floating-point operations or ii) primarily biological sequence data and a quantity of computing power greater than 10^23 integer or floating-point operations; these thresholds measure total training compute, not operations per second. For reference, GPT-4’s training compute is estimated at roughly 2.1 x 10^25 floating-point operations, just under the Order’s threshold. Still, the Order provides a specific roadmap, and players in the AI industry should take note of its multitude of provisions centered on safety, security and trust and position themselves accordingly.
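Under the interim thresholds described above, whether a model triggers the federal reporting requirement reduces to a comparison of total training compute against the applicable cutoff. The following is a minimal illustrative sketch; the function name and structure are our own and do not appear in the Order:

```python
# Illustrative sketch of the Order's interim reporting thresholds (not official).
# Thresholds are total training compute (integer or floating-point operations),
# not operations per second.

GENERAL_THRESHOLD = 1e26  # applies to any model
BIO_THRESHOLD = 1e23      # applies to models trained primarily on biological sequence data

def reporting_required(training_ops: float, primarily_bio_data: bool = False) -> bool:
    """Return True if a model's total training compute exceeds the interim threshold."""
    threshold = BIO_THRESHOLD if primarily_bio_data else GENERAL_THRESHOLD
    return training_ops > threshold

# GPT-4's estimated training compute (~2.1e25 operations) sits below the
# general 1e26 threshold, so it would not trigger the interim requirement.
reporting_required(2.1e25)                            # False
reporting_required(1.5e26)                            # True
reporting_required(5e23, primarily_bio_data=True)     # True
```

Note that the much lower threshold for models trained primarily on biological sequence data reflects the Order's heightened concern about AI-assisted engineering of dangerous biological materials.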