WASHINGTON — President Joe Biden’s directive to all U.S. national security agencies to embed artificial intelligence technologies in their systems sets ambitious targets amid a volatile political environment.
That’s the first-blush assessment from technology experts after Biden on Oct. 24 directed a broad swath of organizations to harness AI responsibly, even as the technology is rapidly advancing.
“It’s like trying to assemble a plane while you’re in the middle of flying it,” said Josh Wallin, a fellow at the defense program at the Center for a New American Security. “It is a heavy lift. This is a new area that a lot of agencies are having to look at that they might have not necessarily paid attention to in the past, but I will also say it’s certainly a critical one.”
Federal agencies will need to rapidly hire experts, get them security clearances and set about working on the tasks Biden lays out as private companies are pouring in money and talent to advance their AI models, Wallin said.
The memo, which stems from the president’s executive order from last year, asks the Pentagon; spy agencies; the Justice, Homeland Security, Commerce, Energy, and Health and Human Services departments; and others to harness AI technologies. The directive emphasizes the importance of national security systems “while protecting human rights, civil rights, civil liberties, privacy, and safety in AI-enabled national security activities.”
Federal agencies face deadlines, some as short as 30 days, to accomplish the memo's tasks. Wallin and others said the deadlines are driven by the pace of technological advances.
The memo asks that by April the AI Safety Institute at the National Institute of Standards and Technology “pursue voluntary preliminary testing of at least two frontier AI models prior to their public deployment or release to evaluate capabilities that might pose a threat to national security.”
Frontier models are the most advanced large AI models, such as those powering ChatGPT, that can recognize speech and generate human-like text.
The testing is intended to ensure that the models don’t inadvertently enable rogue actors and adversaries to launch offensive cyber operations or “accelerate development of biological and/or chemical weapons, autonomously carry out malicious behavior, automate development and deployment of other models.”
But the memo also adds an important caveat: The deadline to begin testing the AI models would be “subject to private sector cooperation.”
Meeting that testing deadline is realistic, said John Miller, senior vice president of policy at ITI, a trade group that represents top tech companies including Google, IBM, Intel, Meta and others.
Because the institute “is already working with model developers on model testing and evaluation, it is feasible that the companies could complete or at least begin such testing within 180 days,” Miller said in an email. But the memo also asks the AI Safety Institute to issue guidance on testing models within 180 days, and therefore “it seems reasonable to question exactly how these two timelines will sync up,” he said.
By February the National Security Agency “shall develop the capability to perform rapid systematic classified testing of AI models’ capacity to detect, generate, and/or exacerbate offensive cyber threats. Such tests shall assess the degree to which AI systems, if misused, could accelerate offensive cyber operations,” the memo says.
‘Dangerous’ order
With the presidential election just a week away, the outcome looms large for this directive.
The Republican Party platform says that if elected, Donald Trump would repeal Biden’s “dangerous Executive Order that hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology. In its place, Republicans support AI Development rooted in Free Speech and Human Flourishing.”
Since Biden’s memo is a result of the executive order, it’s likely that if Trump wins, “they would just pull the plug” and go their own way on AI, Daniel Castro, vice president at the Information Technology and Innovation Foundation, said in an interview.
The leadership at federal departments tasked with compliance would change significantly under Trump as well. As many as 4,000 positions in the federal government change hands with the arrival of a new administration.
However, people tracking the issue note there’s broad bipartisan consensus that adoption of AI technologies for national security purposes is too critical for partisan disputes to derail it.
The tasks and deadlines in the memo reflect in-depth discussions among agencies going back several months, said Michael Horowitz, a professor at the University of Pennsylvania who was until recently a deputy assistant secretary of defense with a portfolio that included military uses of AI and advanced technologies.
“I think that the implementation of [the memo] regardless of who wins the election is going to be absolutely critical,” Horowitz said in an interview.
Wallin noted the memo emphasizes the need for U.S. agencies to understand the risks posed by advanced generative AI models, including risks related to chemical, biological and nuclear weapons. On threats like those to national security, there's agreement between the parties, he said in an interview.
Senate Intelligence Chairman Mark Warner, D-Va., said in a statement that he backed the Biden memo but the administration should work “in the coming months with Congress to advance a clearer strategy to engage the private sector on national security risks directed at AI systems across the supply chain.”
Immigration policy
The memo acknowledges the long-term need to attract talented people from around the world to the United States in areas like semiconductor design, an issue that could get tied to larger questions about immigration. The Defense, State and Homeland Security departments are directed to use available legal authorities to bring them in.
“I think there’s broad recognition of the unique importance of STEM talent in ensuring U.S. technological leadership,” Horowitz said. “And AI is no exception to that.”
The memo also asks the State Department, the U.S. Mission to the United Nations and the U.S. Agency for International Development to draw up a strategy within four months to advance international governance norms for the use of AI in national security.
The U.S. has already taken several steps to promote international cooperation on artificial intelligence, both for civilian and military uses, Horowitz said. He cited the example of the U.S.-led Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy that has been endorsed by more than 50 countries.
“It demonstrates the way that the United States is already leading by establishing strong norms for responsible behavior,” Horowitz said.
The push toward responsible use of technology needs to be seen in the context of the broader global debate on whether countries are moving toward authoritarian systems or leaning toward democracy and respect for human rights, Castro said. He noted that China is stepping up investment in Africa.
“If we want to get African nations to line up with the U.S. and Europe on AI policy instead of going over to China,” he said, “what are we actually doing to bring them to our side?”