Reporting requirements are key to alerting the government to potentially dangerous new capabilities in increasingly powerful AI models, says a U.S. government official who works on AI issues. The official, who requested anonymity to speak freely, points to OpenAI's admission that its latest model showed "inconsistent refusal of requests to synthesize nerve agents."
The official says the reporting requirement is not too onerous. They argue that, unlike AI regulations in the European Union and China, Biden’s EO reflects “a very broad, light-touch approach that continues to foster innovation.”
Nick Reese, who served as the Department of Homeland Security’s first director of emerging technologies from 2019 to 2023, rejects conservative claims that the reporting requirement would jeopardize companies’ intellectual property. And he says it could actually benefit startups by encouraging them to develop AI models that are “more computationally efficient,” less data-intensive, and that fall below the reporting threshold.
The power of AI makes government oversight imperative, says Ami Fields-Meyer, who helped write Biden's EO as a White House tech official.
"We're talking about companies that claim to build the most powerful systems in the history of the world," Fields-Meyer says. "The government's first obligation is to protect people. 'Trust me, we've got this' is not a particularly convincing argument."
Experts hail NIST's security guidance as a vital resource for building protections into new technologies. They note that flawed AI models can produce serious social harms, including housing and lending discrimination and wrongful loss of government benefits.
Trump's own AI executive order from his first term required federal AI systems to respect civil rights, a goal that will require research into social harms.
The AI industry has largely embraced Biden's security agenda. "What we're hearing is that there's a lot of value in clarifying these things," the U.S. official says. For new companies with small teams, "it expands their people's capacity to address these concerns."
Reversing Biden's EO would send an alarming signal that "the U.S. government is going to take a hands-off approach to AI security," says Michael Daniel, a former presidential cybersecurity adviser who now heads Cyber Threat Alliance, a nonprofit information-sharing organization.
As for competition with China, EO advocates say the security rules will actually help America prevail by ensuring that American AI models work better than their Chinese rivals and are protected from Beijing's economic espionage.
Two very different paths
If Trump wins the White House next month, expect a sea change in how the government approaches AI security.
Republicans want to prevent harm from AI by enforcing "existing tort and statutory laws" rather than adopting new blanket restrictions on the technology, Helberg says, and they favor "a much greater focus on maximizing the opportunities offered by AI, rather than focusing too much on risk mitigation." This would likely spell doom for the reporting requirement and perhaps for some of the NIST guidelines.
The reporting requirement could also face legal challenges now that the Supreme Court has weakened the deference that courts long gave agencies in evaluating their regulations.
And GOP pushback could even jeopardize NIST's voluntary AI testing partnerships with leading companies. "What happens to those commitments in a new administration?" the U.S. official asks.
This polarization around AI has frustrated technologists who fear Trump will undermine the quest for safer models.
"The promise of AI comes with perils," says Nicol Turner Lee, director of the Center for Technology Innovation at the Brookings Institution, "and it is vital that the next president continue to ensure the safety and security of these systems."