Trump Administration's Potential Repeal of Biden's AI Order Sparks Regulatory Uncertainty
As the new year approaches, the incoming Trump administration is expected to bring many changes. AI regulation is on the agenda, and that could mean the repeal of the executive order on AI signed by President Joe Biden.
The order established government oversight offices and encouraged developers to implement safety standards. If it is repealed, some companies could benefit while others face new challenges: businesses may have to navigate a patchwork of regulations, contend with reduced data sharing, and see less government-funded research.
Before the executive order was signed, policymakers held listening tours with industry leaders to work out how best to regulate the technology. Momentum for AI regulation was strong under a Democratic-controlled Senate, but insiders now believe that interest in federal AI rules has cooled significantly.
Gaurab Bansal, the executive director of Responsible Innovation Labs, shared his thoughts at the ScaleUp: AI conference in New York. He pointed out that without federal oversight, states might step in to create their own regulations.
“Both parties in Congress seem hesitant to regulate AI,” Bansal said. “This could lead states to follow California’s example with laws like SB 1047.” He emphasized that businesses need consistent standards, but a patchwork of regulations could make things tricky.
California’s SB 1047 would have required a “kill switch” for AI models, among other controls. It reached Governor Gavin Newsom’s desk but was vetoed, a decision welcomed by industry leaders like Meta’s Yann LeCun. Bansal believes states are likely to pursue similar legislation.
Dean Ball, a research fellow at George Mason University’s Mercatus Center, voiced concern about companies’ ability to manage a mix of differing state regulations. “These laws could create complex compliance challenges for AI developers and users,” he noted. “It’s unclear how a Republican Congress will address this.”
Industry-led responsible AI initiatives are nothing new, but the pressure on companies to be accountable is likely to grow as customers demand safety. Developers and users should prioritize implementing responsible AI policies and aligning with regulations like the European Union’s AI Act.
During the ScaleUp: AI conference, Microsoft’s Chief Product Officer for Responsible AI, Sarah Bird, mentioned that many developers, including Microsoft, are preparing for the EU’s AI Act.
Bird highlighted that even without comprehensive laws, it’s smart to integrate responsible AI and safety from the start. “This approach benefits startups. Much of what the AI Act requires is just common sense,” she said. “If you’re building models, manage the data input and validate them. For smaller organizations, starting from scratch makes compliance easier, so invest in a solution to govern your data as it grows.”
However, understanding the data used in training large language models (LLMs) can be challenging. Jason Corso, a robotics professor at the University of Michigan and co-founder of Voxel51, remarked that the Biden executive order encouraged more transparency from developers.
“We can’t fully understand the impact of a single sample on a model with a high risk of bias. This lack of governance can put businesses at risk,” Corso explained.
AI companies are currently attracting significant investor interest. However, government support often targets projects deemed too risky by some investors. Corso expressed concern that the Trump administration might choose not to invest in high-risk AI research to cut costs.
“I worry about the lack of government resources for those early-stage projects,” he said.
Still, a new administration doesn’t necessarily mean less funding for AI. It’s also uncertain whether the Trump administration will dismantle the newly established AI Safety Institute and other oversight offices, whose budgets the Biden administration has guaranteed through 2025.
“How the Trump administration organizes authorities and allocates funds from the AI Initiative Act will be crucial,” said Matt Mittelsteadt, a research fellow at the Mercatus Center. “This act provides many of the authorities and activities that Biden assigned to agencies like NIST, with funding continuing until 2025. Since these funds are already allocated, many activities will likely continue, but the specifics are still unclear.”
We’ll learn more about AI policy under the next administration in January. In the meantime, businesses should prepare for whatever changes lie ahead.