A proposed measure in the U.S. House of Representatives could prevent states from enforcing their own artificial intelligence (AI) regulations for a decade. The provision, included in an amendment approved this week by the House Energy and Commerce Committee, stipulates that no state or political subdivision “may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems” for the next 10 years. The measure must still pass both chambers of Congress and be signed by the President before it becomes law.
Proposed Moratorium on State AI Oversight
AI developers and some legislators argue that federal intervention is essential to avert a patchwork of regulations across the U.S. that could hinder the growth of AI technology. The rapid advancement of generative AI, spurred by the debut of ChatGPT in late 2022, has prompted companies to integrate AI across diverse sectors. Even as the U.S. and China compete for technological dominance and its economic rewards, generative AI poses serious risks to consumers, including around privacy and transparency, that lawmakers are seeking to mitigate.
“As an industry and a nation, we need one definitive federal norm, regardless of what it entails,” Alexandr Wang, founder and CEO of the data firm Scale AI, told lawmakers during an April hearing. “We need alignment and clear guidance on a singular federal standard and preemption to avert the scenario of 50 separate standards.”
Restricting states’ power to oversee artificial intelligence could diminish consumer protections for a technology increasingly prevalent in American lives. “Extensive dialogues are happening at the state level, emphasizing the value of tackling this issue from numerous angles,” stated Anjana Susarla, a professor at Michigan State University. “Employing both national and state-level strategies is crucial for a comprehensive approach.”
State-Level AI Regulation Initiatives
The proposed provision would bar states from enforcing AI regulations, existing or future, with exceptions for rules that facilitate AI development or that apply the same standards to non-AI models and systems. Such policies are beginning to emerge, most prominently in Europe, where the European Union has adopted standards for artificial intelligence; U.S. states are starting to act as well.
Colorado enacted consumer safeguards last year that are set to take effect in 2026. California passed more than a dozen AI-related laws last year. Several other states already have rules addressing issues such as deepfakes or requiring AI developers to disclose details about their training data. At the local level, some laws also target potential employment discrimination when AI systems are used in hiring.
“States have varied aspirations for AI regulation,” noted Arsen Kourinian, a partner at the law firm Mayer Brown. So far in 2025, state lawmakers have introduced at least 550 proposals on artificial intelligence, according to the National Conference of State Legislatures. During a House committee hearing last month, Representative Jay Obernolte, a Republican from California, said Congress needs to settle the question of state-level regulation before those bills advance further. “We have a limited amount of legislative runway to be able to get that problem solved before the states get too far ahead,” he said.
Although several states have put AI regulations on the books, many have yet to take effect or be enforced, which would limit the moratorium’s immediate impact, said Cobun Zweifel-Keegan, managing director for the International Association of Privacy Professionals. “There isn’t really any enforcement yet.”
Zweifel-Keegan said the moratorium would likely discourage state legislators and policymakers from developing new policies, noting that “the federal government would become the primary and potentially sole regulator around AI systems definitively.”
Implications of Halting State AI Regulation
AI developers are pushing for predictable, uniform rules. During a Senate Commerce Committee hearing last week, OpenAI CEO Sam Altman told Senator Ted Cruz that replicating an EU-style regulatory environment “would be disastrous” for the industry. Altman suggested the industry should instead set its own standards.
When Senator Brian Schatz asked whether Altman thought industry self-regulation would be enough, Altman responded that “it’s easy for it to go too far. As I have learned more about how the world works, I am more afraid that it could go too far and have really bad consequences.”
Industry Concerns and Consumer Advocacy
Companies’ concerns, whether they develop AI systems or deploy them in consumer-facing products, often center on state requirements such as conducting assessments or issuing transparency notices before releasing products, according to Kourinian. Consumer advocates counter that more safeguards are needed, warning that limiting states’ ability to regulate could undermine user privacy and safety.
“AI is increasingly used to make decisions about people’s lives without transparency, accountability or recourse — it’s also facilitating chilling fraud, impersonation and surveillance,” Ben Winters, director of AI and privacy at the Consumer Federation of America, stated. “A 10-year pause would lead to more discrimination, more deception and less control — simply put, it’s siding with tech companies over the people they impact.”
Potential Shift to Legal Challenges
Kourinian predicted that a pause on AI-specific state rules could push more consumer protection concerns into the courts or to state attorneys general, since existing laws on unfair and deceptive practices, which are not specific to AI, would still apply. “Time will tell how judges will interpret those issues,” he added.
Susarla said AI’s pervasiveness across industries means states may be able to regulate issues such as privacy and transparency more broadly, without reference to the technology itself. A moratorium on AI regulation, though, could expose such policies to numerous legal challenges. “It has to be some kind of balance between ‘we don’t want to stop innovation,’ but on the other hand, we also need to recognize that there can be real consequences,” she said.
Technology-Agnostic Laws and AI Governance
Zweifel-Keegan noted that much of the policymaking around AI systems rests on technology-agnostic laws and rules, adding that “there are a lot of existing laws and there is a potential to make new laws that don’t trigger the moratorium but do apply to AI systems as long as they apply to other systems.”