By Reason Magazine

Democrats and Republicans Both Want To Regulate AI. They Just Can't Agree on How.



An AI illustration showing a Republican elephant and a Democratic donkey in a boxing match | Illustration: Joanna Andreasson/ChatGPT-5.3

At the federal level, Republican-written AI bills tend to be less concerned with policing how individuals use AI than with regulating the development and deployment of the underlying technology—large language models (LLMs). Democrat-written bills tend to focus on individual malfeasance rather than the tech itself.

Accordingly, Sen. Amy Klobuchar (D–Minn.) was so outraged last year by a (hilarious) deepfake of herself that she called on Congress to affirm “the right to demand that social media companies remove deepfakes of their voice and likeness.” In California, Democratic Gov. Gavin Newsom signed three bills in 2024 that restricted the use of AI to create political content deemed deceptive in advance of elections.

On the other side of the aisle, Sen. Josh Hawley (R–Mo.) doesn’t just want to ban driverless cars to protect unionized truck drivers from automation or ban minors from accessing AI companion chatbots; he wants frontier AI developers to submit their models to the Energy Department for potential nationalization before they’re granted permission to deploy their models commercially.

But it’s not as if there’s no overlap. Each of these bills is co-sponsored by at least one senator from the other party.

Let’s start with the Republicans. Hawley’s AI Accountability and Personal Data Protection Act, which would outlaw the use of legally acquired copyrighted materials for AI training without the copyright holder’s permission, is co-sponsored by Democratic Sens. Richard Blumenthal of Connecticut and Peter Welch of Vermont. The bill is perhaps a response to Bartz v. Anthropic, which held that Anthropic did not violate the Copyright Act by training its LLM on legally acquired copyrighted works. (Anthropic was, however, found liable for copyright infringement for using over 7 million copies of copyrighted books illegally acquired from pirate sites.) If enacted, the bill would cripple AI developers, which depend on public and legally purchased private data to train their increasingly sophisticated models.

Hawley’s Artificial Intelligence Risk Evaluation Act, co-sponsored again by Blumenthal, would require AI developers to turn over detailed information about their frontier LLMs to the Energy Department before their deployment, letting the department mull whether various “adverse scenarios” are likely. If the department decides such events are probable enough, it would be allowed to nationalize the technology. Talk about discouraging innovation: Fewer people will want to advance the technological frontier if the government has the right to seize any company whose product is too good.

Of the current crop of AI bills, Hawley’s GUARD Act is the one that’s most likely to become law. It’s co-sponsored by 12 senators: Blumenthal, Welch, Katie Britt (R–Ala.), Tom Cotton (R–Ark.), Ruben Gallego (D–Ariz.), Maggie Hassan (D–N.H.), Mark Kelly (D–Ariz.), James Lankford (R–Okla.), Mike Lee (R–Utah), Chris Murphy (D–Conn.), Mark Warner (D–Va.), and Catherine Cortez Masto (D–Nev.). The legislation would not only ban chatbots that produce sexually explicit content for minors; it would outlaw the provision of any AI companion to minors whatsoever.

To comply with this wide-reaching regulation, chatbot companies would be required to freeze every user account, which they could unfreeze only after users provide “age data that is verifiable using a reasonable age verification process.” Such processes include providing government-issued ID or biometric data to AI companies, which “means every chatbot interaction could feasibly be linked to your verified identity,” warns the Electronic Frontier Foundation.

That isn’t a risk too small to worry about. AU10TIX, a third-party identity verification service used by TikTok, Uber, and X, recently left just such personally identifiable information exposed for over a year.

Hawley’s AI-Related Job Impacts Clarity Act, co-sponsored by Warner and Sen. Tim Kaine (D–Va.), is superficially innocuous: It would require all publicly traded companies to submit quarterly reports to the Labor Department on the number of employees fired, hired, or retrained “substantially due to the replacement or automation by artificial intelligence.” That phrase is ambiguous, but the senators’ motive is not: They want to render AI’s labor market effects legible so that the government can more easily interfere with private business decisions.

On the Democratic side, Sen. Dick Durbin (D–Ill.) has given us the DEFIANCE Act, which passed the Senate by unanimous consent in January and is being championed in the House by Rep. Alexandria Ocasio-Cortez (D–N.Y.). The bill would make it a civil offense to create digital forgeries “depicting intimate activity or nudity.” While this legislation does not impose liability on AI companies for individuals’ odious misuse thereof, another Durbin bill would do that: The AI LEAD Act, introduced in September and co-sponsored solely by Hawley, would make deployers and developers liable when a user’s application of an AI system “causes harm.”

Hawley frames the AI LEAD Act as empowering parents to bring suits against Big Tech when “AI products harm…their children,” but virtually any product imaginable can be used maliciously. Surely, Hawley would balk at holding firearm manufacturers liable when their products are used to murder innocents instead of protecting them. The principle that people are responsible for malign misuses of tools applies to AI just as strongly as it does to firearms or any other thing that can be used to injure a person.

Unlike the AI LEAD Act, the NO FAKES Act is actually viable. Introduced by Sen. Chris Coons (D–Del.), it has 11 co-sponsors, six of them Republicans. The bill is intended to protect “the voice and visual likeness of all individuals from unauthorized computer-generated recreations [using] generative artificial intelligence,” according to Coons.

It’s safe to say that nobody wants others creating and sharing photorealistic depictions of them doing disreputable things that they didn’t actually do. But the NO FAKES Act goes beyond that, holding platforms “liable for hosting unauthorized digital replicas” and excluding digital replicas from protection under the First Amendment.

Sarah Montalbano, policy fellow at the Center of the American Experiment, has explained how the NO FAKES Act would jeopardize creativity in the gaming industry. Penalties of up to $25,000, she wrote last year at Reason, “would fall hardest on small developers, hobbyists, and fan communities making non-commercial games or mods” and encourage developers to preemptively “restrict the range of faces, voices, and customizable features.”

As lawmakers of both major parties hustle to name, shame, and regulate their preferred villains, they’re losing sight of the big picture. The possible gains to humanity from AI are enormous. The AlphaFold AI system uses primary amino acid sequences to predict the 3D structure of proteins, cutting prediction times from years to hours and reducing the cost of early-stage drug discovery by anywhere from 30 percent to 70 percent. And it exists—in the words of Taylor Barkley, director of federal government affairs at the Abundance Institute—“because researchers were free to release and iterate on imperfect models in the open.”

The AI LEAD Act would impose strict liability on developers of “unreasonably dangerous” AIs. This would have discouraged the kind of experimentation that produced AlphaFold, leaving “today’s researchers without a tool that has accelerated drug discovery, structural biology, and our basic understanding of life,” says Barkley.

Meanwhile, R-Super—a novel algorithm developed by Johns Hopkins University researchers—trains AI models to segment a tumor, a crucial step in cancer diagnosis and treatment, in one to two minutes instead of the 30 minutes to an hour required by unassisted radiologists. The Energy Department has deployed AI to reduce the risk of outages by anticipating grid disruptions and improving load forecasting. Similarly, improved demand forecasting can reduce inventory and logistics costs by double-digit percentages. AI has eased the stress on public defenders by reducing document review time by 63 percent. It helps researchers by translating papers from any language to any other in mere minutes or even seconds, depending on the amount of data it has to sift through. It increased the speed of software development in one experiment by over 55 percent, and Anthropic’s Claude Cowork assistant is so good at coding that its release and updates have triggered multiple stock market sell-offs since its January debut. AI has also been used to save taxpayers billions of dollars through enhanced fraud detection.

Not everything AI touches has been so positive, of course. People using it carelessly have made embarrassing mistakes in law, journalism, and other fields. There have been AI-related tragedies too. In February 2024, 14-year-old Sewell Setzer III shot himself after allegedly becoming obsessed with an AI companion chatbot designed by the service character.ai. Fourteen months later, 16-year-old Adam Raine took his own life after ChatGPT allegedly provided him a “step-by-step playbook for ending his life ‘in 5-10 minutes,’” according to the lawsuit his parents filed against OpenAI. AI has been used to commit fraud as well as detect it.

But no technology should be evaluated exclusively by its harms. Over 40,000 Americans die in car crashes every year. Yet no sensible official would want to ban motor vehicles, and not just because AI will likely soon decrease that death toll by automating cars and trucks. Cars’ benefits—including rushing people to the hospital—outweigh their costs.

Talk is cheap; hundreds of billions of dollars of investment is not. Venture capital firms invested $259 billion in AI firms in 2025 alone, and half a trillion in AI capital expenditure is projected for 2026. The magnitude of AI investments indicates that its benefits are expected to be even greater.

But AI is under threat from lawmakers at all levels. Not only do some congressmen want to pass the aforementioned national laws, but Congress has been unable or unwilling to preempt the proliferating patchwork of state laws that threatens to hinder the technology’s growth.

To be sure, not everyone in government wants to hamstring artificial intelligence. Sen. Ted Budd (R–N.C.), chairman of the Subcommittee on Science, Manufacturing, and Competitiveness, said in anticipation of a September subcommittee hearing that “prioritizing AI advancement without subjecting this technology to overregulation is critical to maintaining America’s competitive edge.” Likewise, Sen. Ted Cruz (R–Texas), chairman of the Committee on Commerce, Science, and Transportation, has called AI a “new global industrial revolution that could unlock opportunities for improving quality of life, creating jobs, and stimulating economic growth.”

During the hearing, Michael Kratsios, director of the White House Office of Science and Technology Policy, called for the application of “interstate commerce principles to prevent balkanized rulemaking.” Half a year later, it remains unclear whether the administration will succeed in preempting state-level AI regulation.

Then there is the president himself, who has called AI “an industrial revolution, an information revolution, and a renaissance—all at once.” One of the first actions President Donald Trump took in his second term was rescinding his predecessor’s precautionary AI framework. Trump has also appointed AI proponents such as Kratsios and David Sacks to federal posts.

Leading up to the July passage of the One Big Beautiful Bill Act (OBBBA), congressional Republicans seemed united in wanting to protect AI from state-level strangulation. House Republicans included an outright 10-year moratorium on states and localities “limiting, restricting, or otherwise regulating artificial intelligence” in their May 22 version of the OBBBA. In the Senate version, Cruz proposed language to withhold access to $42 billion in broadband deployment funds from states that passed AI laws. (The move to de facto instead of de jure preemption was required by the “Byrd Rule,” which excludes nonbudgetary items from reconciliation bills.)

Then several congressional Republicans defected from the pro-AI side to join their Democratic colleagues in regulating the technology. Sen. Marsha Blackburn (R–Tenn.) joined forces with Sen. Maria Cantwell (D–Wash.) to remove AI conditions on broadband funding from the final version of the reconciliation bill, denouncing Cruz’s provision as a way for “Big Tech” to “exploit kids, creators, and conservatives.”

Trump kept pushing for a light-touch approach to AI regulation, insisting that “We MUST have one Federal Standard instead of a patchwork of 50 State Regulatory Regimes” in a November 18 Truth Social post. In the same post, Trump called on Congress to put a federal preemption provision “in the NDAA [National Defense Authorization Act].” Some Republicans tried to do that, and they failed.

Then Trump tried another approach: a December 11 executive order that conditioned disbursement of certain broadband funds on whether a state has laws that conflict with the White House’s AI Action Plan. There was an explicit carve-out for state laws that govern child safety, data center infrastructure, and local government procurement and use—regulations that implicate neither interstate commerce nor the development of underlying AI models.

The order acknowledged the need for a “carefully crafted national framework” on AI. The president cannot create such a framework single-handedly; Congress must. But legislators are unlikely to pass a stand-alone bill for or against AI, as they remain divided on the issue.

The good news is that most of these federal bills will probably fail—only the DEFIANCE Act, the NO FAKES Act, and the GUARD Act stand a strong chance of being enacted. While the first two pose serious First Amendment concerns and the last one gravely threatens AI users’ privacy, none is likely to seriously hinder the development and deployment of the LLMs undergirding the myriad productive applications of AI.

The bad news is that the Trump administration flip-flopped on its relatively laissez-faire approach to AI at the end of February. Anthropic CEO Dario Amodei refused to update the terms of service for the Pentagon’s use of its AI model to permit all lawful military applications, insisting on maintaining the company’s explicit prohibitions on domestic mass surveillance and fully autonomous weapons systems. In retaliation, Trump banned all federal agencies from contracting with Anthropic, and Defense Secretary Pete Hegseth directed the Pentagon to label the AI company a supply chain risk. Accordingly, Anthropic’s $200 million Pentagon contract was terminated and “anyone seeking to do business with the U.S. military [must] cut ties with the AI firm,” explains Axios. This designation places Anthropic in the same category as Chinese telecommunications company Huawei and drone manufacturer DJI.

That is no minor footnote on AI. According to Dean Ball, who was previously a senior technology adviser for the Trump administration, “The United States federal government is now, by an extremely wide margin, the most aggressive regulator of artificial intelligence in the world.”



Source: https://reason.com/2026/04/11/democrats-and-republicans-both-want-to-regulate-ai-they-just-cant-agree-on-how/

