amac at UMass: How do we regulate AI?
Alex MacGillivray – better known as “amac” – is a lawyer and technologist who was most recently deputy assistant to the president and principal deputy US chief technology officer (CTO) in the Biden-Harris administration. Previously, MacGillivray was deputy US CTO in the Obama-Biden administration, and served as Twitter’s general counsel and board secretary, and as the company’s head of corporate development, communications, and trust and safety. As someone who worked directly on the Biden administration’s Blueprint for an AI Bill of Rights and who helped develop Twitter’s core approaches to trust and safety, it’s hard to think of anyone more knowledgeable about how technology regulation is built and how corporations respond to it.

Tech regulator amac – Alex MacGillivray – speaking to UMass Amherst via Zoom
amac is leading off a year-long series at UMass, “AI and Us: Rethinking Research, Work, and Society”, with a virtual talk titled “The Past and Future of AI and Regulation: A Practitioner’s View”. SBS Dean Karl Rethemeyer introduced the series with questions central to the social sciences: how is AI transforming research and teaching, and how will it change democracy and the world of work?
amac explains that his current work focuses on coding with AI and, more broadly, on trying to understand learning with AI – like many people fascinated with AI, he’s trying to figure out what the potentials and realities of these systems really are. Even before ChatGPT came out, amac tells us, the CTO’s office was studying AI, how it might be used in government and how it might be regulated. Executive orders and the bully pulpit give the White House several ways of influencing the development of AI, and amac and others were trying to build a structure, one that bifurcated into national security and non-national-security questions: the National Security Council would likely take on the first set of questions, leaving the rest to the CTO’s office. amac was also part of building an AI research resource, allowing people who were not part of massive corporations to investigate AI, given the field’s massive hardware and data needs.
The Bureau of Industry and Security, under the Department of Commerce, issued export restrictions on the most advanced AI chips, which could be used to build powerful AI models. Eventually, the restrictions extended to the weights of large AI models – the information one would need to replicate a powerful model like ChatGPT. The Blueprint for an AI Bill of Rights, spearheaded by Alondra Nelson, was probably the key use of the “bully pulpit”, reframing AI through the lens of what rights users should have. The Blueprint came out in October 2022 as a non-binding white paper rather than a formal set of regulations. amac tells us that non-binding approaches are often very useful in bringing together large groups of stakeholders and getting them to understand the interests they have in common.
All this happened before the launch of ChatGPT in November 2022, which suddenly put AI front and center on everyone’s policy agenda. The landscape changed radically: the rapid uptake of the product put a great deal of pressure on regulators to “do something” about this new technology. The Blueprint, amac argues, looked really good because it wasn’t about a specific AI, but about algorithmic decisionmaking more broadly. amac brought together a set of AI CEOs to talk with each other at the White House and got them to agree to a set of voluntary commitments. Such voluntary commitments often end up being codified as a “floor” for state or federal regulation.
amac was out of government by the time the Biden administration’s executive order on AI tried to find a balance between the benefits and harms associated with AI. Are we talking about an existential-risk, Skynet scenario? Or real-world issues like AI discrimination in hiring and housing? The EO was really split between those two, amac tells us, and you can see the document trying to handle both questions. A later executive order focused on building more data centers, essentially an endorsement of investing in the field of AI.
What happened next was the transfer of power from Biden to Trump. The export restrictions were kept, but then relaxed after meetings with Nvidia. The new idea – that other countries should have these chips and be able to train large models – seems to have come after Saudi Arabia bought $2 billion in cryptocurrency from a Trump-linked firm; the government’s view of which chips could be purchased and where model training should happen appears to have changed. NAIRR – the National AI Research Resource – is now permanent but underfunded: it draws on NSF’s already reduced budget, with no dedicated funding of its own. The Blueprint didn’t need to be rescinded, but the Trump administration’s focus appears to be very different: they want to ensure that AI is not woke, and that capitalism faces no barriers to spending infinite sums on AI systems.
Voluntary commitments from AI companies are still around, but it’s unclear whether they’ll still matter – what is intriguing is that the California AI bill includes many of those commitments. The Trump AI bill is really about a single thing: increasing investment in power plants, data centers and other precursors to building larger systems. A separate EO seeks to reduce government use of “ideologically biased” AI, which in practice has largely meant using Elon Musk’s GrokAI – a system with explicit ideology coded into it.
The “sleeper” in all of this, amac tells us, is an Office of Management and Budget “M-memo” that tries to ensure transparency about which AI systems are used in government, and to build training and talent around the use of AI. While that memo came out under the Biden administration, it has survived thus far in the Trump era.
Pivoting to the future, amac warns that people tend to predict a particular future and shape their policy accordingly. He cites the AI 2027 paper, which predicts superintelligence based on exponential growth in AI research – if you adopt that framework, you advocate for different policies than if you anticipate a different scenario. The first question you need to answer: how good will AI get? The 2027 paper postulates that AI will become a godlike, all-knowing intelligence, possibly in the next two years. This scenario also assumes that robots and other ways of interacting with the physical world will make extremely rapid progress – while amac says he doesn’t believe in this future, there’s a whole world of AI safety people building policies around it.
There’s another camp (which I tend to side with) that believes we’re reaching the end of how much better LLMs can get just from more data and more compute. amac also notes the “AI as Normal Technology” paper, which sees AI as slower-moving, addressable, and manageable through existing policy mechanisms. amac says he doesn’t know which of these is realistic – the competence of models is increasing over time with an exponential increase in resources, and AI CEOs say it’s worked for the last few doublings of resources: “they don’t know, I don’t know, I don’t have a reason to say that this will stop here.” But continuing to double means that you will – eventually – use all the chips and all the power in the world. Where do we hit the limit of this as a strategy?
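To make the doubling point concrete, here’s a back-of-the-envelope sketch (mine, not amac’s) of how quickly exponential growth runs into physical limits. Both figures – current AI data center power draw and average world electricity generation – are rough, assumed round numbers, purely for illustration:

```python
# Illustrative arithmetic only: both constants are assumed round numbers,
# not figures from the talk.
AI_POWER_GW = 40        # assumed current AI data-center draw, in gigawatts
WORLD_POWER_GW = 3400   # assumed average world generation (~30,000 TWh/yr)

doublings = 0
power = AI_POWER_GW
while power * 2 <= WORLD_POWER_GW:
    power *= 2
    doublings += 1

print(f"{doublings} more doublings gets to {power} GW "
      f"of the world's ~{WORLD_POWER_GW} GW of generation.")
# With these assumptions: 6 doublings. If each capability jump requires a
# doubling of resources, "just keep scaling" hits a wall within a decade.
```

However rough the numbers, the shape of the curve is the point: each doubling consumes as much new capacity as all the previous growth combined.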
Another question is “how many frontier models will there be?” amac thinks we’re likely to see a world where a bunch of models are, over time, roughly equivalent to each other in capability. Open models are a confounding factor within this space – a number of firms are releasing their models with all the information you’d need to run them. There’s nothing to say that these models will continue to be released, but right now there’s about a six-month lag between frontier models and open models.
There’s also a set of questions around the power of small models – if small, frontier, open-source models end up being extremely capable, how does that change the landscape for regulation? Encryption software is massively distributed around the world, and it’s hard to imagine a regulatory scheme that could meaningfully restrict it – are we headed somewhere similar with small AI models? And how do we square questions of explainability and observability with a huge, diverse set of models?
While this isn’t a roadmap for regulation, it might offer some suggestions about what people will try to regulate. As someone not currently in the regulatory game, amac notes Lina Khan’s observation that “there is no AI exception” to existing laws on the books regarding hiring, housing or other forms of discrimination. amac suggests regulation should focus on real harms, not hypothetical or science-fiction harms. Deepfakes are causing real, meaningful harm right now – that would be an excellent place to start.
We need to bring expertise about these technologies – and the technologies themselves – into government, so that people in government understand what’s really going on. Ideally, something like the AI Bill of Rights will eventually move into influencing law and policy, influenced by agencies like the Federal Trade Commission. Critically, legislation will need to think through all possible scenarios, and to remain aware of when we’re moving from one scenario to another.
Source: https://ethanzuckerman.com/2025/10/08/amac-at-umass-how-do-we-regulate-ai/