
Not a Rational Choice: How AI (Regulation) is Eroding our Autonomy (JSDA Conference 2025)

Federica Fedorczyk

23 Apr 2025

Legal systems as we know them are about to change with the advent of AI, and public regulation worldwide may not be ready.


In my paper, I critically analyze the way AI has been regulated to date, describing how – although the serious risks of certain AI applications are clear – governments and legislators across the globe have deliberately chosen to accept these risks at the expense of fundamental rights. They profess a commitment to human rights while simultaneously justifying the erosion or denial of those very rights.


I claim that this tendency is not uncommon: it permeates not only the relationship between AI and regulators but also that between AI and individuals. At an individual level, we are aware of the serious risks associated with certain AI applications, yet we often forgo self-protection for the lure of immediate benefits. In doing so, we may even compromise our own values, much as governments choose to sideline fundamental rights in order to prioritize other goals.


The appeal of AI’s immediate and long-term benefits – such as enhanced performance, increased efficiency and safety, along with advancements in healthcare and production – gradually reduces our willingness to make sacrifices to uphold our other core values. Though we value our privacy, we freely share personal information with chatbots; though we value our intelligence and critical thinking, we increasingly rely on tools like ChatGPT. Our reasons may vary, but they converge under the same umbrella: performance benefits.

This mirrors a tendency also seen among regulators: while professing to prioritize democracy and individual freedoms, they allow and even promote AI systems that pose significant threats to these very principles. The rationale is often rooted in securing different benefits, such as increased public safety and control, that supposedly contribute to overall societal performance.


In the paper, I critically examine the following persistent misconception: that both individuals and governments make rational and balanced choices when deciding to use AI, weighing its benefits against its risks.

I argue the opposite, advancing the following hypothesis: rational choice is progressively becoming impossible. The very existence of AI – with its significant advantages – subtly erodes our ability to truly choose, as our dependence on these tools increasingly narrows the scope of viable alternatives.


Building on this hypothesis, the paper aims to explore whether AI’s transformation from an optional resource into a perceived necessity – often an assumed and inescapable condition of human life – truly results in a diminished capacity for individual and government autonomy.


In examining this shift, I focus on the possibility that, as individuals increasingly delegate their autonomy, critical thinking, and decision-making to AI, a parallel erosion is occurring at the level of regulators. The latter poses an even greater risk, as it weakens or even dissolves the regulator's essential role as protector of citizens and of society's core values, reshaping the traditional relationship of trust between the governors and the governed. As a result, trust deteriorates both vertically, as individuals lose faith in their governments, and horizontally, as they come to doubt their own and their peers' capacity for self-determination, critical thinking, and autonomy.

Keywords: AI regulation; fundamental rights; autonomy erosion; trust and dependence.


Federica Fedorczyk is a Postdoctoral Emile Noël Fellow at NYU, where she is also an Affiliate at the Information Law Institute (ILI), and a Postdoctoral Research Fellow at Sant’Anna School of Advanced Studies. Starting in June 2025, she will join the University of Oxford as an Early Career Research Fellow at the Institute for Ethics in AI.