Are we finally waking up to AI's ethical demands?
The questions are no longer theoretical. They live in our workplaces, our feeds, our sense of who we are. What we need are Policies, Measurements, and Behaviors (PMBs).
We are no longer speculating about artificial intelligence — we are living its consequences. What once felt like a distant concern is now part of daily life: algorithms shape what we see, influence hiring, monitor workplace behavior, and set the pace of industries. The ethical stakes are now social, economic, and deeply personal.
To meet these ethical demands, we can turn to the See–Judge–Act method—a framework rooted in Catholic social thought and community organizing. By guiding us through observation, evaluation, and action, it offers a path for responding with clarity and moral seriousness.
SEE
Naming what’s already here
Across society, signs of strain are impossible to ignore. Workers in manufacturing, logistics, customer service, journalism, and creative fields are facing role displacement faster than any institution can meaningfully respond.
Privacy, once assumed to be a basic right, is being quietly eroded through surveillance capitalism and the routine aggregation of our most intimate data. Governance structures lag badly behind technological innovation, which means that, for now, the corporations and developers building these systems are effectively setting the ethical norms themselves, without sufficient democratic input or oversight.
The medium is the message.
— MARSHALL MCLUHAN
Marshall McLuhan’s insight feels especially urgent now. AI is not simply a tool we pick up and put down — it reshapes how we perceive knowledge, how we understand authority, and even what it means to be human. The speed, scale, and invisibility of AI systems create environments where ethical reflection can barely keep pace with the next deployment.
We can frame what we’re seeing through four essential lenses:
People
Dignity, agency, and the right to meaningful participation in society.
Planet
The environmental impact of data and energy use.
Purpose
The risk of reducing identity to metrics or profiles.
Prosperity
Widening inequality between those who control AI and those subject to it.
These are not isolated concerns — they are interconnected pressures shaping a new social reality, and they demand an interconnected response.
JUDGE
Interpreting through the wisdom we already have
The monk and writer Thomas Merton warned that the greatest danger of modern society is not simply the accumulation of technological power, but inner fragmentation — a loss of the reflective self. A world driven by efficiency without contemplation risks producing what he called the “false self”: a life disconnected from authentic meaning and groundedness.
Applied to AI, this raises a question we should be asking out loud: Are we designing systems that serve human flourishing — or systems that subtly reshape humans to serve the system’s efficiency?
The philosopher Mortimer Adler, with his emphasis on ethical reasoning and the “great conversation” of civilization, would likely push us further: do our current AI policies reflect genuine moral deliberation, or merely technical problem-solving dressed up in the language of ethics? For Adler, a just society depends on shared understanding of truth, justice, and the common good — not just on innovation.
From the perspective of Catholic social teaching, drawing on the social encyclicals and the documents of Vatican II, four principles offer a moral compass:
Human dignity demands that AI systems never reduce persons to data points or optimize them as variables.
The common good calls for governance structures that benefit all of society, not only a technological elite.
Subsidiarity insists that decisions about how AI is used in communities should involve those communities — not only centralized powers or distant platforms.
Solidarity compels us to attend especially to those most vulnerable to displacement, exclusion, and algorithmic harm.
McLuhan helps us see the environment being created around us. Merton helps us see the soul at risk inside it. Adler helps us recover the discipline of moral reasoning that this moment demands. Together, they point to a central insight: the ethical crisis of AI is not primarily technical — it is anthropological. It concerns who we are becoming.
ACT
Building guardrails for a human future
If we are to move forward responsibly, action must take concrete shape in policies, measurements, and behaviors, supported by a culture that sustains them. None of these can substitute for the others.
Policy
Advance proactive regulation: establish clear, enforceable standards for transparency, accountability, and data protection. Ensure that AI systems are not only auditable and explainable but also rigorously aligned with human rights—beyond just serving market interests.
Measurement
Expand what we count. We need metrics that evaluate AI's impact on human dignity and well-being, the environmental footprint of AI infrastructure, and the equitable distribution of economic benefits.
Behavior
Commit to change at both the individual and institutional levels. Developers, policymakers, educators, and users all share responsibility for cultivating ethical awareness, and for acting on it.
Culture
Build participatory forums where communities help shape how AI enters their lives. Encourage reflection instead of blind adoption. Design AI that augments human capacities rather than replacing them.
Merton would call this last point an inner transformation: a refusal to be passively shaped by the systems around us, and a commitment to act with conscious intention. That is not a soft or optional aspiration — it may be the most urgent demand of our moment.
The goal is clear: we must actively shape AI to serve human needs and values.
We stand at a threshold. The question is not whether AI will transform society — it already is, and the pace will only quicken. The question is whether we will guide that transformation with wisdom, justice, and a commitment to the flourishing of all — especially those with the least power to shape it.
If we fail to act, we risk building systems that diminish the very humanity they were meant to serve. If we succeed, we may discover that this technological revolution, grounded in ethical clarity and human solidarity, can deepen rather than diminish what it means to be human.
That is a future worth working toward.
QUESTIONS TO SIT WITH
In your own life or work, where has AI already changed how you think, decide, or relate to others — and did you notice it happening?
Are the systems you interact with daily designed to serve your flourishing, or to optimize your behavior for someone else's benefit? (Merton's true self versus false self.)
Who in your community is most exposed to the harms of AI-driven displacement, surveillance, or algorithmic bias — and what would solidarity with them actually require?
Merton warns against the “false self” shaped by efficiency culture. In what ways might your relationship with technology be quietly reshaping your sense of identity or worth?
What is one concrete action — in your role as a citizen, professional, educator, or consumer — that you could take this month to help humanize AI rather than simply accept it?
If the “great conversation” Adler envisioned requires shared moral truth, what does it mean that so many of today’s norms around AI are being set by a handful of private companies — and what would democratic AI governance actually look like?

