“As we’re designing the system, we’re designing society. Ethical rules that we choose to put into that design [impact society]… Nothing is self-evident. Everything has to be put out there as something that we think will be a good idea as a component of our society.”
—Sir Tim Berners-Lee, creator of the World Wide Web
As both religious sisters and IT professionals, we share Berners-Lee’s belief in the power of technology to influence society and the need for ethical design to accompany that power. The term algorethics (coined by Fr. Paolo Benanti in 2018) specifically refers to this need for ethics in the design of algorithms, that is, automated processes such as those deployed by artificial intelligence (AI). Lately, a specific concern has caught our attention: the pervasive tendency to deceptively “style” these tools as human or human-like.
Let’s start with what we mean by human-“styled” AI. Many AI tools use a chatbot interface explicitly intended to mimic human conversation, and with it a narrative voice that is inappropriate to their artificial character. For example, when a chatbot generates text such as “I think…” or “I believe…,” it not only implies that an AI can perform actions exclusive to humans (such as thought and belief), but also appropriates the singular first-person pronoun that signals a unique human consciousness. While most users know ChatGPT or Gemini isn’t a person, such human “styling” can corrupt our understanding over time. Even if a user knows that chatbots aggregate data rather than forming opinions, it is a normal and good human reaction to respond to “I think…” by engaging with the other party as a thoughtful being.
The field of personal assistance, especially customer service and tech support, is one of the most prominent arenas for human-styled AI products. These platforms often go further than a chat interface alone, also assigning human-like names and personalities to their tools. Many online stores, for example, will prompt users to chat with “Joe” or “Jane,” “your virtual assistant” wearing a stock-photo profile image and programmed to project a peppy personality.
It is not always obvious to the user that “Joe” or “Jane” isn’t human. We recently experienced this ourselves while reaching out to an Internet service provider through their online “live chat.” The “live agent” promising to “look into it” as it passed us between team members turned out not to be “live” at all, but a chatbot generating new names. Another time, a co-worker of ours got stuck on the phone with an AI voice tool trained to deny being an AI.
Why does this kind of deception matter? In the history of computer science, the possibility that computers could be trained to deceive was actually considered a benchmark of progress. The famous “Turing Test,” proposed by Alan Turing in 1950, asked whether a human judge could distinguish a computer from another human in text exchanges. Modern AI tools, with their fluency in human-like speech, have long since passed the Turing Test, but the ethical implications of that benchmark are only beginning to unfold. If we accept, as a matter of science, that computers can be deployed for deception, we must turn to ethics to hold humans responsible for reversing that trend.
At the same time, it is often perfectly clear to human users when they are interacting with AI tools, even those styled as humans. When a virtual assistant offers a canned response, or when a chatbot produces an instantaneous essay, it is obvious to the vast majority of users that they are interacting with an artificial tool. In these cases, the human “styling” of AI tools is not only deceptive about the nature of AI, but about the nature of human beings. If we accept computers casually using human names, faces, and voices, we risk treating humanity as a costume to be thrown on.
Whether the deception is intentional (deliberately passing off an AI tool as human) or merely the baked-in deception of “styling” an obvious AI with human qualities, it matters. Deception is corrosive to the trust and connection that humans naturally desire to have with one another. To revisit the tech support example: as IT professionals, we know that when someone seeks our help, they are only partly looking for solution-oriented information. They are also seeking empathy and human connection through the stress. AI tools might be able to provide the informational service that tech support workers like us do, and that’s wonderful. But they can’t provide our listening ear, and they shouldn’t be designed to act as if they can.
Companies and users turn to AI support because it’s fast, cheap, and always available. Insofar as that frees human workers from expectations of constant availability, it isn’t necessarily a bad thing. But human-styled AI also exploits a real, beautiful need of the human heart: the need for a compassionate companion in life who is always there for us, ready to give us full attention on the spot. In other words, there is in each of us an innate desire for God, the only ever-available, all-knowing One who cares for us more than any machine could ever mimic.
Ultimately, that exploitation is the real reason that the deceptive “styling” of AI is so harmful. It’s not just promising to do what only a human can do, which is offer true connection. It’s promising to do what only God can do: to be always present, always knowing, always available.
What should change? The human “styling” of AI tools is a choice, which is good news—it’s something that humans decided, and we can decide differently. We can and should choose to take the deception out of these tools. ChatGPT’s success proves AI products don’t need human names, and large-language models can be instructed to eliminate anthropomorphic language. But the real change, as always, needs to take place in the human mind and heart. In that spirit, technology scholar Stephanie Hare has proposed a Hippocratic Oath for scientists and technologists, rooted in medical ethics.
In 2020, the Pontifical Academy for Life convened global leaders for a conference that led to the creation of the “Rome Call for AI Ethics,” with Brad Smith (President of Microsoft) and John Kelly III (Executive Vice President of IBM) among its first signatories.
“Pointing to a new algorethics, the signatories committed to request the development of an artificial intelligence that serves every person and humanity as a whole; that respects the dignity of the human person, so that every individual can benefit from the advances of technology; and that does not have as its sole goal greater profit or the gradual replacement of people in the workplace.”
Cisco also signed in 2024. Chuck Robbins, Chair and CEO of Cisco, said:
“For nearly 40 years Cisco has built the networks that connect people and organizations across the globe, and today we are building the critical infrastructure and security solutions that will power the AI revolution…. The Rome Call principles align with Cisco’s core belief that technology must be built on a foundation of trust at the highest levels in order to power an inclusive future for all.”
As religious sisters whose mission is to evangelize through media, we see technology design as a sacred responsibility—a vocation that demands care for the human beings whose lives these products shape.
Resources for Further Exploration:
- Two books that offer perspectives informed by algorethics: Technology Is Not Neutral by Stephanie Hare and This Is for Everyone by Tim Berners-Lee
- Two films that explore the pitfalls of placing human expectations on non-human technologies: Transcendence (2014) and A.I. (2001)
- Two websites that aim to inspire and equip developers and organizations with shared ethical principles to build into their AI projects: Algorethics AI Library and RenAIssance Foundation
Acknowledgments: Special thanks to Sister Khristina and to Russell Luchin, a fellow member of the IT team at Pauline Books & Media, for their insights on this subject.
AI Usage Disclosure: Before writing, we used Claude AI to organize our notes.
Image by Brian Penny from Pixabay