Earlier this year I posed a question to ChatGPT: Is AI spiritual? The bot’s response: “AI does not have a soul. Souls are typically associated with consciousness and spirituality, which AI lacks. AI systems are complex algorithms and data processing tools, but they do not possess self-awareness or emotions like humans do.” 

ChatGPT went further to declare that even artificial general intelligence—should we achieve it—would not be inherently spiritual. Rather, spirituality is “subjective,” personal, related to “consciousness, beliefs, emotions, and experiences that are typically associated with human beings and their connection to a higher power or a sense of purpose.” In sum, ChatGPT identifies, conflates, and confuses several concepts: spirituality, soul, emotions, self-awareness, connection to a higher power, and sense of purpose—all human traits (says the bot), and all lacking in ChatGPT.

We All Have a Stake in This

I am an ethics advisor, teacher, and inquirer (full disclosure: I am neither an expert in religion nor a technologist). I explore effective pro-innovation approaches to problem-solving, integrating ethics into our everyday decision-making. In that light, I raise questions here about AI and spirituality and invite readers to pursue their own. My interest is in democratizing ethics—every one of us has a stake in decisions about AI that society is making, particularly with such deeply personal considerations as spirituality.

On to these questions about AI: could these machines at some point have a spiritual life? What would that mean? How would we know? Since we regard AI as a human creation, could we make it spiritual—instill in it a larger sense of purpose or quest for meaning greater than itself? The conundrum: doing so would put us humans in the role of creator of spirit—the role of higher power. We would be giving ourselves the divine privilege of conferring spirit (or spirituality) onto a physical entity. If so, wouldn’t we be responsible for the consequences of unleashing unprecedented interactions between humans and machines?

How Do We Stay in Control?

Recently, participants in a YDS advisory council retreat (myself included) were given a timely assignment: reading The Coming Wave: Technology, Power, and the 21st Century’s Greatest Dilemma by AI pioneer Mustafa Suleyman, who focuses on a critical question: how do we stay in control of powerful new technologies? Some experts argue that the technology will not be able to overtake humans and potentially harm us—at least not any time soon. But it is unclear whether such viewpoints consider AI’s spiritual potential, what the nature of that power could be, or what new conflict with humans it might provoke.

Some world religious leaders have lately underscored Suleyman’s plea that we increase our control over the technology so that AI does not become a reckless power operating beyond our rational understanding and potentially harming humans. Pope Francis’s message for the 57th World Day of Peace (2024) argues for AI in the promotion of “human development” and inclusivity—technological innovation firmly anchored in human life and value. “Fundamental respect for human dignity demands that we refuse to allow the uniqueness of the person to be identified with a set of data,” the statement declares. I’ve heard a Buddhist perspective that we have an obligation as humans to develop AI to actively reduce human suffering—not just to avoid potential harm from AI. Most experts I have consulted believe AI “spirituality” is not spiritual at all but rather an exercise in data analysis—an intellectual matter, created by humans. As ChatGPT indicated, we could “program” or “design” AI to “understand and discuss spiritual concepts … based on algorithms and data rather than genuine spiritual experiences.”

Ethics ≠ Spirituality

In my own work I try to position ethical issues of AI alongside questions about spirituality and religion. Ethics is not a substitute for spirituality. Nor should spirituality be invoked to excuse ethical lapses. They do different dances with each other, and of course spirituality can be a source of ethics guidance for many people. Various important risks stemming from AI breakthroughs—issues of privacy, bias, errors, skewed data sets, inaccurate policing, identity falsification—all in my view need both regulation and ethical oversight above and beyond the law, in addition to spiritual engagement.

Here I confess to being a staunchly pro-technology ethics explorer. I am as concerned about the ethical responsibility of failure to adopt AI and make it widely accessible as I am about the risks of deploying it. It’s easy from a perch of privilege (like mine) to say that we should reject driverless cars unless they are perfectly safe, or forsake AI diagnostics unless they are overwhelmingly accurate, or renounce machines that substitute for therapists or friends on occasion. For those living in a country with weakly enforced road rules and licensing, disproportionate numbers of deaths from auto accidents, or unreliable access to care in the case of highway mishaps, driverless cars look compelling. Similarly, not everyone has access to good mental health care or confidantes. So who am I to issue cautions about slowing the progress of AI options?

AI Is Still a Tool, for Now

Thus, aside from speculation about AI spirituality, there are countless day-to-day ethical impacts of AI to identify and assess—ranging from its effect on congregational life to the management of a fast-food restaurant chain. We know AI can search religious texts, translate, analyze data on consumer (or parishioner) trends, interpret financial information (sales, donations), and answer queries. Ethical transgressions committed by AI, whether plagiarizing a sermon or misrepresenting financial information to donors, are ultimately traceable to human decisions and actions. Those humans designing, building, and deploying the AI—not the AI itself—are responsible for these legal and ethical violations. None of these ethical considerations depends on speculation about AI spiritual power. Even in humanoid form like the robot Sophia, AI is still a tool—more like a vacuum cleaner than a human replicant or a divine substitute.[4]

If AI is neither human nor a spiritual being but can functionally substitute for humans in certain circumstances (flipping burgers, offering companionship), a further question is whether AI could functionally substitute for human leaders (including spiritual leaders). Could AI ministers lead congregations—even become ordained, an ecclesial version of today’s AI mental health therapists or AI executive coaches?

So many critical questions are too complex for this short piece. Could AI achieve spiritual/religious neutrality—untethered from any specific institutional religion? Will AI so alter our view of what it means to be human, and of our place in the universe, that AI-generated knowledge will overtake our perennial quest for a higher power? How will AI influence national and global conflicts, particularly where religion plays a significant part? Could AI systems aid us in furthering compassion, community, and, indeed, spiritual values? Could AI integrate different, even conflicting, ethnic, cultural, and religious views of spirituality and create a new synthesis of values and accord?

Here I end where I started—with humility, questions, a commitment to ethical components in AI-related problem-solving, and a keen interest in the views of others. What do you think?

Author(s)

  • Dr. Susan Liautaud is an innovative ethics and resilience expert, keynote speaker, author, and professor. With over 25 years of experience, she advises global leaders, companies, governments, and organizations on complex ethics and governance challenges, specializing in the ethics of technology, including AI. Her firm, Susan Liautaud & Associates, offers practical, solution-oriented ethics advice applicable in both professional and personal contexts.
    Susan chairs the London School of Economics and Political Science (LSE) Council and teaches ethics at Stanford University. She is the author of The Power of Ethics and The Little Book of Big Ethical Questions. She also serves on advisory boards such as Stanford HAI, SAP’s AI Ethics Advisory Panel, and the Carnegie Endowment for International Peace. Additionally, Susan founded The Ethics Incubator, a non-profit platform that explores ethical questions through interviews with renowned artists and leaders. Her academic credentials include a PhD in Social Policy from LSE, a Juris Doctor from Columbia Law School, and degrees from Stanford University.