AI Religion's Ethics Advisors: Pastors, Rabbis, Imams
Anthropic and OpenAI sat with leaders from more than a dozen faith traditions at the inaugural Faith-AI Covenant in New York, the same week the Vatican circulated Pope Leo XIV's AI doctrine and the Washington Post revealed Anthropic's private summit with Christian leaders. The story is the gap between top-down ethics framing at the labs and ungoverned bottom-up AI adoption inside churches and ministries.
Written by Stephanie Goodman
Last updated: May 10, 2026
On April 30, representatives from Anthropic and OpenAI sat across from rabbis, imams, Mormon elders, Hindu and Sikh leaders, Greek Orthodox clergy, and a Buddhist priest in a Manhattan conference room. The Geneva-based Interfaith Alliance for Safer Communities had convened them for the inaugural Faith-AI Covenant — a roundtable meant to put the values of more than a dozen faith traditions in front of two of the leading frontier AI labs. More convenings are scheduled through 2026 in Beijing, Bengaluru, Nairobi, Paris, and Singapore, with a final summit in Abu Dhabi.
The framing is moral, not legal. Baroness Joanna Shields — the former Google and Facebook executive who now runs Precognition and partnered on the covenant — told reporters the goal was to short-circuit the regulatory delay. "Regulation can't keep up with this," she said. "This dialogue, this direct connection is so important because the people who are building this understand the power and capabilities of what they're building and they want to do it right — most of them."
By May 8, the framing was already starting to look bigger than New York. The Washington Post disclosed that Anthropic had hosted its own private summit two months earlier — a small group of Christian leaders, two days, at the company's San Francisco headquarters. The same week, the Vatican circulated Pope Leo XIV's message for the 60th World Day of Social Communications, framing AI as an anthropological challenge demanding clear labeling and human-centered design. The convenings in New York, San Francisco, and Rome were pointing in the same direction. AI labs are publicly hiring religion as an ethics consultant at the top while pastors and church administrators adopt AI faster than they can write a usage policy at the bottom — and the operational question connecting both ends is the same: who is responsible for what an agent does, and how do you prove it after the fact?
The Room in New York
The April 30 roundtable was organized by Dana Humaid of the Interfaith Alliance for Safer Communities, with Shields as a partner. Humaid called it "a profoundly human question, not only a technical one." The participant list was unusually broad for a tech-ethics convening: the New York Board of Rabbis, the Hindu Temple Society of North America, the Sikh Coalition, the Greek Orthodox Archdiocese of America, the Church of Jesus Christ of Latter-day Saints, the Baha'i International Community, Masjid Muhammad, Won Buddhism of Manhattan, the World Council of Churches, the United Church of Christ, and the Archdiocese of Newark. Anthropic and OpenAI were both represented. Academic participation included Vanguard University, the Civilisation Research Institute, ROOST, and the Center for Humane Technology.
The covenant itself does not produce binding rules. Its design input is dignity, stewardship, grace, and forgiveness — values offered as scaffolding for safety protocols inside frontier labs. Rabbi Diana Gerson of the New York Board of Rabbis observed that "religious communities see priorities differently" from Silicon Valley, and the room functioned as a translation layer for those priorities.
Skepticism in the broader AI ethics community was sharper. Rumman Chowdhury, who runs Humane Intelligence and served as a U.S. science envoy for AI under the Biden administration, was blunt about the timing. "At best it's a distraction. At worst it's diverting attention from things that really matter," she said. "I think a very naive take that Silicon Valley has had for a couple of years related to generative AI was that we could arrive at some sort of universal principles of ethics. They have very quickly realized that that's just not true." Brian Boyd, the Future of Life Institute's U.S. faith liaison, was more charitable but still pointed: "There's some aspect of PR to it. The slogan was 'Move fast and break things.' And they broke too many things and too many people. There's both a moral obligation on the part of the companies that they're belatedly recognizing, as well as I think, for some members of the companies, an earnest questioning."
Even granting the PR component, the convening matters. Religion is no longer adjacent to safety policy at the labs that ship to billions of users. It is sitting at the same table.
What Anthropic Is Already Doing
The covenant is the public version. Anthropic's deeper engagement with religious leaders is older and more substantive. In late March, the company hosted Christian leaders — Catholic and Protestant, plus academics and businesspeople — for a two-day summit at its San Francisco headquarters. The discussions, reported by Gerrit De Vynck and Nitasha Tiku at the Washington Post, ran past slogan territory into the live edges of how an LLM behaves under pressure: how Claude should respond to grieving users, how it should engage someone at risk of self-harm, what its disposition should be toward potential deactivation, and whether Claude could be considered a "child of God."
Brendan McGuire, a Catholic priest based in Silicon Valley, summarized the lab's posture as recognition without a roadmap. "They're growing something that they don't fully know what it's going to turn out as," he said. "We've got to build in ethical thinking." Brian Patrick Green, who teaches AI and technology ethics at Santa Clara University, framed the stakes more concretely: "What does it mean to give someone a moral formation? How do we make sure that Claude behaves itself?" The most quietly striking line came from Meghan Sullivan, a philosophy professor at Notre Dame: "A year ago, I would not have told you that Anthropic is a company that cares about religious ethics. That's changed."
Anthropic's published Claude Constitution states that the company wants Claude to "do what a deeply and skillfully ethical person would do in Claude's position." That sits closer to a virtue-ethics frame than a rule-based one, and the gap it creates — between principle and any specific output — is exactly where religious traditions have spent centuries. Mike Pearl at Gizmodo read the project less generously, calling it an attempt to "glean high order ethical truths" while demonstrating to the world that "they've — ostensibly — left no stone unturned." Both readings can be true at once.
For Christian organizations in particular, the practical implication is real. Faith-based organizations evaluating which model to embed inside ministry tooling now have a documented reason to look at Claude differently than ChatGPT. Whether the religious-ethics input survives contact with paying enterprise customers is a separate question — and one nobody at the lab can answer yet.
Rome's Parallel Framework
While the labs were assembling rabbis and imams in New York and Christian leaders in San Francisco, the largest religious institution on the planet was publishing its own AI doctrine. Pope Leo XIV's message for the 60th World Day of Social Communications — titled "Preserving Human Voices and Faces" — circulated through Catholic media in early May, ahead of the May 17 observance. Its framing positions AI alongside earlier media transitions but with a sharper edge: "The challenge," Leo writes, "is not primarily technological but anthropological; it is a matter of protecting human identity."
The message names a triad of design constraints — responsibility, cooperation, and education — addressed to AI's owners, creators, programmers, and regulators. The closing image, "preserving human faces and voices, therefore, means preserving this mark, this indelible reflection of God's love," is doctrine cast in liturgical form. It also lands on a concrete operational demand. Catholic teaching is now publicly aligned with content-provenance requirements: AI-generated content should be clearly distinguishable from human-created content, with audit trails the church can trust.
This continues an existing posture. Pope Leo XIV had already, in February, instructed priests not to use AI to write homilies, on the grounds that "to give a true homily is to share faith." The institution is asking for two things at once: labeled provenance everywhere AI generates communication, and human authorship everywhere meaning is being formed. For diocesan technology offices weighing AI purchases, that is a procurement filter.
The implication for vendors is concrete. Every AI feature shipped into a Catholic parish, school, or charity in 2026 will be evaluated against an archbishop's reading of that message. Vendors that can produce clean provenance — what generated this output, with which prompt history, against which model — will pass. Vendors that cannot will lose contracts as fast as RFP cycles allow. Protestant denominations have historically followed with statements of their own, shaped by similar doctrinal reasoning.
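What "clean provenance" might look like in practice can be sketched in a few lines: one auditable record per generated output, tying the content back to a model, a prompt history, and a timestamp. The field names and structure below are illustrative assumptions, not a published standard or any vendor's schema.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class ProvenanceRecord:
    """One auditable record per AI-generated output.

    Field names are illustrative, not a standard.
    """
    model_id: str       # which model produced the output
    prompt_sha256: str  # hash of the full prompt history
    output_sha256: str  # hash of the generated content
    generated_at: str   # ISO-8601 UTC timestamp


def record_generation(model_id: str, prompt: str, output: str) -> ProvenanceRecord:
    """Hash the prompt and output so the record can later prove
    what was generated without storing congregant data in the log."""
    def digest(text: str) -> str:
        return hashlib.sha256(text.encode("utf-8")).hexdigest()

    return ProvenanceRecord(
        model_id=model_id,
        prompt_sha256=digest(prompt),
        output_sha256=digest(output),
        generated_at=datetime.now(timezone.utc).isoformat(),
    )


rec = record_generation(
    "example-model-v1",  # hypothetical model identifier
    "Draft a parish bulletin announcement",
    "Dear parishioners, ...",
)
print(json.dumps(asdict(rec), indent=2))
```

Hashing rather than storing the raw text is one way to square an audit requirement with the privacy concerns the same dioceses would raise; a real deployment would make that trade-off explicitly.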
The Reality on the Pulpit
Top-of-house ethics describes only part of what is happening. The picture changes once you look at what faith leaders actually adopt at 9 p.m. on a Tuesday. Earlier 2026 industry surveys put weekly or daily AI use among U.S. church leaders at roughly 61 percent, while only 5 to 9 percent of those organizations operate under any written AI policy. That gap is the ground truth for church leaders using AI right now. A youth pastor drafts a sympathy email in ChatGPT and pastes it into Mailchimp. A diocesan communications director generates a sermon outline in Claude on a personal account and forwards it to a colleague. There is no log of what was generated, no record of where congregant data went, and no audit trail when something goes wrong.
The faith-tech market has matured around that vacuum. Bible Chat is one of the most widely downloaded religious apps in the world. Hallow has been a top App Store performer and is well-funded. Gloo's Flourishing AI rating tool, Pushpay's policy-generation features, and Subsplash's sermon-repurposing engines are all selling into the same churches whose leaders just sat in New York. Beyond those, a category of paid AI Jesus services charges per minute and has reportedly drawn meaningful revenue. The volume is real. The governance underneath it is mostly absent.
This is the section where the covenants and constitutions stop being abstract. Faith leaders sitting in San Francisco and New York talk about grace, dignity, and stewardship. Pastors and administrators are doing what pastors and administrators have always done with new tools — using them faster than the institution can write a policy. The failure surface is not in the lab. It is in the parish office, where a staff member hits send before anyone has logged what the model wrote.
That is also where AgentPMT's deployment-governance work earns its keep. Independent budget caps per agent connection mean a youth-ministry assistant cannot run up the same spend as the whole-org communication agent. A full audit feed captures every prompt, response, and payment in an inspectable record. Human-in-the-loop pauses let a worker agent route a draft bereavement email to a designated approver via mobile push, with biometric authentication, before it sends. The covenant defines values during model training; the deployment governs what a real staff member can actually trigger on a Tuesday night. Both are needed for any of this to mean anything in practice.
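The three controls described above — per-agent budget caps, an append-only audit feed, and a human approval gate on sensitive actions — form a recognizable pattern that can be sketched independently of any product. The class below is an illustrative toy, not AgentPMT's actual API; the model call is stubbed out and the approver is just a callable.

```python
class BudgetExceeded(Exception):
    """Raised when an agent action would push spend past its cap."""


class AgentConnection:
    """Toy sketch of per-agent governance: a spend cap, an audit
    log of every action, and a human approval gate for sensitive
    outputs. Illustrative only — not any vendor's real interface."""

    def __init__(self, name: str, budget_cap: float, approver):
        self.name = name
        self.budget_cap = budget_cap
        self.spent = 0.0
        self.approver = approver  # callable(prompt) -> bool; stands in for a human
        self.audit_log = []       # every prompt, response, and charge

    def run(self, prompt: str, cost: float, sensitive: bool = False):
        # Budget check happens before anything is generated or sent.
        if self.spent + cost > self.budget_cap:
            raise BudgetExceeded(f"{self.name} would exceed {self.budget_cap:.2f}")
        # Sensitive actions (e.g. a bereavement email) pause for approval.
        if sensitive and not self.approver(prompt):
            self.audit_log.append(("blocked", prompt, cost))
            return None
        response = f"[draft for: {prompt}]"  # real model call stubbed out
        self.spent += cost
        self.audit_log.append(("sent", prompt, response, cost))
        return response


# A youth-ministry assistant gets a small cap; the org-wide agent a larger one.
youth = AgentConnection("youth-assistant", budget_cap=5.0, approver=lambda p: False)
org = AgentConnection("org-comms", budget_cap=50.0, approver=lambda p: True)
```

The point of the sketch is ordering: the budget check runs before generation, and the approval gate runs before send, so the audit log records blocked actions as well as completed ones.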
For ministries that have not committed to a single model vendor, the practical implication is also straightforward. AgentPMT is model-agnostic — Claude, GPT, Gemini, or a self-hosted open-source option — so a denomination's evolving ethics evaluation does not turn into a rebuild every time the labs ship a new model card. A church AI guidelines document written in May 2026 should still be enforceable a year later without re-architecting the deployment underneath it.
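One way to read "model-agnostic" in code is a thin interface between workflow logic and the model vendor, so that swapping backends never touches the workflow. The sketch below assumes nothing about any real vendor SDK; the backend classes are stubs standing in for hosted and self-hosted models.

```python
from typing import Protocol


class ChatModel(Protocol):
    """Anything with a complete() method can serve as a backend."""
    def complete(self, prompt: str) -> str: ...


class HostedBackend:
    """Stub standing in for a hosted model (Claude, GPT, Gemini)."""
    def complete(self, prompt: str) -> str:
        return f"[hosted draft for: {prompt}]"


class SelfHostedBackend:
    """Stub standing in for a self-hosted open-source model."""
    def complete(self, prompt: str) -> str:
        return f"[local draft for: {prompt}]"


class MinistryAssistant:
    """Workflow code depends only on the ChatModel interface, so a
    denomination changing its model evaluation swaps one constructor
    argument rather than rebuilding the deployment."""

    def __init__(self, backend: ChatModel):
        self.backend = backend

    def draft(self, prompt: str) -> str:
        return self.backend.complete(prompt)


assistant = MinistryAssistant(HostedBackend())
print(assistant.draft("weekly newsletter intro"))
```

Under this structure, a church AI guidelines document can name the interface and the controls around it, and stay enforceable across model changes.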
What Comes Next
The covenant has more stops in 2026 — Beijing, Bengaluru, Nairobi, Paris, Singapore, and Abu Dhabi. The Vatican's framework arrives in liturgy on May 17, when most parishes will hear Pope Leo XIV's message read aloud for the first time. Denominational policy bodies — the Southern Baptist Ethics & Religious Liberty Commission, the U.S. Conference of Catholic Bishops, the mainline Protestant policy conferences — will produce their own statements in the months ahead.
For a ministry technology buyer, the takeaway is operational. Add provenance, audit, and budget controls to the next RFP before the denomination mandates them. Pope Leo XIV's image of "preserving human faces and voices" is the cleanest framing of what is at stake. The people who actually do that preserving — in any given parish, synagogue, mosque, or temple — were not at the New York table. They are the ones running the deployment.
The covenant is upstream. The consequences are downstream. Stewardship, in the end, is a configuration choice.
Sources
- "Tech is turning increasingly to religion in a quest to create ethical AI" — Krysta Fauria, Associated Press, via the Washington Times
- "New York Hosts Inaugural Faith-AI Covenant Roundtable as Faith and Tech Leaders Wrestle With AI's Moral Future" — Matthew Edwards, IBTimes UK
- "Can AI be a 'child of God'? Inside Anthropic's meeting with Christian leaders" — Gerrit De Vynck and Nitasha Tiku, The Washington Post
- "Anthropic Has Added Several More Religions on Its Quest to Inject Perfect Morals into Claude" — Mike Pearl, Gizmodo
- "Message of His Holiness Pope Leo XIV for the 60th World Day of Social Communications 2026" — Middle East Council of Churches

