Religion Emerges as Unlikely Guide in Push for Ethical AI

publish time

09/05/2026

Prayer beads are photographed over a screen displaying binary code in Phoenix, Friday May 8, 2026. (AP Photo/Dario Lopez-Mills)

LOS ANGELES (AP), May 9: As concerns mount over artificial intelligence and its rapid integration into society, tech companies are increasingly turning to faith leaders for guidance on how to shape the technology — a surprising reversal of Silicon Valley’s longstanding skepticism of organized religion.

Leaders from various religious groups met last week with representatives from companies including Anthropic and OpenAI for the inaugural “Faith-AI Covenant” roundtable in New York to discuss how best to infuse morality and ethics into the fast-developing technology. It was organized by the Geneva-based Interfaith Alliance for Safer Communities, which seeks to take on issues such as extremism, radicalization and human trafficking. The roundtable is expected to be the first of several around the globe, including in Beijing, Nairobi and Abu Dhabi.

Tech executives need to recognize their power — and their responsibility — to make the right decisions, said Baroness Joanna Shields, a key partner in the initiative. She worked as a tech executive with stints at Google and Facebook before pivoting to British politics.

“Regulation can’t keep up with this,” she said. But the leaders of the world’s religions, with billions of followers globally, have the “expertise of shepherding people’s moral safety,” she reasoned. Faith leaders ought to have a voice, Shields said.

“This dialogue, this direct connection is so important because the people who are building this understand the power and capabilities of what they’re building and they want to do it right — most of them,” she said of AI tech executives.

The goal of this initiative, according to Shields, is an eventual “set of norms or principles” informed by different groups and faiths, from Christians to Sikhs to Buddhists, that companies will abide by.

Challenges

Present at the meeting were a variety of faith groups, including representatives from the Hindu Temple Society of North America, the Baha’i International Community, The Sikh Coalition, the Greek Orthodox Archdiocese of America and The Church of Jesus Christ of Latter-day Saints, widely known as the Mormon church.

Before these companies initiated outreach, some traditions had issued their own ethical guidance on using AI. The Church of Jesus Christ of Latter-day Saints has given a qualified approval of the technology in its handbook. “AI cannot replace the gift of divine inspiration or the individual work required to receive it. However, AI can be a useful tool to enhance learning and teaching,” it reads.

The Southern Baptist Convention, the largest Protestant denomination in the U.S., passed a resolution in 2023: “We must proactively engage and shape these emerging technologies rather than simply respond to the challenges of AI and other emerging technologies after they have already affected our churches and communities.”

One challenge in creating a list of common principles is that global faiths, despite common ground, differ in their values and needs. “Religious communities see priorities differently,” said Rabbi Diana Gerson, a roundtable participant and the associate executive vice president of the New York Board of Rabbis.

The partnership highlights a growing coalition between faith and tech, born out of an effort to create moral AI — a contested concept that raises questions about whether such a thing is possible and what it would mean.

“We want Claude to do what a deeply and skillfully ethical person would do in Claude’s position,” Anthropic states in the public “Claude Constitution” written for its chatbot. That constitution was made with the help of a host of religious and ethics leaders.

In this burgeoning alliance, Anthropic has been the most assertive, at least publicly, in its efforts to court faith leaders. The move follows a public dispute earlier this year with the Pentagon over military use of artificial intelligence, after Anthropic said it would restrict its technology from being used to develop autonomous weapons or for mass surveillance of Americans.

“There’s some aspect of PR to it. The slogan was ‘Move fast and break things.’ And they broke too many things and too many people,” said Brian Boyd, the U.S. faith liaison for the nonprofit Future of Life Institute. “There’s both a moral obligation on the part of the companies that they’re belatedly recognizing, as well as, I think, for some members of the companies, an earnest questioning.”

By KRYSTA FAURIA (AP)