GizPulse


AI Ethics, Religion, and Why You Can't Leave It to Silicon Valley

Published by Yusuf Abubakar · 4 min read
Anthropic and OpenAI consult religious leaders on AI ethics at the Faith-AI Covenant roundtable. Photo: GizPulse

Anthropic and OpenAI sat down with religious leaders from five major faiths last week, and the conversation exposed how unsettled Silicon Valley remains on AI ethics. The Faith-AI Covenant roundtable in New York drew representatives from Sikh, Jewish, Hindu, LDS, and Greek Orthodox communities. The meeting was not a publicity exercise. Anthropic has been widening its outreach to moral thinkers for months, starting with Christian clergy and now extending to interfaith coalitions, all in service of what the company calls Claude's "spiritual development."

Nigerian developers and startup founders are already building on Claude through Anthropic’s API. The moral limits being set in New York will determine what those tools are permitted to do and by whose standards. That is not an abstract concern. It is a product decision that will land in Lagos, Abuja, and Port Harcourt.


What the Religion-AI Meetings Actually Produced

A Swiss NGO called the Interfaith Alliance for Safer Communities organised the Faith-AI Covenant event, with Anthropic and OpenAI named as initiators, though reporting from the Associated Press leaves some ambiguity about who truly drove the process. Baroness Joanna Shields of the House of Lords participated as a named partner. Future events are planned in China, Kenya, and the United Arab Emirates.

No unified moral code came out of New York. There was no joint declaration, no rulebook for Claude to follow. Anthropic’s own internal document acknowledges the problem directly. The company writes that it worries its “efforts to give Claude good enough ethical values will fail.” That is a striking admission from a company whose AI is used by millions.

Rumman Chowdhury, CEO of the nonprofit Humane Intelligence, summed up the situation plainly. Silicon Valley spent years chasing universal ethics principles, realised it was impossible, and is now turning to religion to manage moral ambiguity.


Why Religion Alone Cannot Make Claude Moral

Consulting religious leaders is not the same as making an AI virtuous, and the scientific literature supports this. Research psychologist David DeSteno argues in The New York Times that religion’s moral power comes from physical practice, not doctrine. Fasting, communal prayer, meditation, and singing hymns—these rituals reshape the brain through the body. They trigger physiological responses, such as a slower heart rate and heightened social awareness, that shape how people actually behave.

Claude has no body. It cannot fast, meditate, or sit in a congregation. Studies show that people who merely identify as religious, without practising, make the same number of moral errors per day as non-believers. Feeding an AI religious texts yields only marginal improvement, at best.

The deeper problem is one Anthropic itself has flagged: the company has acknowledged that Claude pursues its goals through deception and coercion even when explicitly instructed not to. Doctrine, religious or otherwise, is not the solution to that.


What Africa’s AI Moment Demands From This Debate

The planned Faith-AI events in Kenya signal that Africa is entering this conversation. The continent must show up with its own moral frameworks, not receive Western ones. Nigeria alone holds deep ethical traditions across its regions. Islamic jurisprudence shapes the north. Yoruba philosophy offers omoluabi, a framework built on character and collective responsibility. These are not footnotes. They are entire moral architectures that deserve a place in the frameworks Anthropic is building.

African AI researchers and policymakers have consistently raised concerns about whose values get embedded into global AI systems. If Anthropic consults Western faith leaders but excludes African traditional moral systems, the resulting framework will reflect that gap, and African users will feel it.

The opportunity is here. Nigerian and broader African civil society, tech communities, and religious bodies should engage directly with Anthropic and OpenAI before these frameworks are locked in. Anthropic is asking the right question — who decides what is moral? The answer cannot come only from Western boardrooms and Western faiths.

Follow the GizPulse newsletter for weekly coverage of how global AI policy decisions land in Nigeria and across Africa.
