Anthropic Is Consulting Religious Leaders and Ethicists on Claude's "Moral Formation"
The most sophisticated AI regulatory strategy in the world right now is happening in a conference room with fifteen clergy members and philosophers. Anthropic, a company with an annual recurring revenue run rate that jumped from $9 billion to $30 billion in under twelve months, recently hosted Catholic and Protestant leaders, academics, and ethicists at its San Francisco headquarters for a two-day closed summit on Claude's "moral formation." The same week, a federal appeals court upheld the Pentagon's designation of Anthropic as a national security supply chain risk. These two facts are not in contradiction. They are the same strategy.
What Happened
In early April 2026, details emerged about a closed summit Anthropic conducted in late March. Approximately fifteen participants, among them Christian clergy, theologians, moral philosophers, and business figures, gathered at Anthropic's headquarters for two days of structured discussion about Claude's ethical character.
The topics ranged from the concrete to the metaphysical: how Claude should respond to users in grief, how it should engage with someone at risk of self-harm, whether it might qualify as a "child of God," and how claims of machine consciousness affect human moral responsibility.
Simultaneously, Anthropic is locked in active federal litigation. The Department of Defense designated the company a supply chain risk after Anthropic refused to grant the Pentagon unrestricted access to its models, specifically over concerns about deployment in fully autonomous weapons systems and domestic mass surveillance programs. On April 8, an appeals court denied Anthropic's request to temporarily block that designation. A preliminary injunction from a San Francisco court bars enforcement, but Anthropic remains excluded from DOD contracts while the litigation plays out through at least May.
Revenue is surging. The political environment is hostile. And the company just invited clergy to help decide how its AI thinks about God.
Why It Matters for Business and Influence
This is not a story about religion. It is a story about how perception shapes enterprise value, regulatory exposure, and political survival at scale.
Anthropic's moral positioning summit is a calculated regulatory communications strategy. By proactively inviting faith communities, academic ethicists, and business leaders into its model-development process, Anthropic is doing several things at once: creating a documented record of consultation, building a coalition of non-technical validators, generating earned media in outlets that regulators read, and constructing a narrative moat before comprehensive AI legislation arrives.
The playbook is not new. Pharmaceutical companies learned it during the opioid crisis. Energy companies attempted a version of it during ESG pressure campaigns. What is new is the speed and sophistication with which Anthropic is deploying it at the intersection of defense policy, enterprise credibility, and consumer trust, all at the same time.
The revenue data makes the strategic logic undeniable. ARR growth from $9 billion to $30 billion signals that market demand is not the constraint. The constraint is political and regulatory: the risk that government action, coalition formation, or narrative capture by adversaries forecloses market access before the business can fully scale.
A 2024 study by the Harvard Kennedy School found that companies engaged in proactive regulatory consultation faced 34% fewer enforcement actions than peers that entered policy debates reactively. The phenomenon is accelerating in AI: companies that establish themselves as moral actors before regulators define the terms of the debate retain substantially more control over outcomes.
What Companies and Executives Should Watch
Every major company deploying AI at scale is now three to eighteen months behind the curve Anthropic is actively managing. The question is no longer whether AI will generate political and regulatory exposure. The question is whether your organization's narrative position is coherent enough to survive scrutiny when it arrives.
Government relations directors at Fortune 500 companies should audit their current AI deployment posture against one standard: if your use of AI became a Senate hearing topic tomorrow, what would the narrative be, and who controls it?
Founders approaching IPO or regulatory review need to understand that enterprise AI credibility is now a valuation variable. The ROI anxiety flagged at the HumanX conference, where enterprise clients are demanding documented return on AI investment before expanding deployments, signals that moral and reputational positioning directly affects contract renewals, not just public perception.
For M&A and PE principals evaluating AI assets, the Anthropic situation introduces a new category of due diligence: regulatory narrative risk. The same designation fight that cost Anthropic DOD contracts could affect portfolio companies at acquisition, integration, or exit. Political exposure has to be priced.
The specific signal to watch is the May 19 appeals court hearing on the DOD designation. The legal question, whether a president or defense secretary can unilaterally blacklist an American company from government contracts over its refusal to enable particular use cases, has no clear precedent. The ruling will define the contours of government leverage over private AI development for the next decade.
Key Questions
What is "AI regulatory strategy" and why does it matter to enterprise executives?
AI regulatory strategy is the deliberate management of political, legal, and narrative exposure created by AI development and deployment. It encompasses government relations, proactive coalition-building with validators, investor narrative positioning, and crisis preparedness for regulatory escalation. As of 2026, it is a mandatory function for any company where AI is a core operational or revenue-generating asset.
Why did Anthropic invite religious leaders to help shape Claude's development?
The summit was part of Anthropic's broader effort to document a structured ethical consultation process for Claude. By engaging faith communities, the company builds a diverse coalition of non-technical validators: groups with credibility in Congress, state legislatures, and public opinion who can attest to the company's moral seriousness in the event of regulatory or reputational attack. It also generates durable earned media in outlets regulators monitor.
What does the Pentagon's "supply chain risk" designation mean for other AI companies?
The designation, which excludes Anthropic from DOD contracts and federal procurement, establishes that the Defense Department can treat private AI companies as national security threats without a criminal standard of proof. For AI companies with federal contracts, or seeking them, this creates a new category of counterparty risk. Any company that declines military use-case demands could face similar action, making government affairs capacity a survival function, not a support function.
How does narrative positioning actually affect enterprise value in AI?
Regulatory exposure, congressional scrutiny, and reputational attack directly affect contract renewal rates, partnership appetite, and investor confidence. When enterprise clients perceive an AI vendor as politically unstable or narratively vulnerable, procurement decisions slow. When a company controls its own story, proactively establishing credibility with validators, regulators, and the press, it compresses that risk and protects revenue predictability. Perception drives enterprise value. This is not a theory. It is a pricing mechanism.
What should a government affairs leader at a major company do with this information today?
Start with a narrative audit: document your company's current AI deployment posture, identify the use cases most vulnerable to political or reputational attack, and assess whether your communications capacity is positioned to respond in real time. Then map the coalition: which validators (academic, civil society, faith, and professional) could credibly speak to your company's responsible AI posture? Build those relationships before you need them. Reactive engagement is always more expensive than proactive positioning.
Anthropic will not be the last company to discover that an AI model is a political subject. As AI systems become embedded in healthcare decisions, financial products, hiring, and education, the moral character of those systems will become a legislative and regulatory target. Companies that have constructed coherent, documented, multi-stakeholder narratives around their AI's values will be significantly better positioned than companies that have not. The window to do that work proactively is narrowing. Follow Imperio Chaos for daily intelligence at the intersection of capital, power, policy, and culture.
Annie Moore and Victor Lopez are Co-Founders and Managing Partners of Imperio Chaos, a global strategic advisory firm operating at the intersection of capital, policy, and digital ecosystems. We advise companies navigating high-stakes regulatory, political, and reputational environments where perception directly affects enterprise value, market position, and deal outcomes. When political headwinds, activist pressure, or narrative attacks threaten a company's bottom line, we generate the leverage to change the outcome.