In early January, roughly 90 political, religious, labor, and academic leaders quietly checked into a New Orleans Marriott for a conference on artificial intelligence. Nobody knew who else would be there until they walked in. What resulted from those meetings — and the months of drafting that followed — was released last week, and the coalition behind it is difficult to explain with a straight face.
Steve Bannon and Susan Rice. Glenn Beck and Ralph Nader. Richard Branson and the AFL-CIO. The Writers Guild and the Congress of Christian Leaders. They’ve all signed the same document.
33 Principles, One Shared Alarm
The Pro-Human AI Declaration, released March 4 by the Future of Life Institute, lays out 33 specific principles organized under five themes. The coalition’s premise is blunt: Silicon Valley is in a race to replace humans — as creators, caregivers, decision-makers, and companions — and no one with actual power has agreed on what should stop that from happening.
The five themes are Keeping Humans in Charge, Avoiding Concentration of Power, Protecting the Human Experience, Human Agency and Liberty, and Responsibility and Accountability for AI Companies. Within those broad categories, the specific demands get pointed: an enforceable off-switch for powerful AI systems, a pause on superintelligence development until there is broad scientific consensus that it can be done safely, a prohibition on AI systems that replicate themselves or resist being shut down, and a ban on AI being granted legal personhood.
Also in the document: the right to have your data deleted from AI training sets, and legal liability for AI companies when their systems cause harm — two provisions with direct implications for how AI products currently operate.
The Principle Nobody Expected
One of the more striking items in the declaration is something the drafters call “avoiding enfeeblement.” The idea is that AI should make people more capable, not less — and that building systems designed to do things for people rather than with them is a form of harm worth naming.
It’s a reframing worth sitting with. The conversation about AI risk has largely centered on dramatic scenarios: job displacement at scale, autonomous weapons, systems that can’t be controlled. “Enfeeblement” is quieter. It’s the gradual erosion of skills, judgment, and agency that happens when tools start doing the thinking, the writing, the deciding — and humans stop practicing those things. The declaration treats that process as a risk to be prevented, not just an inconvenience to be managed.
Why This Coalition Is Unusual
The left–right composition of the signatories isn't just a political novelty. It reflects something real about where opposition to unchecked AI development is actually coming from. Labor unions see automation eliminating jobs. Religious organizations see AI threatening human dignity and meaningful work. Conservatives see concentrated tech power as a threat to liberty and self-governance. Progressives see it as a threat to democratic accountability. And Nobel laureate Daron Acemoglu, an economist who has argued there is little evidence that AI will deliver sweeping productivity gains anytime soon, is among the signatories as well.
These concerns don’t normally occupy the same room. The fact that they’re now in the same document is significant regardless of what the document itself accomplishes.
What It Can and Can’t Do
The declaration has no legal force. It’s a statement of principles, not legislation, and the AI companies most relevant to its concerns — OpenAI, Google DeepMind, Anthropic, Meta — are notably absent from the signatory list. The Future of Life Institute, which convened the coalition, is running a parallel ad campaign called Protect What’s Human, and the declaration’s organizers have described it as the foundation for a broader political movement rather than an endpoint.
What the coalition does have is unusual political reach. When organized labor, major faith communities, and figures from both parties align on a policy direction, regulatory momentum tends to follow, however slowly. The Overton window on AI governance shifted last week, even if the law hasn't caught up yet.
A Question Worth Bringing to the Table
A poll cited in the declaration found that Americans favor keeping AI under human control over faster AI development by an 8-to-1 margin. That number suggests the coalition isn't manufacturing public concern; it's giving shape to an anxiety that was already there.
The more interesting question, as agentic AI systems become part of daily work and life, is the one the enfeeblement principle raises: not just who controls the technology, but what kind of people we want to be after it’s done with us.