Artificial Intelligence April 2, 2026

OpenAI’s Pentagon Deal Sparks #QuitGPT Revolt with Millions Joining

Within hours of Anthropic refusing to grant the Pentagon unrestricted access to its AI technology, OpenAI stepped in and signed its own classified defense contract – and the fallout has been seismic. The February 28, 2026 deal ignited one of the fastest-growing tech boycotts in recent memory: the #QuitGPT movement, which has now surpassed 2.5 million supporters through a combination of subscription cancellations, app deletions, and social media pledges organized through QuitGPT.org.

The backlash isn’t just digital noise. ChatGPT uninstalls in the United States surged 295% day-over-day when the deal was announced. Anthropic’s Claude app rocketed to the number one position on Apple’s App Store. OpenAI’s head of robotics resigned in protest. And more than 900 employees from OpenAI and Google signed an open letter demanding their employers reject Pentagon surveillance contracts. What began as a contract announcement has become a full-blown crisis of trust – one that exposes deep fractures in how the public, the tech workforce, and the AI industry itself view the marriage of artificial intelligence and military power.

How the Deal Came Together

The chain of events leading to OpenAI’s Pentagon contract began with Anthropic CEO Dario Amodei’s refusal to proceed without legally binding guarantees. Amodei insisted that Anthropic’s Claude models would not be used for mass surveillance of Americans or for fully autonomous weapons – systems capable of killing without human oversight. “We cannot in good conscience accede to their request,” Amodei said publicly.

The Pentagon’s response was extraordinary and punitive. Defense Secretary Pete Hegseth had summoned Amodei to the Pentagon earlier that week, giving him a Friday deadline. Before it even expired, President Donald Trump declared on Truth Social that Anthropic was “A RADICAL LEFT, WOKE COMPANY” and directed every federal agency to immediately cease using its technology. Hegseth then designated Anthropic a “supply chain risk” – a classification previously reserved for corporate extensions of foreign adversaries – effectively barring any military contractor from doing business with the company.

Hours later, Sam Altman announced OpenAI had reached its own agreement for classified AI deployment. He claimed the contract included the same red lines Anthropic had demanded: no mass domestic surveillance and no autonomous weapons. The speed of the deal immediately raised suspicions.

The “Any Lawful Use” Problem

The central criticism of OpenAI’s contract comes down to three words: “any lawful use.” Sources familiar with the Pentagon’s negotiations confirmed that OpenAI’s deal is significantly softer than what Anthropic had been pushing for. The Pentagon refused to abandon its desire to collect and analyze bulk data on Americans, and the contract’s restrictions essentially boil down to: if it’s technically legal, the military can use OpenAI’s technology to carry it out.

This distinction matters enormously. Over recent decades, the U.S. government has stretched the definition of “technically legal” to cover sweeping mass surveillance programs. The contract states that handling of private information will comply with the Fourth Amendment, the National Security Act of 1947, the Foreign Intelligence Surveillance Act of 1978, and Executive Order 12333. But all of these laws were on the books before the Snowden revelations of 2013 – and none of them prevented the NSA from collecting phone records on millions of Americans.

Security researcher Leo Gao put it bluntly, describing the contract language as “so obviously just ‘all lawful use’ followed by a bunch of stuff that is not really operative except as window dressing.”

Sarah Shoker, who led OpenAI’s geopolitics team for three years before departing in June 2025, offered a more detailed critique on her Substack. She noted that there is no consensus on what it means in practice to have adequate “human supervision” or “meaningful human control” in autonomous weapons systems. “Policy and law are not free-floating static ‘things,’” she wrote. “The borders of the law are fuzzy and filtered through political ideology.”

The #QuitGPT Movement by the Numbers

The consumer response was swift and measurable. QuitGPT.org – organized by a group of democracy activists concerned about AI companies contributing to authoritarianism – became the central hub for the boycott. As of early March, the platform reported over 2.5 million participants based on website signatures, social media shares, and app usage data. The site has since updated its count to over 4 million who have “taken action” as part of the boycott.

| Metric | Data point | Timeframe |
| --- | --- | --- |
| QuitGPT supporters | 2.5 million+ (now 4M+) | First week of March 2026 |
| ChatGPT uninstall spike (U.S.) | 295% day-over-day increase | February 28, 2026 |
| Claude App Store ranking | #1 free app (U.S. Apple App Store) | By March 1; held through March 8 |
| Employee petition signers | 900+ from OpenAI and Google | Late February 2026 |
| ChatGPT weekly user base | 900 million | As of announcement |

While 2.5 million represents a fraction of ChatGPT’s 900 million weekly users, the movement’s velocity – and its concentration among younger, more progressive users who form ChatGPT’s core demographic – signals a deeper problem for OpenAI than raw numbers suggest. Sensor Tower data confirmed the uninstall surge alongside a flood of one-star reviews beginning the day the contract was signed.
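For scale, the reported figures work out to well under one percent of ChatGPT’s weekly user base – a quick back-of-the-envelope calculation using the numbers cited above:

```python
# Boycott scale relative to ChatGPT's user base, using figures
# reported in this article (first week of March 2026).
boycott_supporters = 2_500_000    # QuitGPT participants
weekly_users = 900_000_000        # ChatGPT weekly users at announcement

share = boycott_supporters / weekly_users * 100
print(f"{share:.2f}% of weekly users")  # about 0.28%
```

The raw percentage is small, which is why the movement’s growth rate and demographic concentration, rather than its absolute size, are the more telling signals.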

Internal Revolt at OpenAI

The backlash wasn’t confined to consumers. Caitlin Kalinowski, OpenAI’s head of robotics who had been recruited from Meta in 2024, announced her resignation publicly. “AI has an important role in national security,” she wrote on X. “But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.”

She wasn’t alone. Research scientist Aidan McLaughlin posted simply: “i personally don’t think this deal was worth it.” Safety researcher Cameron Raymond replied that he felt similarly. Technical staffer Clive Chan took a more measured approach, stating he believed the contract barred mass surveillance but was advocating for the company to share more information. “If we later learn this is not the case, then I will advocate internally to terminate the contract,” Chan wrote. Another employee told CNN that many staffers “really respect” Anthropic for refusing the Pentagon’s terms.

The more than 900 former and current OpenAI and Google employees who signed a joint petition before the deal was even finalized had warned explicitly: “The Pentagon is negotiating with Google and OpenAI to try to get them to agree to what Anthropic has refused.”

Protests Hit the Streets

Physical demonstrations materialized outside OpenAI’s Mission Bay headquarters in San Francisco on March 3, 2026. The crowd of approximately 40 to 50 people – software engineers, designers, and tech workers – carried placards warning “Sam Altman is watching you” and left chalk messages on the sidewalks urging the company not to facilitate government surveillance.

One 26-year-old worker from Oakland told reporters that the prospect of a private company building large-scale surveillance infrastructure for the government was “fundamentally irrational and dangerous.” Protester Sarah Gao went further, accusing Altman of living in a “super villain’s mansion” and using his “billionaire buddies” to help Trump with “his disastrous budget bills that stole trillions of dollars from everyday Americans just to line their pockets.”

The grievances extended beyond the Pentagon contract. Graphic designer Jennifer Keith highlighted the environmental footprint of AI and the perceived theft of creative intellectual property used for model training. Organizers announced a larger AI accountability march scheduled for March 21, 2026, with a planned route from Anthropic’s offices on Howard Street to the headquarters of both OpenAI and xAI, calling on Altman, Amodei, and Elon Musk to enact a formal pause in the AI development race.

The Political Donations That Fueled the Fire

Perceptions of political alignment intensified the backlash considerably. OpenAI President Greg Brockman and his wife donated $25 million to MAGA Inc. in 2025 – making them the largest donors in the super PAC’s latest year-end report. CEO Sam Altman separately donated $1 million to Trump’s 2025 Inaugural Fund. QuitGPT organizers highlight that OpenAI leadership gave Trump 26 times more than any other major AI company.

These donations reframed the Pentagon deal in the public imagination. What might have been viewed as a pragmatic business decision instead appeared to many as the culmination of a deliberate strategy to cozy up to the Trump administration – especially given the speed with which OpenAI moved to fill the void left by Anthropic’s principled exit.

Altman’s Damage Control Efforts

Sam Altman’s response evolved rapidly as the crisis deepened. On the day after the announcement, he fielded questions publicly on X, admitting the process “was definitely rushed, and the optics don’t look good.” He explained: “We really wanted to de-escalate things, and we thought the deal on offer was good.”

By March 2, Altman released an internal memo – later shared publicly – announcing that OpenAI had revised the contract to include clearer safeguards. The amendments explicitly prohibited the Pentagon from using OpenAI models for mass domestic surveillance and barred intelligence agencies like the NSA from accessing the technology without a separate agreement. Altman also added prohibitions on using OpenAI’s technology on “commercially acquired” data, a gap in the original terms.

“We shouldn’t have rushed to get this out on Friday,” Altman wrote. “The issues are super complex, and demand clear communication. We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy.”

OpenAI’s head of national security partnerships, Katrina Mulligan, argued on LinkedIn that much of the discussion assumed “the only thing standing between Americans and the use of AI for mass domestic surveillance and autonomous weapons is a single usage policy provision in a single contract.” She emphasized that deployment architecture – specifically limiting models to cloud API access rather than integrating them into weapons hardware – matters more than contract language alone.

Legislative Response Falls Short

The controversy reached Capitol Hill. California Democratic Rep. Sam Liccardo introduced an amendment to the Defense Production Act that would prohibit the Defense Department from retaliating against developers for instituting safeguards on high-risk technologies – a direct response to the Pentagon’s treatment of Anthropic. “When the company that designs and builds the jet fighter tells us when to use the brakes, we should listen,” Liccardo said during the committee meeting. “Instead, the Pentagon’s bureaucrats and lawyers believe they know better. They think they can fly the plane without brakes.”

The amendment failed on a 16-25 vote in the House Financial Services Committee, signaling limited legislative appetite for constraining military AI access.

What This Means Going Forward

The #QuitGPT movement represents something larger than a product boycott. It marks the moment when consumer choice, employee organizing, and public ethics converged to challenge the AI industry’s relationship with state power in a tangible, market-moving way.

The competitive dynamics are telling. Anthropic’s principled refusal – initially positioned as commercially suicidal given the Pentagon’s supply-chain threat – became a strategic advantage as Claude’s downloads surged and it claimed the top App Store position. OpenAI, despite winning the contract, found itself defending against employee departures, consumer flight, and a public relations crisis that forced contract amendments within days.

The unresolved question at the heart of this crisis is whether contractual language – however carefully crafted – can meaningfully constrain a government that has historically stretched legal definitions to justify expansive surveillance. As one analysis noted, the idea that vague contractual language and a “safety stack” will prevent Defense Secretary Pete Hegseth from taking a maximalist view of the Pentagon’s rights to OpenAI’s intellectual property is “either impossibly naive, or outright deceptive.” Only Congressional oversight and legislation can establish ground rules that apply to all government use of AI, regardless of whose models are deployed. Until that happens, the tension between AI innovation and democratic accountability will continue to play out in boycotts, resignations, and the fragile court of public trust.
