Senate Authorizes ChatGPT, Gemini, and Copilot for Official Use—While Claude Remains Locked Out
A March 9 memo gives Senate staff access to three commercial AI platforms for drafting legislation and briefings, formalizing a policy split between Congress and the Pentagon over foundation model adoption.
The Senate Sergeant at Arms authorized ChatGPT, Gemini, and Copilot for official use with Senate data on March 9, 2026, marking the first time commercial large language models have been cleared for Tier 2 deployment in the upper chamber. The memo, issued by the Chief Information Officer, designates the three platforms as approved for use under the Senate AI governance framework established in October 2025, a policy architecture that separates non-sensitive research uses from official data integration.
The tools may be used for “drafting and editing documents, summarizing information, preparing talking points and briefing material, and conducting research and analysis,” per Business Insider. Microsoft Copilot Chat is available now for all Senate employees at no cost, while each employee will receive one license for either Google Workspace with Gemini Chat or OpenAI ChatGPT Enterprise at no cost, with licensing details to follow within 30 days.
At a glance: 3 platforms approved · ~10,000 Senate staff · available to all staff · Claude under evaluation
The Claude Exception
Claude was among several AI tools still under evaluation, according to an internal Senate IT website. The omission carries weight. Anthropic is in the middle of a dispute with the Trump administration over restrictions on using Claude for mass surveillance and autonomous weapons, and President Donald Trump has ordered federal agencies to stop using Anthropic’s technology—though that order does not apply to the legislative branch, according to Business Insider.
Defense Secretary Pete Hegseth told Anthropic CEO Dario Amodei that if the company does not allow Claude to be used “for all lawful purposes,” the Pentagon would cancel Anthropic’s $200 million contract and designate Anthropic a “supply chain risk”, a classification typically reserved for foreign adversaries. Anthropic rejected the Pentagon’s latest offer, saying the proposed agreement “was paired with legalese that would allow those safeguards to be disregarded at will”, per CNN.
The Pentagon designated Anthropic a supply chain risk, requiring defense vendors and contractors to certify they don’t use Claude in their work with the Pentagon, and Anthropic sued the Trump administration on Monday, calling the actions “unprecedented and unlawful”, according to CNBC.
| Chamber | Approved Platforms | Claude Status | Policy Date |
|---|---|---|---|
| Senate | ChatGPT, Gemini, Copilot | Under evaluation | March 2026 |
| House | ChatGPT, Gemini, Copilot, Claude | Approved | September 2024 |
Bifurcated Federal AI Strategy
The Senate authorization exposes a two-track federal AI approach. Anthropic signed a contract valued up to $200 million with the Pentagon last summer, and Claude was the first model brought into the Pentagon’s classified networks, while OpenAI’s ChatGPT, Google’s Gemini and xAI’s Grok are all used in unclassified settings and have agreed to lift guardrails for Pentagon work, reports Axios.
On February 28, OpenAI announced a deal allowing the US military to use its technologies in classified settings, with CEO Sam Altman saying negotiations were “definitely rushed,” and the company published a blog post explaining its agreement protected against use for autonomous weapons and mass domestic surveillance, per MIT Technology Review. The reason OpenAI made a deal when Anthropic could not was less about boundaries than about approach—”Anthropic seemed more focused on specific prohibitions in the contract, rather than citing applicable laws, which we felt comfortable with,” Altman wrote.
The Senate previously allowed ChatGPT, Google Bard, and Microsoft’s Bing AI chat in 2023 at “moderate” risk levels for research and evaluation or use with non-sensitive data, but the new approvals are less restrictive on the type of data that can be ingested, per FedScoop. The March memo represents the first Tier 2 authorization under the framework formalized in October 2025.
Enterprise Configurations and Security Theater
Copilot Chat does not have access to Senate data unless explicitly shared within a prompt, does not search internal drives or shared folders on its own, operates in Microsoft’s secure government cloud, and meets federal and Senate cybersecurity requirements with data staying within the Microsoft 365 Government environment, the memo states. OpenAI’s ChatGPT Enterprise explicitly offers features including no use of enterprise inputs/outputs to train public models by default, SOC 2 compliance attestations, and data residency options for eligible customers, according to vendor documentation reviewed by Windows Forum.
The Senate policy advises users not to enter personally identifiable information or physical security information into AI tools, per the POPVOX Foundation, a nonprofit tracking congressional AI adoption. Senate offices and committees often operate as their own domains with senators and committee chairs dictating their own rules for staff, and the chamber has not made its rules of the road for AI usage public, leaving open how staffers dealing with sensitive or classified information might be asked to approach use of the products, notes The Spokesman-Review.
- Legislative branch exemption from executive AI procurement orders creates parallel governance tracks
- Senate adoption validates enterprise LLM readiness for sensitive institutional workflows
- Claude exclusion signals policy coordination between chambers despite constitutional separation
- Government-cloud configurations (GCC, FedRAMP) now baseline for federal AI procurement
- Free licensing model may accelerate adoption across state legislatures and local governments
Procurement Signal for Federal Agencies
Nearly 90% of federal government agencies are planning to use or already using AI, with security and adversarial risk cited as the single biggest blocker to AI adoption, impacting 48% of all agencies, according to a Google Public Sector survey of 250 federal IT leaders conducted by Government Executive in early 2026. OMB Memorandum M-26-05, issued in January, calls for a more agile, risk-based approach to technology adoption, but while policy is accelerating, execution is not—on the ground, AI efforts are still slowed by fragmentation, redundancy, and long delivery cycles, per FedScoop.
The Senate’s enterprise-focused deployment bypasses many of those friction points. In late 2025, the House signed an enterprise-wide agreement with Microsoft to provide 6,000 Copilot licenses to all Member offices and committees that opt in, requiring offices to establish an internal use policy outlining approved use cases, though offices report the system appears to refuse requests to generate content that is political in nature or takes policy positions, limiting its usefulness for constituent correspondence, notes the POPVOX Foundation.
What to Watch
The Anthropic litigation will test whether executive branch supply-chain risk designations can extend to legislative branch contractors. The House has already approved ChatGPT, Copilot, Gemini, and Claude for official use—if the Senate evaluation clears Claude while executive agencies remain barred, it would cement a three-way split in federal AI governance.
Monitor state legislature AI policy. State legislatures and parliaments worldwide are watching how Congress handles AI adoption, and making these policies public with an affirmative vision would position Congress as a leader rather than a laggard, the POPVOX Foundation argues. The Senate’s no-cost licensing model and integration with existing Microsoft 365 environments provides a template for rapid state-level deployment.
Track vendor compliance claims. Data shared with Copilot Chat stays within the secure Microsoft 365 Government environment and is protected by the same controls that safeguard other Senate data—but enforcement mechanisms remain opaque. The absence of public Senate AI policies means oversight will be decentralized across 100 offices with varying risk tolerances, creating natural experiments in institutional AI governance that may surface edge cases before they hit executive agencies operating under stricter OMB mandates.