California Judge Denies Musk’s Bid to Block AI Training Data Disclosure Law
Federal court rejects xAI's constitutional challenge to transparency statute, clearing path for unprecedented disclosure requirements across the AI industry.
Elon Musk’s xAI failed to convince a California federal judge to halt enforcement of a state law requiring AI companies to disclose details about their training data, marking a significant defeat in the emerging battle over AI transparency.
According to CGTN, U.S. District Judge Jesus Bernal in Los Angeles ruled on March 5, 2026, that xAI had not demonstrated it was likely to prove California’s AB 2013 violated its free-speech rights or was otherwise unconstitutional. The decision allows the transparency law, which took effect January 1, 2026, to remain in force while xAI’s underlying lawsuit continues.
The ruling represents a pivotal moment in AI governance. California’s law, enacted by Governor Gavin Newsom in September 2024, requires generative AI companies to publicly post summaries of the datasets used to train their systems—information the industry has historically guarded as among its most valuable trade secrets. Crypto Briefing notes this defeat sends ‘a clear signal to the broader industry: California is not backing down from its push to force transparency into a sector that has historically operated with minimal disclosure requirements.’
The Constitutional Challenge
xAI filed its lawsuit on December 29, 2025, just days before the law took effect. The company advanced two primary constitutional arguments: that mandatory disclosure constitutes compelled speech in violation of the First Amendment, and that the law effects an unconstitutional taking of trade secrets under the Fifth Amendment without compensation.
According to The National Law Review, xAI characterized AB 2013 as a ‘trade-secrets-destroying disclosure regime that hands competitors a roadmap to learn how companies like xAI are developing and training their proprietary AI models.’ The company argued that revealing dataset composition would allow rivals to immediately acquire the same sources, nullifying competitive advantages worth billions.
The legal arguments faced an uphill battle from the start. As the Institute for Law & AI observed, the Fifth Amendment claim depends on whether AB 2013 actually requires disclosure of information qualifying as trade secrets—but OpenAI and Anthropic have already published compliant disclosures without apparent difficulty. Both companies posted high-level summaries that avoid naming specific proprietary datasets.
What the Law Requires
AB 2013 applies to any generative AI system made available to Californians since January 1, 2022, regardless of whether it’s free or paid. The law mandates disclosure of 12 categories of information, including dataset sources, the number of data points, types of data used, whether copyrighted materials or personal information were included, and whether synthetic data was employed in training.
Critically, the statute calls for a ‘high-level summary’ but provides no guidance on what level of detail satisfies compliance. This ambiguity formed the basis for xAI’s additional argument that the law is unconstitutionally vague. However, as Goodwin notes, early disclosures from OpenAI and Anthropic suggest developers can comply using categorical descriptions without revealing competitively sensitive specifics.
- High-level summary of datasets and their sources
- Number of data points and types of data used
- Whether datasets include copyrighted material or personal information
- Dates datasets were first used during development
- Whether synthetic (AI-generated) data was employed
- Licensing or purchase status of training data
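The statute prescribes no format for these disclosures, but the categories above are concrete enough to sketch as a simple record. The sketch below is purely illustrative; the field names and placeholder values are hypothetical and not drawn from any actual filing or statutory text:

```python
from dataclasses import dataclass

@dataclass
class TrainingDataDisclosure:
    """Hypothetical record mirroring AB 2013's disclosure categories."""
    dataset_summary: str                  # high-level summary of datasets and sources
    approximate_data_points: str          # e.g. an order-of-magnitude figure
    data_types: list[str]                 # text, images, code, etc.
    includes_copyrighted_material: bool
    includes_personal_information: bool
    first_used: str                       # when datasets were first used in development
    uses_synthetic_data: bool
    licensing_status: str                 # licensed, purchased, publicly available, etc.

    def summary_lines(self) -> list[str]:
        """Render a categorical, high-level summary with no specific dataset names."""
        return [
            f"Data types: {', '.join(self.data_types)}",
            f"Copyrighted material included: {self.includes_copyrighted_material}",
            f"Personal information included: {self.includes_personal_information}",
            f"Synthetic data used: {self.uses_synthetic_data}",
        ]

# Placeholder example, not a real disclosure
disclosure = TrainingDataDisclosure(
    dataset_summary="Publicly available web text and licensed corpora",
    approximate_data_points="on the order of trillions of tokens",
    data_types=["text", "code"],
    includes_copyrighted_material=True,
    includes_personal_information=True,
    first_used="2022-01",
    uses_synthetic_data=True,
    licensing_status="mix of licensed and publicly available sources",
)
print("\n".join(disclosure.summary_lines()))
```

A record at this level of generality mirrors the compliance posture OpenAI and Anthropic took: categorical answers that satisfy each statutory item without naming proprietary datasets.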
The law includes narrow exemptions for systems used solely for security and integrity, aircraft operation in national airspace, or national security purposes available only to federal entities. Notably absent: any exemption for trade secrets.
Industry Split Response
The divergent approaches among AI developers highlight the strategic calculations at play. OpenAI, Anthropic, and Google each published required documentation by the January 1 deadline. OpenAI’s disclosure is particularly brief, providing categorical descriptions without identifying specific datasets. Anthropic adopted a more structured format with contextual explanations.
xAI stands alone among major developers in mounting a legal challenge. That isolated position became more conspicuous after a separate courtroom loss: on February 25, 2026, one day before the injunction hearing, a federal judge dismissed xAI’s lawsuit against OpenAI alleging trade secret theft. As Crypto Briefing notes, the twin defeats ‘underscore a broader irony’: xAI argues its training data constitutes sacred trade secrets no government should compel it to disclose, while simultaneously claiming a competitor stole those same secrets.
AB 2013 is part of California’s broader AI regulatory push. In 2024 alone, the state enacted 18 AI-related bills. Other 2026 measures include SB 942, requiring large AI platforms to provide free content detection tools, and regulations on automated decision-making in employment. California’s aggressive stance comes amid federal regulatory gridlock, positioning the state as the de facto AI regulator for the nation’s largest economy.
Enforcement Remains Uncertain
While the law is now in effect, enforcement mechanisms remain unclear. AB 2013 contains no standalone penalty provision. Legal analysis suggests enforcement would likely proceed through California’s Unfair Competition Law, enabling both public and private actions. However, the California Attorney General’s office has not publicly detailed how aggressively it intends to pursue noncompliant companies.
During the February 26 hearing on xAI’s injunction request, Judge Bernal reportedly pressed the Attorney General’s office on enforcement plans. According to Crypto Briefing, the state’s failure to provide a timely response may have paradoxically weakened xAI’s case for emergency relief—courts generally require concrete enforcement threats, not hypothetical ones, to grant preliminary injunctions.
California Attorney General Rob Bonta celebrated the ruling. A spokesperson told CGTN the department ‘celebrates this key win and remains committed to continuing our defense’ of the law. Bonta has separately announced his office is building an AI accountability program to strengthen oversight amid federal regulatory gaps.
What to Watch
The immediate question is whether other major AI companies that have not yet published disclosures—including Meta and smaller developers—will comply or follow xAI into litigation. The precedent established by OpenAI and Anthropic’s minimalist disclosures provides a template for compliance without catastrophic trade secret exposure.
Other states are watching closely. New York, Illinois, and Colorado have introduced AI governance proposals in recent legislative sessions. California’s ability to withstand a well-funded constitutional challenge will likely embolden similar efforts. As Crypto Briefing notes, ‘California’s success in defending AB 2013 could have ripple effects that extend well beyond the Golden State’s borders.’
For xAI, the underlying lawsuit continues despite the denial of preliminary relief. The company must now decide whether to appeal the injunction denial or proceed with the constitutional challenge on its merits—a process that could take years. Meanwhile, the disclosure requirements apply immediately.
The broader legal framework remains unsettled. A December 2025 executive order from President Trump proposed federal preemption of state AI laws deemed inconsistent with federal policy, though implementation remains unclear. How courts balance transparency objectives against trade secret protections will shape AI regulation for years to come, with implications extending beyond AI to disclosure requirements across the financial, environmental, and health sectors.