
OpenAI Fires Employee Over Prediction Market Trading Using Confidential Information

The termination marks the first confirmed enforcement action by a major AI company against insider trading on cryptocurrency-based betting platforms, establishing a governance precedent as tech firms confront a regulatory vacuum.

OpenAI terminated an employee for using confidential company information to trade on prediction markets including Polymarket, following an internal investigation that revealed the staffer violated policies prohibiting the use of proprietary data for personal financial gain.

The dismissal, disclosed internally by OpenAI CEO of Applications Fidji Simo earlier this year and confirmed to Wired on February 27, represents the first documented case of a major technology company taking formal action against an employee for prediction market insider trading. OpenAI spokesperson Kayla Wood stated that the company’s policies “prohibit employees from using confidential OpenAI information for personal gain, including in Prediction Markets.” The company declined to identify the employee or provide specifics on the trades.

The incident exposes a governance vacuum at the intersection of AI development and financial speculation. Prediction markets—platforms like Polymarket and Kalshi that allow users to bet on real-world outcomes—have exploded in popularity, with approximately $28 billion traded in 2025 according to employment law firm Littler. Unlike traditional securities, prediction market contracts exist in a regulatory grey zone where insider trading prohibitions under the Securities Exchange Act of 1934 do not clearly apply, leaving corporate policy as the primary enforcement mechanism.

Prediction Market Trading Volume
  • 2025 total volume: $28B
  • Polymarket 2024 volume: $3B+
  • Kalshi: first CFTC-regulated US exchange

Pattern of Suspicious Activity

Blockchain analysis suggests the OpenAI termination may address only one node in a broader network of insider trading. According to Gizmodo, financial data platform Unusual Whales flagged 77 positions across 60 wallet addresses as suspected insider trades on OpenAI-related events since March 2023. The trades targeted product launches including Sora, GPT-5, and the ChatGPT Browser, as well as CEO Sam Altman’s employment status.

One pattern in particular caught investigators’ attention: in the 40 hours before OpenAI launched its browser, 13 brand-new wallets with zero trading history appeared and collectively bet $309,486 on the correct launch date, according to Unusual Whales CEO Matt Saincome. “When you see that many fresh wallets making the same bet at the same time,” Saincome told Wired, “it raises a real question about whether the secret is getting out.”

Context

Polymarket operates on the Polygon blockchain, making its trading ledger pseudonymous but traceable. This transparency has enabled investigators to identify suspicious patterns, but attribution to specific individuals remains challenging without cooperation from platforms or employers conducting internal investigations.
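Because the ledger is public, the fresh-wallet pattern described above can be approximated with a simple heuristic over trade records. The sketch below is illustrative only: the `Trade` type, its field names, and the thresholds are assumptions for demonstration, not Polymarket's actual data model, and a real analysis would pull the ledger from Polygon rather than construct records by hand.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical, simplified trade record; field names are assumptions,
# not Polymarket's schema. wallet_age_days is days since the wallet's
# first on-chain activity at the time of the trade.
@dataclass
class Trade:
    wallet: str
    market: str
    outcome: str
    amount_usd: float
    timestamp: datetime
    wallet_age_days: int

def flag_fresh_wallet_cluster(trades, market, outcome, event_time,
                              window_hours=40, max_age_days=1,
                              min_wallets=5):
    """Flag a cluster of near-new wallets that all took the same
    position on one outcome shortly before the event resolved.
    Thresholds are illustrative, not calibrated."""
    window_start = event_time - timedelta(hours=window_hours)
    fresh = [
        t for t in trades
        if t.market == market
        and t.outcome == outcome
        and window_start <= t.timestamp < event_time
        and t.wallet_age_days <= max_age_days
    ]
    wallets = {t.wallet for t in fresh}
    total = sum(t.amount_usd for t in fresh)
    return len(wallets) >= min_wallets, sorted(wallets), total
```

A run over the browser-launch scenario reported by Unusual Whales (13 fresh wallets, one outcome, inside a 40-hour window) would trip this flag; the hard part in practice is the attribution step the heuristic cannot do.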

The problem extends beyond OpenAI. According to analysis by former Google DeepMind researcher Peter Liu, AI systems can now detect suspected insiders with “super-human” efficiency. When Liu’s Compound AI tool analyzed Polymarket data, it found accounts “oddly good at predicting OpenAI launch dates,” with at least one exclusively trading OpenAI events. Similar patterns emerged around Google product launches, suggesting coordinated networks rather than isolated actors.
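"Oddly good at predicting launch dates" is, at bottom, a statistical claim: an account's hit rate is implausible under chance alone. A minimal version of that test uses a binomial tail probability. This is a sketch of the general idea, not Liu's Compound tool; the base rate and significance threshold are assumed values for illustration.

```python
from math import comb

def binomial_tail(hits, trials, p):
    """P(X >= hits) for X ~ Binomial(trials, p): the chance a trader
    with no edge (per-market hit probability p) does at least this well."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(hits, trials + 1))

def oddly_good(hits, trials, base_rate=0.25, alpha=1e-4):
    """Flag an account whose record is implausible under chance.
    base_rate (assumed) is the probability of guessing a launch
    window correctly with no inside information."""
    return binomial_tail(hits, trials, base_rate) < alpha
```

Under these assumptions an account that called ten launch dates out of ten gets flagged, while two out of ten does not; real screening would also need to correct for the many accounts being tested at once.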

Legal Ambiguity Creates Corporate Liability

Traditional insider trading law criminalizes trading securities based on material nonpublic information obtained through breach of fiduciary duty. But prediction market contracts are regulated by the Commodity Futures Trading Commission as event derivatives, not the SEC as securities. The CFTC’s Rule 180.1 prohibits trading on material nonpublic information, but requires proof of a breached “pre-existing duty”—a higher bar than securities law, according to Philippe Dubach. The CFTC has brought zero enforcement actions for prediction market insider trading to date.

This regulatory vacuum places the burden on companies. OpenAI’s action signals it will treat prediction market trades using confidential information as equivalent to illegal stock trades—a fireable offense that violates corporate ethics policies. The decision aligns with voluntary commitments OpenAI made in 2023 to “invest in cybersecurity and insider threat safeguards” and establish “a robust insider threat detection program consistent with protections provided for their most valuable intellectual property,” as detailed in the company’s AI governance framework.

Key Enforcement Dynamics
  • OpenAI’s termination is corporate policy enforcement, not criminal prosecution—no charges have been filed
  • Prediction markets offer practical anonymity through pseudonymous crypto wallets, complicating detection compared to regulated securities trading
  • Other tech giants including Google, Meta, and Nvidia declined to comment on their prediction market policies when contacted by Wired
  • Kalshi this week reported multiple suspicious cases to the CFTC, including a MrBeast employee fined $20,000

Industry Response and Policy Evolution

The OpenAI termination arrives as prediction markets face intensifying scrutiny. Earlier this week, Kalshi announced it had reported several suspected insider trading cases to the CFTC, including suspending a MrBeast employee for two years and fining them $20,000 for trades tied to the YouTuber’s activities. Kalshi’s rules now explicitly prohibit insiders from trading markets where they have nonpublic information or influence.

Polymarket has remained silent on enforcement, declining multiple requests for comment. The platform’s CEO Shayne Coplan previously told Axios that insider trading “creates this financial incentive to divulge information to the market,” suggesting the company viewed it as a feature rather than a bug. That position appears increasingly untenable as corporate partners demand cleaner markets.

Employment law specialists are advising clients to update policies. According to Littler, employers should consider revising internet policies to restrict prediction market use on company devices, updating confidentiality agreements to explicitly prohibit trading on proprietary information via prediction markets, and monitoring platforms for unusual trading activity related to company events. Compliance platform Ethena has already incorporated prediction market guidance into its corporate code of conduct training.

Regulatory Framework Comparison
  Aspect                Traditional Securities             Prediction Markets
  Regulator             SEC                                CFTC (Kalshi) / offshore (Polymarket)
  Insider trading law   Securities Exchange Act § 10(b)    CFTC Rule 180.1 (requires duty breach)
  Enforcement actions   Hundreds annually                  Zero by CFTC to date
  User anonymity        KYC/AML required                   Pseudonymous crypto wallets
  Corporate monitoring  Blackout periods, pre-clearance    No established framework

What to Watch

OpenAI’s enforcement action establishes a precedent that may reshape corporate governance across the technology sector. Industry observers expect other AI companies to implement explicit prediction market policies within weeks, similar to the personal trading restrictions that investment banks have maintained for decades. Some firms may ban employee participation in prediction markets entirely for contracts related to their employer or competitors.

The criminal enforcement question remains unresolved. Former SDNY U.S. Attorney Jay Clayton suggested at the Securities Enforcement Forum on February 5 that wire fraud statutes could apply to prediction market insider trading, telling attendees to expect enforcement actions. Whether prosecutors will pursue cases depends partly on platform cooperation—Polymarket’s blockchain-based architecture and offshore operations complicate subpoena compliance.

The broader challenge is structural. As one analyst told Wired, “the data tells me this is happening all over the place.” With AI companies holding information asymmetries worth billions—model capabilities, partnership negotiations, regulatory decisions—and prediction markets offering liquid anonymous betting on those outcomes, the incentive structure virtually guarantees continued violations absent either robust corporate enforcement or federal intervention. OpenAI just demonstrated the former is possible. Whether it proves sufficient remains the open question.