Chinese Official’s ChatGPT Diary Exposes Global Intimidation Network
A law enforcement operative inadvertently documented transnational repression operations using OpenAI's platform, revealing how Beijing weaponizes LLMs for surveillance and coercion.
A Chinese law enforcement official used ChatGPT as a personal diary to document a sprawling global intimidation campaign targeting dissidents, foreign politicians, and journalists, according to a CNN report citing OpenAI’s latest threat intelligence assessment. The disclosure marks the first detailed case study of an authoritarian government operative inadvertently exposing state-backed coercion operations through commercial AI infrastructure.
The official treated ChatGPT like a journal, recording procedures for what he called “cyber special operations,” including impersonating U.S. immigration officials to threaten Chinese dissidents and forging U.S. court documents to suppress social media accounts. The operation involved hundreds of Chinese operators and thousands of fake online accounts across multiple social media platforms. OpenAI banned the user after matching his ChatGPT logs to real-world online activity, including a fabricated obituary for a Chinese dissident that spread across the internet in 2023.
From Planning to Execution: LLMs as Operational Infrastructure
ChatGPT served primarily as a journal to track the covert network, while content was generated with other tools and distributed through social media accounts and websites. The operative’s queries reveal a methodical approach: in October 2025, he asked ChatGPT to design a multi-part plan to smear Japan’s incoming prime minister, Sanae Takaichi, after she criticized China’s human rights record in Inner Mongolia. The proposed campaign sought to exploit public anger over U.S. tariffs.
Operators disguised themselves as U.S. immigration officials to warn a U.S.-based dissident that their public statements had “broken the law,” according to OpenAI’s investigation. In another case, they used forged documents from a U.S. county court to attempt takedowns of dissident social media accounts. OpenAI’s investigators corroborated the ChatGPT logs by tracing described operations to verifiable online events, including the false death rumor that Voice of America reported in Chinese in 2023.
Broader Pattern of AI-Enabled Statecraft
> “This is what Chinese modern transnational repression looks like.”
>
> — Ben Nimmo, Principal Investigator, OpenAI
This revelation is the most recent in a pattern of Chinese state actors integrating generative AI into coercion operations. In June 2025, OpenAI disclosed that it had disrupted 10 covert operations, four of them tied to China. One operation, dubbed “Sneer Review,” used ChatGPT to create internal performance reviews “describing, in detail, the steps taken to establish and run the operation”; the operation’s observed social media behavior closely mirrored the procedures those reviews described.
Another China-linked operation posed as journalists and geopolitical analysts, using ChatGPT to translate messages from Chinese to English and analyze data, including correspondence addressed to a U.S. Senator regarding a government nominee. While OpenAI could not confirm that the correspondence was delivered, the intent to influence or surveil U.S. lawmakers was evident.
China’s AI-enabled surveillance and censorship systems have grown substantially more sophisticated since 2024, according to a December 2025 Australian Strategic Policy Institute report, which found that “AI lets the CCP monitor more people, more closely, with less effort.” The government has mandated AI adoption across courts, prisons, and local governance, deploying facial recognition, risk-scoring algorithms, and predictive policing tools at scale.
Geopolitical Escalation and AI Competition
“This clearly demonstrates the way that China is actively employing AI tools to enhance information operations,” Michael Horowitz, a former Pentagon official focused on emerging technologies, told CNN. “US-China AI competition is continuing to intensify,” said Horowitz, now a University of Pennsylvania professor. “This competition is not just taking place at the frontier, but in how China’s government is planning and implementing the day-to-day of their surveillance and information apparatus.”
The incident underscores a fundamental asymmetry in AI governance. Beijing’s 2023 regulations require that generative AI content “embody the Core Socialist Values” and avoid subverting national sovereignty, leading Chinese LLMs to be trained on smaller, more restricted datasets than their Western counterparts. Yet Chinese operatives routinely turn to U.S.-based platforms such as ChatGPT for operational tasks that domestic models cannot or will not support due to political constraints.
The case points to several broader conclusions:

- LLMs are being used not just to generate influence content, but as operational infrastructure for planning, documentation, and internal performance evaluation.
- Commercial AI platforms provide visibility into state operations that intelligence agencies typically lack, creating novel counterintelligence opportunities.
- The diary-style usage suggests operational security failures within China’s transnational repression apparatus, but also reveals sophisticated multi-platform coordination.
- AI-enabled coercion operations remain largely ineffective at scale, according to OpenAI, which found limited real engagement despite thousands of fake accounts.
Governance Gaps and Detection Challenges
The ChatGPT case reveals both the promise and the limits of platform-level detection. OpenAI’s threat intelligence team identified the operation through usage-pattern analysis and banned the associated accounts, but only after the operations had been documented and potentially executed. OpenAI noted that the operations were “largely disrupted in their early stages” and that it “didn’t generally see these operations getting more engagement because of their use of AI.”
Yet the operational details logged by the Chinese official demonstrate that LLMs lower barriers to entry for complex influence campaigns. OpenAI’s investigation revealed hundreds of operators and thousands of fake accounts working across platforms to silence opposition through psychological pressure, forged documents, and coordinated smear campaigns.
| Operation Type | AI Application | Target |
|---|---|---|
| Transnational repression | Operational planning, performance tracking | Overseas dissidents |
| Diplomatic coercion | Smear campaign design, content strategy | Japanese PM Sanae Takaichi |
| Intelligence collection | Translation, data analysis, impersonation | U.S. Senator, policymakers |
| Surveillance tool development | Marketing materials, technical documentation | Social media monitoring systems |
What to Watch
The inadvertent documentation suggests Chinese operatives lack robust operational security protocols for commercial AI use, creating intelligence opportunities for Western platforms and governments. Expect OpenAI and competitors to expand threat intelligence teams and formalize information-sharing frameworks with law enforcement, following existing cybersecurity partnership models.
Watch for regulatory responses in the U.S. and EU targeting foreign government access to frontier AI systems, potentially through tiered access controls or mandatory user verification for high-risk features. The incident will likely accelerate calls for mandatory AI provenance standards and content authentication mechanisms to combat forged documents and synthetic media used in coercion campaigns.
The geopolitical stakes extend beyond individual operations. In 2026, Chinese disinformation campaigns are expected to focus less on overt propaganda and more on shaping narratives around crises, contesting attribution for cyber incidents, and eroding trust in decision-making processes. As LLMs become embedded in diplomatic and intelligence workflows globally, the ability to detect, attribute, and counter AI-enabled statecraft will determine which governments maintain informational sovereignty and which become targets of algorithmic influence at scale.