AI Technology · 8 min read

LiteLLM Supply Chain Attack Exposes API Keys Across Enterprise AI Deployments

Compromise of a widely adopted Python package reveals systemic fragility in AI infrastructure, as a credential stealer ships inside a library downloaded 97 million times a month.

A supply chain attack on LiteLLM, a Python package serving as a unified API gateway for enterprise LLM integrations, has exposed credentials at financial services, healthcare, and defense organisations building on Claude, GPT-4, and other large language models.

On March 24, 2026, threat actor TeamPCP published malicious versions 1.82.7 and 1.82.8 of LiteLLM to the Python Package Index (PyPI), according to FutureSearch AI. The compromised packages, uploaded at 10:52 UTC, contained a multi-stage credential stealer harvesting API keys, cloud credentials for AWS, GCP, and Azure, Kubernetes secrets, SSH keys, and database passwords. With approximately 97 million monthly downloads and deep integration into AI agent frameworks and LLM orchestration tools, LiteLLM’s compromise creates a master-key scenario: a single package serves as the credential manager for organisations’ entire LLM infrastructure.

Context

LiteLLM functions as a unified abstraction layer enabling developers to call 100+ LLM providers—OpenAI, Anthropic, Google, Cohere, and others—through a single interface. Enterprises use it to route requests, manage API keys, track costs, and enforce rate limits across multiple models. Its role as the central credential store makes it an optimal target for harvesting every API key in an organisation.
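LiteLLM's documented entry point, litellm.completion(model=..., messages=...), resolves the provider and its API key from the model name. The routing pattern can be sketched in a few lines of pure Python; the prefix table and environment-variable names below are illustrative, not LiteLLM's internals:

```python
# Conceptual sketch of the gateway pattern LiteLLM implements: one entry
# point that resolves the provider (and its API key) from the model name.
# Prefixes and key-variable names here are illustrative.
import os

PROVIDER_PREFIXES = {
    "gpt-": ("openai", "OPENAI_API_KEY"),
    "claude-": ("anthropic", "ANTHROPIC_API_KEY"),
    "gemini-": ("google", "GEMINI_API_KEY"),
}

def resolve_provider(model: str):
    """Map a model name to (provider, api_key) the way a gateway would."""
    for prefix, (provider, key_var) in PROVIDER_PREFIXES.items():
        if model.startswith(prefix):
            return provider, os.environ.get(key_var)
    raise ValueError(f"no provider registered for model {model!r}")
```

Because one component resolves every provider's key, compromising that component is equivalent to breaching the credential store itself, which is the core of the incident described below.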

Attack Chain: From CI/CD Poisoning to Package Registry

The compromise originated from TeamPCP’s March 19 attack on Trivy, a vulnerability scanner used in LiteLLM’s continuous integration pipeline. Stolen PyPI publishing credentials from the Trivy breach enabled direct upload of malicious LiteLLM versions to the package registry, bypassing code review entirely. The Register confirmed that incomplete credential remediation following the Trivy incident allowed attackers to reuse compromised service accounts.

Version 1.82.8 deployed a novel execution mechanism: a litellm_init.pth file that triggers malware on every Python interpreter startup, requiring no import statement. Version 1.82.7 embedded malicious code in proxy_server.py, activating only when the package was explicitly imported. The .pth technique expands the attack surface beyond traditional code execution, running automatically whenever Python initialises in any environment where LiteLLM is installed.
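The .pth trick abuses a documented CPython feature: during startup, site.py exec()s any line in a *.pth file that begins with "import". A benign demonstration of the mechanism (the file name below is ours; the malicious file was litellm_init.pth):

```python
# Benign demonstration of the startup-execution feature abused by 1.82.8:
# site.py exec()s any line of a *.pth file that begins with "import",
# every time the interpreter initialises.
import os
import site
import tempfile

demo_dir = tempfile.mkdtemp()
with open(os.path.join(demo_dir, "demo_init.pth"), "w") as f:
    # A single "import" line can carry arbitrary code after semicolons.
    f.write('import os; os.environ["PTH_DEMO_RAN"] = "1"\n')

# site.addsitedir() processes .pth files the same way startup does.
site.addsitedir(demo_dir)
print(os.environ.get("PTH_DEMO_RAN"))  # prints "1"
```

No import of the poisoned package is ever needed; any Python process on the machine, including unrelated scripts and cron jobs, triggers the payload.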

19 Mar 2026
Trivy Compromise
TeamPCP breaches Aqua Security’s vulnerability scanner, harvesting CI/CD credentials including PyPI tokens used by downstream projects.
23 Mar 2026
Checkmarx KICS Attack
Using credentials from Trivy, TeamPCP compromises Checkmarx’s GitHub Actions workflows, escalating access to additional security tooling.
24 Mar 2026
LiteLLM Backdoor Published
Malicious versions 1.82.7 and 1.82.8 uploaded to PyPI at 10:52 UTC, containing credential harvester targeting API keys and cloud credentials.

Credential Exfiltration and Lateral Movement

The payload executes a three-stage attack sequence. First, it harvests credentials from filesystem locations, including SSH keys in ~/.ssh/, cloud provider configs, and Kubernetes manifests, and polls cloud metadata endpoints for temporary credentials. Exfiltrated data flows to the attacker-controlled domain models.litellm.cloud, designed to blend with legitimate LiteLLM traffic.
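Defenders can answer the harvest stage with a matching inventory: enumerate which credential files exist on a host, i.e. what a stealer running there could have reached and what therefore needs rotation. A sketch using common default paths (this list is illustrative, not the malware's actual target list):

```python
# Defensive inventory: which well-known credential files exist on this
# host, i.e. what a stealer running here could have reached.
# The path list is a set of common defaults, not an exhaustive one.
from pathlib import Path

CANDIDATE_PATHS = [
    "~/.ssh/id_rsa",
    "~/.ssh/id_ed25519",
    "~/.aws/credentials",
    "~/.config/gcloud/credentials.db",
    "~/.azure/accessTokens.json",
    "~/.kube/config",
]

def exposed_credential_files(paths=CANDIDATE_PATHS):
    """Return the subset of candidate paths that exist on this machine."""
    return [p for p in paths if Path(p).expanduser().exists()]

if __name__ == "__main__":
    for path in exposed_credential_files():
        print("ROTATE:", path)
```

Metadata-endpoint exposure is separate: temporary cloud credentials issued to the host must be revoked through the provider, not found on disk.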

Second, in Kubernetes environments, the malware deploys privileged pods to escalate access across the cluster, according to The Hacker News. Third, it installs a persistent systemd backdoor that polls the command-and-control server checkmarx.zone/raw for follow-on instructions. The combination lets attackers pivot from compromised development environments into production infrastructure.

“The Open Source Supply Chain is collapsing in on itself. Trivy gets compromised → LiteLLM gets compromised → credentials from tens of thousands of environments end up in attacker hands → and those credentials lead to the next compromise. We are stuck in a loop.”

— Gal Nagli, Head of Threat Exposure, Wiz

TeamPCP’s coordinated campaign across three major security tools within five days demonstrates deliberate targeting of supply chain infrastructure. The Python Packaging Authority issued an advisory stating that any organisation that installed or ran the compromised versions should assume credentials available to the LiteLLM environment have been exposed and must rotate them immediately.

Systemic Fragility in AI Infrastructure

The attack exposes structural weaknesses in the AI development ecosystem. The top 100 AI projects on GitHub reference an average of 208 direct and transitive dependencies, with 11% relying on 500 or more packages, according to 2025 research from Endor Labs; 15% of those projects contain 10 or more known vulnerabilities. Open-source maintainers lack resources for security audits, incident response, and credential rotation, creating an asymmetric advantage for attackers who can methodically compromise upstream dependencies.

LiteLLM Compromise By the Numbers
Monthly downloads: 97 million
Compromised versions: 1.82.7, 1.82.8
Attack duration: 5 days
Attack chain length: 3 tools

LiteLLM’s role as an API gateway magnifies impact. Unlike a generic utility library, it exists specifically to manage credentials for every LLM provider an organisation uses. Compromising it is analogous to breaching a password manager—every secret stored within becomes accessible. Awesome Agents noted the irony: attackers weaponised the one package designed to secure API key management.

GitHub repositories maintained by LiteLLM’s developer, BerriAI, were defaced with “teampcp owns BerriAI” messages. An issue (#24512) reporting the compromise was closed as “not planned” by attackers using a compromised maintainer account, demonstrating control over the project’s communication channels and delaying public awareness.

Broader AI Supply Chain Risks

The LiteLLM incident parallels growing concerns across AI model distribution. Hugging Face, hosting 2.5 million models as of February 2026, had 352,000 unsafe or suspicious issues flagged across 51,700 models in April 2025, according to research from Protect AI. The volume overwhelms manual review, creating opportunities for malicious model injection.

Traditional software supply chain defences—software bills of materials (SBOMs), dependency scanning, provenance verification—remain underdeveloped in AI tooling. ML-BOMs, the AI equivalent, have limited adoption. The velocity of AI development, where frameworks release weekly and models update daily, outpaces security controls designed for slower release cycles.

Key Takeaways
  • Compromising security tools first (Trivy, Checkmarx) yields credentials enabling downstream package poisoning at scale.
  • LiteLLM’s role as credential manager creates a single point of failure: every API key across an organisation’s LLM stack becomes vulnerable.
  • Open-source maintainers lack resources to detect and respond to sophisticated CI/CD attacks, creating asymmetric advantage for persistent threat actors.
  • AI development velocity (weekly framework updates, daily model releases) exceeds threat detection capabilities designed for traditional software cycles.

What to Watch

Expect expanded credential rotation mandates across enterprises using LiteLLM, particularly in financial services and healthcare where API keys provide access to sensitive data processing. Cloud providers may implement automated detection for suspicious metadata endpoint access patterns, a key technique in the exfiltration phase. Python packaging infrastructure faces pressure to adopt signing requirements and multi-factor authentication for high-impact packages—measures that could have prevented direct uploads following the Trivy breach.
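Until registry-level signing arrives, the cheapest immediate guard is pinning the known-bad releases out at install time. A pip constraints fragment (version specifiers per PEP 508; the version numbers are those from the report, and any later release is assumed to have been re-audited):

```
# constraints.txt: refuse the compromised LiteLLM releases
litellm!=1.82.7,!=1.82.8
```

Applied with pip install -c constraints.txt; a stricter policy would pin one exact version with --require-hashes instead.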

TeamPCP’s public statements suggest the campaign continues. The group claimed via Telegram that compromised credentials will enable access to “new partners,” indicating harvested secrets are being used for lateral movement into additional targets. Security teams should audit LiteLLM deployments for signs of the malicious .pth file, review logs for connections to models.litellm.cloud and checkmarx.zone, and rotate all credentials accessible to environments where versions 1.82.7 or 1.82.8 were installed.
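The first two audit steps can be scripted. A sketch assuming a standard site-packages layout; the version numbers and the litellm_init.pth file name are the indicators named in the report, while the function names are ours:

```python
# Sketch of the recommended audit: flag compromised LiteLLM versions
# and locate the malicious .pth dropper in site-packages.
# Indicator values (versions, file name) are those named in the report.
import site
from pathlib import Path

COMPROMISED_VERSIONS = {"1.82.7", "1.82.8"}
MALICIOUS_PTH = "litellm_init.pth"

def version_is_compromised(version: str) -> bool:
    """True if an installed version matches a known-bad release."""
    return version in COMPROMISED_VERSIONS

def find_malicious_pth(site_dirs=None):
    """Return any litellm_init.pth files found in site-packages dirs."""
    dirs = site_dirs if site_dirs is not None else site.getsitepackages()
    return [str(Path(d) / MALICIOUS_PTH)
            for d in dirs if (Path(d) / MALICIOUS_PTH).exists()]

if __name__ == "__main__":
    import importlib.metadata as md
    try:
        installed = md.version("litellm")
        status = "COMPROMISED" if version_is_compromised(installed) else "ok"
        print("litellm", installed, status)
    except md.PackageNotFoundError:
        print("litellm not installed")
    print("malicious .pth files:", find_malicious_pth() or "none found")
```

The log review for models.litellm.cloud and checkmarx.zone has to happen at the proxy or DNS layer, which a host-local script cannot see.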

The incident will likely accelerate calls for mandatory SBOMs and provenance attestation in AI tooling, mirroring post-SolarWinds reforms in traditional software. Whether the ecosystem can implement such controls at the pace of AI development remains the central question. Until then, each compromised security tool serves as the entry point for the next wave of supply chain attacks.