AI Technology · 9 min read

AI-Powered No-Code Platforms Are Mass-Producing Data Breaches

Thousands of applications built with Lovable, Replit, and similar services ship with exposed databases, hardcoded credentials, and missing authentication—creating a systemic enterprise security crisis.

AI-powered no-code platforms have created a vulnerability factory: 89.5% of applications built with tools like Lovable, Replit, and Bolt ship with critical security flaws, exposing customer data, API keys, and payment credentials across thousands of hastily deployed apps.

The scale of exposure is staggering. A Superblocks security audit disclosed CVE-2025-48757, a vulnerability in the Lovable platform that left 170+ databases completely exposed—no authentication required. Unauthenticated visitors could query sensitive tables directly using a public API key and retrieve user lists, payment records, and database credentials. In one audited application, VibeEval found 18,697 user records exposed, including full names, email addresses, and internal business communications.

This is not an isolated incident. It represents a structural failure in how democratized development tools handle security. The promise of no-code platforms—build production apps without writing a line of code—has collided with a reality where AI models optimize for speed and functionality, not threat mitigation. The platforms lack secure-by-default configurations. Citizen developers lack security training. And the applications they build become attack surfaces the moment they go live.

Vulnerability Landscape
AI-built apps with vulnerabilities
89.5%
Codebases with critical flaws
92%
Apps storing secrets in plaintext
78%
Exposed databases (Lovable audit)
170+

The Anatomy of AI-Generated Vulnerabilities

The pattern repeats across platforms. Sherlock Forensics evaluated AI-generated codebases from January to April 2026 and found 92% contained critical vulnerabilities, with an average of 8.3 exploitable findings per application. The most common failures: disabled row-level security (RLS) policies, hardcoded API keys embedded in public repositories, and missing authentication controls.
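The effect of a disabled RLS policy can be illustrated with a minimal sketch. The platforms in question use Postgres-backed services, but the same logic holds in any database: with row security enforced, queries are scoped to the caller; without it, every row is visible to anyone who can reach the endpoint. The table layout and `fetch_payments` helper below are hypothetical, using SQLite for a self-contained demonstration:

```python
import sqlite3

def fetch_payments(conn, requesting_user, enforce_row_security=True):
    if enforce_row_security:
        # Scoped query: only rows owned by the caller --
        # the guarantee RLS provides inside the database itself
        return conn.execute(
            "SELECT amount FROM payments WHERE owner = ?", (requesting_user,)
        ).fetchall()
    # Vulnerable pattern: with RLS disabled, any caller sees every row
    return conn.execute("SELECT amount FROM payments").fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payments (owner TEXT, amount REAL)")
conn.executemany("INSERT INTO payments VALUES (?, ?)",
                 [("alice", 10.0), ("bob", 99.0)])

print(len(fetch_payments(conn, "alice")))                              # 1 row
print(len(fetch_payments(conn, "alice", enforce_row_security=False)))  # 2 rows
```

The difference between the two code paths is exactly the difference between the audited applications and a secure-by-default configuration: application code can forget the `WHERE` clause, but a database-level RLS policy cannot be forgotten per query.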

AI code assistants inherit these weaknesses from their training data. IOActive tested 27 AI models across 730 prompts in 27 programming languages. Average security performance: 59%. Infrastructure and DevOps code—the kind that configures databases and API access—showed vulnerability rates of 70% to 97%. Authentication, rate limiting, and cryptographic implementations failed systematically.

“AI code assistants optimize for functionality, speed and developer satisfaction. Security is a constraint that conflicts with those goals.”

— Sherlock Forensics, 2026 AI Code Security Report

The Moltbook social network breach in January 2026 demonstrated how quickly these flaws turn into live incidents. Built on Lovable and deployed with a misconfigured Supabase database, the platform exposed 1.5 million API tokens and 35,000 email addresses within three days of launch. The database had no RLS policies—meaning any visitor could query any table. The Next Web reported that 90% of audited Lovable applications shared the same five critical vulnerabilities, with an average security score of 52 out of 100.

Enterprise Risk and Regulatory Exposure

The low-code market is projected to reach $47.3 billion by 2025, with Gartner estimating 70% of enterprise applications will use low-code or no-code platforms. But shadow IT—applications built outside formal IT governance—is responsible for roughly 30% of security breaches. When citizen developers deploy customer-facing apps with exposed databases, the legal and financial consequences fall on the enterprise.

IBM’s 2023 data breach cost analysis pegged the average incident at $4.45 million. GDPR violations for inadequate data protection can exceed tens of millions in fines. SOX compliance failures expose executives to personal liability. And in regulated industries like healthcare and finance, exposed PII triggers mandatory breach disclosure and regulatory scrutiny.

Context

The vulnerability patterns are not hypothetical. GitHub reported 39 million leaked secrets in 2024, a 67% year-over-year increase. Toyota left private keys exposed in public repositories for five years. The 2016 Uber breach stemmed from hardcoded AWS credentials. According to Pentera, these exposures create attack vectors from code repositories to developer laptops to production networks—and compliance frameworks like NIS2, SOC2, and ISO27001 require organisations to prevent exactly these failures.

The supply chain dimension compounds the risk. When a vendor-built application contains hardcoded credentials or missing access controls, it becomes an entry point into the client’s infrastructure. Vibe App Scanner, which continuously monitors Lovable, Bolt, Cursor, Replit, and v0 deployments, has identified hardcoded OpenAI API keys (sk-proj-*) in public asset bundles across hundreds of applications. These keys grant access to accounts, usage data, and in some cases, internal system prompts.
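Detecting this class of leak is mechanically simple, which is why scanners find so many. A minimal sketch of the approach: match credential-prefix patterns against the text of a public asset bundle. The `sk-proj-` prefix is the one cited above; the other pattern and the exact character-class lengths here are illustrative approximations, not the rules any particular scanner uses:

```python
import re

# Illustrative credential patterns; real scanners maintain far larger rule sets
SECRET_PATTERNS = {
    "openai": re.compile(r"sk-proj-[A-Za-z0-9_-]{20,}"),
    "stripe": re.compile(r"sk_live_[A-Za-z0-9]{24,}"),
}

def scan_bundle(text):
    """Return (label, match) pairs for credential-like strings in a JS bundle."""
    hits = []
    for label, pattern in SECRET_PATTERNS.items():
        hits.extend((label, m) for m in pattern.findall(text))
    return hits

# A hypothetical leaked key baked into client-side JavaScript
bundle = 'const client = new OpenAI({apiKey: "sk-proj-' + "A" * 24 + '"});'
print(scan_bundle(bundle))
```

The asymmetry is the point: anything shipped in a public JavaScript bundle is readable by every visitor, so a key that took seconds to embed takes seconds for an attacker to harvest.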

Platform Design Failures

The platforms themselves have struggled to enforce secure defaults. In April 2025, Computing.co.uk disclosed a broken object-level authorization vulnerability in Lovable’s API. Free-tier users could access any other user’s profile, source code, and database credentials with five API calls. The flaw remained unpatched for 48 days after initial disclosure via HackerOne. During that window, any attacker with basic scripting knowledge could enumerate accounts and extract credentials.
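The missing control in a broken object-level authorization flaw is a single ownership check before returning a resource. A minimal sketch of what that check looks like, with a hypothetical in-memory store standing in for the platform's profile API:

```python
# Hypothetical store standing in for the platform's profile database
profiles = {
    "p1": {"owner": "user-a", "source_code": "...", "db_password": "hunter2"},
}

def get_profile(resource_id, requester_id):
    """Return a profile only if the requester owns it."""
    profile = profiles[resource_id]
    # Broken object-level authorization means this branch is simply absent:
    # any authenticated (or even anonymous) caller gets any resource_id
    if profile["owner"] != requester_id:
        raise PermissionError("requester does not own this resource")
    return profile
```

In the vulnerable API, enumerating `resource_id` values was enough to pull other users' credentials, because nothing tied the requested object back to the requesting account.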

Amichai Shulman, CTO of Nokod Security, told Help Net Security that hard-coded secrets and missing RLS policies are the most common no-code vulnerabilities. “Exposed secrets in applications accessible to users” are difficult to rotate because platform abstraction layers obscure where credentials are stored and how they propagate through deployed instances. When a database key leaks, it often requires rebuilding and redeploying the entire application—a process most citizen developers cannot execute safely.

March 2025
Lovable RLS Vulnerability Reported
CVE-2025-48757 disclosed to platform; 170+ databases with disabled row-level security exposed user data via unauthenticated API queries.

April 2025
Broken Object-Level Authorization Flaw
Free-tier users could access any account’s source code and credentials. Unpatched for 48 days.

May 2025
Public Disclosure of CVE-2025-48757
Superblocks publishes technical analysis showing full scope of data exposure across Lovable-built apps.

January 2026
Moltbook Social Network Breach
1.5 million API tokens and 35,000 emails exposed within three days of launch due to misconfigured Supabase instance with no RLS.

February 2026
VibeEval Audit Findings
Scan of 1,645 Lovable applications reveals 170+ fully exposed databases, 18,697 user records compromised in single app, average security score 52/100.

Why AI Models Fail at Security

The root cause is training data. AI models learn from billions of lines of code scraped from GitHub, Stack Overflow, and public repositories—much of it insecure. Endor Labs found that 40% of AI-generated code contains flaws, with input validation omitted by default. Models hallucinate dependencies that don’t exist or suggest outdated libraries with known CVEs. Architectural drift occurs when AI cobbles together patterns from different contexts without understanding threat models.

The OpenSSF published guidance for secure AI coding practices, noting that GitHub Copilot has suggested MD5 hashing—a cryptographically broken algorithm—for password storage. The guide emphasises that users write more insecure code with AI assistance unless explicitly prompted for security controls. Compliance integration (HIPAA, PCI-DSS) requires developers to understand what the AI is generating and why—a skill citizen developers by definition do not possess.
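The MD5 example is easy to demonstrate: the broken pattern produces a fast, unsalted, deterministic digest, while a standard-library key-derivation function adds a per-password salt and a configurable work factor. A minimal sketch of the contrast in Python (the iteration count follows common PBKDF2 guidance, not any specific platform's setting):

```python
import hashlib
import os

def insecure_hash(password):
    # The pattern the OpenSSF guide warns about: fast, unsalted, broken digest
    return hashlib.md5(password.encode()).hexdigest()

def safer_hash(password, salt=None):
    # Salted, slow KDF from the standard library; same salt + password
    # reproduces the same digest, so verification stays possible
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest
```

Because `insecure_hash` is deterministic and unsalted, identical passwords collide across users and precomputed rainbow tables apply directly; the KDF version defeats both, which is precisely the distinction an AI assistant skips when it is not prompted for security constraints.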

Key Vulnerabilities
  • Disabled row-level security (RLS): databases accessible to any unauthenticated user via public API endpoints
  • Hardcoded API keys: OpenAI, Stripe, and cloud provider credentials embedded in public JavaScript bundles
  • Missing authentication: endpoints that should require login accept requests from any source
  • SQL injection: 54% of AI-generated database queries vulnerable to injection attacks
  • Missing audit logs: 91% of applications lack logging for security events, making breach detection impossible
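The injection figure above describes the classic string-interpolation mistake. A minimal sketch of the vulnerable and parameterized forms, using SQLite for a self-contained illustration (the table and helper names are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

def find_user_unsafe(name):
    # String interpolation: attacker-controlled input becomes part of the SQL
    return conn.execute(
        f"SELECT name FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name):
    # Parameter binding: the driver treats input strictly as data, never as SQL
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # every row matches the injected condition
print(find_user_safe(payload))    # no user literally named "' OR '1'='1"
```

The fix is one keystroke of discipline per query, which is exactly why its absence in more than half of AI-generated queries points to a training-data problem rather than a hard engineering one.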

Governance Gap and Remediation Challenges

The OWASP Citizen Development Top 10 codifies the risks: blind trust in platform security, injection flaws, orphaned applications (deployed then abandoned), and missing logging. A Microsoft Power Apps misconfiguration exposed 38 million records across multiple organisations—a case study in what happens when abstraction layers hide complexity from non-technical users.

Zenity has proposed an enterprise governance framework: cross-platform inventory of all citizen-developed apps, continuous risk assessment, automated remediation playbooks, and identity controls to prevent impersonation and data exfiltration. But implementation requires buy-in from business units that view IT governance as friction—and platforms that expose security controls in ways citizen developers can understand and act on.

Escape, a security firm focused on no-code vulnerabilities, raised $18 million specifically to address this gap. The company has identified over 2,000 high-impact vulnerabilities and hundreds of exposed secrets in live production systems. The market signal is clear: traditional AppSec tools cannot scan AI-generated code effectively, and manual code review does not scale when apps are deployed in minutes.

What to Watch

Regulatory pressure is mounting. The NIS2 Directive in Europe and forthcoming SEC cybersecurity disclosure rules will force enterprises to account for shadow IT and vendor-introduced vulnerabilities. Expect insurance carriers to begin excluding coverage for breaches originating from ungoverned low-code deployments—or pricing policies to reflect the elevated risk.

Platform vendors face a choice: implement secure-by-default configurations that may slow development velocity, or accept liability for systemic data exposure. Lovable has released RLS templates in response to the February audit, but adoption requires developers to understand what RLS is and why it matters. Replit, Bolt, and v0 have not publicly addressed how they prevent credential leakage or enforce authentication.

For enterprises, the immediate action is inventory: identify every application built on no-code platforms, assess whether it handles sensitive data, and audit for exposed credentials and missing access controls. The assumption that “it’s just a prototype” or “it’s not customer-facing” no longer holds. VibeEval’s scanning shows that 89.5% of deployed applications are exploitable on day one—and attackers are already scanning for them.