Meta approved 1,052 illegal financial ads in one UK week, exposing regulatory arbitrage strategy
Reuters investigation reveals platform blocks identical scam ads in Australia but permits them in Britain, where enforcement penalties remain delayed until 2027.
Meta’s UK platform approved 1,052 illegal financial advertisements during a single week in November 2025—unregistered forex brokers, unlicensed investment products, and crypto schemes—despite public commitments to combat fraud, according to Britain’s Financial Conduct Authority.
The findings, reported by Reuters, expose a systemic moderation failure tied not to technical limitations but to regulatory arbitrage: Meta rigorously blocks illegal financial ads in jurisdictions with meaningful penalties while permitting the same content in markets where enforcement remains toothless.
A Reuters test crystallised the dynamic: a deliberately suspicious investment ad promising 10% weekly returns sailed through Meta’s UK verification process but was immediately blocked in Australia, where the platform faces fines of up to A$50 million under mandatory financial-advertiser verification laws. In Britain, Meta faces no financial penalty for hosting fraudulent ads.
Where enforcement exists, moderation works
Meta’s global financial advertising policies now mandate verification in 38 countries, up from 12 in 2024, according to Reuters. Each market requires documentation specific to local regulators—FCA registration for Britain, BaFin for Germany, SEC or FINRA credentials for the United States.
Yet the November audit found 56% of illegal ads came from repeat offenders already flagged to Meta by the FCA. When the regulator repeated its review in December, it found the same small group of bad actors still advertising, with no material improvement in Meta’s enforcement approach despite regular engagement.
Internal Meta documents reviewed by Reuters reveal the company bans advertisers only when 95% certain of fraud. For suspicious accounts below that threshold, Meta charges higher advertising rates rather than blocking them—extracting premium revenue from probable scams while maintaining plausible deniability.
“This is a financial problem. If you spend enough money, you can stop the scammers, and we need to change the economics.”
— Martin Lewis, consumer rights campaigner
Revenue constraints cap fraud enforcement
Meta’s vetting team operates under a hard constraint: fraud enforcement measures cannot exceed 0.15% revenue impact, according to internal documents reported by Press Gazette. The guardrail effectively limits how aggressively the company can police fraudulent advertisers when doing so would materially reduce income.
The financial stakes are substantial. Meta projected that ads for scams and banned goods would generate approximately $16 billion in 2024—roughly 10% of total revenue. Digital rights group Reset Tech estimates Meta could host 29,000 scam ads referencing UK banks annually, generating 53.6 million exposures to British users.
Meta increased the revenue share from verified advertisers to 70% in 2025, up from 55% at the end of 2024. The shift demonstrates the company’s capacity to tighten verification when commercial incentives align: verified advertisers represent a growing, premium revenue stream the platform is keen to retain.
Britain’s Online Safety Act came into force in March 2025, but provisions enabling authorities to penalise platforms for hosting paid scam advertisements have been delayed until at least 2027. The FCA can act against unauthorised advertisers but cannot directly penalise Meta. Oversight falls to Ofcom, which currently lacks authority to enforce action against paid fraudulent ads. The gap creates a penalty-free environment where Meta faces no consequence for systematic moderation failure.
Pattern extends beyond financial fraud
The UK Gambling Commission reported in January 2026 that Meta is “turning a blind eye” to illegal online casinos advertising on its platforms, per Reuters. Tim Miller, the commission’s executive director, stated the platform’s inaction “could leave you with the impression they are quite happy to turn a blind eye and continue taking money from criminals.”
Meta’s own safety staff estimated in May 2025 that company platforms were involved in one-third of all successful scams in the United States. Digital bank Revolut identified Meta’s platforms as “the single biggest source of authorised fraud” reported to the institution.
- Meta blocks identical scam ads in Australia (A$50m penalty exposure) while approving them in the UK (no penalty)
- 95% fraud certainty threshold means suspicious-but-uncertain advertisers pay higher rates instead of facing bans
- Internal revenue guardrail caps fraud enforcement at 0.15% revenue impact, limiting aggressive action
- UK Online Safety Act fraud provisions delayed until 2027, creating penalty-free window for platform inaction
What to watch
Britain’s fraud provisions under the Online Safety Act are scheduled to take effect in 2027, introducing potential fines up to 10% of global revenue. Whether Meta’s moderation practices shift before enforcement teeth materialise will indicate whether voluntary compliance commitments hold weight absent financial consequences.
The UK’s Financial Services and Markets Act crypto regulations, published in February 2026, commence full enforcement in October 2027 following a transitional application period. How Meta adjusts verification requirements for crypto advertisers during this window will test whether the platform front-runs regulation or waits for penalties to force action.
Fraud Minister David Hanson stated he expects Meta to “go further and faster in standing up to this threat.” The company’s response offers no concrete enforcement changes: Ryan Daniels, a Meta spokesperson, said “any suggestion that we ignore FCA reports misrepresents our ongoing efforts to protect people.” The distance between that rhetoric and the 1,052 illegal ads approved in a single week will narrow only when regulatory penalties make inaction more expensive than compliance.