AI Geopolitics · 7 min read

Meta Ray-Ban Glasses Under Fire as Kenyan Contractors Review Intimate User Footage

Swedish investigation reveals data workers in Nairobi have viewed videos of users in bathrooms, bedrooms, and other private moments—exposing critical flaws in how wearable AI handles data through global supply chains.

Workers at a Kenya-based subcontractor for Meta have been reviewing intimate footage captured by Ray-Ban smart glasses users—including videos of people using toilets, undressing, and engaging in sexual activity—according to a joint investigation by Swedish newspapers Svenska Dagbladet and Göteborgs-Posten published February 27.

The investigation found that video data from Meta’s AI glasses ends up with data annotators at Sama, a data services company contracted by Meta, in Nairobi, Kenya, where workers train Meta’s AI systems by labeling, describing, and categorizing objects in images and videos. The Swedish journalists spoke to more than 30 employees at Sama, who described viewing highly sensitive material that users likely never intended to share.

Multiple workers told the Swedish journalists about video clips showing people walking out of bathrooms naked, changing clothes, or having sex while the glasses were running. Other recordings show accidentally filmed bank cards or people watching pornography while wearing the glasses. One annotator told the newspapers they saw a man place his glasses on a bedside table, only for his wife to enter moments later and change clothes, completely unaware she was being filmed.

Market Growth
  • Units sold 2025: 7 million
  • Units sold 2023-2024: 2 million
  • Growth rate: +250%

How the Data Pipeline Works

Meta’s Ray-Ban Display glasses come with an AI assistant feature—when a user activates it to ask questions or get information about what they’re looking at, the glasses capture that footage and send it to Meta’s servers for processing. When Swedish reporters tested the glasses themselves and tried to avoid sending data to Meta by disabling the internet connection, the AI functions stopped working entirely—the glasses require a live connection to Meta’s servers to operate.
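As a rough illustration of the pipeline the reporters describe, the flow from glasses to annotator can be sketched in a few steps: capture on activation, mandatory upload to the vendor's servers, automated anonymization, then human labeling. Everything below is a hypothetical sketch for clarity; none of these names or steps reflect Meta's actual implementation.

```python
from dataclasses import dataclass


@dataclass
class Clip:
    """A captured video clip. Fields are illustrative only."""
    user_id: str
    frames: list
    faces_blurred: bool = False


def capture(user_id: str, frames: list) -> Clip:
    # Activating the AI assistant records footage on-device.
    return Clip(user_id=user_id, frames=frames)


def upload(clip: Clip, online: bool) -> Clip:
    # The glasses require a live server connection: no connection, no AI.
    if not online:
        raise ConnectionError("AI assistant unavailable offline")
    clip.faces_blurred = True  # automated anonymization step (reportedly unreliable)
    return clip


def annotate(clip: Clip) -> dict:
    # Contracted workers label and describe objects in the footage for training.
    return {"user": clip.user_id, "labels": ["example_label"], "blurred": clip.faces_blurred}
```

The point of the sketch is the coupling: because the AI features only work with an upload, every assistant interaction necessarily puts footage into the annotation pipeline.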

Former Meta employees in the US confirmed to the Swedish journalists that sensitive data is not supposed to be used for AI training and that faces in annotation data are automatically blurred. But the annotators in Kenya say the anonymization does not always hold: in poor lighting, the automated blurring breaks down, leaving identifiable faces visible in the footage.
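The reported failure mode is typical of confidence-thresholded face detectors: as scene brightness drops, the detector's score falls below the cutoff that triggers blurring, and the face passes through unredacted. The sketch below is purely illustrative; the threshold and scoring function are invented, not Meta's pipeline.

```python
BLUR_THRESHOLD = 0.5  # hypothetical detector-confidence cutoff


def detector_confidence(brightness: float) -> float:
    """Toy model: detection confidence scales with scene brightness (0..1)."""
    return max(0.0, min(1.0, brightness * 1.2 - 0.1))


def face_gets_blurred(brightness: float) -> bool:
    """Return True if a face in a frame of this brightness would be blurred."""
    return detector_confidence(brightness) >= BLUR_THRESHOLD
```

Under this toy model, well-lit footage clears the threshold and gets blurred, while dim footage (a bedroom at night, a bathroom) silently falls below it, which matches the pattern the workers describe.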

The workers have signed extensive non-disclosure agreements, cameras are everywhere in the offices, and personal phones or recording devices are banned—anyone who asks questions risks losing their job. One contractor told the Swedish outlets: “You understand that it is someone’s private life you are looking at, but at the same time you are just expected to carry out the work. You are not supposed to question it. If you start asking questions, you are gone.”

“We see everything—from living rooms to naked bodies. Meta has that type of content in its databases.”

— Sama data annotator, Nairobi

A Troubled Contractor With History

Sama, the contracted data services company, has a troubled track record from previous work for OpenAI and Meta, where Kenyan workers had to label disturbing content for roughly $2 per hour. After reports exposed worker trauma and alleged union-busting at Sama’s Nairobi office, the company ended its content moderation work for Meta in 2023 and shifted its focus to computer vision data annotation—the exact type of work now tied to the AI glasses.

In 2023, 43 moderators at Meta’s Nairobi hub filed a lawsuit against the company and its local partner Sama for unfair termination. The group of claimants has since grown to 184, who say they were fired in retaliation for complaints about working conditions and attempts to form a union. In September 2024, the Court of Appeal ruled Meta can be sued in Kenyan courts over the dismissal claims.

The workers at Sama earn roughly $2 per hour, operate under strict NDAs, and work in offices under constant camera surveillance, according to The Decoder. Industry observers say the conditions reflect a broader pattern: multinational corporations racing to the bottom for cheap labour in the Global South. Nairobi has become an epicentre of the AI outsourcing race, driven by high unemployment, an increasingly educated youth population, and the capital’s high rate of English speakers.

Regulatory Scrutiny Intensifies

Kenya currently has no EU adequacy decision, meaning Meta needs strong contractual safeguards in place for transfers of European users’ data; the Irish Data Protection Commission has been contacted with questions about whether Meta is compliant. Lawmakers in the European Parliament are pressing the European Commission for clarity after the reports, with concerns intensifying once Swedish outlets reported that the Ray-Ban AI glasses captured and uploaded sensitive footage in apparent violation of the strict consent requirements of the EU’s General Data Protection Regulation.

Data protection lawyer Kleanthi Sardeli, from the non-profit None Of Your Business (NOYB), said there is a “clear transparency problem” regarding the AI assistant’s recording triggers, arguing that GDPR rules would require explicit consent if such data is used for training. Petter Flink, a security specialist at the Swedish Authority for Privacy Protection, added that users have “really no idea what is happening behind the scenes,” arguing that the data Meta collects is more valuable to the company than the profit from selling the glasses themselves.

Context

According to Help Net Security, Meta made AI camera and voice use the default setting in an April 2025 policy update, effectively burying the previously clear opt-out for voice recording storage. Voice recordings triggered by the wake word are now stored in the cloud by default and can be kept for up to a year to help improve AI systems.

Meta declined to comment directly to the Swedish newspapers beyond referring them to its terms of service. After two months of interview requests, a spokesperson said: “When live AI is being used, we process that media according to the Meta AI Terms of Service and Privacy Policy.” The company’s AI terms of service state that “in some cases, Meta will review your interactions with AIs, including the content of your conversations with or messages to AIs, and this review can be automated or manual (human).”

Supply Chain as Liability Shield

The revelations expose how technology companies use complex global supply chains to distance themselves from controversial labour practices. Meta has outsourced content moderation to firms like Sama in Kenya, where moderators have reported psychological trauma, poverty wages, and the suppression of union organising—conditions that would not be tolerated under US or European labour laws.

This setup exploits legal loopholes across borders, where one country’s lax enforcement shields tech companies from accountability by placing responsibility on third-party vendors operating in regulatory grey zones. Meta has consistently argued it is not the employer of workers at Sama, attempting to avoid direct liability for working conditions and data handling practices at the subcontractor.

A TIME investigation found in 2023 that OpenAI paid microworkers in Kenya between $1.32 and $2 an hour to review toxic content, labelling data and removing harmful, violent and graphic content. The practice extends across the AI industry: firms like Google, Amazon, and Microsoft, through vendors such as Appen, have also been linked to exploitative data-labelling jobs paying as little as $1.77 per task.

Key Takeaways
  • Sama workers earning $2/hour review intimate footage including bathroom use, undressing, and sexual activity captured by Meta Ray-Ban glasses
  • Automated face-blurring fails in poor lighting conditions, exposing identifiable faces in training data
  • Workers operate under strict NDAs with office surveillance and risk termination for raising concerns
  • Kenya lacks EU data adequacy status, raising GDPR compliance questions for European user data transfers
  • Irish Data Protection Commission contacted about potential violations as EU lawmakers demand answers

What to Watch

The Irish Data Protection Commission’s response will determine whether Meta faces significant GDPR penalties for transferring European user data to Kenya without adequate safeguards. In May 2025, privacy advocacy group NOYB sent Meta a cease-and-desist letter alleging unlawful use of EU personal data for AI training, threatening potential collective redress actions under the EU Collective Redress Directive.

Broader implications extend beyond one company. Internal Meta documents suggest the company views the current moment as opportune for launching facial recognition features on these same devices—capabilities previously avoided on ethical grounds. If regulators fail to act decisively, wearable AI devices with even more invasive capabilities could proliferate before meaningful safeguards exist.

The Kenya court cases against Meta and Sama—now involving 184 workers—proceed toward trial following September 2024 appeals court rulings. Those outcomes could establish precedents forcing technology companies to accept direct liability for subcontractor practices, fundamentally disrupting the global AI training supply chain that depends on legal distance between Silicon Valley and the workers who make their systems function.