A new class action lawsuit, Doe v. Perplexity, has been filed against generative AI company Perplexity, along with Meta and Google, alleging the unauthorized disclosure of users’ chatbot conversations to third parties for targeted advertising. The suit brings legal scrutiny to an AI industry practice aimed at monetizing highly personal data gathered through chatbot interactions.
Allegations of Data Sharing Without Consent
The plaintiff, identified as John Doe, claims that Perplexity shared transcripts of his chats with its AI chatbot, including sensitive financial and legal information, with Meta and Google via tracking technologies such as Meta Pixel and Google DoubleClick. According to the complaint, this occurred even when users used Perplexity’s “Incognito Mode,” which promises that conversations will not be saved but allegedly fails to disclose that data is shared with third-party advertisers.
The lawsuit asserts that users were not warned about these disclosures and could not easily access any terms or privacy policies describing such practices. It also highlights the deeply personal nature of information shared with chatbots—including health, relationship, and financial details—which could increase the value of the resulting advertising data.
Legal Claims and Privacy Concerns
Filed in California, the lawsuit invokes multiple legal frameworks, including the California Invasion of Privacy Act (CIPA), the right to privacy under the California Constitution, and federal wiretapping law. These claims turn on whether Perplexity obtained informed user consent before sharing communications with adtech companies, drawing parallels between digital tracking tools and traditional recording devices.
The complaint further accuses the defendants of violating California’s Unfair Competition Law by engaging in unfair business practices, such as misrepresenting the nature and privacy of their services. Additional claims include deceit, negligence, and unjust enrichment, with particular emphasis on the allegedly deceptive use of Incognito Mode as cover for data sharing.
AI Monetization and Industry Context
Generative AI companies face high development and operational costs, and most AI product users do not pay fees sufficient to cover these expenses. Against this financial backdrop, many companies are exploring advertising based on personal data extracted from AI interactions as a potential revenue source.
Meta has publicly indicated it is using chatbot interaction data for targeted advertising, a practice critics warn could commodify sensitive user information. This lawsuit exemplifies growing legal challenges to AI firms’ use of intimate data without transparent user consent.
Why it matters
This case could set significant legal precedents regarding user privacy and consent in the AI industry’s evolving business models. If courts find that sharing chatbot transcripts with ad platforms violates privacy laws, AI companies may face costly damages and stricter disclosure obligations. The decision will influence how companies balance monetization efforts with consumers’ expectations of confidentiality and data protection.
