Keypoint: In this post: (1) Standing may depend on how specific the plaintiffs’ complaint is; (2) the Second Circuit adopts the Third and Ninth Circuits’ narrower interpretation of PII under the VPPA; (3) promises in privacy policies not to share user data can defeat consent defenses; (4) class action waivers in privacy agreements may face enforceability challenges in California; (5) courts closely scrutinize technical specifics in claims involving PHI.
This is our twenty-fourth installment in our data privacy litigation report covering decisions from the previous month.
There are many courts currently handling data privacy cases across the nation. Although illustrative, this update is not intended to be exhaustive. If there is another area of data privacy litigation about which you would like to know more, please reach out. The contents provided below are time-sensitive and subject to change.
Finally, for an overview of current U.S. data privacy litigation trends and issues, click here.
Five Privacy Litigation Takeaways from May 2025
- Standing may depend on the level of specificity with which the plaintiff pleads.
Two recent federal court decisions—one from the Northern District of California and one from the Central District of California—demonstrate how the level of factual detail in a privacy complaint can determine whether a case survives a motion to dismiss for lack of standing.
In a May 13 decision from the Northern District, the plaintiffs alleged that Politico, a news website, collected information such as browser and device data, IP addresses, and other identifying information for advertising and analytics purposes. The plaintiffs claimed three types of concrete injury: (1) invasion of privacy, (2) economic harm, and (3) heightened risk of future harm.
The court systematically rejected each theory. First, it found that disclosure of an IP address does not give rise to a privacy injury, citing a line of cases holding there is “no legally protected privacy interest in IP addresses.” Second, the court found the complaint failed to allege why Politico’s use of the plaintiffs’ IP addresses would be unjust, analogizing to retailers who profit from home addresses without being unjustly enriched. Third, the court found the complaint did not specifically allege any facts showing a risk of future harm. In short, the lack of detail in the complaint about what was collected, how it was used, and why it mattered doomed the plaintiffs’ standing.
A contrasting result came just two days later from a Central District of California court. The court had previously dismissed the plaintiff’s claim for lack of standing, noting the original complaint alleged only that she was “a person who visited Defendant’s Website” and failed to provide basic facts such as when or how many times she visited, what information she provided or believed was collected, or any reason to believe she was de-anonymized.
In her amended complaint, however, the plaintiff provided the missing specifics: she alleged she visited the website on a particular date and time when the defendant operated TikTok software, which automatically collected device and browser information, geographic information, referral tracking, and URL tracking from every visitor. The court found these allegations sufficient to state a concrete injury and denied the renewed motion to dismiss for lack of standing.
These decisions reinforce that, in privacy litigation, the difference between surviving and failing at the pleading stage often turns on how specifically the plaintiff pleads the facts. Courts continue to scrutinize whether plaintiffs allege not just that information was collected, but what was collected, when, how, and why it matters. Plaintiffs and defendants alike should pay close attention to the level of detail in complaints and motions to dismiss, as the line between actionable and non-actionable privacy claims remains highly fact-dependent.
- The Second Circuit adopts the Third and Ninth Circuits’ narrow interpretation of “personally identifiable information” under the VPPA.

Seven months after expanding the VPPA’s application to a broad definition of “consumers” in Salazar v. National Basketball Association, the Second Circuit issued a decision in May that adopts the Third and Ninth Circuits’ narrow definition of “personally identifiable information” subject to the Act.
In the case, the plaintiff alleged that the defendant, a video streaming service operator, violated the VPPA by utilizing the Meta Pixel to send plaintiff’s “personally identifiable information” to Facebook each time she streamed a video. According to the complaint, this information included (1) lines of code that, if correctly interpreted, would identify the title and URL of the video plaintiff watched; and (2) the plaintiff’s Facebook ID, a unique sequence of numbers linked to her Facebook profile. Under the VPPA, personally identifiable information “includes information which identifies a person as having requested or obtained specific video materials or services from a video tape service provider.”
In September 2023, a district court granted the defendant’s Rule 12(b)(6) motion to dismiss. The district court found that the information the defendant allegedly shared did not constitute “personally identifiable information” under the VPPA, and thus the complaint failed to state a claim. The plaintiff appealed, setting up the Second Circuit to interpret the scope of “personally identifiable information” for purposes of the VPPA.
A three-judge panel in the Second Circuit affirmed the dismissal, agreeing with the lower court that the plaintiff had failed to plausibly allege that the information shared with Facebook contained “personally identifiable information” under the Act. To reach its conclusion, the panel first identified two different approaches other circuits have taken to define “personally identifiable information”: (1) applying a “reasonable foreseeability standard” that looks to whether the disclosed information is “reasonably and foreseeably likely to reveal” which videos a plaintiff has obtained (adopted by the First Circuit); and (2) applying an “ordinary person standard” that looks to whether the disclosed information “would readily permit an ordinary person to identify a [plaintiff’s] video-watching behavior” (adopted by the Third and Ninth Circuits). The distinction between the two approaches, the Second Circuit explained, is whether the VPPA should be read to apply to information that could be used to identify an individual’s video-watching habits by either a technologically sophisticated third party or an ordinary person.
After analyzing both approaches, the Second Circuit adopted the ordinary person standard of the Third and Ninth Circuits. The panel found that although “personally identifiable information” includes information that can be used to identify a person (not just information that, by itself, identifies a person), Congress did not intend for VPPA liability to depend on the level of sophistication of third parties with whom information is shared, circumstances outside a video tape service provider’s control. According to the panel, “[t]he ordinary person standard is a more suitable framework to determine what constitutes personally identifiable information because it ‘better informs video service providers of their obligations under the VPPA,’ while not impermissibly broadening its scope to include the disclosure of technological data to sophisticated third parties.” The panel explained that the VPPA’s focus is on what information is shared by the disclosing party (i.e., video tape service provider), not what a third party (e.g., Facebook) is capable of learning from that information. The decision notes that the law’s passage was intended to create liability where a video clerk leaked an individual’s video rental history, not where a “third party is able to ‘assemble otherwise anonymous pieces of data to unmask the identity of individual users.’” Finally, the panel also found that the VPPA’s history supported the ordinary person standard because the law’s definition of “personally identifiable information” had never been amended to incorporate concepts of internet privacy from more recent laws.
Applying the ordinary person standard to the case, the Second Circuit concluded that the plaintiff’s complaint did not plausibly allege that the information disclosed by defendants—lines of code that included a video title and the plaintiff’s Facebook ID—would permit an ordinary recipient, with little or no extra effort, to identify the plaintiff’s video-watching habits. The panel analyzed a screenshot in the plaintiff’s complaint that claimed to show what information the defendant sent via the Meta Pixel. The panel observed that the screenshot consisted of 29 lines of computer code that interspersed a video title among many characters, numbers, and letters that an ordinary person could not plausibly decipher. Similarly, the panel concluded that the plaintiff’s Facebook ID, also embedded in many lines of code, could not plausibly be identified by an ordinary person. The Second Circuit therefore affirmed the dismissal of the case.
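For readers unfamiliar with how tracking pixels transmit data, the following sketch illustrates the court's point. All parameter names and values here are invented for illustration; this is not the actual code or payload at issue in the case. A human-readable video title sits inside an encoded request string alongside opaque identifiers, which is why an ordinary person, with little or no extra effort, could not decipher it:

```python
from urllib.parse import urlencode

# Hypothetical illustration only: field names and values are invented,
# not drawn from the court record or from Meta's actual Pixel implementation.
payload = {
    "id": "1234567890",                                        # numeric pixel-style identifier
    "ev": "PageView",                                          # event name
    "dl": "https://example.com/watch?v=8f3a2c&t=42s&ref=a9x",  # page URL with tracking tokens
    "cd[content_name]": "Some Video Title",                    # the one human-readable field
    "sw": "1920", "sh": "1080", "v": "2.9.107",                # screen size, script version
}
encoded = urlencode(payload)

# The title is interspersed among percent-encoded characters, numbers, and
# letters -- the kind of string the Second Circuit said an ordinary person
# could not readily decipher.
print(encoded)
print("Some+Video+Title" in encoded)  # → True
```

The point of the sketch is that the identifying signal is technically present but buried; only a recipient who knows the parameter scheme (a “technologically sophisticated third party,” in the panel’s terms) can extract it.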
The Second Circuit’s adoption of the “ordinary person standard” provides welcome clarity and potential defenses to businesses facing VPPA claims in the circuit, especially those utilizing the Meta Pixel. The decision also emphasizes that to survive the pleading stage, complaints must assert specific facts that plausibly allege an ordinary person could discern the plaintiff’s personal video-watching history from the information being disclosed.
- Statements in privacy policies that companies will not share users’ information can undermine arguments that users consented via the privacy policy disclosure.
A May 19 decision from the Northern District of California highlights how companies’ own privacy policy language can undercut their consent defenses in privacy litigation. In this putative class action, defendants argued users had impliedly consented to the sharing of their personal information simply by continuing to use the application after being exposed to materials that disclosed the challenged data practices.
The court rejected this broad approach to implied consent, reiterating that under Ninth Circuit precedent, consent must be “actual”—meaning the disclosures must “explicitly notify” users of the specific conduct at issue. The burden was thus on the defendants to provide evidence that users were on notice and actually consented to the particular data sharing practices, not merely that they could have been on inquiry notice.
Significantly, the court scrutinized the privacy policies and related disclosures offered as evidence of user consent. The defendants cited portions of the privacy policies that stated the company “may share information” with third parties such as Facebook and Google. However, the court noted that these excerpts were presented out of context, omitting nearby language in the same policies that assured users the defendant would not “sell or rent any of your personal information to third parties” and that disclosures would be made only “to help provide, understand and improve our application.” In some versions, the policies explicitly stated that certain sensitive health information would not be shared at all, and in one instance, the defendant represented to users that it would “never share … any data related to your health with any third parties.”
The court found that these affirmative promises not to share or sell user information directly contradicted defendants’ argument that users had consented to broad data sharing. Where a privacy policy assures users that information will not be shared, that statement can negate any inference that users consented to such sharing, even if other parts of the policy mention third-party disclosures.
Companies defending privacy litigation should be mindful that assurances in their own privacy policies—such as promises not to share or sell user data—may undermine their ability to argue that users consented to the very conduct at issue. Courts will look closely at the full context of privacy policy language, and selective quotations or out-of-context disclosures are unlikely to persuade.
- Companies may not be able to rely on class action waivers to avoid privacy class litigation in California.
The enforceability of class action waivers in privacy litigation remains unsettled, particularly under California law after a May 19 decision from the Northern District of California.
The defendant’s terms of service stated all claims would be litigated individually and that the parties would not seek class treatment unless previously agreed in writing. The plaintiffs challenged the waiver as unconscionable, and the court agreed, applying California’s two-pronged test for unconscionability: (1) procedural unconscionability, which examines oppression or surprise due to unequal bargaining power, and (2) substantive unconscionability, which looks at whether a contract term is overly harsh or one-sided.
The court found a high degree of procedural unconscionability. The waiver was buried as a single sentence at the end of the terms of use, under a generic “Miscellaneous” heading, and was neither visually highlighted nor set apart from other provisions—unlike other liability-limiting terms that were presented in all caps or with clear headings. The agreement was offered on a “take it or leave it” basis, which the court viewed as a hallmark of a contract of adhesion.
Given this procedural unfairness, the court required less evidence of substantive unconscionability and found that bar was also met. The waiver effectively denied a procedural benefit only to the defendant’s customers, since app developers like the defendant rarely bring class actions against their customers. As a result, the court found the provision was “manifestly and shockingly one-sided.” The court noted that such waivers remove an incentive for companies to avoid conduct that might lead to class action litigation, compounding the one-sided nature of the term.
Practical Implication: While class action waivers are not categorically unenforceable in California, companies seeking to rely on such provisions in privacy litigation face substantial headwinds—particularly where the waiver is presented in a contract of adhesion, is not clearly disclosed, and operates in a one-sided manner. Companies should anticipate close judicial scrutiny of the procedural and substantive fairness of these waivers.
- To determine whether a website shared Personal Health Information, courts look at the specific information alleged to have been shared.
A May 27 decision from the Northern District of Texas underscores that courts require precise allegations about what information was actually transmitted before finding that a website unlawfully shared Personal Health Information (“PHI”) with third parties.
In this putative class action, the plaintiffs alleged that the use of the Meta Pixel by an eyewear store with both physical locations and an online marketplace resulted in the unlawful sharing of PHI with Meta, in violation of federal and state wiretap statutes, as well as under common law and contract theories. The court’s analysis focused on whether the plaintiffs had plausibly alleged PHI was both provided to the defendant and then shared with Meta.
For several plaintiffs, the court found the allegations insufficient. Those plaintiffs described their subjective intent to seek prescription eyewear or schedule eye exams, but did not allege that they actually entered any health-related information on the website. The court emphasized that neither a visitor’s motives for visiting a website nor the mere identification of a nearby store location constitutes the sharing of PHI. The court noted that any location data shared with Meta was already available through the plaintiffs’ Facebook accounts, and that browsing retail offerings—including prescription and non-prescription eyewear—did not, without more, amount to a disclosure of PHI.
For two plaintiffs who did purchase prescription eyewear and entered prescription information into the site, the court took a closer look at the technical details. Plaintiffs argued that adding prescription eyewear to a digital cart triggered a Meta Pixel “AddToCart” event, which shared data with Meta. The court examined the actual data transmitted, however, and found it included only product names, IDs, prices, frame materials, and fulfillment details—none of which revealed prescription information or whether the lenses were prescription or non-prescription. The court found that conclusory allegations that PHI was “intercepted” were insufficient where the specific data fields transmitted did not include any actual health information.
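To make the court’s field-by-field analysis concrete, here is a hypothetical sketch of what such an “AddToCart” payload might contain. Every field name and value is invented for illustration and is not the actual data from the case; the sketch simply shows how a payload limited to product data can be checked against health-related terms and come up empty:

```python
# Hypothetical sketch of the kind of "AddToCart" event data the court examined.
# All field names and values are invented for illustration, not taken from
# the case record or from Meta's actual event schema.
add_to_cart_event = {
    "event": "AddToCart",
    "content_name": "Classic Round Frame",
    "content_ids": ["SKU-10482"],
    "value": 129.00,
    "currency": "USD",
    "frame_material": "acetate",
    "fulfillment": "ship-to-home",
}

# None of the transmitted fields indicates whether the lenses are prescription,
# let alone any diagnosis or prescription values -- mirroring the court's
# finding that the data actually shared contained no health information.
health_terms = ("prescription", "rx", "diagnosis", "sphere", "cylinder", "axis")
leaks_phi = any(
    term in str(value).lower()
    for value in add_to_cart_event.values()
    for term in health_terms
)
print(leaks_phi)  # → False
```

The design point tracks the court’s reasoning: liability turns on the contents of the fields actually transmitted, not on what the shopper intended to buy.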
The court also rejected the argument that it was enough to allege actions that “would have resulted” in PHI being sent to Meta, absent factual allegations that the website’s Pixel events actually captured and transmitted such information. By contrast, the court noted that in other cases where PHI sharing claims survived, the data captured and transmitted included information that itself disclosed a health condition or treatment.
Practical Implication: This decision highlights that, in privacy litigation involving alleged sharing of health information, courts will closely scrutinize the technical details of what information was actually transmitted to third parties. Plaintiffs must plead with specificity not only that they provided PHI to the website, but also that the website’s data sharing mechanisms actually transmitted that PHI. Generalized or speculative allegations are unlikely to survive a motion to dismiss. For defendants, this decision provides a roadmap to challenge privacy claims by focusing on the actual data fields shared, rather than plaintiffs’ intentions or assumptions about what might have been disclosed.