The rapid adoption of AI notetaking and transcription tools has transformed how organizations (and individuals) capture, analyze, and share meetings and other content. But as these technologies expand, so too do the legal and compliance risks. A recent putative class action lawsuit filed in federal court in California against Otter.ai, a leading provider of AI transcription services, highlights the potential pitfalls for organizations relying on these tools.
The Complaint Against Otter.ai
Filed in August 2025, Brewer v. Otter.ai alleges that Otter’s “Otter Notetaker” and “OtterPilot” services recorded, accessed, and used the contents of private conversations without obtaining proper consent. According to the complaint, the AI-powered notetaker:
- Joins Zoom, Google Meet, and Microsoft Teams meetings as a participant and transmits conversations to Otter in real time for transcription.
- Records meeting participants’ conversations even if they are not Otter accountholders. The lead plaintiff in this case is not an Otter accountholder.
- Uses those recordings to train Otter’s automatic speech recognition (ASR) and machine learning models.
- Provides little or no notice to non-accountholders and shifts the burden of obtaining permissions onto its accountholders.
The lawsuit asserts a wide range of claims, including violations of:
- Federal law: the Electronic Communications Privacy Act (ECPA) and the Computer Fraud and Abuse Act (CFAA).
- California law: the California Invasion of Privacy Act (CIPA), the Comprehensive Computer Data Access and Fraud Act, common law intrusion upon seclusion and conversion, and the Unfair Competition Law (UCL).
The plaintiffs allege that Otter effectively acted as an unauthorized third-party eavesdropper, intercepting communications and repurposing them for product training without consent.
Key Legal Takeaways
The Otter.ai complaint underscores several important legal themes that organizations using AI notetakers should carefully consider:
- Consent Gaps Are a Liability
Under California wiretap laws, recording or intercepting communications typically requires the consent of all parties. The complaint emphasizes that Otter sought permission only from meeting hosts (and sometimes not even them), but not from all participants. This “single-consent” model is risky in states like California that require all-party consent.
- Secondary Use of Data Raises Privacy Risks
Beyond transcription, Otter allegedly used recorded conversations to train its AI models. Even if data is “de-identified,” the complaint notes that de-identification is imperfect, particularly with voice data and conversational context. Organizations allowing vendors to reuse data for training AI models should scrutinize whether proper disclosures and consents exist.
- Vendor Contracts and Shifting Responsibility
Otter’s privacy policy placed responsibility on accountholders to obtain permissions from others before capturing or sharing data. Courts may find this approach insufficient, especially when the vendor is the party processing and monetizing the data.
- Unfair Business Practices
Plaintiffs also claim that Otter’s conduct violated California’s Unfair Competition Law by depriving individuals of control over their data while enriching the company. This theory—loss of data value as a consumer injury—could gain traction in privacy-related class actions.
Broader Risks for Organizations Using AI Notetakers
Even if an organization is not the technology provider, using AI notetaking tools in the workplace creates real risk. Companies should consider:
- Employee and Third-Party Notice: Are employees, clients, or customers clearly informed when AI notetakers are in use? Does the notice satisfy federal and state recording laws?
- Consent Management: Is the organization obtaining and documenting consent where required? What about meetings that cross jurisdictions with differing consent rules?
- Confidentiality and Privilege: If a meeting involves sensitive legal, HR, or business discussions, does the use of third-party AI notetakers risk waiving attorney-client privilege or exposing trade secrets?
- Data Use, Security, and Retention: How does the vendor store, use, and share transcripts? Who has access to them? Do they contain personal information that must be safeguarded? Can they be deleted upon request? Are they used for training or product development?
- Comparative Practices: Some vendors offer features that allow any participant to pause or prevent recording—an important safeguard. Organizations should evaluate whether their chosen tool provides these protections.
Practical Steps for Risk Mitigation
Organizations should take proactive measures when adopting AI notetakers:
- Conduct a Legal Review: Assess whether recording practices align with ECPA, state wiretap laws, and international requirements (such as GDPR).
- Update Policies: Ensure meeting and privacy policies address the use of AI notetakers, including requirements for notice and consent.
- Review Vendor Agreements: Negotiate contractual limits on data use, retention, and training.
- Consider Potential Use Cases: The nature and content of the discussion captured by the AI notetaker can trigger a range of other legal, compliance, and contractual obligations. Additionally, consider the organization’s position when third parties, such as customers or job applicants, use AI notetakers during a meeting.
- Enable Safeguards: Where possible, configure tools to require pre-meeting notices and allow participants to decline recording.
- Train Employees: Make sure staff understand when and how to use AI transcription tools appropriately, especially in sensitive contexts.
Conclusion
The Brewer v. Otter.ai complaint is a reminder that AI notetaking tools carry both benefits and significant risks. Organizations leveraging these technologies must balance efficiency with compliance—ensuring that recording, consent, and data-use practices align with evolving privacy and other laws.