Welcome to this month’s issue of The BR Privacy & Security Download, the digital newsletter of Blank Rome’s Privacy, Security, & Data Protection practice.
State & Local Laws & Regulation
CPPA Finalizes Cybersecurity Audit, ADMT, and Privacy Risk Assessment Regulations: The California Privacy Protection Agency (“CPPA”) finalized a set of regulations under the California Consumer Privacy Act (“CCPA”) that address cybersecurity audits, risk assessments, and automated decision-making technology (“ADMT”). The regulations set forth requirements when businesses use ADMT to make a “significant decision” about a consumer. A significant decision is one that results in the provision or denial of financial or lending services, housing, education enrollment or opportunities, employment or independent contracting opportunities, compensation, or healthcare services. Businesses that use ADMT to make a significant decision must provide consumers with a pre-use notice at or before the point of collection that provides a plain language explanation of the specific purpose for which the business plans to use the ADMT. The rules also provide consumers with certain opt-out and access rights with respect to ADMT. The regulations also require annual cybersecurity audits for businesses whose processing of personal information presents a “significant risk” to consumers’ privacy or security. The audit must assess a comprehensive list of cybersecurity controls, including multifactor authentication, encryption, access controls, data inventory, secure configuration, patch management, vulnerability scanning, logging, and training. Compliance with the cybersecurity audit portion of the regulations is phased in based on business size, with the earliest audits due by April 1, 2028, for the largest businesses. For more details on the new regulations, see our client alert here.
Minnesota’s Comprehensive Privacy Law Takes Effect: Minnesota’s Consumer Data Privacy Act (“MCDPA”) took effect on July 31, 2025. The MCDPA adds a number of significant obligations not required under previous U.S. state privacy laws, including requirements to maintain data inventories and to designate a chief privacy officer or other individual responsible for consumer data protection. The MCDPA also provides new consumer rights, allowing consumers to challenge profiling decisions and obtain additional information about how their personal data was used in profiling.
California Civil Rights Council Gets Final Approval for AI Employment Discrimination Regulations: The California Civil Rights Council announced that it had secured final approval for regulations to protect against potential employment discrimination resulting from the use of artificial intelligence (“AI”), algorithms, and other automated-decision systems. The new regulations are intended to provide increased clarity on how existing anti-discrimination laws apply to the use of AI in employment decisions. The regulations require employers, employment agencies, labor organizations, and apprenticeship training programs to maintain automated-decision data for a minimum of four years and affirm that automated-decision system assessments, including tests and questions that elicit information about a disability, may constitute an unlawful medical inquiry, among other things. The new rules are set to go into effect on October 1, 2025.
Federal Laws & Regulation
White House Releases AI Action Plan: The Trump Administration released America’s AI Action Plan (the “Plan”), a comprehensive strategy to attempt to “win” the race to achieve global dominance in AI. The Plan identifies over 90 federal policy actions that the Trump Administration will take in the coming weeks and months through a three-pillar approach: (i) accelerating AI innovation, (ii) building American AI infrastructure, and (iii) leading in international diplomacy and security. The Plan, developed in coordination with multiple federal agencies and industry stakeholders, sets forth a broad policy agenda aimed at fostering innovation over regulation, revitalizing critical industries, and safeguarding national security interests in the rapidly evolving AI landscape. For more information on the Plan, please see our Client Alert about the Plan here.
White House Releases Multiple AI Executive Orders: President Trump issued three Executive Orders seeking to promote innovation and development in the field of AI. The first order, Accelerating Federal Permitting of Data Center Infrastructure, aims to facilitate the rapid and efficient buildout of AI data centers and supporting infrastructure by providing federal funding, streamlining environmental reviews, fast-tracking permitting, providing tax incentives, and making federal lands available for development. The second order, Preventing Woke AI in the Federal Government, seeks to eliminate ideological biases and social agendas from AI models that could distort their quality and accuracy. This order specifically targets diversity, equity, and inclusion, which it alleges “displaces the commitment to truth in favor of preferred outcomes.” Finally, Promoting the Export of the American AI Technology Stack aims to “ensure that American AI technologies, standards, and governance models are adopted worldwide to strengthen relationships with our allies and secure our continued technological dominance.” The order seeks to achieve these goals by establishing the American AI Exports Program to support the development and deployment of U.S. full-stack AI export packages and by providing new federal funding tools to assist in the development of these packages. Although these orders are likely to support continued AI development in the United States, they may create new content moderation issues for companies seeking to obtain federal funding or offer new AI tools to government agencies.
U.S. Senate Committee Discusses Framework for Federal Privacy Legislation: The U.S. Senate Subcommittee on Privacy, Technology, and the Law held a hearing to discuss the need for a comprehensive federal privacy law. Lawmakers from both parties expressed support for establishing clear, nationwide privacy protections for consumers. The hearing focused on several foundational principles for a federal privacy framework, including data minimization, transparency, consumer rights, accountability, and enforcement, including the role of the Federal Trade Commission (“FTC”) and state attorneys general in enforcing privacy laws. Lawmakers debated whether a federal law should preempt state privacy laws and whether individuals should have the right to sue for privacy violations. There was no consensus, but these issues remain central to ongoing discussions.
Representatives Introduce Don’t Sell My DNA Act: Representatives Zoe Lofgren (D-CA) and Ben Cline (R-VA) introduced the bipartisan “Don’t Sell My DNA Act,” aimed at strengthening consumer privacy protections around genetic data. The legislation seeks to prohibit direct-to-consumer genetic testing companies from selling or sharing individuals’ DNA data without explicit consent. The bill responds to growing concerns about the misuse of sensitive genetic information by third parties, including insurers, employers, and law enforcement, in light of the recent 23andMe bankruptcy. The bill emphasizes transparency and accountability, requiring companies to clearly disclose how genetic data is used and to obtain informed consent before any data transfer.
Telecommunications Companies Urge FCC to Rescind Cybersecurity Requirements: Several telecommunications trade groups have urged the Federal Communications Commission (“FCC”) to reconsider and rescind its Declaratory Ruling, which interprets Section 105 of the Communications Assistance for Law Enforcement Act (“CALEA”) to impose certain cybersecurity requirements on telecommunications carriers. The Declaratory Ruling, issued after the Salt Typhoon cyberattack linked to Chinese state actors, requires carriers to secure their networks and annually certify their cybersecurity plans. The trade groups argue that the FCC overstepped its authority, misinterpreted the law, and failed to follow proper notice-and-comment procedures when issuing the ruling. They claim the new requirements are overly burdensome and vague, potentially undermining existing public-private cybersecurity efforts. The groups highlighted that current FCC Chair Brendan Carr opposed the ruling, suggesting his dissent should guide future policy. They have formally petitioned the FCC to rescind the decision, warning it could have negative public policy consequences. The FCC has not yet responded to these concerns.
U.S. Litigation
AG Coalition Secures Privacy Protections for 23andMe Data in Bankruptcy Sale: A coalition of state attorneys general announced that it had secured a series of privacy-focused conditions for the sale of 23andMe’s customer genetic data. 23andMe filed for bankruptcy in March 2025 after suffering a 2023 cybersecurity breach and subsequent financial decline. The bankruptcy raised significant concerns about the potential sale or misuse of sensitive consumer genetic data, sparking objections from multiple state attorneys general. As a result of these objections, TTAM, the purchaser of 23andMe’s assets in the bankruptcy proceedings, agreed to (1) not transfer consumer genetic data; (2) provide consumers with the right to permanently delete their data at any time; (3) not resell data in the future without full privacy compliance measures being taken; (4) prohibit the sharing of genetic data with foreign adversaries; (5) create a consumer privacy advisory board; and (6) submit to ongoing state oversight upon request.
Texas Federal Court Strikes Down Biden-Era Medical Debt Credit Report Rule: A Texas federal judge has struck down a Biden-era Consumer Financial Protection Bureau (“CFPB”) rule that would have removed approximately $49 billion in medical debt from consumer credit reports. The decision follows a consent agreement between the CFPB and two major trade groups—the Consumer Data Industry Association and the Cornerstone Credit Union League—which sued the agency, arguing the rule exceeded its statutory authority. U.S. District Judge Sean D. Jordan approved the consent judgment, finding the rule violated the Fair Credit Reporting Act, which explicitly allows consumer reporting agencies to include medical debt information, provided it is coded to conceal specific health details. The now-invalidated rule was set to take effect after a 90-day stay.
FCC Petitions 5th Circuit for Rehearing of Decision that FCC Could Not Fine Telecommunications Company: The FCC has petitioned the Fifth Circuit for an en banc rehearing in its case against AT&T, aiming to revive a $57 million fine over alleged unlawful sales of user location data. The move challenges an April panel ruling that deemed the FCC’s in-house adjudication process unconstitutional under the Seventh Amendment, asserting that AT&T was entitled to a jury trial. Central to the FCC’s argument is the Supreme Court’s recent decision in McLaughlin Chiropractic Associates v. McKesson, which the agency claims invalidates the precedent set by United States v. Stevens. The FCC argues that McLaughlin confirms defendants can now challenge both factual and legal aspects of forfeiture orders in district court, resolving the constitutional concerns raised by the panel. The FCC also invokes the “public rights exception,” asserting that enforcement actions against telecom licensees like AT&T fall within the scope of agency adjudication without jury trials. The commission emphasized AT&T’s role as a spectrum licensee, arguing that violations of licensing terms should be subject to administrative enforcement.
U.S. Enforcement
Department of Justice Settles False Claims Act Allegations of Medical Device Cybersecurity Violations: The U.S. Department of Justice (“DOJ”) announced that biotechnology company Illumina Inc. (“Illumina”) had agreed to pay $9.8 million to resolve allegations that it violated the False Claims Act (“FCA”) when it sold certain genomic sequencing systems with cybersecurity vulnerabilities to federal agencies. The DOJ alleged that Illumina sold government agencies genomic sequencing systems with software that had cybersecurity vulnerabilities, without having an adequate security program and sufficient quality systems to identify and address those vulnerabilities. Specifically, the United States contended that Illumina knowingly failed to incorporate product cybersecurity in its software design, development, installation, and on-market monitoring; failed to properly support and resource personnel, systems, and processes tasked with product security; failed to adequately correct design features that introduced cybersecurity vulnerabilities in the genomic sequencing systems; and falsely represented that the software on the genomic sequencing systems adhered to cybersecurity standards, including standards of the International Organization for Standardization and the National Institute of Standards and Technology. The settlement resolves a lawsuit filed under the whistleblower provisions of the FCA, which permit private parties to sue on behalf of the government when a defendant has submitted false claims for government funds and to receive a share of any recovery.
Nebraska Attorney General Files Lawsuit Against Auto Manufacturer for Deceptive Data Collection and Sale: Nebraska Attorney General Mike Hilgers announced that the State of Nebraska filed a lawsuit against General Motors LLC and OnStar LLC (collectively “General Motors”), alleging that General Motors collected, processed, and sold sensitive data from Nebraskans without their knowledge or consent. The lawsuit specifically alleges that General Motors collected a wide range of data from telematics systems in its vehicles and sold that data to third-party data brokers, who used the data to create driving scores for millions of drivers. The scores were later sold to insurance companies and used to raise rates, deny coverage, or cancel policies without Nebraskan drivers knowing the data was being used for such purposes. The lawsuit follows similar enforcement actions by the FTC and the Texas Attorney General.
Kentucky Attorney General Sues Chinese Online Shopping Platform: The Kentucky Attorney General filed a lawsuit against Temu for unlawful data collection, violations of customers’ privacy, and counterfeiting some of Kentucky’s brands. The complaint alleges that Temu collects users’ sensitive personal information without their knowledge or consent, as its app bypasses users’ cell phone security and monitors and records users’ activities across their phone (not just those within the app). The complaint also alleges that Temu allows access to such data to the Chinese government. The complaint notes that Temu is owned by PDD Holdings, a Chinese holding company, whose first retail app, Pinduoduo, was eventually banned from U.S. app stores for being malware. The complaint further claims that Temu steals the intellectual property of U.S.-owned companies, including some of Kentucky’s brands, and uses forced labor from Chinese ethnic minorities in clear violation of U.S. trade policies. The Nebraska Attorney General previously filed a similar lawsuit against Temu.
Utah Attorney General Sues Snapchat: The Utah Attorney General filed a lawsuit against Snap, Inc. (“Snap”) for its platform Snapchat. The complaint alleges that Snap has designed “addictive” and “dangerous” features into its platform (e.g., Snapstreaks to reward continuous use) to “exploit children’s psychological vulnerabilities” for financial gain, which constitutes an unconscionable business practice under Utah’s Consumer Sales Practices Act. The complaint also alleges that Snap violated such law by publicly positioning itself as a safe alternative to traditional social media while deceiving users and their parents about the platform’s safety and the resources Snap committed to protecting them. The complaint further alleges that Snap is violating the Utah Consumer Privacy Act by not informing consumers about its data collection and processing practices and failing to provide users or their parents with an opportunity to opt out of sharing sensitive data, such as biometric and geolocation information.
Connecticut Attorney General Settles with Online Ticket Marketplace: The Connecticut Attorney General has settled with TicketNetwork, Inc. (“TicketNetwork”) as a result of an investigation into violations of the Connecticut Data Privacy Act (“CTDPA”). The Connecticut Attorney General first sent TicketNetwork a notice to cure its violations of the CTDPA, identifying the company’s deficiencies in its privacy notice, such as the notice being unreadable, missing explanations of consumers’ rights under the CTDPA, and containing mechanisms to submit requests to exercise the rights under the CTDPA that were misconfigured or inoperable. Under the CTDPA’s cure period, TicketNetwork had 60 days to resolve the deficiencies. However, TicketNetwork did not resolve these deficiencies within the necessary timeframe. Under the settlement, TicketNetwork has agreed to comply with the requirements of the CTDPA, maintain metrics for consumer rights requests received under the CTDPA, provide a report of these metrics to the Connecticut Attorney General, and pay $85,000.
CPPA Fines Data Broker for Failing to Register under Delete Act: The CPPA ordered Washington data broker Accurate Append, Inc., to pay a $55,400 fine for failing to register and pay annual fees as required under the Delete Act. The data broker failed to register by January 31, 2024, for its 2023 activities, registering only after the CPPA had contacted the company during its investigation. In addition to the fine, the company agreed to injunctive terms, including an agreement to pay the Enforcement Division’s attorneys’ fees and costs resulting from any non-compliance. The fine is part of a continued investigative sweep of data broker registration compliance announced by the CPPA on October 30, 2024.
Montana Attorney General Opens Investigation into Media Company for Data Breach: The Montana Attorney General has launched an investigation into Lee Enterprises following a data breach the company experienced in February 2025. According to reports, the data breach affected nearly 40,000 employees and subscribers, and the data involved included first and last names, as well as Social Security numbers. The Montana Attorney General has issued a Civil Investigative Demand (“CID”) against Lee Enterprises, requiring the company to provide details about the February 2025 data breach and any other breaches the company has experienced since January 1, 2024. Lee Enterprises has a month to respond to the CID.
HHS Settles with Surgery Center over Ransomware Incident: The U.S. Department of Health and Human Services Office for Civil Rights (“OCR”) announced it had settled with Syracuse ASC, LLC, doing business as Specialty Surgery Center of Central New York (“Syracuse ASC”), for potential violations of the Health Insurance Portability and Accountability Act of 1996 (“HIPAA”) Security and Breach Notification Rules. The settlement stems from a ransomware incident that led to the breach of the electronic protected health information (“ePHI”) of 24,891 individuals. OCR’s investigation found that Syracuse ASC never conducted an accurate and thorough risk analysis to determine the risks and vulnerabilities to the ePHI it held and failed to timely notify individuals of the breach. Under the terms of the settlement, Syracuse ASC agreed to implement a corrective action plan that OCR will monitor for two years and paid $250,000 to OCR. Under the corrective action plan, Syracuse ASC committed to take steps to ensure compliance with the HIPAA Rules and protect the security of ePHI, including conducting an accurate and thorough assessment of the potential security risks and vulnerabilities to the confidentiality, integrity, and availability of its ePHI; developing and implementing a risk management plan to address and mitigate security risks and vulnerabilities identified in its risk analysis; reviewing, and to the extent necessary, revising, certain written policies and procedures to comply with the HIPAA Rules; and providing annual training for workforce members on its written HIPAA policies and procedures.
HHS Settles with Behavioral Health Provider: The U.S. Department of Health and Human Services (“HHS”), Office for Civil Rights (“OCR”) reached a settlement with Deer Oaks – The Behavioral Health Solution (“Deer Oaks”), a behavioral health provider, over potential Health Insurance Portability and Accountability Act of 1996 (“HIPAA”) violations related to two security incidents. The first incident involved a coding error in an online patient portal, which exposed the electronic protected health information (“ePHI”) of 35 individuals to search engine providers. The second incident involved a ransomware attack that compromised 171,871 individuals’ ePHI. HHS OCR found that Deer Oaks failed to comply with HIPAA by not conducting risk analyses. As part of the settlement, Deer Oaks must pay $225,000 and provide annual training to employees, annually review and update its risk analysis, and implement and maintain written policies and procedures to comply with HIPAA, including a risk management plan.
International Laws & Regulations
China Proposes New Global AI Cooperation Organization: China has proposed the creation of a new global organization dedicated to AI cooperation, aiming to provide an alternative to U.S.-led approaches to AI governance. The proposal calls for international collaboration on AI standards, ethics, and risk management, emphasizing inclusivity and the participation of developing countries. Chinese officials positioned the initiative as a counterbalance to what they describe as the United States’ “low regulation” and “exclusive” strategy set forth in the Trump Administration’s AI Action Plan. The move is seen as part of a broader effort by China to assert leadership in the global AI landscape and influence the rules governing emerging technologies.
European Commission Affirms No Delay in AI Act Implementation: The European Commission (the “Commission”) has confirmed that there will be no delay in the implementation of the EU Artificial Intelligence Act (“AI Act”), maintaining its original timeline despite calls from some industry groups and member states for postponement. The Commission emphasized the importance of the AI Act in establishing clear rules for the development and use of AI across the EU, aiming to ensure safety, transparency, and fundamental rights protections. The law takes effect in a phased manner, with certain provisions applying as early as 2025 and most remaining obligations applying by August 2026. Overall, the message is that the EU remains committed to its AI regulatory agenda and will not alter the implementation schedule.
Final Version of EU General Purpose AI Code of Practice Published: The European Commission has received the final version of the General Purpose AI Code of Practice (the “GPAI Code”), a voluntary framework designed to guide the responsible development and deployment of general-purpose AI systems in the EU. The GPAI Code was developed collaboratively by industry stakeholders, civil society, and regulators, and aims to provide practical guidance on transparency, risk management, and accountability for general-purpose AI providers and deployers. Member states and the Commission must still officially endorse the code. Once that endorsement is complete, providers of general-purpose AI can voluntarily sign on to the code and adhere to its requirements to help demonstrate compliance with the EU AI Act.
NOYB Files Data Access Complaints Against Chinese Apps: Privacy rights NGO NOYB filed formal complaints against three major Chinese apps—TikTok, AliExpress, and WeChat—alleging violations of the EU’s General Data Protection Regulation (“GDPR”). The complaints, lodged in Greece, Belgium, and the Netherlands, accuse the companies of failing to adequately respond to user data access requests, as required under the GDPR. NOYB claims TikTok provided incomplete data, AliExpress sent a broken file, and WeChat ignored the request entirely. The organization criticized the lack of automated tools for European users to easily download their personal data, stating that Chinese apps are “even worse than U.S. providers” in terms of compliance. This latest action by NOYB continues its aggressive push for stronger enforcement of data rights across global tech platforms, spotlighting ongoing concerns about transparency and accountability in cross-border data handling.
United States and Indonesia Reach Trade Deal Including Cross-Border Data Transfer Rules: The U.S. and Indonesia announced a trade deal, including an agreement to finalize commitments on digital trade, services, and investment. Specifically, “Indonesia will provide certainty regarding the ability to move personal data out of its territory to the United States through recognition of the United States as a country or jurisdiction that provides adequate data protection under Indonesia’s law.” This will facilitate the transfer of data between the United States and Indonesia without the implementation of Indonesia-specific data protection agreements or terms.