AI Regulation: Are There Regulations on AI?

Orrick, Herrington & Sutcliffe LLP

This update is part of our AI FAQ Series. Learn more at our AI Law Center.

1. What is AI regulation?

AI regulation refers to the laws, guidelines and policies established by governments and regulatory bodies to oversee the development, deployment and use of AI technologies. Key aspects of AI regulation include safety and reliability, ethical use, privacy and data protection, transparency, accountability, security, human oversight, and innovation and competition.

AI regulation seeks to ensure that AI technologies are developed and used in ways that are safe, ethical and beneficial to society, while protecting public interests and individual rights.

2. Are there laws and regulations that apply to AI?

Yes. Pre-existing laws and regulations will often apply to the ways in which AI is developed and used. For example, pre-existing consumer protection laws are being interpreted to impose fairness and non-deception obligations on deployers of AI tools. In addition, many jurisdictions have enacted AI-specific laws and regulations designed to address the unique risks presented by AI.

Please visit our AI Law Center for more information on U.S. AI laws and the EU AI Act.

3. What do I need to know about the EU AI Act?

The EU AI Act has been formally adopted, and implementation of its obligations is staggered over a few years. For developers of certain AI systems and General-Purpose AI Models on the market before relevant obligations take effect, the EU AI Act also grants additional time to comply.

Please visit our AI Law Center for a summary of the EU AI Act and read our EU AI Act Series, which includes key takeaways for businesses using and developing AI.

4. How will a federal privacy law impact AI? 

The potential enactment of a federal privacy law in the United States could have various impacts on AI. Legislation is likely to introduce enhanced data minimization requirements and establish greater accountability for algorithms. Such legislation may necessitate transparent data practices and compel organizations to conduct impact assessments, ensuring responsible AI deployment. Entities using AI might also have to implement mechanisms for individuals to opt-out of automated decisions. Additionally, the law would likely emphasize the protection of civil rights, aiming to prevent bias in AI systems.

5. Which U.S. states have AI laws?

As of 2025, most states in the United States have introduced or passed laws aimed at regulating the development, use and impact of AI in certain circumstances. While there is no federal AI law in the United States yet, states are taking individual action to address various aspects of AI, such as ownership of outputs, liability for resulting harm, generation of deepfakes and child sexual abuse material (CSAM), and the use of AI in higher-risk domains (such as critical infrastructure, education, employment, healthcare, insurance, real estate, government, and automated decision-making). Please visit our AI Law Center for a complete list of enacted state AI laws in our AI Law Tracker.

6. What are our ethical responsibilities for using AI?

Your ethical responsibilities for using AI will vary depending on your industry and the specific context in which AI is applied. Generally, you should consider adopting a principles-based approach to the use of AI technologies as part of an overall AI governance strategy and strive to use AI transparently and fairly, with a commitment to accountability. It’s important to prioritize privacy and data security, and to actively work to mitigate bias and discrimination in AI systems. Aligning AI use with the ethical standards and values of your organization and profession is also crucial. For tailored advice, consider consulting with a professional body or ethics committee relevant to your field.

7. How do we address transparency requirements?

Addressing transparency obligations involves clearly communicating the use of AI to stakeholders and users, including the purpose, scope and nature of the data being processed. Depending on the context, it may also involve disclosing the logic involved in decision-making processes, especially in sectors where explanations are legally required, such as credit decisions. Moreover, best practices may include disclosing AI use, whether required by law or contract, and, in some cases, obtaining express consent from users to process their data through automated technologies. We recommend reviewing the applicable transparency requirements in each jurisdiction where AI systems will be used. Some purchasers are also imposing contractual requirements on their service providers to disclose the use of generative AI.

8. What does my privacy notice need to say about AI use?

As companies increasingly leverage AI in their operations, the obligations and expectations for AI-related consumer disclosures continue to evolve. As a result, companies seeking to use consumer-oriented AI face uncertainty about how to integrate AI disclosures into privacy notices.

Our article provides guidance on how to incorporate disclosures about AI into your privacy notice—along with an overview of the key legal obligations and regulatory expectations for AI-related privacy notice disclosures in the United States.

9. Do I have to be able to explain how my AI works?

The ability to explain how your AI works, often referred to as “explainability,” is increasingly important for compliance with regulations, building user trust, and potentially defending against legal claims. The level of explanation required may vary based on the application of the AI and the potential impact on individuals’ rights.

Moreover, explainability is required by certain laws, including the Colorado Privacy Act Rules (requiring notice when a consumer’s data is used for profiling in furtherance of decisions that produce legal or other similarly significant effects), 4 CCR 904-3-9.03, and the Illinois Artificial Intelligence Video Interview Act (requiring notice before using AI in employment video interviews), 820 ILCS 42/5.

The provisions of European laws, such as the General Data Protection Regulation, the Digital Services Act and the EU AI Act should also be taken into consideration when assessing the degree of explainability required in relation to automated decision making. Your use or distribution of AI systems and general-purpose AI models in the EU may also require the maintenance and sharing of certain information about such AI systems and general-purpose AI models with regulators and the public.

10. Do I need a human in the loop?

“Human-in-the-loop” (HITL) refers to a process in which human judgment and intervention are integrated into the workflow of an AI system. This approach combines the strengths of human expertise and AI capabilities to enhance the quality, accuracy and reliability of the outcomes.

Having a human-in-the-loop is becoming a mainstay within emerging AI legal and policy proposals. In the United States, federal legislators have encouraged further work into establishing best practices for the level of automation appropriate for a given task, including whether to have a human-in-the-loop at certain stages for some high-impact tasks. This is also a feature of the new EU AI Act for high-risk AI systems.

11. How do I deal with personal data deletion requests?

When addressing personal data deletion requests, it is important to establish a robust process for removing individuals’ data from your systems, including AI datasets, to comply with privacy laws like the CCPA and GDPR. The process should account for the challenges of extracting personal data from AI models, where deletion can be complex. In Europe and other jurisdictions, regulators are publishing guidance on how to respect the rights of individuals to access and control their personal data, which may be relevant depending on where a request has originated.

12. How will the outcome of current lawsuits impact my use of AI?

Current lawsuits involving AI, particularly those centered on copyright and ownership, are poised to influence the development and use of AI in the legal field. Outcomes could set critical precedents for liability, intellectual property rights and compliance with emerging regulations. Monitoring these cases and the evolving regulatory landscape is crucial, as they may require changes in how AI tools are deployed and managed within your organization. In-house counsel should work closely with their legal teams to ensure that the use of AI aligns with these legal developments.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Orrick, Herrington & Sutcliffe LLP
