Use of AI in Marketing and Digital Media - 2025 Playbook

BakerHostetler

Right now, we are all taking stock of the many important issues and challenges we saw crop up for clients last year, trying to predict what they will face in the coming year, and strategizing about how we can help. Without a doubt, one of the biggest issues clients struggled with last year was how to address the legal risks and requirements related to the use of AI in their advertising, marketing and digital media activities, and we expect this challenge to continue in the coming year.

Whether it’s the use of generative AI to develop ad creative content more quickly and efficiently, the use of predictive AI to better target, measure and optimize ad campaigns, the use of AI to dynamically generate and modify ad content on the fly, or one of the many other use cases we saw last year (and the many new use cases we have yet to see), we believe adoption and implementation of AI in the advertising and digital media space will continue to increase. That means we need to be ready to support clients in these efforts, well positioned to help them understand the related legal requirements and compliance obligations, and able to help them identify and mitigate the risks. Those risks may include, for example, intellectual property infringement, privacy compliance obligations related to the use of personal data, and compliance requirements set forth in new and pending legislation, rules and regulatory guidance directed specifically at the use of AI.

This is a complicated space that continues to evolve, and we have found it helpful for clients to develop legal playbooks they can reference as a starting point to address these issues. The following outlines some of the issues we recommend covering in your playbook.

Privacy Compliance Obligations

Any AI use case involving the collection and processing of personal data may trigger privacy compliance obligations. When considering AI use cases involving personal data, you must determine how you will comply with obligations related to honoring data subject rights under existing privacy laws. For example, most privacy laws require that a data subject be provided notice and information about the type of data collected and how it is used. Consequently, any use of AI that leverages personal data may require updates to consumer privacy notices to account for collection of new types of data or new uses or processing of the data. Similarly, laws may require providing opt-outs or obtaining consent from consumers in the event certain sensitive data is processed, or where data is used for things like targeted advertising. Therefore, any new AI implementation involving personal data should include consideration of whether those requirements are triggered.

In addition, many privacy laws require organizations to provide certain notices and opt-outs (or to obtain consent), and to conduct a data protection assessment, when processing personal data for the purposes of profiling in furtherance of automated decisions having legal or similarly significant effects (ADM Requirements). Decisions that produce legal or significant effects generally include those regarding the provision or denial of financial or lending services, housing, and insurance coverage; as well as decisions related to education enrollment, criminal justice issues, employment opportunities, and healthcare services; or decisions regarding access to other essential goods or services. This may require organizations to conduct a contextual analysis to determine if the specific AI processing activity they are considering meets the “legal or similarly significant effect” threshold. For example, the use of AI to target advertising related to products and services in certain categories could potentially trigger these requirements. But it’s important to note the laws vary in terms of what types of activities could trigger these ADM Requirements. For instance, the requirements under the CCPA and Proposed Regulations on Automated Decision Making Technology are quite broad and may extend beyond decisions producing legal or significant effects. So careful consideration of the different ADM Requirements is recommended.

Requirements To Conduct Assessments

Beyond the data protection assessments required under privacy laws, organizations should be aware of other requirements to conduct impact assessments under new and pending AI-specific legislation, which may apply even where the use case does not involve the collection or processing of personal data.

Claims About Products and Services

Where AI is used to generate ad content, you should ensure that any claims made as part of that AI-generated content are truthful, accurate and properly substantiated. And, although this article focuses primarily on use of AI as a tool to assist in marketing your products and services, you should also be aware of issues that may arise where the products and services themselves leverage AI. For example, in the event you are marketing products or services that utilize AI, then, as with any other claims, you will need to ensure those claims are truthful, accurate and properly substantiated. As part of the FTC’s recent Operation AI Comply, the agency brought cases against companies making allegedly deceptive claims about their AI-driven services, including one company offering AI-powered legal services.

Bias Risk

One of the biggest concerns repeatedly raised in connection with use of AI is the risk that such systems may produce biased results if, for example, they are trained on biased data. That is a concern we see woven into many of the laws, rules and regulations addressing AI risks. Therefore, when considering any use of AI in connection with marketing efforts, organizations should consider whether there is a risk of biased or discriminatory outcomes, determinations or predictions, and how those would potentially impact consumers. Considerations include the diversity of the training data, the negative impacts or consequences consumers might face given the use case, whether there are ways to audit or test for bias in the training data or results, and what human oversight can be introduced to guard against such bias.

Intellectual Property Risks

Generative AI models are trained on data and use what they have learned to generate new content. But there may be questions about whether the owner/operator of the AI technology has obtained all necessary rights to use the training data, whether that use is protected by the doctrine of fair use, and ultimately whether the generated content infringes the original works. On the flip side, there are currently questions about whether and to what extent images and other content generated by or with artificial intelligence can benefit from copyright protection. Brands generally expect that their creative ad content will belong to them and be protected from infringement, and that may not be the case where AI is leveraged to generate that content. Therefore, prior to using generative AI in connection with creation of ad content, organizations must consider the effect such use may have on both their own infringement exposure and the protectability of their works.

Issues Related to Data Ownership/Use and Confidentiality

In some cases, information that is fed into an AI platform may become part of the database used for training its models. And that may trigger confidentiality concerns and, to the extent personal data is involved, additional privacy compliance obligations and privacy risks. Prior to using AI tools or vendors as part of your advertising, marketing and media buying strategy, you should fully understand how your data will be stored and used. You may want to either limit the data you are inputting into these systems or otherwise restrict how the data can be used (e.g., by including strong data usage restrictions and confidentiality/security obligations in contracts with providers).

Obligations To Disclose Use of AI

Another thing to consider is whether and to what extent you may need to disclose your use of AI-generated content to consumers. For example, where AI is used to generate ad content, you should consider whether use of that content without such a disclosure may be deemed deceptive under state and federal UDAP laws. That will require a fact-based analysis of the use case and its impact on consumers. There are also various laws that specifically require disclosures in certain contexts: where consumers are “interacting” with AI systems, where AI-generated content was used in certain advertising materials (e.g., political advertising), or where a consumer specifically requests information regarding the use of AI in generating content. Those laws should be considered as well.

Changing Laws, Regulations and Regulatory Guidance

This is a quickly developing area. Therefore, it’s important to review recently enacted laws, keep an eye on pending legislation, and watch out for other developments in the legal landscape around AI that could impact your advertising and digital media activities. Some developments we saw in 2024 include the EU AI Act entering into force (with most of its provisions taking effect gradually over the next few years), the enactment of the Colorado Artificial Intelligence Act, the enactment of the Utah Artificial Intelligence Policy Act, and the introduction of dozens of other proposed bills and laws related to regulating use of AI. Although many of these laws and proposed bills address issues unrelated to advertising and marketing, many of them contain provisions that could impact marketing activities. Therefore, it is important to keep a watchful eye on developments in this area.

This is especially true in the U.S., where, in the absence of comprehensive federal legislation on AI, there is a growing patchwork of current and proposed AI regulatory frameworks at the state and local levels. We have seen this movie before in connection with the development of privacy regulatory frameworks in the U.S. As was the case with privacy laws and regulations, certain common themes will emerge and develop across the different statutes and regulations. And as they have done with respect to privacy regulation in the U.S., organizations are going to have to keep abreast of the different state and local laws and decide how they want to approach compliance in an area where requirements and obligations may vary in certain significant ways across states.

But don’t forget about existing laws. Several states, including Oregon and Texas, have made statements or issued guidance reminding organizations that existing laws may apply to the use of AI technologies. For example, in guidance titled “What you should know about how Oregon’s laws may affect your company’s use of Artificial Intelligence,” Oregon Attorney General Ellen Rosenblum notes that although the state may not have any new laws with “AI” in the heading, the state’s existing Unlawful Trade Practices Act, Consumer Privacy Act and Equality Act, among others, all have roles to play. Finally, keep an eye on the standards-setting organizations, which have released numerous third-party standards in the past several years, including the NIST AI Risk Management Framework.

Agreement Terms (Vendors, Partners)

Provisions you should be thinking about addressing in your contracts related to leveraging AI in your marketing and digital media efforts include the following:

  • Provisions related to potential IP infringement issues, particularly in connection with any generative AI use cases. This includes addressing whether the vendor has the necessary rights and licenses to use any training data as well as any third-party and open source technology used to provide the AI solution.
  • Provisions addressing any disclosure requirements related to the use of AI, especially in connection with generative AI that is used to generate ad creative content, where you are marketing services that leverage AI, or implementations where consumers are interacting with an AI system (e.g., chatbots).
  • Provisions addressing ownership and usage rights for any generated output (including any deliverable created from that output), as well as obligations, rights and protections regarding any inputs or prompts – including provisions clarifying whether any secondary use of data is permitted.
  • In the event the use case may trigger a requirement to conduct a data protection assessment or other risk assessment, provisions dictating how that will be handled (particular attention should be paid to use cases that may result in harmful consequences to consumers because of bias or discrimination in the training data or inherent in the model).
  • Provisions to mitigate the risks associated with bias (if applicable) and data quality. This may include requesting a representation and warranty from a vendor addressing how the AI was trained and the sources of data used in that training.
  • Requirements to comply with laws. Where possible, rather than a general compliance with laws requirement, the contract should be more detailed about what the specific legal requirements are and who is responsible for ensuring compliance. For example, where personal data is involved, responsibility for providing required notices and opt-out options, obtaining any required consents, and handling data subject requests should be outlined.
  • Provisions specifying the allocation of liability, including for harm to the parties and to third parties, where an AI solution produces errors or otherwise gives rise to liability. This allocation should vary depending on the source of the issue.
  • Provisions addressing the cutoff date for training data and whether there will be updates to the AI solution and/or the training data, along with the timing and delivery of such updates.
  • Provisions addressing restrictions applicable to use of the AI solution or its outputs – for example, identifying any prohibited use cases or restrictions regarding the kinds of content that may be generated using the solution.

Hopefully, this provides some useful suggestions and thoughts you can use in approaching your digital marketing and media executions in 2025 and potentially putting together a playbook you can use to guide the analysis. This is an area that is rapidly changing, and legal teams are going to have to stay abreast of changes and stay flexible and nimble. It’s an area we plan to keep an eye on and update you on as well.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© BakerHostetler
