The recent American Association of Residential Mortgage Regulators Annual Conference included a presentation highlighting the rising use of Artificial Intelligence ("AI") in the financial services industry. As this will clearly be an ongoing focus of regulators generally, not just residential mortgage regulators, we thought a short summary of the presentation, along with some of our reactions to it, might be of interest.
The term AI has been used to encompass a wide array of technologies aimed at approximating aspects of human cognition. The presentation focused on Generative AI ("GenAI"), a subset of AI techniques that generates new data or content. The financial industry's adoption of GenAI has evolved as firms use more advanced technology and automation to deliver services. The U.S. Government Accountability Office published a report in May 2025 highlighting current use cases for AI in finance, including executing automatic trades, evaluating creditworthiness, and identifying potential customer risks. In a survey of 420 global financial services businesses, Temenos, a leading banking technology provider, found that 75% of banks are exploring generative AI deployment, and that approximately half of those have already deployed it or are in the process of doing so.
Incorporating AI presents several opportunities. For companies, leveraging these technologies can raise profitability by lowering the cost of delivering products and services. On the consumer side, AI adoption can lead to greater convenience and financial inclusion by making platforms more accessible.
The presenters highlighted how AI could be used to improve efficiencies in originating a mortgage loan. For example, at the origination stage, chatbots can answer customer questions and draft personalized loan offers. During underwriting, AI can extract relevant data to assess default risk. Lastly, AI technology can expedite closing by summarizing documents. Because the presenters called attention to these scenarios, we would note that residential mortgage regulators can be expected to examine these uses of generative AI carefully.
While the opportunities are plentiful, regulators and industry are currently working together to identify, address, and mitigate risks. The presenters organized the risks associated with incorporating generative AI into five broad categories: Data-Related Risks, Testing and Trust, Compliance, User Error, and AI/ML Attacks. Within each category, they summarized the primary concerns as follows:
- Data-Related Risks: Confidentiality, Data Quality, and Intellectual Property Violations
- Testing and Trust: Accuracy, Bias, and Lack of Transparency
- Compliance: Privacy, Regulatory, and Ethics
- User Error: Lack of Expertise, Lack of Supervision, and Failure to Understand Capability
- AI/Machine Learning (ML) Attacks: Data Privacy Breach, Training Data Poisoning, and Adversarial Inputs
Financial institutions are urging regulators to establish data privacy standards for internal AI models and to provide guidance on how to avoid privacy violations and data bias. The Congressional Research Service has previously described the legal and regulatory frameworks applicable to financial institutions and activities as "'technology neutral,' meaning they do not take into consideration the specific tools or methods used by institutions." For example, in its article on Artificial Intelligence and Machine Learning in Financial Services, it asserts that "lending laws apply to lending whether the lender uses a pencil and paper or a cutting-edge AI-enabled model." However, concerns remain regarding how specific laws, such as the Equal Credit Opportunity Act (ECOA) and the Fair Credit Reporting Act (FCRA), apply when mitigating discrimination in AI.
In 2023, the Biden administration issued an Executive Order on the "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence." The Order directed the Consumer Financial Protection Bureau ("CFPB") to issue guidance on how ECOA, the Fair Housing Act, and the Consumer Financial Protection Act ("CFPA") apply to credit transactions conducted through digital platforms. In September 2023, the CFPB published Circular 2023-03, addressing whether creditors may rely on the checklist of reasons provided in CFPB sample forms for adverse action notices when using artificial intelligence or complex credit models. Although this guidance has since been withdrawn, the Bureau emphasized that providing specific reasons for adverse actions is "particularly important when creditors utilize complex algorithms."
The Bureau went on to warn that consumers "may not anticipate that certain data gathered outside of their application or credit file and fed into an algorithmic decision-making model may be a principal reason in a credit decision, particularly if the data are not intuitively related to their finances or financial capacity." For example, if a creditor decides to lower the limit on, or close, a consumer's credit line based on behavioral data, such as the type of establishment at which the consumer shops or the type of goods purchased, it would likely be insufficient for the creditor to simply state "purchasing history" as the principal reason for the adverse action. Instead, the Bureau advised that the creditor would likely need to disclose more specific details about the consumer's purchasing history or patronage that led to the reduction or closure, such as the type of establishment, the type of goods purchased, or other relevant considerations.
In July of this year, the Trump Administration published "America's AI Action Plan." While the previous administration appeared to take a more cautionary approach, this Action Plan seeks to "cement U.S. dominance in artificial intelligence." The plan does not mention consumer finance, but its push to enable innovation and adoption signals deeper integration of AI into all industries moving forward. Our summary of the plan can be found here.
At least one state regulator has decided to stand by the CFPB's previous guidance, as we noted in our blog here. In relevant part, the Massachusetts Attorney General recently reached a $2.5 million settlement with Earnest Operations LLC ("Earnest"), a Delaware-based student loan company. The AG alleged that Earnest's use of AI models to make lending decisions violated consumer protection and fair lending laws. She argued that training its algorithmic models on arbitrary, discretionary human decisions and including the federal student loan Cohort Default Rate in its data set resulted in a disparate impact on approval rates and loan terms, specifically disadvantaging Black and Hispanic applicants. Under the terms of the settlement, among other things, Earnest will implement a detailed corporate governance structure and develop written policies to ensure responsible and legally compliant use of AI.
This settlement highlights the importance of evaluating the governance approaches companies use to achieve effective and ethical AI deployment. In that regard, a 2025 KPMG report surveyed generative AI use among more than 90 U.S. board members. Seventy percent of board members reported developing responsible use policies for employees. Other popular initiatives included implementing a recognized AI risk and governance framework, developing ethical guidelines and training programs for AI developers, and conducting regular AI use audits.
The presenters provided the following list of best practices and considerations for developing AI governance tools:
- Defining what exactly constitutes AI in your organization
- Developing a comprehensive Risk Management Framework
- Requiring disclosures of when/where GenAI is being used
- Reviewing AI models for explainability
- Implementing a tiered Authorized Use policy
- Providing AI use training to employees
- Establishing vetting standards to improve vendor management