No AI Law? No Problem. How Massachusetts Attacked AI Underwriting Under Existing State Statutes

Hudson Cook, LLP

On July 10, 2025, the Massachusetts Attorney General (AGO) entered into an Assurance of Discontinuance (AOD) with a private student loan lender (the Company), resolving allegations that the Company's underwriting practices violated the state's unfair and deceptive act or practice (UDAP) law and federal fair lending laws. The AOD imposes sweeping reforms to the Company's use of artificial intelligence (AI) and algorithmic models in credit underwriting. The settlement underscores growing regulatory scrutiny of AI-based decisionmaking in consumer finance at the state level and provides a roadmap for governance expectations around automated lending systems.

At the center of the AGO's investigation were the Company's algorithmic and judgmental underwriting practices. The AGO alleged that the Company failed to prevent disparate outcomes in both types of underwriting, relied on variables in its AI models that allegedly produced discriminatory effects, and issued inaccurate adverse action notices, and that, in doing so, the Company engaged in an unfair or deceptive act or practice.

Breaking Down the AI Allegations: From Inputs to Outcomes

The AGO's investigation revealed that the Company used artificial intelligence models—defined in the AOD as "machine-based systems that...make predictions, recommendations, or decisions influencing lending outcomes"—to automate loan approval and pricing decisions. The models operated in three stages: "prescreen decline," "quick decline," and "risk score," with each stage applying algorithmic assessments and "Knockout Rules" to screen applicants. In scrutinizing all three stages of the underwriting model's operation, the AGO underscored that compliance obligations apply not only to final underwriting decisions but to every automated stage that influences who advances in the application pipeline. Companies can reasonably take from this action that the failure to have compliant models (including those using AI) at all automated stages could result in a UDAP action.

Key AI-related allegations included:

  1. Use of Cohort Default Rate (CDR) Variable: The Company incorporated the U.S. Department of Education's CDR data into its Student Loan Refinance (SLR) model as a weighted input. The AGO alleged that this practice disproportionately penalized Black and Hispanic applicants, resulting in disparate impacts on approval rates and loan terms, in violation of the Equal Credit Opportunity Act (ECOA). According to the AGO, the use of this publicly available data was not the issue, but rather the alleged failure to test for disparate impact stemming from its use.
  2. Knockout Rules Based on Immigration Status: The Company automatically denied applicants lacking a green card during the "prescreen decline" stage, allegedly creating a disparate impact on national origin grounds, in violation of ECOA.
  3. Lack of Fair Lending Testing: The Company allegedly deployed AI models without implementing adequate safeguards to detect and mitigate discriminatory effects, including a failure to perform disparate impact testing on weighted inputs or to conduct transactional testing of judgmental underwriting—a process in which, according to the AGO, human evaluators exercise discretion in assessing an applicant's creditworthiness—in violation of ECOA.
  4. Opaque Adverse Action Notices: The AGO alleged that the Company's adverse action notices often failed to provide specific reasons for credit denials, partly due to algorithmic models that could not explain their decisionmaking logic, in violation of ECOA and its implementing Regulation B.
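The fourth allegation turns on whether a model can surface specific denial reasons at all. One common approach with interpretable (e.g., linear) scoring models is to rank features by how far each one pulls an applicant's score below a best-case reference applicant, and report the largest shortfalls as the principal reasons. The sketch below illustrates that idea only; the feature names, weights, and reference values are hypothetical and are not drawn from the AOD.

```python
# Hypothetical sketch: deriving adverse action reasons from an
# interpretable (linear) scoring model. All feature names, weights,
# and reference values below are illustrative assumptions.

def adverse_action_reasons(weights, applicant, reference, top_n=2):
    """Rank features by how far each one pulls the applicant's score
    below the score of a hypothetical best-case applicant.

    weights:   {feature: model coefficient}
    applicant: {feature: applicant's value}
    reference: {feature: best-case value used as the comparison point}
    """
    shortfalls = {
        f: weights[f] * (reference[f] - applicant[f])
        for f in weights
    }
    # The largest score shortfalls are the most specific,
    # defensible candidate denial reasons.
    ranked = sorted(shortfalls, key=shortfalls.get, reverse=True)
    return ranked[:top_n]

weights = {"credit_score": 0.02, "income": 0.00005, "dti": -1.5}
applicant = {"credit_score": 580, "income": 40_000, "dti": 0.45}
reference = {"credit_score": 800, "income": 120_000, "dti": 0.10}

print(adverse_action_reasons(weights, applicant, reference))
```

A method along these lines works only when each feature's contribution to the score is separable, which is one reason the AOD's "interpretable models" mandate is significant: complex ensemble or deep models often cannot support this kind of reason attribution without additional explainability tooling.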

The matter was resolved without litigation. The Company neither admitted nor denied the allegations set forth in the AOD, including those detailed above. The Company agreed to pay $2.5 million, adopt extensive compliance measures, and submit periodic compliance reports to the AGO over a multi-year period, with the AGO retaining the right to request raw data and documentation.

Algorithmic Governance Mandates: The Compliance Blueprint

The AOD imposes an expansive governance framework for the Company's AI underwriting practices. It reflects a blueprint that is increasingly emerging from both federal and state regulators. Companies that engage in automated decision-making or incorporate AI into their underwriting processes should consider adopting similar strategies to mitigate compliance risk. A variation of this framework may also be valuable for companies that license models developed by third parties. Key elements of the framework include:

  • Written AI Policies and Procedures: Develop written policies to ensure AI models comply with anti-discrimination and fair lending laws, covering model design, development, deployment, monitoring, and updates.
  • Algorithmic Oversight Team: Establish an internal oversight team, with a designated chairperson, to manage fair lending testing, model inventories, and responses to bias concerns.
  • Annual Fair Lending Testing: Conduct annual disparate impact testing of algorithmic underwriting models and knockout rules used to make decisions about loan applications. Trigger events, such as model updates or credible internal complaints, require additional testing.
  • Model Inventories and Documentation: Maintain detailed records of algorithms, training data, parameters, active use dates, and fair lending testing results.
  • Interpretable Models for Adverse Action Notices: Use interpretable models or systems that enable accurate identification of reasons for credit denials.
  • Discontinuation of Problematic Variables: Understand how different data sets, including those gathered from publicly available information, are weighted and used within the model to make it easier to weed out problematic data sets.
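The annual disparate impact testing called for above can take several statistical forms; the AOD does not prescribe a particular metric. One widely used first-pass screen is the adverse impact ratio (the "four-fifths rule"), which compares approval rates between a protected-class group and a control group. The group labels, data, and threshold below are illustrative assumptions, not terms of the settlement.

```python
# Minimal sketch of one common disparate impact screen, the adverse
# impact ratio ("four-fifths rule"). The data and 0.8 threshold are
# illustrative; a real fair lending program would pair this screen
# with statistical significance testing and regression analysis.

def approval_rate(decisions):
    """Share of approvals in a list of 1 (approved) / 0 (denied)."""
    return sum(decisions) / len(decisions)

def adverse_impact_ratio(protected, control):
    """Ratio of the protected group's approval rate to the control
    group's. Values below ~0.8 are a conventional flag for further
    fair lending analysis, not proof of discrimination."""
    return approval_rate(protected) / approval_rate(control)

# 1 = approved, 0 = denied
control   = [1, 1, 1, 0, 1, 1, 1, 0, 1, 1]   # 80% approval
protected = [1, 0, 1, 0, 1, 0, 0, 1, 0, 1]   # 50% approval

ratio = adverse_impact_ratio(protected, control)
print(f"AIR = {ratio:.2f}")
if ratio < 0.8:
    print("Flag for fair lending review")
```

Running this screen at each automated stage ("prescreen decline," "quick decline," and "risk score"), not just on final decisions, would align with the AGO's position that every stage influencing who advances in the application pipeline carries compliance obligations.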

Implications for Fintech and AI in Lending

The AOD highlights a growing trend among regulators, particularly at the state level, to hold lenders accountable for the outputs of AI systems, regardless of intent. It is a reminder that reliance on opaque or "black box" models may expose institutions to risk if those models cannot be audited or explained. For fintechs and traditional lenders alike, the settlement underscores the importance of several fundamental compliance controls:

  • Conducting rigorous fair lending testing at every stage of model development and deployment.
  • Maintaining comprehensive documentation to support explainability and defendability of AI decisions.
  • Establishing robust governance frameworks to oversee AI systems, including clear roles for compliance, legal, and data science teams.
  • Anticipating that regulators may require detailed review of data sets to ensure compliance with law.

Looking Ahead: AI Risk Management as a Regulatory Imperative

This settlement adds to a growing body of regulatory actions addressing AI in consumer finance. It also aligns with broader initiatives to ensure AI systems are fair and transparent and that companies are held accountable for their use of these systems. With more and more states imposing steep monetary penalties and long-term regulatory oversight, companies deploying AI in credit underwriting should closely monitor evolving expectations and consider proactive enhancements to their AI governance programs to mitigate compliance risk.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Hudson Cook, LLP
