AI Use by Financial Institutions: Québec’s AMF Publishes Draft Guidelines

Stikeman Elliott LLP

On July 3, 2025, Québec’s financial institutions regulator – the Autorité des marchés financiers (“AMF”) – published draft guidelines (“Draft Guidelines”) on the use of artificial intelligence systems (“AI Systems”) by financial institutions (available in French only). The Draft Guidelines clarify the AMF’s expectations about the measures financial institutions should take to manage the risks associated with AI Systems and to ensure the fair treatment of customers. This post reviews key aspects of the Draft Guidelines and how they could affect financial institutions that are regulated by the AMF.

The AMF acknowledges that the expectations set out in the Draft Guidelines are not one-size-fits-all. Their application will depend on an institution’s nature, size, complexity and risk profile as well as the risk ratings of the AI Systems that the institution is using.

Interested parties may submit their comments on the Draft Guidelines by November 7, 2025.

Application of the Draft Guidelines

The Draft Guidelines apply to Québec authorized insurers, financial services cooperatives, authorized trust companies and other authorized deposit-taking institutions (together referred to as “Financial Institutions”).

AI Systems and Risk: The Basics

Definition

For the purposes of the Draft Guidelines, the AMF uses the definition of “AI System” used by the Organisation for Economic Co-operation and Development, namely:

An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.

Risk ratings

The AMF expects Financial Institutions to assign a risk rating to each AI System they use, based on a variety of factors including:

  • The characteristics of the AI System and the data being used (e.g., the quality of the data and whether it includes personal information; and the explainability of the AI System’s results).
  • System controls (e.g., whether the AI System is isolated both upstream and downstream; the effectiveness of bias correction processes; the risk of re-identifying personal or confidential information; and the risk of inadvertently excluding a group of customers from participation due to bias in the data).
  • The institution’s exposure (e.g., how critical the AI System is to the Financial Institution; the type and volume of the clients affected in case of an issue; dependence on a third party for use of the AI System; and the Financial Institution’s reliance on AI Systems as a whole).

Given the rapid development of AI Systems, the AMF further expects a Financial Institution to periodically reassess the risk ratings that it has assigned.

Specific Expectations

The Draft Guidelines break down the AMF’s expectations into several categories, including (i) lifecycle risk management, (ii) governance mechanisms, (iii) policies and procedures, and (iv) protection of client interests.

Lifecycle risk management

The AMF expects Financial Institutions to develop processes and controls to manage risk at each stage of the AI System’s lifecycle, including risk relating to cybersecurity, biased outputs, use of discriminatory proxies, ethical misalignment, and hallucinations. For example:

  • Before proceeding with the design or procurement phases, Financial Institutions should assess whether using an AI System is the best solution for the problem in question, with due consideration to the AI System’s risk rating.
  • During the procurement and design phases, Financial Institutions should prioritize AI Systems that meet cybersecurity, explainability, and robustness targets.
  • During the design phase and on an ongoing basis, the Financial Institution should ensure that the AI System is being trained on high-quality data that is free of bias.
  • During the assessment and internal audit phases, the Financial Institution should assess the risks of discrimination and bias, cybersecurity breaches, hallucinations, ethical misalignment and intellectual property rights violations. When information necessary to complete the assessment of a high-risk AI System is missing, the AMF expects Financial Institutions to set appropriate limits on its use. Additionally, as part of their internal audits, Financial Institutions should also review the effectiveness of their governance mechanisms for managing the risks of the AI Systems.
  • Financial Institutions should continuously monitor the use and performance of their AI Systems to ensure that the quality of training data, outputs, and cybersecurity is maintained.

Governance mechanisms

The AMF expects Financial Institutions to establish governance mechanisms ensuring that responsibility for AI Systems is well defined and that all stakeholders have knowledge, sufficient and appropriate to their roles, of how the AI Systems work, including their limitations and risks. For instance:

  • Each AI System should be under the direct responsibility of an individual who reports to the member of the executive with overall responsibility for all AI Systems used by the Financial Institution.
  • All users of an AI System should be aware of its operating limitations and any limits placed on its use by the Financial Institution.
  • The Board of Directors is responsible for developing a policy for managing risks related to the use of AI Systems, maintaining an adequate level of knowledge of AI Systems, and ensuring that the AI Systems are assessed periodically.

Policies and procedures

The AMF expects Financial Institutions to have in place policies, processes, and procedures related to the use of AI Systems that are commensurate with the nature, size, and complexity of the institution's activities and risk profile, as well as the risk rating of the AI System. These policies must include:

  • Maintaining a centralized register that provides a complete picture of each of the AI Systems in use and all information necessary to make decisions concerning them.
  • Periodically assessing the risks related to the AI Systems and updating managers, users, AI system validation teams, and senior management on these risks accordingly.

Protection of client interests

To ensure that AI Systems do not unfairly impact clients, the AMF expects institutions to ensure that:

  • Their code of ethics upholds high standards for AI Systems use.
  • The use of discriminatory factors by AI Systems is documented, corrected and reported to senior management.
  • Discrimination and bias found in customer-impacting systems are promptly documented, corrected, and reported to senior management, and that monitoring mechanisms are put in place.
  • Special attention is paid to the quality of secondary data sources used, especially when the results impact customers.
  • Measures are in place to ensure that personal data is up-to-date and accurate.
  • Informed consent is obtained from customers when their personal data is being used by an AI System.
  • Customers are informed when they are interacting with an AI System (e.g., a chatbot) and that a human is available promptly upon customer request.
  • If a decision is made by an AI System or by a human using information gathered from the AI System, clear and simple explanations of those decisions are available to customers on request.

Next Steps

In anticipation of the finalization of the Draft Guidelines, Financial Institutions should consider taking the following steps:

  • Make an inventory of AI Systems currently being used, assess them, and assign each a risk rating.
  • For AI Systems currently in the design or procurement phase, assess whether their use is justified given the risks and whether an alternative solution would be better suited.
  • Review existing governance mechanisms and update them as applicable to ensure that responsibility for the use of AI Systems is well defined and that the principles of fair treatment of customers are adhered to.

