On December 5, 2024, the Canadian Securities Administrators (CSA) released CSA Staff Notice and Consultation 11-348 – Applicability of Canadian Securities Laws and the Use of Artificial Intelligence Systems in Capital Markets (the Notice). The Notice was issued in light of the continued growth in the use of artificial intelligence (AI) systems in capital markets, which the CSA noted has the potential to improve market efficiency but poses unique risks and challenges for investors.
The Notice outlines overarching themes relating to the responsible use of AI in capital markets and the considerations that market participants should take into account when implementing AI systems. It also provides guidance on the application of securities laws to the use of AI. With the Notice, the CSA also launched a public consultation ending March 31, 2025, to allow interested stakeholders to submit comments on the potential regulation of AI within capital markets.
Overarching themes relating to the use of AI
The Notice outlines five overarching themes that apply to the use of AI in capital markets:
- Technology and securities regulation. The CSA notes that securities laws are generally technology-neutral and therefore apply to the activity undertaken by a market participant, not to the technology used in the activity. This means that AI systems will be treated differently depending on their function.
- Governance and oversight. The CSA recommends that market participants adopt AI-focused governance practices that can mitigate the unique risks and challenges posed by AI. This could include having a “human-in-the-loop” tasked with monitoring a given AI system to ensure it operates as intended.
- Explainability. The CSA states that market participants should implement AI systems with high levels of explainability, meaning that the system’s reasoning is clear and comprehensible. The CSA discourages AI systems that rely on “black box” processes that cannot be understood. The CSA emphasizes that capital markets function on transparency and disclosure, and that AI systems should align with these principles.
- Disclosure. The CSA encourages market participants to be transparent about their use of AI. Disclosure should not be boilerplate, nor should it contain false or misleading claims. Rather, market participants should provide robust disclosure to allow investors to make informed decisions, including transparency regarding how AI is used.
- Conflicts of interest. The CSA recognizes that AI systems can create new considerations related to conflicts of interest. Market participants must ensure the outputs of their AI systems do not result in decisions which favour their interests over those of their clients or investors.
Specific guidance for market participants on the use of AI
The Notice provides specific guidance to market participants on how AI systems can be implemented to support their individual roles in the capital markets. The CSA structures this guidance around existing regulatory requirements, outlining how AI may impact a market participant’s existing obligations.
- Registrants. The CSA encourages firms registered to conduct certain investment-related activities to consider how the use of AI will affect the requirements of their registration. The CSA notes that registered advisers and dealers should be mindful of the fact that investors rely on their advice, and that the use of AI should be monitored to ensure it does not negatively impact the quality of such advice. The CSA also notes that investment fund managers should consider whether the use of AI in managing investment funds meets the requisite standard of care expected of them.
- Non-investment fund reporting issuers. The Notice provides extensive guidance to reporting issuers on how their continuous disclosure obligations apply to the use of AI. The CSA explains that statements about the prospective use of AI in disclosures may constitute forward-looking information. The CSA reminds issuers not to make such statements unless they have a reasonable basis for doing so. Issuers are also cautioned against promotional statements regarding the use of AI that could mislead investors.
- Marketplaces. The CSA encourages marketplaces to develop robust controls in order to meet regulatory requirements relating to supervisory controls, policies, and procedures. The CSA also notes that marketplaces should develop policies providing for regular testing of AI systems, validation of outputs, and procedures for mitigating risks.
- Clearing agencies. Clearing agencies are subject to comprehensive requirements relating to risk management, systems design, operational performance, and regulatory performance, all of which are applicable to the use of AI. The CSA notes that clearing agencies should develop adequate controls in order to meet these requirements. Clearing agencies are also encouraged to undertake reviews and vulnerability assessments of their AI systems at least annually.
- Trade repositories. To identify and minimize the risks related to AI, the CSA notes that trade repositories must implement, maintain, and enforce appropriate controls and procedures. The CSA further encourages trade repositories to incorporate robust security measures into their AI systems to ensure that sensitive information is safeguarded against unauthorized access.
- Designated rating organizations and designated benchmark administrators. The CSA encourages an appropriate degree of explainability and transparency of AI systems for both designated rating organizations and designated benchmark administrators. The CSA explains that this requires public disclosure of the use of AI, as well as disclosure of the methodologies, models, or assumptions used by the AI system.
Further consultation on AI in capital markets
Although the Notice does not create or modify current legal requirements for market participants, the CSA is seeking comments from market participants on whether securities laws should regulate the use of AI in capital markets. Interested stakeholders can submit responses and feedback until March 31, 2025.
The authors would like to thank Chloe Loblaw and Patrick Lajoie for their significant contributions to this article.