
The US Treasury has published several documents designed for the US financial services sector that suggest a structured approach to managing AI risks in operations and policy (see the subheading ‘Resources and Downloads’ towards the bottom of the linked page). The CRI Financial Services AI Risk Management Framework (FS AI RMF) comes with a Guidebook [.docx] that details the framework, which was developed collaboratively by more than 100 financial institutions and industry organisations, with input from regulators and technical bodies.
The objective of the FS AI RMF is to help financial institutions identify, evaluate, manage, and govern the risks associated with AI systems, while letting firms continue to adopt AI technologies responsibly.
Sector-specific framework
AI systems introduce risks that existing technology governance frameworks don’t address, including algorithmic bias, limited transparency in decision processes, cyber vulnerabilities, and complex dependencies between systems and data. Large language models (LLMs) raise particular concerns because their behaviour can be difficult to interpret or predict: unlike traditional software, which is deterministic, an AI system’s output varies depending on context.
Financial institutions already operate under extensive regulation, and there is a raft of general guidance such as the NIST AI Risk Management Framework. However, general frameworks lack the detail needed to reflect sector practices and regulatory expectations when applied to the operations of financial institutions. The FS AI RMF is positioned as an extension of the NIST framework, adding sector-specific controls and practical implementation guidelines.
The Guidebook explains how firms can assess their current AI maturity and implement controls to limit their risk. Its aim is to promote consistent and responsible AI practices and support innovation in the sector.
Core structure
The FS AI RMF connects AI governance with the broader governance, risk, and compliance processes already in place at financial institutions.
The framework contains four main components. The first is an AI adoption stage questionnaire that lets organisations determine the maturity of their AI use. The second is a risk and control matrix, which contains a set of risk statements and control objectives aligned with the adoption stages. The Guidebook explains how to apply the framework, while a separate control objective reference guide provides examples of controls and supporting evidence.
The framework defines a total of 230 control objectives organised according to four functions adapted from the broader NIST AI Risk Management Framework: govern, map, measure, and manage. Each function contains categories and subcategories that describe elements of effective AI risk management and governance.
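To make that structure concrete, the organisation of control objectives under the four NIST-derived functions could be modelled as a simple data structure. This is an illustrative sketch only: the identifiers, field names, and example objectives below are hypothetical, not taken from the framework itself.

```python
from dataclasses import dataclass

# The four functions adapted from the NIST AI RMF.
FUNCTIONS = ("govern", "map", "measure", "manage")

@dataclass(frozen=True)
class ControlObjective:
    objective_id: str   # e.g. "GV-1.1" (illustrative numbering, not the framework's)
    function: str       # one of FUNCTIONS
    category: str       # descriptive grouping within the function
    description: str

    def __post_init__(self):
        if self.function not in FUNCTIONS:
            raise ValueError(f"unknown function: {self.function}")

# Illustrative objectives, not quoted from the FS AI RMF.
objectives = [
    ControlObjective("GV-1.1", "govern", "accountability",
                     "Assign senior ownership for AI risk."),
    ControlObjective("MS-2.3", "measure", "bias monitoring",
                     "Monitor model outputs for disparate impact."),
]

# Group objectives by function, mirroring the framework's hierarchy.
by_function = {}
for obj in objectives:
    by_function.setdefault(obj.function, []).append(obj)

print(sorted(by_function))  # ['govern', 'measure']
```

In a real implementation the full set of 230 objectives would populate this structure, letting a firm query which objectives fall under each function and category.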
Assessing AI maturity
The adoption stage questionnaire determines the extent to which an organisation is using AI. Some firms, for example, rely on traditional predictive models in limited applications; others deploy AI in core business processes or only in customer-facing roles.
The questionnaire helps organisations determine where they currently sit on the spectrum of AI use, evaluating factors like the business impact of AI, governance arrangements, deployment models, use of third-party AI providers, organisational objectives, and data sensitivity.
Based on this assessment, organisations are classified into four stages of AI adoption:
- initial stage: organisations that have little or no operational AI deployment; AI may be under consideration but is not embedded
- minimal stage: limited AI use in low-risk areas or isolated systems
- evolving stage: organisations running more complex AI systems, including applications that involve sensitive data or external services
- embedded stage: AI plays a significant role in business operations and decision-making
These stages help institutions focus their efforts on controls appropriate to their maturity level. A firm at an early stage does not need to implement every control immediately, but as AI becomes more integrated, the framework introduces additional controls to address growing levels of risk.
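One way to picture this stage-based scoping is a cumulative mapping: each stage inherits the controls of earlier stages and adds new ones as AI becomes more integrated. The stage names below follow the framework, but the control identifiers are hypothetical placeholders for illustration.

```python
# Adoption stages in increasing order of maturity, per the framework.
STAGES = ["initial", "minimal", "evolving", "embedded"]

# Hypothetical controls introduced at each stage; identifiers are illustrative.
CONTROLS_INTRODUCED = {
    "initial":  ["inventory-ai-use", "assign-ownership"],
    "minimal":  ["data-quality-checks"],
    "evolving": ["third-party-due-diligence", "bias-monitoring"],
    "embedded": ["continuous-model-validation"],
}

def controls_for_stage(stage: str) -> list[str]:
    """Return all controls applicable at a stage, cumulative over earlier stages."""
    idx = STAGES.index(stage)  # raises ValueError for unknown stages
    applicable = []
    for earlier in STAGES[: idx + 1]:
        applicable.extend(CONTROLS_INTRODUCED[earlier])
    return applicable

print(len(controls_for_stage("evolving")))  # 5
```

A firm classified as "initial" would see only two controls here, while an "embedded" firm would see all six, which matches the framework's principle that control expectations grow with maturity.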
Risk and control
The control objectives for each AI adoption stage address governance and operational topics including data quality management, fairness and bias monitoring, cybersecurity controls, transparency of AI decision processes, and operational resilience.
The Guidebook provides examples of possible controls and types of evidence institutions can use to demonstrate compliance. Each firm must determine which controls fit it best.
The framework recommends maintaining incident response procedures specific to AI systems and creating a central repository for tracking AI incidents, processes that will help organisations detect failures and improve governance over time.
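A central AI incident repository of the kind recommended here could be as simple as an append-only log queryable by incident category. The sketch below is an assumption-laden illustration; the field names and categories are not the framework's schema.

```python
from datetime import datetime, timezone

class IncidentRepository:
    """Minimal append-only log of AI incidents; field names are illustrative."""

    def __init__(self):
        self._records = []

    def record(self, system: str, category: str, summary: str) -> dict:
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "system": system,
            "category": category,   # e.g. "bias", "outage", "data-quality"
            "summary": summary,
        }
        self._records.append(entry)
        return entry

    def by_category(self, category: str) -> list:
        return [r for r in self._records if r["category"] == category]

repo = IncidentRepository()
repo.record("credit-scoring-model", "bias", "Disparate approval rates flagged.")
repo.record("chatbot", "outage", "Provider API unavailable for 20 minutes.")
print(len(repo.by_category("bias")))  # 1
```

Even a log this basic supports the goals the framework describes: spotting recurring failure modes and feeding lessons back into governance over time.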
Trustworthy AI
The framework incorporates principles for trustworthy AI: validity and reliability, safety, security and resilience, accountability, transparency, explainability, privacy protection, and fairness. These provide a foundation for evaluating AI systems across their full lifecycle. In simple terms, financial institutions must ensure AI outputs are reliable, that systems are protected against cyber threats, and that decisions can be explained when they affect customers or have regulatory relevance.
Strategic implications
For senior leaders in financial institutions of any nation, the FS AI RMF offers a guide to integrating AI into existing risk management frameworks. It stresses the need for coordination across business functions: technology teams, risk officers, compliance specialists, and business units all need to participate in AI governance.
Adopting AI without strengthening governance structures may expose institutions to operational failures, regulatory scrutiny, or reputational damage. Conversely, firms that build clear governance processes will be more confident in deploying AI systems.
The Guidebook frames AI risk management as an evolving discipline. As AI technologies develop and regulatory expectations change, institutions will need to update their governance practices and risk assessments accordingly.
For financial sector decision-makers, the message is that AI adoption must progress in step with risk governance. A structured framework such as the FS AI RMF provides a common language and method to manage the evolution.
(Image source: “Law Books” by seychelles88 is licensed under CC BY-NC-SA 2.0.)

