Artificial intelligence and its use in financial regulations

Jon Danielsson

November 4, 2023
Banks are embracing artificial intelligence (AI), and banking regulators must keep up or risk being outmanoeuvred. Although the rising use of AI in the financial system is generally beneficial, it also threatens the stability of the system by introducing new and poorly understood risks.

AI is especially helpful in areas such as risk management, compliance, credit allocation, fraud detection and microprudential regulation. These areas generate vast amounts of data for AI to train on, the rules are clear and stable over the horizon in which AI operates, and the cost of mistakes is small. In effect, human risk managers and financial supervisors are training their AI replacements.

The picture is different for macroprudential regulations, which are concerned with financial stability. Here data is scarce, the rules are unclear and the cost of mistakes is catastrophic. Even so, AI will be of considerable help in formulating and enforcing these regulations, and in advising on the resolution of crises.

AI requires a large amount of data to be effective, but data on crises is extremely limited, as they are (thankfully) rare. Crises occur once every 43 years for the average OECD country and once every 14 years for the United Kingdom, the OECD's most crisis-prone member. While all crises share the same fundamental vulnerabilities (leverage, self-preservation and complexity), the details of each are distinct. Meanwhile, the forces that lead to instability usually remain hidden until a crisis is already under way. The consequence is that AI cannot effectively learn from observable data on past crises and, even worse, such data can be significantly misleading about what might happen in future crises.
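To see how binding this constraint is, consider a back-of-envelope sketch in Python. The sample length and observation frequency are illustrative assumptions of mine; only the crisis frequencies come from the figures above.

```python
# Back-of-envelope illustration of crisis-data scarcity.
# Assumptions (illustrative, not from the text): a 50-year sample of daily data.
years_of_data = 50           # length of a generous modern sample
obs_per_year = 252           # trading days per year
crisis_interval_oecd = 43    # years between crises, average OECD country
crisis_interval_uk = 14      # years between crises, United Kingdom

total_obs = years_of_data * obs_per_year
print(f"Observations in sample: {total_obs:,}")                # 12,600
print(f"Expected crises, average OECD country: "
      f"{years_of_data / crisis_interval_oecd:.1f}")           # about 1.2
print(f"Expected crises, United Kingdom: "
      f"{years_of_data / crisis_interval_uk:.1f}")             # about 3.6
# Thousands of observations but only a handful of crisis episodes,
# each differing in its details: far too few labelled events to learn from.
```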

Furthermore, the analysis at which AI excels, such as risk management, has increasing returns to scale, so competitive pressures drive outsourcing to a small number of cloud vendors, such as BlackRock's Aladdin. The greater the adoption of such risk management as a service (RMaaS), the greater the potential for herd behaviour in investment decisions, as more banks come to perceive and react to risk in the same way. The result is synchronised buying and selling, which magnifies the financial cycle and increases systemic risk.
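The mechanism can be made concrete with a toy simulation, entirely illustrative: the threshold rule, noise levels and bank count are arbitrary assumptions of mine. The fewer distinct risk engines banks rely on, the more their sell decisions land on the same days.

```python
import numpy as np

# Toy model of herding under shared risk engines.
# All parameters are illustrative assumptions, not estimates.
rng = np.random.default_rng(1)
n_banks, n_days = 50, 1000
market = rng.standard_normal(n_days)               # common market shock

def peak_sellers(n_engines):
    """Assign each bank one of n_engines risk engines; a bank sells when
    its engine's risk reading breaches a threshold. Returns the largest
    number of banks selling on a single day."""
    noise = rng.standard_normal((n_engines, n_days))  # engine-specific noise
    readings = 0.5 * market + noise                   # each engine's risk estimate
    engine_of = rng.integers(0, n_engines, n_banks)   # which engine each bank uses
    selling = readings[engine_of] < -2.0              # per-bank, per-day sell signal
    return int(selling.sum(axis=0).max())

for n_engines in (50, 5, 1):
    print(f"{n_engines:2d} distinct engines: "
          f"max banks selling on one day = {peak_sellers(n_engines)}")
# With a single shared engine, every bank sells on exactly the same days,
# so the peak reaches all 50 banks; with many distinct engines, selling
# is far more dispersed across days.
```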

AI can amplify stress in times of crisis by speeding up decision-making. The same speed that helps an individual bank survive extreme stress also steers investment towards the safest assets, which can lead to liquidity hoarding, bank runs and fire sales. Human decision-makers certainly cause these outcomes too, but AI is particularly efficient at doing so. Even worse, an AI engine might find that the best way to meet its objectives is to maximise vulnerability, and it may be impossible to prevent that until it is too late. The potential for such damaging behaviour is amplified by how AI engines in different firms interact with one another.

When AI engines interact with their counterparts in other financial institutions, the result can be undesirable coordination: manipulation of the system for private gain. Their speed of analysis, coupled with unclear objectives and the difficulty of attributing responsibility, makes such outcomes more likely than in a system run by human bankers.

Where the use of AI poses the highest risk is also where it is most useful: resolving the most severe financial crises. These crises are very costly, running into hundreds of billions of pounds for the United Kingdom. We will do whatever it takes to contain them; history shows that when regulations and laws stand in the way of the best solutions, they get changed, as we learnt in 2008. Earlier this year, Switzerland held an emergency parliamentary session to change the law in order to resolve the Credit Suisse crisis. Crisis resolution also demands significant public resources. Because it involves suspending laws and reallocating substantial resources, only the political leadership, not AI or the central bank, has the legitimacy to resolve a crisis. That creates particular problems for AI providing advice. Not only may it fail to understand the objectives of crisis resolution except at the most abstract level, it also cannot learn from the intuitive way the various stakeholders (the political leadership, the private sector, the authorities and the judiciary) make decisions.

Nevertheless, AI will be crucial for resolving crises. It may be the only way to analyse the state of the system and run scenarios quickly enough to be useful to the leadership. And the analysis may leave no more than one tenable option, so that AI becomes the decision-maker by stealth.

Ultimately, the mushrooming use of AI in the private sector leaves the authorities with no option but to follow, which they may find difficult given their rigidity and conservatism. AI will be both essential and a threat to stability, and the authorities need to be as prepared as possible.

