
The McNamara fallacy and the financial Authorities

November 5, 2022
We missed the excessive build-up of risk before 2007 despite having all the numbers and models right in front of us. Now we have doubled down on models and numbers. Have the Authorities fallen for the McNamara fallacy?

The McNamara fallacy is named after Robert McNamara, president of Ford and then US defence secretary during the early years of the Vietnam War. His strategy for fighting the war was based on the approach he had developed at Ford, where everything was measured and quantified.

When it came to fighting the Vietnam War, he argued that "Things you can count, you ought to count", including the body count. The problem was that the Department of Defense measured only what could easily be measured and paid attention to little else. Hence it thought it was winning the war, which it was, but only on paper.

McNamara explained his philosophy in 1967:

{% quote(author="") %} "It is true enough that not every conceivable complex human situation can be fully reduced to the lines on a graph, or to percentage points on a chart, or to figures on a balance sheet, but all reality can be reasoned about. And not to quantify what can be quantified is only to be content with something less than the full range of reason."

The problem with this approach was well captured by the sociologist Daniel Yankelovich in 1972, who called it the "McNamara fallacy":

{% quote(author="") %} "The first step is to measure whatever can be easily measured. This is OK as far as it goes. The second step is to disregard that which can't be easily measured or to give it an arbitrary quantitative value. This is artificial and misleading. The third step is to presume that what can't be measured easily really isn't important. This is blindness. The fourth step is to say that what can't be easily measured really doesn't exist. This is suicide."

OK, then, have the Authorities today fallen for the McNamara fallacy?

In many respects, yes. They prefer to regulate by models, focusing on perceived risk rather than actual risk.

In policymaking, subjectivity is almost inevitably seen as bad and objectivity as good. To be objective, one has to have something objective to measure and control. And that requires numbers and statistical models.

Take bank capital. There are, of course, myriad ways to determine it; capital based on Basel risk-weighted assets and the leverage ratio are just two. Each has plenty of problems, not least because of the dependence on measurements and on how the regulated entities interact with those measurements.
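To see how much the answer depends on what is measured, here is a minimal Python sketch, using a hypothetical balance sheet and stylized risk weights rather than the actual Basel calibration, comparing a risk-weighted capital ratio with a simple leverage ratio:

```python
# Stylized comparison of two capital measures for a hypothetical bank.
# The exposures and risk weights below are illustrative only, not the
# actual Basel calibration.

# Hypothetical exposures (in billions) and stylized risk weights
exposures = {
    "government bonds": (40.0, 0.00),       # weighted as riskless
    "residential mortgages": (30.0, 0.35),
    "corporate loans": (30.0, 1.00),
}

tier1_capital = 5.0  # billions, hypothetical

total_assets = sum(amount for amount, _ in exposures.values())
rwa = sum(amount * weight for amount, weight in exposures.values())

risk_weighted_ratio = tier1_capital / rwa        # depends on perceived risk
leverage_ratio = tier1_capital / total_assets    # crude, weight-free measure

print(f"Risk-weighted assets: {rwa:.1f}")
print(f"Risk-weighted capital ratio: {risk_weighted_ratio:.1%}")
print(f"Leverage ratio: {leverage_ratio:.1%}")
```

The same bank can look comfortably capitalised under one measure and thin under the other, and the gap is driven entirely by the risk weights, that is, by perceived rather than actual risk.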

A much simpler, and possibly better, way to set capital would be for the authorities to determine directly, using their subjective judgment, how much capital a bank has to hold.

But of course, that can never happen. We need scientific objectivity and are stuck with McNamara's fallacy.


