The McNamara fallacy is named after Robert McNamara, president of Ford and then US defence secretary during the early years of the Vietnam War. His strategy for fighting the war drew on what he had developed at Ford, where everything was measured and quantified.
When it came to fighting the Vietnam War, he argued that "Things you can count, you ought to count", including the body count. The problem was that the Department of Defense measured only what could be measured. Hence they thought they were winning the war, and on paper they were.
McNamara explained his philosophy in 1967:
The problem with this approach was well demonstrated by the sociologist Daniel Yankelovich, who in 1972 called it the "McNamara fallacy":
OK, then, have the authorities today fallen for the McNamara fallacy?
Inevitably in policymaking, subjectivity is often seen as bad and objectivity as good. To be objective, one needs something objective to measure and control, and that requires numbers and statistical models.
Take bank capital. There are, of course, myriad ways to determine bank capital; Basel risk-weighted assets and the leverage ratio are just two. Each has many problems, not least because of its dependence on measurement and on how the regulated entity interacts with the measurements.
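To see how the two measures can tell very different stories about the same bank, here is a minimal sketch. The balance sheet and risk weights below are purely hypothetical assumptions for illustration, not the actual Basel calibration.

```python
# Sketch contrasting two capital measures for the same hypothetical bank.
# Figures and risk weights are illustrative assumptions only.

def risk_weighted_ratio(capital, exposures):
    """Capital divided by risk-weighted assets.

    exposures: list of (amount, risk_weight) pairs.
    """
    rwa = sum(amount * weight for amount, weight in exposures)
    return capital / rwa

def leverage_ratio(capital, exposures):
    """Capital divided by total unweighted exposures."""
    total = sum(amount for amount, _ in exposures)
    return capital / total

# Hypothetical bank: 5 of capital against 100 of assets.
capital = 5.0
exposures = [
    (60.0, 0.0),   # sovereign bonds: assumed zero risk weight
    (30.0, 0.5),   # mortgages: assumed 50% risk weight
    (10.0, 1.0),   # corporate loans: assumed 100% risk weight
]

print(risk_weighted_ratio(capital, exposures))  # 0.2: looks well capitalised
print(leverage_ratio(capital, exposures))       # 0.05: far thinner
```

The same 5 of capital looks like a 20% ratio under the risk-weighted lens and 5% under the leverage lens; a bank can improve the first number simply by shifting into low-weight assets, which is exactly the kind of interaction with the measurement the text describes.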
A much simpler, and possibly better, way to set capital would be for the authorities to predetermine the amount of capital a bank has to hold, using their subjective judgement.
But of course, that can never happen. We need scientific objectivity and are stuck with McNamara's fallacy.