There is a desperate need to break the orthodox, widespread belief that all deficits and debt are equally undesirable.
The dust kicked up by the annual Budget “event” has long settled. What’s interesting is that the pre-Budget excitement always seems more intense and more drawn out than the post-Budget discussion.
Year after year, one point of tension in the build-up to the Budget is whether the government will adhere to a low fiscal deficit target or reset the fiscal consolidation path.
A specific number becomes the centre of discussion in newspapers and on TV channels: is a deficit of 3.1% too low, or 3.4% too high? This year, for instance, a few days prior to the Budget, one daily stated, “FRBM [Fiscal Responsibility and Budget Management] Panel offers little spending space to government” while another reported that the panel had suggested the “government not to keep fiscal deficit below 3% (of GDP)”.
And then came the big day: the finance minister announced the deficit target at 3.2% for 2017-18. Stock markets reacted positively, the rating agencies seemed content and Budget analysis programmes did not have too much to argue about. Meanwhile, the general public was left feeling that economists, experts and international rating agencies had it all figured out, and that the government had done well to heed their advice and keep the economy on track.
But why are economists, and consequently politicians, so anxious about a number like 3%? Didn’t the US fiscal deficit touch almost 15% of GDP in 2009? We know it was in response to the global financial crisis, but what were the consequences of this massive deficit? Did the US economy collapse soon after? Obviously not.

Given this fixation on the 3% target, it is pertinent to ask where the number comes from. A little-known fact may help: the number was supposedly “invented” by Guy Abeille, an official in the French finance ministry, and adopted by the Stability and Growth Pact agreement between EU member states. In an interview, Abeille supposedly said, “…we came up with this number in less than an hour. It was born on the corner of a table, without any theoretical reflection.” Even as an obituary for the EU is being written by many, the 3% deficit target seems as robust as ever. But if the figure is not spewed out by some complex dynamic stochastic general equilibrium model and was instead an (arbitrary) invention to ensure the stability of the European Monetary Union, why is a country like India besieged by this enigmatic target? The fiscal deficit target of 3% of GDP has now come to be taken as given; dangerously so, because once we reach this point in economic analysis, we stop questioning its basis.
More recently, the FRBM Review Committee has suggested that the fiscal deficit target may adhere to a range rather than a fixed number. This flexibility in the deficit target has, however, been replaced by a preferred anchor: public debt (or the stock of accumulated deficits) at 60% of GDP. One critical factor taken into consideration while arriving at the debt target, as articulated by N.K. Singh, chairman of the committee, is “the standard government solvency constraints.” This is in line with the IMF’s position that “prudence dictates that countries target a debt level well below the limit, the limit delineates the point at which fiscal solvency is called into question.” To the general public, words like ‘budget’, ‘deficit’ and ‘debt’ are symbolically loaded, conjuring up images of insolvency, bankruptcy, unsustainable indebtedness and countries going broke or simply falling off a (fiscal) cliff.
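Incidentally, the arithmetic linking a deficit target to a debt anchor is simple and worth making explicit. If a government runs a deficit of d% of GDP every year while nominal GDP grows at g% a year, the debt-to-GDP ratio converges to d/g whatever its starting level; a 3% deficit with 5% nominal growth settles at exactly 60% of GDP, which makes the two numbers mutually consistent under that growth assumption. The minimal sketch below illustrates this (the simulation and its parameters are my own illustration, not drawn from the committee’s report):

```python
# Sketch: how a steady deficit target implies a long-run debt anchor.
# Each year a deficit of `deficit` (share of GDP) is added to the debt
# stock while nominal GDP grows at rate `growth`, so the debt-to-GDP
# ratio b evolves as b_next = (b + deficit) / (1 + growth) and
# converges to deficit / growth regardless of where it starts.

def debt_ratio_path(deficit, growth, b0=0.0, years=200):
    """Return the debt-to-GDP ratio after simulating `years` budgets."""
    b = b0
    for _ in range(years):
        b = (b + deficit) / (1 + growth)
    return b

if __name__ == "__main__":
    # A 3% deficit with 5% nominal growth settles at 0.03/0.05 = 60% of GDP.
    print(round(debt_ratio_path(0.03, 0.05), 3))  # -> 0.6
```

Note that raising assumed growth or lowering the permitted deficit merely shifts the point of convergence; nothing in the recursion itself involves a solvency limit.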
Debts must also be repaid. And who bears the burden of debt at the time of repayment? The obvious answer is that future generations will have to pick up the tab by paying additional taxes to the government in their lifetime. Although economists have a rather archaic term for this, “Ricardian equivalence”, it often finds expression in more poignant phrases: “risking our future prosperity by sticking our children with the bill”, “stealing from future generations by running fiscal deficits”, “you don’t want to take from your children’s pocket” or “the unborn must share higher fiscal burden”. From such economic imagery arises the widespread belief that all deficits and debt – including those of the government – are equally undesirable and that the necessity to rein them in is unequivocal. Short-term deviations may be acceptable but in the longer term, these simply cannot be endured. And if this is true for you and me, then it must be true for the government too.
There is a desperate need to break this orthodox, neoliberal macroeconomic myth, which is unrelentingly perpetuated in macroeconomic discourse – even as many of the tenets of neoliberal economics are being questioned under new populist-nationalist regimes. The fiscal deficit is the most important macroeconomic policy instrument available to the state and it is imprudent for sovereign governments not to appreciate and utilise it to the maximum. But this will happen only when we delve deeper into the modern monetary system based on fiat currencies. Orthodox macroeconomics seems to be caught in a time warp: a period when currencies were on a gold (or silver) standard and the state promised conversion of all its debt into gold (or silver) at a fixed rate. Money creation by the state was therefore clearly constrained by its holdings of precious metals. That monetary system was buried in 1971, when the world abandoned the gold standard and adopted fiat currencies inconvertible into precious metals. Even when countries do agree to full convertibility of their currencies into foreign ones, it is mostly not at a fixed rate.
In an earlier article in The Wire, I had elaborated one of the most fundamental tenets of modern money theory (MMT): the state, unlike households and firms, does not face a budget constraint. Financially speaking, a state which issues its own fiat currency (like India, Japan or the US, but not Greece or Spain) can run deficits in its own currency of 3%, 15% or even 150%. It does not face a solvency issue, although there will be other important economic repercussions. Nonetheless, as a first step, it is critical to dismiss the notion that setting a fiscal deficit target arises from concerns over solvency per se. If deficit targets emanate from the other economic repercussions – specifically inflation and/or balance of payments deficits – then there is no necessity for a blanket rule on deficits or debt, like 3% or 60% of GDP respectively. The deficit is a policy variable that must be contextualised. While a deficit of 1% could trigger inflation in one situation, a deficit of 10% may have little impact on the price level (as in the case of Japan over the last 25 years and the US more recently).
There has also been a situation in history – prior to 1971 – when “modern money” was actually administered for a few years in the UK during the First World War. Although war is an extreme event, it does illustrate some fundamental misconceptions about fiscal deficits and modern money to which neoliberal macroeconomists continue to cling. Passed just one day after Britain declared war on Germany, the Currency and Bank Notes Act of 1914 “permitted the [UK] Government to print notes as legal tender in place of gold sovereigns and half-sovereigns.” The gold standard was de facto suspended. The notes, or treasury currency, were made legal tender so that obligations to the state (payment of taxes, duties, fees and fines) could be settled with them. But since printing of the currency would take time, the Act even allowed postal orders to “temporarily be current and legal tender in the United Kingdom in the same manner and to the same extent and as fully as current coins.”
But why was the gold standard suspended with such urgency? The answer is simple – the government needed resources to fight the war and these would have to be obtained from the private sector. By making government debt (IOUs) legal tender in settlement of obligations to the state, it ensured the private sector’s willingness to accept its IOU or debt. As MMTers argue, taxes drive money and it does not matter what the money thing is (even postal orders are money) as long as the state makes it legal tender or in other words, acceptable as a means of settling obligations due to the state. By abandoning the gold standard the UK government had gotten rid of its financial constraints and was able to spend in order to acquire or transfer real resources from the private sector to itself as the war demanded. It should therefore come as no surprise that the government’s share in the economy’s aggregate expenditure increased from 8% in 1913 to almost 40% by 1917.
A discerning reader will also realise that the UK government did not first raise financial resources through taxes or borrowings and then increase its expenditure. If that were so, what was the need for the Currency and Bank Notes Act to be passed a day after the war began? Could it not have simply increased borrowings or taxes payable in gold sovereigns? Even present-day economic historians are caught in the same time warp as (orthodox) macroeconomists when they articulate that the “exceptional nature of the expansion in government expenditure [to finance the War] … required an exceptional fund raising exercise by the government.” Another historian argues that “25% of the government’s financial resources were derived from taxes … borrowing from the public were the chief forms of war finance.” Although the government borrowed money from the public and taxes were collected in due course, it is important to reiterate that this was not a precondition for government spending to happen. Put simply, spending through the issue of new fiat currency preceded tax collection and borrowings.
Free from the constraints of the gold standard (which required conversion of currency to gold at a fixed rate), the government was then able to procure resources from the private sector to fight the war. From a small fiscal surplus in 1913, Britain faced a fiscal deficit of almost 48% of GDP in 1916-17, while the national debt-to-GDP ratio increased from 26% in 1913-14 to more than 127% in 1918-19 and the monetary base (M0) more than doubled between 1913 and 1919 (all statistical data on the war in this article is drawn from here and here). But what were the economic consequences of such profligacy? First, Britain did eventually win the war and second, its economy did not collapse.
Nonetheless, unbridled issue of currency by the UK government and its burgeoning fiscal deficits did have serious economic repercussions, both positive and negative. Real GDP grew by 13% in the war years while employment grew by almost 5%. But this happened at a heavy cost to the private sector. Private sector consumption and investment expenditure as a percentage of GDP contracted between 1913 and 1917, from 77% to 60% and from 7.6% to 0.9%, respectively. While total employment grew, the share of civilian employment declined sharply by 15% even as military employment saw a ten-fold increase. The composition of output of selected items also showed an interesting trend – output of essential goods such as grain, meat and potatoes stagnated while supplies of arms and ammunition increased sharply. But these qualitative shifts were to be expected: the government was transferring national resources to itself to fight the war.
The question that must be asked is why such action is considered inappropriate during peacetime to meet an objective like full employment. Why can the government not employ resources (specifically, labour) lying unutilised by the private sector through increased spending, with the money simply “printed” or injected through the banking system? The answer is usually that deficits can trigger high inflation and unsettle the balance of payments as well as strain exchange rates. The impact of war spending by the UK government once again provides some useful insights in this regard. The retail price index more than doubled during the war years while imports grew sharply, taking the UK’s current account from a surplus of £125 million in 1913 to a deficit of £204 million by 1919.
The trade-off between growth and employment from increased government spending on the one hand and inflation and the external account on the other is evident. In fact, concern over destabilising inflation became a major reason for neoliberal macroeconomists to suppress fiscal activism. It must be mentioned that MMTers do not reject the possibility of inflation. They stress, however, that first, the reasons for high inflation are contextual – especially whether there is unemployment and surplus capacity across industries – and second, a normative judgment is required to decide the weights assigned to each objective before we rule one out in favour of the other.
One other question remains unanswered: why were taxes and borrowings required at all by the UK government if they were not needed to finance its spending? MMT provides an answer: to curb excessive aggregate demand and consequent inflation. While tax revenues increased in absolute terms, they were clearly inadequate to drain out the injections arising from increased government spending, as seen from the fact that the fiscal deficit grew substantially during the war years. To siphon out the remaining injections of currency by the government and curb the possibility of runaway inflation, “borrowing” – or accumulation of public debt – was the way out. Not only was the bank rate increased from 3% on July 29 to 10% on August 1, 1914, but banks were even threatened with compulsory purchases of treasury securities if voluntary purchases proved inadequate. Banks supported the purchase of these securities through liberal advances to the public. Unfunded medium and long-term debt increased from a mere £50 million in 1913 to a whopping £4.5 billion by 1919. This drain, through borrowings, of the excess money put into circulation by war expenditures not only kept inflation in check but also provided the private sector with massive interest-yielding assets at the end of the war.
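The drain mechanism described above is, at bottom, a simple accounting identity, and it can be sketched with purely illustrative numbers (none of the figures below are the actual UK wartime accounts): spending injects currency into the private sector, while taxes and bond sales drain it back out, and whatever is not drained remains as net new money in circulation.

```python
# Illustrative sketch of the drain described above. All figures are
# made up for illustration; they are NOT the actual UK wartime accounts.

def net_injection(spending, taxes, bond_sales):
    """Currency left circulating after taxes and borrowing drain it."""
    return spending - taxes - bond_sales

war_spending = 100.0  # the state spends first, by issuing its own IOUs
tax_drain = 25.0      # roughly a quarter recouped via taxes
bond_drain = 70.0     # most of the rest drained through "borrowing"

# The fiscal deficit (spending minus taxes) is 75; of that, 70 is
# absorbed into interest-bearing debt and only 5 stays as new money.
print(net_injection(war_spending, tax_drain, bond_drain))  # -> 5.0
```

On this reading, taxation and bond sales are alternative drains on the same injection, which is why inadequate tax revenue made heavy borrowing the residual anti-inflationary tool.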
Britain bravely announced its return to the gold standard in 1925 at the pre-war parity of $4.86 to the pound. However, the high inflation of the war years meant that the pound was now grossly overvalued. The UK government began to witness an outflow of gold, which was stemmed through higher interest rates. The negative impact of this step on the domestic economy ultimately forced the UK to abandon the gold standard in 1931.
For us, however, the lesson to be drawn from this historical episode is the power of “modern money” in allowing the state to achieve higher levels of real GDP and, more importantly, employment. The state does not finance its expenditure by raising money from the private sector. Instead, it transfers resources from the private sector to itself by issuing currency. To quell the inflationary and external impact of the increased expenditure, the state resorts to taxation and borrowing to drain surplus liquidity from the system and, in recent years, to ensure that the interest rate target set by the central bank (and perhaps the government, jointly) is achieved. These implications of modern money are critical if governments want to break free from arbitrary (self-imposed) constraints that originate in neoliberal macroeconomics.
With growing concern over unemployment, discontent over the preoccupation of central banks with low and stable inflation, as well as a recent turn towards fiscal activism in the West, it could be only a matter of time before MMT becomes the basis for a new populist-nationalist macroeconomic policy. And as in so many instances in the recent past, what was once considered left/left-of-centre economics has now been appropriated by the right/right-of-centre.
Sashi Sivramkrishna is author of In Search of Stability: Economics of Money, History of the Rupee, Manohar, New Delhi, 2015.