Lessons from history – Economics Observatory
https://www.economicsobservatory.com

How have economists thought about climate change?
https://www.coronavirusandtheeconomy.com/how-have-economists-thought-about-climate-change
Wed, 09 Nov 2022

The post How have economists thought about climate change? appeared first on Economics Observatory.

Greenhouse gases – water vapour, carbon dioxide, methane, ozone, nitrous oxide and chlorofluorocarbons – are what have made our planet habitable. They let sunlight pass through the atmosphere and prevent heat from the sun escaping.

But human actions such as developing land, raising livestock and burning fossil fuels have affected the earth’s climate by increasing the concentration of these greenhouse gases in the atmosphere, blocking outward heat and warming the planet. In many regions of the world, temperatures are already more than 1.5°C warmer than pre-industrial levels, which makes heatwaves, droughts and extreme weather events more likely. The more the planet warms, the greater the frequency and severity of these phenomena (NASA, 2019).

Climate change economics

Protection from the risk of climate change is a global challenge. But encouraging governments, businesses and individuals across the world to take action isn’t always straightforward. Some may be reluctant as the benefits of mitigation accrue globally, but the costs are borne by specific firms or countries that cannot stop others from ‘free riding’ on their investment. Economists refer to this as a public goods problem.

Further, efforts to halt climate change are likely to benefit future generations the most, while it is the current population who will incur the costs. In this regard, climate change is also an intergenerational public good. The net result of this public goods problem is that markets will not provide the public good cost effectively, as not all will be motivated to take action. This makes international cooperation – through protocols and aligned policies – necessary to address the problem.

The 2015 Paris Agreement (at COP21) is an example of international cooperation, where it was agreed to limit warming to well below 2°C. This emblematic target has been traced to the early work of the Yale economist William Nordhaus. He based the target on observed historical average temperatures, noting ‘if there were global temperatures more than 2 or 3°C above the current average temperature, this would take the climate outside of the range of observations which have been made over the last several hundred thousand years’ (Nordhaus, 1975, 1977).

The view that temperatures had not previously gone over this threshold was later supported by evidence from ice cores (Jaeger and Jaeger, 2011). Recent studies suggest that warming over 2°C could increase the risks of moving elements of the climate system beyond critical thresholds, and this could have severe consequences for ecosystems and humanity as a whole (Wunderling et al, 2021).

Some economists have been at the forefront of debates about climate change. For example there were several early studies published in the American Economic Review (Nordhaus, 1977; D’Arge et al, 1982; Lave, 1982; Kokoski and Smith, 1987).

But there have been some high-profile attacks within the economics research community on its track record in addressing the issue of climate change (Oswald and Stern, 2019; Smith, 2021). For example, one study that analysed articles appearing in top economics journals pointed to the silence of economists on the topic, highlighting that by August 2019, only 57 articles out of around 77,000 published had tackled the subject (Oswald and Stern, 2019).

Things have improved somewhat since Oswald and Stern’s call to arms. Using the same search terms on the Web of Science database, 141 articles focusing on ‘climate’, ‘carbon’ or ‘warming’ had been published in the same journals by September 2021, compared with the 57 articles identified by 2019.

Something crucial that Oswald and Stern did not mention is that the coal face of research related to climate change is in the field journals of environmental economics and the related (but distinct) field of ecological economics, where more innovative research tends to be undertaken (Heckman and Moktan, 2020). These journals were omitted from their original search but are included here: by September 2021, they had published 1,595 articles on topics related to climate change.

Table 1: Bibliometric search results

                                                      Oswald and Stern (2019)   September 2021 search
Search term                                           ‘Climate or Carbon        ‘Climate or Carbon
                                                       or Warming’               or Warming’

General interest economic journals
Quarterly Journal of Economics                                   0                        1
Economic Journal                                                 9                       22
Review of Economic Studies                                       3                       10
Econometrica                                                     2                        2
American Economic Review                                        19                       65
Journal of the European Economic Association                     8                       11
Economica                                                        4                        9
Journal of Political Economy                                     9                       14
American Economic Journal: Applied Economics                     3                        7
Total, general interest                                         57                      141

Field journals
Journal of Environmental Economics and Management                -                      306
Journal of the Association of Environmental and
  Resource Economics                                             -                       92
Ecological Economics                                             -                     1197
Total, field                                                     0                     1595

Total, general interest and field                               57                     1734

Source: Web of Science
Note: September 2021 search results also include papers and proceedings, as these are based on papers presented at the annual conference.

As Figure 1a and 1b show, there has been an increase in the publication of papers on climate change since the turn of the millennium. This is also reflected in the significant increase in publications in the fields of environmental and ecological economics over the past 20 years (Rath and Wohlrabe, 2015).

Analysis of four decades’ worth of publications in the top environmental economics journal shows an increase in articles related to climate change and in citations of these studies in leading general interest economics journals (Kube et al, 2018). In addition, reflecting the increased demand, there is now a bespoke journal, Climate Change Economics, devoted exclusively to the subject.

Figure 1: Number of journal articles on climate change

Panel A: Economics journals

Panel B: Environmental economics journals

Source: Web of Science
Note: Search terms (TS= (‘climate’ or ‘carbon’ or ‘warming’) and SO=(‘Journal Title’)), refined by: ‘document types: articles or proceeding papers’

Since the early 1990s, climate change has spurred a large body of economic research. Much of it focuses on the linkages between weather and economic outcomes in order to understand the possible economic consequences of future climate change (see Dell et al, 2014, for a review). One example is a study of the monthly relationship between weather and the crime rate across the United States over a 30-year period, used to predict the likely future relationship between increases in temperature and crime (Ranson, 2014).

Studies also use the historical record to examine the impacts of natural disasters and mitigation efforts in order to understand how past societies have adapted to changes in climate (Kahn, 2006 and Hornbeck, 2012). For example, one looks at how local economies adapted to the American Dust Bowl, an environmental catastrophe that resulted in severe soil erosion (Hornbeck, 2012).

Has there been any evolution of economic thinking about the climate?

In July 1991, the Economic Journal published a policy forum specifically on the Economic Aspects of Global Warming, with notable contributions from William Cline, the future economics Nobel laureate William Nordhaus and the late David Pearce. Of the three, only Pearce would typically be identified as an environmental economist.

This timely publication came hot on the heels of the first Intergovernmental Panel on Climate Change report (IPCC, 1990), which acknowledged that the earth is warming and that human activity is substantially increasing atmospheric concentrations of greenhouse gases. The themes covered are still relevant: assessing the scientific evidence; when to take action; and how to achieve efficient reductions in emissions. Each theme is addressed below to see how understanding of the issues has evolved since the early 1990s.

Global warming, then and now

The scientific consensus on global warming from the first IPCC report was that there was ‘unambiguous evidence’ of a build-up of carbon dioxide in the atmosphere (Cline, 1991). Carbon dioxide concentrations had increased by 25%, from 280 parts per million (ppm) by volume before the Industrial Revolution to around 350 ppm by the late 1980s. Since the first IPCC report, the concentration of carbon dioxide in the atmosphere has increased to 410ppm, a 46% increase from pre-industrial levels, and warming is projected to exceed 1.5°C and 2°C by the end of the present century (IPCC, 2021).

Crucially, the research highlighted that carbon dioxide accumulation was irreversible in the short run. Reducing emissions to zero tomorrow will reduce the flow of greenhouse gases but will not reduce their concentration in the atmosphere (Cline, 1991).

In effect, climate change is a long-run problem. Cline critically assessed the evidence supporting the nascent theory of global warming and concluded that the ‘greenhouse science holds up well to scrutiny’, although he was critical of tying the horizon of analysis to an arbitrary doubling of greenhouse gas concentrations rather than to a specified time period. The first IPCC report concluded that a ‘business as usual’ scenario would see warming of 2.5°C by 2025. This gave a horizon of 35 years, compared with Cline’s proposed analysis of 250-300 years.

Although Cline highlighted the uncertainty evident in the first IPCC report, he noted that ‘uncertainty is not necessarily grounds for policy inaction’. The policy implications, from Cline’s perspective, depended on the attitude of policy-makers towards risk: if policy-makers were very concerned about future risks, they should attach higher weight to upper-bound warming scenarios. He concluded that if further evidence showed the greenhouse effect to be as serious as it appeared in 1990, then by the end of the decade ‘it will be time to fish or cut bait on the more painful process of implementing measures to cut carbon dioxide and other trace gas emissions’.

The issue of uncertainty is still a significant topic. While climate models have improved significantly, there is still ambiguity about the likely impact of further climate change and whether future damages will be modest, catastrophic or somewhere in between. For example, the 2021 IPCC report provides estimates of likely warming under five scenarios – SSP1-1.9, SSP1-2.6, SSP2-4.5, SSP3-7.0 and SSP5-8.5 – based on different trends in carbon dioxide and other greenhouse gas emissions, as well as cooling from aerosols and land use.

The two highest warming scenarios assume that carbon emissions roughly double or triple by the end of the present century, while the lowest scenarios require substantial reductions in annual emissions below current levels (in the region of 150% – that is, net-negative emissions).

Some studies assign a low, but non-negligible, probability to the outcome in which climate change poses a direct existential threat to humanity, although with the proviso that climate change is more likely to have indirect effects on other existential risks, such as future pandemics and wars (Ord, 2020). Likewise, some economists urge caution when making policy based on low probability but high impact events such as human extinction from future climate change (Weitzman, 2009), although there is disagreement on the extent of the risk of catastrophe (Nordhaus, 2012).

When to take action?

Climate mitigation (reducing greenhouse gas emissions) is not costless. Cleaner technology requires substantial resources both for installation and future research and development (R&D). It also requires governments to make difficult decisions on spending and allocating resources – increased spending on climate mitigation may result in cuts to health or education spending.

Take the example of ‘robot trees’ recently installed in Cork in Ireland, which act as air purifiers. Each tree costs €350,000, which has sparked a huge debate over whether this was an effective use of resources or whether a more cost-effective solution would have been either to plant real trees or enforce traffic bans.

There is also a trade-off over time: technological solutions to climate change might be expensive to implement today but, as we learn how to develop the technology, costs fall; and, with economic growth, future generations will be wealthier. This means that there is a trade-off between investing today and investing in the future when costs are lower.

Take the example of photovoltaic (PV) solar energy. In 1968, the economist Kenneth Boulding noted that ‘up to now, certainly, we have not got very far with the technology of using current solar energy, but the possibility of substantial improvements in the future is certainly high’. Prices per watt, adjusted for inflation, fell from $100 in 1975 to under $1 in recent years (Kavlak et al, 2018). This fall in price has led to wider adoption of PV solar globally, something that was prohibitively expensive in the 1970s.
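The implied pace of that price decline can be sketched in a few lines. The round-number prices and the 2018 end year are assumptions for illustration, not figures taken directly from Kavlak et al:

```python
def implied_annual_decline(p_start: float, p_end: float, years: int) -> float:
    """Constant annual rate r such that p_start * (1 - r) ** years == p_end."""
    return 1 - (p_end / p_start) ** (1 / years)

# ~$100 per watt in 1975 to ~$1 per watt; the 2018 end year is an assumption.
rate = implied_annual_decline(100.0, 1.0, 2018 - 1975)
print(f"Implied average annual price decline: {rate:.1%}")
```

A hundredfold fall over four decades works out at roughly a 10% price decline every year, sustained for the whole period, which is why a technology that was prohibitively expensive in the 1970s became widely adopted.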

Determining the most efficient (or lowest cost) approach to slowing emissions has been the central debate in climate economics. To do so, a simple model is used to weigh up the costs (of carbon reduction) and benefits (avoidance of future damages) of climate mitigation (Nordhaus, 1991). The approach follows what is now conventional wisdom: that the incremental costs of reducing emissions (marginal abatement costs) rise with each further reduction in emissions.

This is thought to be the case because the ‘low-hanging fruit’ options are implemented first – such as switching from incandescent to LED lighting or insulation retrofits – although recent work has challenged the conventional wisdom (Vogt-Schilb et al, 2018). From the vantage of 1991, the low-hanging options were using low-carbon alternatives to coal, such as natural gas or nuclear power and energy conservation. Today, the emphasis is on renewable energy and replacing combustion engines with electric vehicles (ideally powered by renewables).

As climate mitigation decisions involve comparing monetary investments made today with those made at some point in the future, economists need to put both on a common footing. To get the current value of a future sum of money, economists use discounting: the higher the discount rate, the lower the present value of future sums (and vice versa).

Using different discount rates from 0% to 4%, Nobel laureate William Nordhaus calculated the present value of climate damages from carbon dioxide equivalent emissions 200 years into the future (Nordhaus, 1991). The net result was a small cost to the US economy of 0.25% of total output and a global damage cost of 2% of total output.
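The discounting arithmetic behind such calculations can be sketched very simply. The $1 trillion damage figure and the 200-year horizon here are purely illustrative, not Nordhaus’s numbers:

```python
def present_value(future_cost: float, rate: float, years: int) -> float:
    """Discount a cost incurred `years` from now back to today
    at a constant annual discount rate."""
    return future_cost / (1 + rate) ** years

damage = 1_000_000_000_000  # hypothetical climate damage in 200 years ($1tn)

# The 0-4% range of discount rates mirrors the range Nordhaus considered.
for r in (0.0, 0.01, 0.02, 0.04):
    pv = present_value(damage, r, 200)
    print(f"discount rate {r:.0%}: present value = ${pv:,.0f}")
```

At a 0% rate the full $1 trillion counts today; at 4% the same damage is worth under $400 million now. Over two centuries, the choice of rate, rather than the damage estimate itself, can drive the policy conclusion.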

Since 1991, much more sophisticated ways have been developed to estimate the future cost of climate change (for example Nordhaus and Boyer 2000). One of the most widely used is Nordhaus’s Dynamic Integrated Climate Economy (DICE) model, which is a tool for evaluating different climate economy scenarios (Nordhaus, 1992 and Nordhaus, 2017).

In this decision-making paradigm, the discount rate continues to be a key parameter. Within economics, the debate primarily concerns the choice of discount rate: a lower rate implies that we should cut emissions now, while a higher rate implies waiting and cutting emissions later. Estimates of future damage costs vary enormously as a result – equivalent to $350 versus $35 per tonne of carbon (Stern, 2007; Nordhaus, 2007; see Tol, 2009, 2011 and 2021, for regular surveys of damage estimates). In effect, the Stern Review raised the cost of doing nothing.

Recent meta-analysis of estimates of future damages lends support to the idea that the future damage costs are higher than currently assumed (Howard and Sterner, 2017). These estimates of future damages (including catastrophic risk) are 9-10% of GDP versus 2.4% from the DICE model. While direct damages to advanced economies have been interpreted as being relatively low because of the composition of economic activity, recent work has emphasised that likely economic damage comes from indirect exposure (de Winne and Peersman, 2021).

Deciding whether to tackle future problems today or to let future generations worry about them is difficult, as there are ethical arguments on either side (Boulding, 1968). One argument favours discounting the distant future at the lowest possible rate, given the uncertainty surrounding likely impacts centuries from now; standard discounting would reduce the value of taking action to benefit the distant future to virtually zero (Weitzman, 1998).

There are also grounds for considering other ethical issues, such as inequality, risk and population ethics, when choosing discount rates (Fleurbaey et al, 2019). Some argue that discounting wellbeing over long time horizons is an incorrect application of economic methodology, as it conflates monetary value with human wellbeing (Ord, 2020). The future benefits being considered in the case of a catastrophic risk are not monetary in nature, but more fundamental values, such as whether civilisation is thriving, in ruins or extinct.

How to take action?

Economists have also discussed market-based policies to control pollution. Carbon taxes – charges on the carbon content of fossil fuels – are possibly the most widely known of these. An alternative is tradable carbon emissions permits, which place a limit on the total amount of carbon that can be emitted and require firms to hold permits for any emissions.

Research has highlighted how taxes would lead to an outcome that would minimise the cost of carbon control for society because it would enable firms to choose how to respond to the tax rather than imposing blanket regulatory standards across the economy (Pearce, 1991). Firms with lower costs of reducing emissions (for example, by adopting energy saving light bulbs) would reduce emissions rather than pay the tax.

Firms with higher costs of abatement (for example, where low-cost options have already been exhausted and further cuts require greater investment) would pay the tax in the immediate term. Because each firm reduces emissions only where doing so is cheaper than the tax, the overall reduction is achieved at minimum total cost – what is known as the least cost theorem (Baumol and Oates, 1971). Putting a cost on pollution also acts as an incentive to adopt less polluting technology and conserve energy in the future.
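The logic can be illustrated with a minimal sketch, using hypothetical firms with linear marginal abatement costs (all numbers are invented for illustration):

```python
TAX = 50.0  # illustrative carbon tax per tonne of CO2

# Each firm's marginal abatement cost is MAC(q) = slope * q:
# the cost of abating the q-th tonne rises as abatement deepens.
firms = {"low-cost firm": 2.0, "high-cost firm": 10.0}

for name, slope in firms.items():
    # A firm abates while MAC < tax, i.e. up to q* = TAX / slope,
    # and pays the tax on its remaining emissions.
    abatement = TAX / slope
    print(f"{name}: abates {abatement:.0f} tonnes, pays tax on the rest")
```

The low-cost firm abates 25 tonnes and the high-cost firm only 5, yet both face the same marginal price of carbon, which is what delivers the reduction at least cost to society.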

Yet there are disadvantages to such taxes, including the lack of a clear target for emissions reduction, deadweight losses created by market inefficiency, and inequality in terms of who ultimately bears the burden of paying the tax. These issues remain barriers to adoption both domestically (fears of increased costs) and internationally (lack of cooperation and fear of being uncompetitive) (Pearce, 1991).

Taxes of this nature are now widely supported by most economists (Maiello and Gural, 2019). This was most strikingly shown by a letter signed by 50 eminent economists in the Wall Street Journal on 17 January 2019, advocating the introduction of carbon taxes.

In practice, emissions trading appears to have wider support from policy-makers and environmentalists – see, for example, International Carbon Action Partnership (ICAP). The basic premise of pollution permits originated with the idea of creating property rights (permits) to emit and a total emissions allowance for a country or region (Crocker, 1966; Dales, 1968; Banzhaf, 2020).

Emissions trading operates on a similar principle to the least cost theorem, and permits can create value for businesses with lower costs of reducing emissions: such firms can sell their excess permits to firms that face higher abatement costs. By putting a price on emissions, this also creates an incentive to invest in R&D to find ways of reducing emissions.
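A stylised two-firm example shows why trading lowers the total cost of meeting a cap. It assumes quadratic abatement costs C(q) = 0.5 * slope * q², so marginal cost is slope * q; the firms, slopes and cap are all hypothetical:

```python
slope_a, slope_b = 2.0, 10.0   # firm A abates cheaply, firm B does not
required = 40.0                # total abatement the cap demands (tonnes)

# No trade: each firm must abate half the requirement itself.
cost_no_trade = 0.5 * slope_a * 20.0**2 + 0.5 * slope_b * 20.0**2

# With trade: abatement is reallocated until marginal costs are equal
# (slope_a * q_a == slope_b * q_b, with q_a + q_b == required).
q_a = required * slope_b / (slope_a + slope_b)
q_b = required - q_a
cost_trade = 0.5 * slope_a * q_a**2 + 0.5 * slope_b * q_b**2

print(f"firm A abates {q_a:.1f}t, firm B abates {q_b:.1f}t")
print(f"total cost falls from {cost_no_trade:.0f} to {cost_trade:.0f}")
```

The low-cost firm does most of the abating and sells its spare permits; the permit price settles at the common marginal cost, and the same total reduction is achieved at barely half the no-trade cost.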

The European Union (EU) has led the way in this area, introducing an Emissions Trading System (ETS) in 2005 in response to obligations agreed at COP3 in Kyoto in 1997. The EU ETS places a cap on the total amount of greenhouse gas emissions that can be emitted by installations covered by the system and the cap is gradually reduced each year. Participating firms throughout Europe are required to monitor their emissions and have permits (allowances) in place to cover their emissions by March of the following year (see the European Commission’s guide to EU ETS reporting and monitoring).

Non-compliance with the EU ETS carries a heavy fine (€100 per tonne) and, given that permit prices are lower than this, firms have a strong incentive to comply. The EU ETS was subject to uncertainty in its initial years but, after reforms to the system, it has found its feet; permit prices have risen to new highs this year (see Figure 2). There are now several ETS programmes operating across the globe, with China the latest country to adopt this approach.

Figure 2: ICAP allowance price explorer

Source: International Carbon Action Partnership

Conclusion

Writing in the American Economic Review in 1977, William Nordhaus concluded that ‘unlike many of the wolf cries, this one [climate change], in my opinion, should be taken very seriously’. This is reflected in the latest IPCC report: without doubt, the world is on a precipice with regard to future climate change.

Economists have engaged with the debates surrounding climate change for over 30 years and there is consensus that action is required (Howard and Sylvan, 2021). The main disagreement was never whether action was necessary to mitigate climate change, but when exactly it should be taken.

Thirty years ago, when the first IPCC report was published, the future seemed a long way away. But to quote Boulding, ‘tomorrow is not only very close, but in many respects it is already here’. Action at COP26 never seemed more pressing.

There is hope. International agreements are possible, most evidently from the 1987 Montreal Protocol, which led to the phasing out of chlorofluorocarbons, an industrial product that was harming the ozone layer but was also a dangerous greenhouse gas. The ozone layer is recovering, and this should give faith that future international agreements are possible when the world recognises the real and apparent danger of uncontrolled climate change and acts accordingly.

Where can I find out more?

Who are experts on this question?

  • Carolyn Fischer
  • William Nordhaus
  • Nicholas Stern
  • Gernot Wagner
  • Ottmar Edenhofer
Author: Eoin McLaughlin
Photo by NOAA on Unsplash


The BRICS countries: where next and what impact on the global economy?
https://www.coronavirusandtheeconomy.com/the-brics-countries-where-next-and-what-impact-on-the-global-economy
Thu, 20 Oct 2022

The post The BRICS countries: where next and what impact on the global economy? appeared first on Economics Observatory.

Over the past two decades, there has been an astonishing restructuring of global economic power. This has been driven primarily by the rise of China and, to a lesser extent, the BRICS countries more broadly – which, in addition to the East Asian giant, encompass Brazil, Russia, India and South Africa.

As this group has become increasingly formalised and institutionalised – hosting regular summits and establishing collective bodies – many observers have worried that its growing influence might be accompanied by the normalisation of authoritarian forms of ‘state capitalism’, and even the unravelling of the liberal order.

Others have taken a more sanguine view, arguing that Eastern forms of state-led development appear superior in many ways to Anglo-American economic and political structures, and this is reconcilable with – and, indeed, depends on – an open global economy.

Either way, the concerns of many Western liberals have resurfaced following Russia’s invasion of Ukraine. Recent news that other mostly non-democratic states – Egypt, Iran, Saudi Arabia, Turkey (and Argentina) – have either applied to join the BRICS, or are considering doing so, is also cause for concern among Western governments.

What are the BRICS and why do they matter?

At one level, the idea of the BRICS is confected. The former Goldman Sachs economist (and now crossbench peer) Jim O’Neill coined the term BRIC in the early 2000s to describe the four fast-growing countries that, at the turn of the millennium, seemed most likely to begin catching up with the West (O’Neill, 2001).

Looking back, this notion appears quite problematic (Bishop, 2012). Brazil’s growth slowed dramatically around 2014 as it struggled with early de-industrialisation (Muzaka, 2017).

Russia was an erstwhile superpower. To describe it as ‘rising’ therefore was and remains questionable, particularly as its industrially atrophied post-communist economy has stayed small in global terms. Even today, as the country mounts a major military offensive, its GDP is still barely half that of the UK despite a population almost three times the size and the largest, most resource-rich territory on the planet.

India’s growth has been impressive, but its relative share of global GDP has actually shrunk and its economy is now around one-fifth the size of China’s. One author describes the pair comparatively as the ‘slouching tiger’ and ‘roaring dragon’ (Kennedy, 2016).

South Africa is arguably not a rising power at all in global terms and was primarily tagged on to the grouping later for reasons of regional representativeness.

In fact, the story of the BRICS is largely about China’s staggering rise. Its development has been so sustained over such a long period of time that it has genuinely reshaped the distribution of global economic power (Bishop, 2016).

Figure 1: GDP by BRICS country (2000-21), in current dollars

Source: World Bank (Creative Commons License CC-BY 4.0)

As the chart shows, China’s GDP is more than double that of the other four BRICS combined: almost $18 trillion compared with Brazil ($1.6 trillion), Russia ($1.8 trillion), India ($3.2 trillion) and South Africa ($400 billion). For comparative purposes, GDP in the United States is $23 trillion but China’s economy is arguably the largest in the world in purchasing power parity (PPP) terms.

In the early research on the subject, there were fierce debates over which countries should and should not be included in the next generation of emerging powers (Cooper et al, 2006).

Some questioned whether the ‘MINT’ countries – Mexico, Indonesia, Nigeria and Turkey – might be next (O’Neill, 2013). Others stressed that a focus on large middle-income countries ignores the many small ones – such as New Zealand, Norway, Qatar and Singapore – that have achieved high levels of income and development while exercising power in unconventional ways (see Long, 2022).

There have always been question marks over which countries, exactly, should comprise the rising or emerging powers, and what criteria best capture this. What does it even mean to ‘rise’ or ‘emerge’, and how do we know when a state has finally ‘risen’ or ‘emerged’?

By the same token, as some rise, others must decline relatively: do we even have a language to describe developmental decline – or ‘undevelopment’ – in supposedly ‘developed’ countries (Bishop and Payne, 2019)?

These questions aside, the BRICS have, over time, become more deeply institutionalised. Since 2009, the countries have held annual summits with an increasingly widened agenda.

The establishment of the New Development Bank (NDB) in 2014 with $50 billion of start-up capital was another milestone. So too was the simultaneous creation of the BRICS Contingent Reserve Arrangement (CRA), a liquidity mechanism that provides support for members facing short-term balance of payments squeezes or currency instability (see Cooper, 2017).

It is in this context, at a time when much of the world is still reeling from the pandemic, that this institutionalisation process may be intensifying.

Why are countries seeking new forms of cooperation?

The motivations of prospective BRICS members are complex. Take Saudi Arabia: it is already an exceptionally wealthy country, it has a close security relationship with the United States and it can mobilise substantial sovereign wealth for investment in pursuit of economic expansion (see Trudelle, 2022). But domestic political modernisation remains slow, and the economy is highly dependent on energy and threatened in the long run by global decarbonisation.

A degree of diplomatic isolation has also followed the murder of the journalist Jamal Khashoggi. Like all authoritarian states, Saudi Arabia has to search for sources of domestic and international legitimacy. Increasingly dense trade links with China will be critical to this endeavour, by simultaneously diversifying the economy and sources of international support (Martin, 2021).

Similar stories apply to Egypt, Iran and Turkey, where the respective regimes have all experienced volatile economies, political conflict and diplomatic isolation in recent history. The Arab Spring of a decade ago still casts a very long shadow over the Middle East as a whole, leaving rulers feeling insecure and repression enduring (Josua and Edel, 2021).

Another potential BRICS member, Argentina, is a little different. Growth has fluctuated constantly since the dramatic financial crisis of 2001 (Stiglitz and Weisbrot, 2022). Periods of expansion and policy experimentation have occurred in tandem with buoyant export revenues during the late 2000s and early 2010s (Grugel and Riggirozzi, 2012).

But inflation has remained high and the currency has tanked. One US dollar bought one peso prior to 2001; a decade later it bought eight to ten pesos; and yet a decade further on, it now buys 148. Debt is also growing again. It exploded from around 40% of GDP to 160% in the aftermath of the 2001 crisis, declined again to around 40% alongside the commodity boom by 2014, and then rose once more to over 100% of GDP by the time the pandemic arrived in 2020.

Argentina has long required new sources of capital and international institutional power. Indeed, its demand for inclusion in the Group of 20 (G20) during the global financial crisis of 2007-09 in part reflected resentment at Brazil’s membership and relatively higher perceived status as an emerging power.

But what is in it for the BRICS as an organisation? New members, in theory, imply new economic resources that can be mobilised and greater strength in numbers. The expansion of the group also provides greater legitimacy and reinforces the sense that the club is worth joining – an effect that could become self-perpetuating.

At the same time, while these potential new members are certainly regionally significant, they are not the largest, most powerful, economically dynamic or diplomatically influential of countries that could theoretically join (certainly compared with the MINT countries). Furthermore, the BRICS would no longer really comprise mainly the ‘rising powers’ in a global sense with Argentina, Iran and Saudi Arabia joining.

Consequently, the broader issue is that the BRICS as an organisation still lacks deeper institutionalisation. Beyond the establishment of the NDB, ‘it is difficult to see what the group has done other than meet annually’ (O’Neill, 2021).

This problem is compounded by the fact that economic growth trajectories have been so uneven, and the group lacks clear unifying ideological principles or a shared vision for managing the global order.

Expanding the membership, then, is at least partly about gaining renewed impetus in a context where the BRICS remain beset by divergent interests, development trajectories, relative levels of geo-economic significance and ability to exercise substantial influence over the international system.

In all of these areas, China’s power resources far outstrip those of the other members, while as an aspirant hegemon, it also depends most on – and has additional responsibility for – maintaining a calm, stable and open global economy (Beeson and Zeng, 2019).

But the level of openness that China adopts, or the extent of its accommodation to liberal, Western mores, depends to a significant degree on the policy area under discussion (Hameiri and Zeng, 2019; Weinhardt and ten Brink, 2020).

How could the expansion of the BRICS affect the global economy?

The worry, from a Western perspective, relates to the fact that – with the partial exception of South Africa (Reddy, 2022) – the BRICS have become, to differing degrees, more nationalist and authoritarian over the past decade.

Xi Jinping has cemented his dominance in China. Brazil under Jair Bolsonaro – who will shortly face a presidential election run-off against Lula da Silva – and India under Narendra Modi have both experienced a starkly populist turn (de Souza, 2020; Sinha, 2021). Vladimir Putin’s Russia has become a pariah state that poses numerous thorny geopolitical questions extending well beyond the immediate Ukraine crisis (Burns, 2019; d’Eramo, 2022). In all four, minorities are persecuted and civil rights restricted.

Consequently, some European and US policy-makers worry that the BRICS may become less an economic club of rising powers seeking to influence global growth and development, and more a political one defined by their authoritarian nationalism.

Yet both the BRICS and the West share much in common. Economically, people the world over have long been disenchanted with market-led globalisation. The United States and many European countries – and even the European Union (EU) itself – have marvelled at China’s expansion and reassessed their own desperate need for new institutions and mechanisms to drive interventionist industrial policy to keep up (Lavery and Schmid, 2021; Hopewell, 2021).

Politically – with continuing constitutional upheaval wrought by Brexit and the election of Donald Trump only the most obvious examples – Western countries have also experienced declines in the quality of democracy. As European states elect (or come close to electing) nationalists like Giorgia Meloni in Italy or Marine Le Pen in France, they appear more vulnerable to further democratic decay than at any other time in recent history.

Diplomatically, the war in Ukraine appears to have drawn a stark dividing line between an Eastern-backed Russia and the West. But this only obscures the complicity of many of the latter’s banking institutions and political elites in fostering Putin’s regime, even as its excesses became progressively more evident (Bienkov, 2022).

So as the BRICS rise, they disrupt the global order in problematic ways that give incentives to the West to adopt illiberal mores (Hopewell, 2017). But the East also faces countervailing pressure to become more liberal itself, thereby reinforcing as well as reshaping that order (Bishop and Murray-Evans, 2020).

This is evident in the ‘ambivalent’ way that China and India are torn between their self-identity as developing countries, with a purported mission to lead the Global South, and the unavoidable reality that their economic interests increasingly align with the Global North (Cooper, 2021).

The web of constraints that China and India face – notably the tension inherent in preserving an uneasy BRICS unity while becoming responsible global diplomatic powers themselves – was highlighted starkly in September 2022 when both made their displeasure at Russia’s quagmire in Ukraine public for the first time (Lau, 2022).

Will globalisation endure?

It is difficult to envisage a decisive decoupling of West from East, or a definitive process of ‘deglobalisation’. The sheer volume of trade in goods and services, the flows of capital, data and people across borders to facilitate it, the extent of economic interdependence and the complexity of global value chain-based production, which relies on inputs sourced from around the world, all militate against it (Bishop and Payne, 2021a).

We will not see a decisive retreat behind national borders. Authoritarian autarky is not a viable or credible development strategy today. But at the same time, the pressures facing states from below mean that high levels of unmediated and destabilising economic openness also remain politically toxic.

The future of the global economy will plausibly remain globalised in general – with globalisation itself changing shape – while simultaneously becoming more national in certain respects.

This is not as contradictory as it sounds. Globalisation is a process driven substantially by states that are embedded within it. The two co-exist: they are not alternatives.

We require a recalibration of the nature of the relationship – an attempt to find greater policy space for nations within a renewed overarching global framework of institutions, rules and norms.

Between the two, the regional level is crucial. Indeed, even if we are witnessing some degree of ‘reshoring’ in light of Ukraine and wider upheavals in the global economy, many production networks are likely to reconstitute themselves regionally rather than nationally (Grey, 2019). Modern forms of production at the highest points of a value chain require economies of scale that are frequently continental in scope.

The key question is not if the global economy will evolve and change shape, but rather whether this occurs in a well-managed or increasingly fraught way. This will be determined by how the framework of global governance adapts to new realities.

International bodies are unquestionably in desperate need of reform and revitalisation (Bishop and Payne, 2021b). But the system that they collectively comprise is not as dysfunctional as is often believed (Drezner, 2014).

All states require the public goods that they provide and, as Lord O’Neill (2021) notes, the greatest disappointment with the performance of the BRICS is that they have not yet lived up to their promise in supporting the G20 through the next chapter of its development.

Overcoming this will be necessary for the body to carry out its crucial role of ‘steering’ the global economy in coming years and ensuring that tensions between West and East are mollified rather than magnified.

Where can I find out more?

Who are experts on this question?

  • Gregory Chin, York University, Toronto, Canada
  • Tom Chodor, Monash University, Australia
  • Andrew Cooper, University of Waterloo, Canada
  • Kristen Hopewell, University of British Columbia, Canada
  • Valbona Muzaka, Uppsala University, Sweden
  • Amrita Narlikar, GIGA Hamburg, Germany

Author: Matthew Bishop, University of Sheffield

Photo by William_Potter on iStock

The post The BRICS countries: where next and what impact on the global economy? appeared first on Economics Observatory.

Industrial action: is the UK going back to the 1970s? https://www.coronavirusandtheeconomy.com/industrial-action-is-the-uk-going-back-to-the-1970s Tue, 18 Oct 2022 00:00:00 +0000

The National Union of Rail, Maritime and Transport Workers (RMT) strike over pay, job cuts and working conditions has been joined by tens of thousands of workers from National Rail and 13 train operators. Unions representing NHS staff and teachers have also warned of industrial action to demand wages that keep up with rising prices.

The events of recent weeks have led to comparisons with the 1970s when the country saw nationwide strikes that resulted in millions of lost working days (Office for National Statistics, ONS). But is this an accurate parallel to draw?

Current disputes stem from problems comparable to those behind the strikes of the 1970s – high prices and stagnant wages – and involve similar groups of workers. False narratives of the 1970s, articulated by current critics of trade unions, distort understanding of the present problems. Unions then and now are wrongly portrayed as greedily advancing selfish pay claims that cause inflation.

The current disputes

The current industrial action mainly involves trade union members employed in delivering public services. They work for councils and publicly owned bodies, for example, in healthcare. Others are in private companies that provide services that are not open to market competition, such as the railways.

These workers operate within the ‘foundational economy’, which is responsible for maintaining the vital infrastructure and operational elements of everyday life. Without their work, the economic and social wheels of the country grind to a halt.

The gender, ethnic and age profile of these workers contrasts with the stereotype of strikers as relatively privileged older white men. This socially diverse profile is reinforced when we consider that most employees in the foundational economy were identified as key workers during the pandemic. The recognition of these jobs has perhaps strengthened expectations of future reward that have not been fulfilled.

The immediate cause of the current disputes is the rising cost of living, particularly related to increasing food and energy prices. The doubling of domestic gas and electricity prices since the start of 2022 was accompanied by an increase in petrol prices of over 25% from January to late June. Further price escalation is almost certain. Inflation in the UK, at 9.1%, is at its highest rate for 40 years. Analysis of household expenditure estimates by the National Institute of Economic and Social Research (NIESR) indicates that household bills now exceed income in 60% of UK homes.

As a result, it can be argued that strikes in pursuit of wage claims are not the drivers of inflation, as some UK government ministers have claimed, but rather a collective response to the broken relationship between employment and economic security.

In-work poverty – defined as when an individual’s income, after housing costs, is less than 60% of the national median – has grown incrementally since the 1980s. In the UK, this already affected one in eight workers before the recent cost of living crisis emerged (Joseph Rowntree Foundation, 2022). The New Labour government tried to alleviate this with tax credits and other wage subsidies after 1997. The Conservative-led coalition government scaled these back radically from 2010, while reducing support for low-income family housing costs.

The Institute for Public Policy Research (IPPR), reporting in May 2021, saw these two factors as driving the general increase in in-work poverty. Double-earner households, one working full-time and the other part-time, were twice as likely to be in poverty in 2019/20 (12%) as in 1996/97 (6%) (IPPR, 2021). This is likely to have contributed to the rapid escalation of food bank usage by wage-earning households reported in the press.

Two important structural forces shape this in-work poverty. First, around four million jobs were lost in manufacturing, metals and mining as a result of the anti-inflationary policies adopted by Margaret Thatcher’s Conservative government after its election in 1979.

Second, trade unions were politically marginalised. Thatcher’s governments and their Conservative successors made it progressively easier for employers to ‘derecognise’ unions. Trade union density – the proportion of the workforce represented by unions – fell from around 50% in 1979 to around 30% in 1997. In 2021, the figure stood at around 23%, although in the public sector, it remained at around 50% of workers. The strikers in 2022, drawn from this unionised minority, are operating from a position of weakness rather than strength. They have limited alternatives when seeking to have their voices heard. The government’s reluctance to support workers has been further underlined by the apparent abandonment of the Conservative Party’s 2019 commitment to produce an employment bill that would protect workplace rights lost as a result of Brexit.

Disputes in the 1970s

Strike activity measured in working days lost was higher in the UK in the 1970s than in any other decade in the period after the Second World War (Office for National Statistics, ONS). In 1972, the first of two peak years, 23.9 million working days were lost. This was mainly driven by a seven-week national strike by 280,000 coal miners, followed by a further three-week strike in 1974, which contributed to the electoral defeat of Edward Heath’s Conservative government.

This established the narrative of privileged male trade unionists exerting illegitimate political influence through relentless industrial action. Most coal was bound for electricity generation in power stations, so the miners were central protagonists in the UK’s foundational economy of the 1970s.

Strikers in the second half of the decade were likewise mainly drawn from this segment, with national strikes of healthcare and council workers, firefighters, dockers and lorry drivers, among others. Strikes of manufacturing workers tended, by comparison, to be either localised or short-lived.

Foundational economy strikers in the 1970s, as in the 2020s, were diverse in ethnic, gender and generational terms. Union density among women workers rose from 31% in 1970 to 40% in 1980. Workers of African and South Asian heritage were prominent in strike movements in manufacturing industry, notably the famous campaign for union recognition among photo-processing employees at Grunwick in London in 1976-77, and in healthcare, council services and transport, especially during what became known as the winter of discontent of 1978-79.

The winter of discontent dominates political memories of the 1970s. The year 1979 saw the decade’s second and largest peak in days lost to strikes, at 29.5 million. But only around half of these days – about 15 million – were actually lost in the pay bargaining year of 1978/79. The other half fell in the following pay bargaining year of 1979/80, when 31 million days were lost, driven by a national strike in the steel industry. This is an important detail: many reflections on the winter of discontent consciously or unwittingly double its economic significance by repeating the error that it cost around 30 million working days.

Strikes in the 1970s were mainly shaped, as in the 2020s, by economic insecurities that stimulated workforce demands for increased wages. Coal miners in the 1970s were seeking to arrest a declining relative position. In the 1960s, the workforce had been cut by more than half, as the UK accelerated towards a mixed-fuel economy.

The national coal strikes of 1972 and 1974 were prefigured by lengthy and large unofficial work stoppages in 1969 and 1970. The sudden fuel shock of 1973-74, when oil prices quadrupled, strengthened the bargaining position of miners, but also intensified inflationary pressures that were already rising rapidly. The peak rate of inflation approached 25% in 1975, compared with 9.1% today.

It was the Labour government’s attempt to control inflation after 1975 that underpinned many of the strikes that followed. Wage rises were capped at fixed annual percentage increases, so the cash increases received by lower-paid workers were smaller. They viewed this downward squeeze on their wages as unjust, especially when set against less restrictive measures on dividends and profits.

The Labour government was, of course, sympathetic to unions. It strengthened statutory provision against workplace inequalities of gender and ethnicity, and established the Health and Safety Executive. It also attempted to transform authority in workplaces with an agenda for industrial democracy. This would have included worker-directors in companies with more than 1,000 employees.

Business opposition blocked this, in league with the Conservative Party and the anti-union national press. Strikes therefore remained the only meaningful expression of workers’ voice in the 1970s, as in the 2020s, a signal of collective weakness rather than strength.

Conclusion

Strikes are expensive expressions of workforce voice and acts of last resort. This discussion of current and historic strikes shows that they tend to be a sign of weakness, arising where workers are not being listened to, as much as they are a sign of strength.

The current strikes are clustered in the unionised parts of the workforce. Those involved are occupationally diverse and of varied ethnic, gender and generational backgrounds. They are mainly providing vital services and can be understood as operating within what is termed the foundational economy.

Critics of these striking workers seek to misrepresent and delegitimise them through mobilising a stereotyped view of the past, focusing on the 1970s, the peak decade of industrial action in post-Second World War Britain.

But the 1970s to which these critics return did not exist much beyond the front pages of anti-trade union newspapers. Then, as now, strikers were diverse in their background, attempting to protect precarious living standards in a period of rising economic insecurity.

Where can I find out more?

Who are experts on this question?

  • Jim Phillips
  • Alan Manning
  • Alex Bryson
Author: Jim Phillips
Picture by atlantic-kid on iStock

The post Industrial action: is the UK going back to the 1970s? appeared first on Economics Observatory.

Banking crises: who won the 2022 Nobel Prize in Economic Sciences and why? https://www.coronavirusandtheeconomy.com/banking-crises-who-won-the-2022-nobel-prize-in-economic-sciences-and-why Tue, 11 Oct 2022 09:45:38 +0000

The Nobel Prize in economics – or to use its proper title, the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel – is awarded annually to an economist or economists who have contributed to the advancement of economics.

The prize has not been without criticism. The 1997 award to Myron Scholes and Robert Merton for option pricing theory was subsequently criticised when Long-Term Capital Management, a hedge fund that implemented their theories and on whose board they sat, collapsed in 1998.

More recently, it has been argued that the Nobel Prize has had a malign effect on the discipline of economics in that it has legitimised free-market thinking and deregulatory economic policy (Offer and Söderberg, 2016).

Who won the prize in 2022?

The winners of the 2022 prize are Ben Bernanke, Douglas Diamond and Philip Dybvig. Ben Bernanke, currently at the Brookings Institution in Washington DC, was Chair of the Federal Reserve (the US central bank) from 2006 to 2014 and prior to that a professor of economics at Princeton University; Douglas Diamond is a professor of finance at the University of Chicago; and Philip Dybvig is a professor of banking and finance at Washington University in St Louis.

The Nobel committee awarded the 2022 prize to these three economists because they ‘have significantly improved our understanding of the role of banks in the economy, particularly during financial crises. An important finding in their research is why avoiding bank collapses is vital.’

Why did Diamond and Dybvig win the prize?

Diamond and Dybvig produced a seminal theoretical piece of work that explains what banks do, why they are vulnerable to runs, and how government provision of deposit insurance makes banking systems stable (Diamond and Dybvig, 1983).

Why do banks exist? Why don’t households lend their savings directly to firms? If they did, their money would be tied up in illiquid projects. Savers are therefore reluctant to lend directly to firms because they may want liquidity – that is, access to the money that they have lent – to meet unexpected future spending needs such as unemployment or the birth of a child.

In Diamond and Dybvig’s work, banks provide households with mutual insurance against shocks that affect their consumption needs. Banks collect funds from lots of savers and invest a proportion in illiquid loans and hold the remainder in a cash reserve. Those households who have consumption shocks can simply withdraw their savings early. Banks thus provide liquidity insurance (Diamond and Dybvig, 1983).

Deposits at banks are repaid on a first-come, first-served contractual basis. Because banks invest a sizeable proportion of depositors’ money in illiquid projects, any depositor who fears that enough other depositors will withdraw has an incentive to get to the front of the queue. The result is a bank run.

Banks are fragile because anything – and in the Diamond and Dybvig (1983) model, it literally is anything – can spark a bank run. The policy conclusion from Diamond and Dybvig’s work is that bank fragility can be mitigated with government-provided deposit insurance. Their work has also influenced bank regulation, which has been designed to prevent banks from becoming fragile.
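The self-fulfilling logic of the run can be illustrated with a stylised numerical sketch (this is an illustration in the spirit of the model, not Diamond and Dybvig’s formal set-up; the reserve ratio, asset return and liquidation discount here are invented for the example):

```python
def payout_if_wait(withdraw_frac, reserves=0.2, liquidation_value=0.5, asset_return=1.5):
    """Stylised payoff to a depositor who waits rather than withdrawing early.

    Per pound of deposits, the bank holds `reserves` in cash and puts the
    rest in an illiquid project paying `asset_return` at maturity but only
    `liquidation_value` if sold early. Early withdrawers are paid 1 each,
    first come, first served.
    """
    if withdraw_frac <= reserves:
        # Cash reserves cover the queue; the illiquid assets mature intact.
        remaining = (reserves - withdraw_frac) + (1 - reserves) * asset_return
    else:
        # Reserves exhausted: assets must be fire-sold to pay the queue.
        assets_sold = (withdraw_frac - reserves) / liquidation_value
        if assets_sold >= 1 - reserves:
            return 0.0  # the bank fails; those who waited get nothing
        remaining = ((1 - reserves) - assets_sold) * asset_return
    return remaining / (1 - withdraw_frac)  # shared among those who waited

print(round(payout_if_wait(0.1), 2))  # 1.44: few withdraw, so waiting beats running
print(round(payout_if_wait(0.5), 2))  # 0.6: many withdraw, so running is the best response
```

With few withdrawals, patience pays more than the 1 that an early withdrawer receives; once enough others run, waiting pays less than 1, so running becomes the rational response – which is why anything that coordinates depositors’ expectations can trigger a run.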

As Figure 1 shows, the number of bank failures in the United States fell dramatically after the introduction of federal deposit insurance in 1934. Most economies have introduced deposit insurance schemes since 1983.

The UK’s deposit insurance scheme came into existence in 1982. But its presence did not prevent the UK banking system becoming fragile in 2007 and 2008. Most notably, it did not prevent a bank run by depositors at Northern Rock in September 2007.

The problem with the UK’s deposit insurance scheme in 2007 was that coverage was only partial: the first £2,000 of a deposit was fully protected, with 90% cover on the next £33,000. In other words, depositors at Northern Rock were entirely rational in running on the bank, because not all of their money was insured.

Today, depositors in UK banks enjoy 100% protection up to £85,000. So, unless they have more than £85,000 in UK banks, depositors have no incentive for a run on their bank.

The work of Diamond and Dybvig (1983) has been criticised because it ignores the role of shareholder capital and shareholder liability in preventing banks becoming fragile (Turner, 2014). It has also been criticised as having little basis in historical experience (White, 1999).

Why did Bernanke win the prize?

Ben Bernanke’s seminal piece of work was also published in 1983 (Bernanke, 1983). It seeks to answer a simple question in economic history: why did a mild recession in the United States in late 1929 turn into the Great Depression of the 1930s?

The standard answer up until 1983 was the one given by a previous Nobel laureate, Milton Friedman, and his co-author Anna Jacobson Schwartz in their 1963 book A Monetary History of the United States.

They argue that the failure of over 9,000 banks or 20% of the US banking system in panics between 1930 and 1933 (see Figure 1) caused the money supply to contract sharply (Friedman and Schwartz, 1963).

In addition, bank failures led depositors to withdraw money and hold more cash. This increase in the cash/deposit ratio caused the money multiplier to contract, which meant that injections of reserves by the Federal Reserve had only a limited effect on the money supply.
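The mechanism can be seen in the textbook money multiplier, m = (1 + c)/(c + r), where c is the cash/deposit ratio and r the reserve/deposit ratio (the ratios below are illustrative, not Friedman and Schwartz’s estimates):

```python
def money_multiplier(c, r):
    """Broad money per unit of monetary base: m = (1 + c) / (c + r),
    where c = cash/deposit ratio and r = reserve/deposit ratio."""
    return (1 + c) / (c + r)

# When depositors hold more cash relative to deposits (c rises), the
# multiplier contracts, so a given injection of base money by the
# central bank supports less broad money.
print(money_multiplier(0.10, 0.10))  # 5.5
print(money_multiplier(0.30, 0.10))  # 3.25
```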

Figure 1: Number of bank closures or suspensions in the United States, 1921-70

Source: Federal Deposit Insurance Corporation, 2018

The collapse of the money supply arising from bank failures resulted in a steep fall in GDP and in the price level. Friedman and Schwartz argue that the failure of the Federal Reserve to act as a ‘lender of last resort’ and provide liquidity support to banks facing runs was a major contributor to the depth and length of the Great Depression (Friedman and Schwartz, 1963).

Bernanke points out a major flaw with the Friedman and Schwartz (1963) story: the fall in the US money supply was too small to explain the subsequent falls in output. Something else – a non-monetary phenomenon – was at play (Bernanke, 1983).

That something else was that bank failures increased the cost of credit intermediation. In other words, bank failures made it much more costly for banks to channel funds from depositors to borrowers. This resulted in a collapse of bank credit by 50% between 1929 and 1933. This non-monetary effect was particularly acute for households, farmers and small businesses.

The policy implications of Bernanke’s work are clear. Policy-makers should not let banks collapse and they should keep lending channels open during financial crises to avoid a repeat of the Great Depression. Serendipitously, Bernanke was Chair of the Federal Reserve during the global financial crisis of 2007-09, during which his main policy objective was to prevent a collapse in credit and a steep rise in the cost of credit intermediation.

Where can I find out more?

Who are experts on this question?

  • Anil Kashyap
  • Paul Krugman
  • Richard Portes
  • John Turner
Author: John D. Turner, Queen's University Belfast
Picture by rrodrickbeiler on iStock

The post Banking crises: who won the 2022 Nobel Prize in Economic Sciences and why? appeared first on Economics Observatory.

How did the UK economy change during the second Elizabethan age? https://www.coronavirusandtheeconomy.com/how-did-the-uk-economy-change-during-the-second-elizabethan-age Mon, 12 Sep 2022 09:40:00 +0000

When Queen Elizabeth II acceded to the throne in 1952, the UK was still in its post-war rebuilding phase. Rationing of food was still being phased out: tea was de-rationed in September 1952 and meat only in 1954.

The currency had yet to go through decimalisation and computers were human beings who did complex computations.

In this article, using a contemporary lens, we review how the UK economy has been transformed since 1952 – and where one or two things don’t seem to have changed much.

Table 1: The UK economy, 1952 versus 2022

                                           1952      2022
Population (million)                       50.4      68.3
Real GDP per capita (£)                   7,533    32,555
Life expectancy (years)                    69.2      81.7
Infant mortality (per 1,000 live births)   29.1       3.4
National debt/GDP (%)                     165.1      94.9
Unemployment rate (%)                       2.4       3.8
Men at university (000s)                     78     1,178
Women at university (000s)                   23     1,569
Dollar/sterling exchange rate              2.80      1.16
Energy from coal (%)                       88.5       6.5
Coal mine employees (000s)                  716         1
Consumer prices index (%)                  10.7      10.1
Bank of England rate (%)                    4.0      1.75
Top 1% wealth share (%)                    48.5      21.3
Sources: Bank of England; Higher Education Statistics Agency; Office for National Statistics; Mitchell, 1988; Turner, 2010; World Inequality Database.
Notes: Where data for 2022 are not available yet, data for 2021 are used.

The health and wealth of the nation

The UK economy has grown by 332% over the past 70 years as measured by real GDP per capita. Therefore, in purely material terms, UK citizens in 2022 are much better off than in 1952.
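As a quick check of that headline number, the Table 1 figures imply the following cumulative and average annual growth rates:

```python
# Real GDP per capita from Table 1: £7,533 in 1952, £32,555 in 2022.
gdp_1952, gdp_2022, years = 7_533, 32_555, 70

cumulative = gdp_2022 / gdp_1952 - 1                # total growth over 70 years
annual = (gdp_2022 / gdp_1952) ** (1 / years) - 1   # implied compound annual rate

print(f"{cumulative:.0%}")  # 332%
print(f"{annual:.1%}")      # 2.1%
```

A 332% rise over 70 years corresponds to compound growth of roughly 2.1% a year.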

The increased wealth of the nation is also more evenly distributed than it was in 1952. For example, the top 1% of wealth holders then held nearly half the wealth. Today, that figure is close to 20%.

As can also be seen from Table 1, life expectancy has dramatically improved over the past 70 years. In 1952, the average man died about three years after retiring. Today, life expectancy in the UK is nearly 82, on average.

Infant mortality, another indicator of improved health, has fallen from 29.1 deaths per 1,000 live births to just 3.4. These improvements in life expectancy and infant mortality have their origins in improvements in healthcare provision, better nutrition, reduced pollution and developments in pharmaceutical and medical technology.

Ultimately, the increased wealth of the nation has been a fundamental driver of these improvements in health outcomes. The National Health Service, created just four years before the coronation, has played an important role in providing universal access to healthcare and therefore to these improvements.

The rise in life expectancy has created difficulties for both the government and employers when it comes to pension provision. The National Insurance Act of 1946 set the state pension age for men at 65 and women at 60. Employers and pension schemes quickly followed suit.

These retirement ages made economic sense in 1952, but as people have lived longer, the cost to the state and the burden on pension schemes has meant that state retirement ages have been creeping upwards. Employers have dealt with increased longevity by moving away from defined benefit pension schemes over the past couple of decades.

End of growth?

The increased GDP per capita of the UK since 1952 has largely come as a result of improvements in productivity. Productivity in the UK enjoyed a halcyon period of growth in the 1950s and 1960s.

This slowed somewhat after the 1970s and into the 1990s. But since the global financial crisis of 2007-09, UK productivity growth – and hence economic growth – has been at its lowest rate in 250 years (Crafts and Mills, 2020).

So, is this unprecedented fall in UK productivity growth temporary or permanent? If it is the latter, then this has major consequences for policy-makers who have operated in a world of economic growth for decades.

Economic historians suggest that the UK’s low productivity in recent years has been the culmination of an adverse set of circumstances – the worst financial crisis in the country’s history, Brexit and the weakening impact of information and communications technology on productivity (Crafts and Mills, 2020).

Indeed, the productivity growth that the UK has enjoyed since 1952 can be largely attributed to the technological revolution of the computer, which had transformed business practices by the 1990s, and then the internet, which transformed commerce.

The recovery of productivity growth, and the performance of the UK economy, will largely be determined by whether there is another technological revolution.

Techno-pessimists argue that the effect of innovation and new technologies, such as artificial intelligence (AI) and nanotechnology, on productivity and economic growth will be much less than it was in the past (Gordon, 2016). In other words, the 2020s will not repeat the productivity growth witnessed in the 1950s and beyond.

On the other hand, techno-optimists argue that productivity growth in the early years of general purpose technologies such as AI is underestimated (Brynjolfsson et al, 2020).

The UK’s poor productivity performance may be exaggerated because GDP is not a great measure for the new digital age. Some argue that the mismeasurement of GDP arising from the digital revolution means that we are likely to be underestimating productivity growth (Coyle, 2018).

Environmental sustainability

GDP is not everything (Coyle, 2016). It fails to take account of natural capital – the environment. A nation’s GDP may be soaring, but it may come at a great environmental cost. So, has the UK’s economic performance come at the expense of environmental degradation?

In early December 1952, the Great Smog of London hit the capital for five days. A combination of unusually cold temperatures, fog and a lack of wind combined with pollution from coal fires and coal-fired power stations to create possibly one of the most severe smog episodes that London has ever experienced.

The smog caused severe respiratory disease, contributing to approximately 12,000 deaths and 100,000 illnesses (Stone, 2002; Bell et al, 2004). It also had long-term health effects on children in utero and on new-born infants (Bharadwaj, 2016).

Recent research shows that coal pollution had major negative effects on the economy: it had major adverse effects on infant mortality and child development in the UK (Beach and Hanlon, 2018; Bailey et al, 2018), and the industrial use of coal also had a major negative effect on employment growth in UK cities between 1851 and 1911 (Hanlon, 2020).

The Great Smog was instrumental in the passage of a series of clean air acts, which sought to replace coal fires with alternative forms of heating. 1952 was also the apex of employment in the coal industry. As can be seen from Table 1, nearly three-quarters of a million people (mainly men) were employed in coal mining in 1952; by 2022, fewer than 1,000 were employed by UK mines. Coal went from supplying 88.5% of the country’s energy needs in 1952 to about 6.5% in 2022.

Cost of living crisis

Before we leave our retrospective look at 1952, one striking parallel in Table 1 is that inflation in 1952 was as high as in 2022. In other words, it appears that some things have not changed about the UK economy.

High inflation in 1951 and 1952 was attributable to a boom in world commodity prices caused by an inventory build-up in advance of the Korean War (Dow, 1998; Radetzki, 2006). The end of that conflict in 1953 ushered in a period of low inflation and stable commodity prices until the 1970s.

The parallels with 2022 are remarkable. A post-pandemic boom and the Ukraine war have pushed commodity prices to high levels, feeding into inflation. This historical analogue might suggest that inflation will fall rapidly next year.

But unlike in the early 1950s, the inflation of 2022 has been preceded by government largesse to help the economy to recover from the pandemic. This might mean that high inflation will not be as transitory as we may wish.

We must hope that high inflation will be brought under control in the early days of King Charles III. But if policy-makers repeat the mistakes of the 1970s, high inflation might only be tamed in the reign of King William V.

Where can I find out more?

Who are experts on this question?

  • Stephen Broadberry
  • Nick Crafts
  • Diane Coyle
  • Jagjit Chadha
Author: John Turner, Queen’s University Belfast
Editor's note: This is an update of the article Platinum Jubilee: how has the UK economy changed over the past 70 years?, first published on 1 June 2022
Photo by PicturePartners for iStock

The post How did the UK economy change during the second Elizabethan age? appeared first on Economics Observatory.

Does public trust in government matter for effective policy-making? https://www.coronavirusandtheeconomy.com/does-public-trust-in-government-matter-for-effective-policy-making Tue, 26 Jul 2022 00:00:00 +0000 https://www.coronavirusandtheeconomy.com/?post_type=question&p=18928

Trust is essential for governance, and it is therefore necessary for governments to build it among the public.

Many political economists see effective states as those stemming from ‘top-down’ directives by governments. Investments into state capacity by incumbents, such as establishing an effective bureaucracy, increase the range of policies a government can implement successfully. Citizen compliance with policies is either assumed or, alternatively, achieved via coercion by law enforcement.

A second approach to state building stresses the role of individuals. Here, citizens and the government work in a mutually reinforcing, reciprocal relationship (Besley, 2021). ‘Bottom-up’ private action can similarly enhance state effectiveness.

If government is perceived to be trustworthy, people will be more likely to comply with public policies via consent. Public trust in government and the state is therefore crucial for increasing voluntary compliance.

Governments can achieve more if they know that citizens trust that policy-makers have their best interests at heart. And while it is easy to see how public trust can be eroded, (re)building trust is pivotal and can be challenging.

The importance of public trust

Trust in government and the state matters for a variety of reasons. Primarily, it increases voluntary compliance towards public policies. If we think of state capacity broadly as ‘the government’s ability to accomplish its intended policy goals’, it stands to reason that compliance serves an essential purpose (Dincecco, 2017).

The archetypal example of state capacity is fiscal capacity – referring to the state’s ability to increase tax revenues to fund public goods and services for society (Besley and Persson, 2011). Investments into fiscal capacity typically involve strengthening bureaucracies and establishing fair and transparent tax systems, part of which involves improved monitoring. Citizen compliance with paying one’s taxes is therefore a direct function of the state’s oversight and reach to enforce tax policy (Allingham and Sandmo, 1972).

At the same time, if people think that tax revenues are well spent on public goods and services – with government proving its trustworthiness against profligacy as a result – this should increase society’s intrinsic willingness to pay taxes (Levi, 1988). In economics, this is sometimes known as ‘tax morale’ (Luttmer and Singhal, 2014).

Trust in government is crucial to establishing norms of compliance towards taxes (OECD, 2019). And compliance via consent, as opposed to compulsion, is a substantially cheaper method to implement effective policies (Tyler, 2006).

Supporting fiscal capacity via tax morale is one example of how voluntary compliance enhances state capacity via trust in government. But there are a range of other policies that require high levels of compliance to be successful, beyond just taxes. And there are also a host of activities by which citizens interact with the state on key issues which require public trust.

The Covid-19 pandemic is a useful setting to explore this idea further. Across countries, vaccine rollouts have almost exclusively been administered by the state (as opposed to private companies). Trust in government has therefore been necessary to incentivise people to get vaccinated (and to thwart conspiracy theories that go against government-backed scientific advice).

Even before the pandemic, there was evidence of a strong association between public trust and inoculation willingness (Blair et al, 2017). Political trust during Covid-19 has also been associated with greater compliance with lockdown policies (Bargain and Aminjonov, 2020). Clearly, with a public health emergency like Covid-19, trust in government has been a vital asset for delivering effective interventions such as the vaccine rollout.  

Direct measures of compliance are possible on a policy-by-policy basis. But micro-data can provide some insights into cross-national trends. An index of voluntary compliance can be constructed by looking at respondents’ willingness to fight for their country, pay taxes and accept higher taxes to prevent environmental pollution (Besley, 2022). Conscription and taxes are two well-studied areas of compliance-based interactions with the state (Levi, 1997).

This work shows a strong positive correlation between the proportion of respondents who report high confidence in government and the voluntary compliance index (see Figure 1). Although not causal evidence, it highlights that trust and compliance are closely related.
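As a rough illustration of how such an index and its correlation with confidence in government could be computed, here is a hedged sketch in the spirit of Besley (2022). All the country figures below are invented for illustration; they are not Integrated Values Survey data.

```python
# Hypothetical sketch of a voluntary compliance index: average the share
# of respondents giving the 'compliant' answer to three survey items
# (willingness to fight for one's country, to pay taxes, and to accept
# higher taxes for the environment). Figures are invented, not survey data.

def compliance_index(fight_share, tax_share, green_tax_share):
    return (fight_share + tax_share + green_tax_share) / 3

def pearson(xs, ys):
    # Plain Pearson correlation coefficient, no external libraries.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# (confidence in government, compliance index) for three invented countries:
data = [
    (0.70, compliance_index(0.80, 0.85, 0.60)),
    (0.50, compliance_index(0.65, 0.70, 0.40)),
    (0.30, compliance_index(0.50, 0.55, 0.25)),
]
confidence, index = zip(*data)
r = pearson(confidence, index)  # strongly positive for these made-up figures
```

With real survey data the relationship is noisier than in this toy example, which is why the text stresses that Figure 1 shows a correlation rather than causal evidence.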

Figure 1: Trust and compliance, by country

Source: Integrated Values Survey

Levels of political trust across the UK

Trust in government is relevant to every country’s political life. Recent political events have focused significant attention on this in the UK. In the first Conservative Party leadership debate to replace Boris Johnson, trust was the first issue highlighted, and was subsequently discussed at length.

Indeed, only 35% of the UK population trust the government, according to new data from the Office for National Statistics (ONS). This is below the OECD average of 41%.

While trust in government is crucial, there are several other branches of the state that citizens can have faith in. Local government is typically deemed more trustworthy than national government. This is partly because people have more interaction with local public services and have greater access to, say, local councillors compared with MPs in Westminster (Jennings, 1998).

The ONS findings show that 42% of people in the UK trust local government – a notably higher share than for national government. The data also indicate that trust is higher in the civil service, at 55%. Given that civil servants are impartial bureaucrats, this suggests that public trust is not driven purely by partisan ties (for example, Conservative voters will be more likely to trust a Conservative government versus a Labour one). Political parties are actually the state institution with the lowest level of trust among the public, at 20%.

Figure 2: Trust in political institutions

Source: ONS

The ONS findings are a static snapshot of the distribution of public trust in 2022. Ideally, we also want to understand how trust has evolved over time. The available data from the OECD going back to 2010 show that trust in government dropped to around 35% in 2019, where it has remained since.

This stands below the 2010-2020 average of 42% (the pale blue line in Figure 3). Although speculative, it seems that Brexit, Boris Johnson’s premiership and Covid-19 are plausible drivers of this trend.

Figure 3: Trust in UK government (2010-2020)

Source: OECD

Why should governments build trust?

Declining public trust in government is worrying. The Covid-19 pandemic has tested public trust around the world. As the worst of the crisis subsides, (re)building trust will be pivotal not only to expand state capacity as a long-term project, but also to tackle future crises.

There are a range of issues that require governments and their citizens to work together. The threat of climate change is a clear example. Governments must implement the right environmental policies to encourage a green transition.

Evidence suggests that such policies can help shift households’ habits towards greener patterns of consumption – although this logic is yet to be extended to firms and production tendencies (Nyborg et al, 2016; Mattauch et al, 2022). But for these theoretical ideas to work in practice, compliance will be necessary and cannot be taken for granted.

The OECD’s trust survey report shows that across OECD countries, half of individuals think that climate change should be prioritised by government, but only one-third are confident in policy success on this issue.

Across UK regions and the devolved nations there is also a positive correlation between confidence in government and citizens’ willingness to sacrifice income for the environment (see Figure 4). This relates to growing evidence on linking trust to climate policy preferences (Klenert et al, 2018; Dechezleprêtre et al, 2022). This also indicates that political trust matters for enhancing voluntary compliance to solve crises such as climate change, by increasing the range of policies available to policy-makers.

Figure 4: Political trust and environmental preferences, by UK nation/region

Source: Integrated Values Survey

How to build trust in government?

The drivers of mistrust in government are to some extent fairly obvious. Corruption is a case in point. If people see politicians using public office for private gain, they will be less likely to believe that government has their best interests at heart. Why would any household or firm pay taxes if they knew the money was being used to line politicians’ pockets?

Evidence that policy-makers are not following the rules, regulations or laws that they set also decreases public trust. ‘Partygate’ and the Dominic Cummings affair in the UK are evidence of this. Respect for the rule of law by those working in government is paramount for maintaining trust, ensuring there is sufficient compliance for delivering effective policy.

In terms of building trust in the state, proving policy competence is one method. Successful delivery of public goods and services, to increase the welfare of citizens, can be highly effective. The United States in the 1930s is a clear historical example. Recent evidence suggests that Americans who benefited from President Franklin D. Roosevelt’s ‘New Deal’ – at the time, an unprecedented expansion of the state – were far more willing to voluntarily contribute to the war effort during the Second World War (Caprettini and Voth, 2022).

As with the case for tax morale, people are more likely to trust government, and voluntarily comply with policies, if government is perceived to work in citizens’ best interests.

Increasing the scope of public engagement with politics is also an important driver of public trust (Kumagai and Iorio, 2020). Legal scholars have emphasised the importance of ‘legitimacy’ – individuals tend to obey the law not from fear of sanctions, but because they see the law as a legitimate moderator of human behaviour (Levi et al, 2012).

A greater commitment to procedural fairness and amplifying people’s voices in the political process can help ensure policy decisions are seen as being legitimate, thereby strengthening trust.

Developing a strong social contract between government and the governed is no easy task. Healthy scepticism towards politicians is important so that government remains accountable to an active and vigilant citizenry.

Nevertheless, public trust is critical to expanding state capacity for the long run and to tackling future crises, such as climate change.

Where can I find out more?

Who are experts on this question?

  • Tim Besley
  • Chris Dann
  • Imran Rasul
  • Paola Giuliano
Author: Chris Dann
Photo by f11photo from iStock

The post Does public trust in government matter for effective policy-making? appeared first on Economics Observatory.

Sterling crisis: what are the historical precedents? https://www.coronavirusandtheeconomy.com/sterling-crisis-what-are-the-historical-precedents Mon, 25 Jul 2022 00:00:00 +0000 https://www.coronavirusandtheeconomy.com/?post_type=question&p=18904

The sterling exchange rate has been sliding during 2022, particularly against a strong US dollar. On 1 January, £1 would have bought $1.35, compared with just $1.20 on 21 July (Investing.com).

Figure 1: GBP to US dollar exchange rate

Source: Investing.com

Might this weakness – in a context of high inflation and low growth – be pointing towards a future crisis of the currency? That is hard to judge. Sterling weakness is no longer resisted by UK authorities as it was in the 20th century, which makes the very notion of a sterling crisis harder to pin down.

But examining relevant elements of past sterling crises can still help us to understand what is currently happening and what might happen going forward.

What is a currency crisis?

There are many reasons why people and organisations buy and sell pounds, say, for other currencies. Such ‘business as usual’ foreign exchange transactions – for example to pay for imports – take place all the time.

In a currency crisis, there is unusually large dealing activity by speculators, who sell pounds in the expectation of making a profit by buying them back at a later date, at a lower exchange rate. Anyone can be a speculator, whether a business, an individual or a government agency. They are motivated by making a profit or avoiding a loss.

While speculation in foreign exchange takes place constantly, the distinguishing features of a currency crisis are the widespread expectations of the speculators, and the scale and direction of their speculation: to sell the currency.  

A currency crisis is therefore defined as a ‘speculative attack’ on the currency, and it results in either a large, discrete decline in the exchange rate, or strong countervailing policy action to overcome the currency pressure. It can be measured with hindsight by calculating a weighted currency pressure index, measuring unusual changes in the exchange rate, interest rates and international reserves (Eichengreen et al, 1994).

More simply, it can be defined as a minimum threshold decline in the exchange rate, say, by 15% or 25% over a short period (Reinhart and Rogoff, 2009; Frankel and Rose, 1996).
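The two definitions above can be sketched as follows. This is an illustrative sketch only, not the actual calibration in Eichengreen et al (1994); the equal weights and the example figures are assumptions.

```python
# Illustrative sketch of an exchange-market pressure index combining
# exchange rate depreciation, interest rate rises and reserve losses,
# plus the simpler threshold definition of a crisis. The equal weights
# are placeholders; in practice they are chosen to equalise the
# volatility of the three components.

def pressure_index(pct_fx_change, rate_change, pct_reserve_change,
                   w_fx=1.0, w_rate=1.0, w_res=1.0):
    # Depreciation (negative fx change), interest rate rises and reserve
    # losses (negative reserve change) all register as positive pressure.
    return (-w_fx * pct_fx_change
            + w_rate * rate_change
            - w_res * pct_reserve_change)

def is_crisis(pct_fx_change, threshold=-0.15):
    # Threshold definition: a decline of 15% (or, more strictly, 25%)
    # over a short period counts as a crisis.
    return pct_fx_change <= threshold

# Sterling's roughly -18% effective move around the 1992 ERM exit would
# count as a crisis under a 15% threshold but not under a 25% one.
erm_move = -0.18
flag_15 = is_crisis(erm_move)          # True
flag_25 = is_crisis(erm_move, -0.25)   # False
```

The choice of threshold matters: as Table 1 below shows, the effective declines in acknowledged sterling crises have always been under 25%, so the stricter definition would count none of them.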

A currency pressure index is hard to apply historically to the UK, where published reserves and key interest rates were frequently ‘massaged’ by the authorities (Capie, 2010; Naef, 2021). The effective decline in sterling against trading partners during acknowledged crises over the last 100 years has always been less than 25% (see Table 1).

Movement against the US dollar has sometimes been greater, but the United States has a current weight of only 20% in sterling’s effective exchange rate index calculated by the Bank of England. Europe’s weight is 55%, while that of China/Hong Kong is 12%.
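To see why the effective index moves less than the bilateral dollar rate, here is a minimal sketch of a trade-weighted average using the rough weights quoted above. The 13% residual weight for other partners is an assumption made so the weights sum to one, and partners without a quoted move are treated as unchanged.

```python
# Minimal sketch: approximating the change in a trade-weighted effective
# exchange rate index as a weighted average of bilateral moves. Weights
# follow the rough shares quoted in the text; the 'Other' residual is an
# assumption.

def effective_change(bilateral_changes, weights):
    # Partners missing from bilateral_changes are assumed unchanged.
    return sum(w * bilateral_changes.get(p, 0.0) for p, w in weights.items())

weights = {"US": 0.20, "Europe": 0.55, "China/HK": 0.12, "Other": 0.13}

# A 10% fall against the dollar alone moves the effective index by only
# about 2%, because the US weight is just 20%.
dollar_only = effective_change({"US": -0.10}, weights)
```

This is why a sterling slide against a strong dollar, as in 2022, can look dramatic while the effective exchange rate moves far less.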

Table 1: Past crises

| Time period | Crisis | Decline vs USD / effective | Resisted? | Banking crisis | Political crisis |
| --- | --- | --- | --- | --- | --- |
| Sept-Dec 1931 | Sep 1931: forced off gold standard | -33% / -20% | Yes | Yes | Yes |
| Aug-Sept 1939 | Eve of WW2 | -14% / na | Yes | No | No |
| Jul-Aug 1947 | Convertibility | - | Yes | No | No |
| May-Sept 1949 | Sep 1949: realignment | -33% / -9% | Yes | No | No |
| Jun 1951 - Jul 1952 | Korean war boom | - | Yes | No | No |
| Apr-Oct 1955 | Floating exchange rate debate | - | Yes | No | No |
| May-Dec 1956 | Suez | - | Yes | No | No |
| Apr-Sep 1957 | - | - | Yes | No | No |
| Jan-Jul 1961 | - | - | Yes | No | No |
| Jul 1964 - Jan 1965 | - | - | Yes | No | No |
| May-Aug 1965 | - | - | Yes | No | No |
| Jan-Sep 1966 | - | - | Yes | No | No |
| Apr-Nov 1967 | Nov 1967 devaluation | -14% / -14% | Yes | No | No |
| Jun-Jul 1972 | Jun 1972: forced from EEC snake, sterling floats | -8% / -8% | (Yes) | No | No |
| Mar 1975 - Oct 1976 | Slide leading to IMF loan | -36% / -23% | (Yes) | (Secondary) | No |
| Jan-Sep 1981 | Slide from strong pound | -26% / -18% | (Yes) | No | No |
| Dec 1985 - Oct 1986 | Lawson slide, oil price fall | -4% / -19% | (Yes) | No | No |
| Sep 1992 - Feb 1993 | Sep 1992: forced from ERM | -29% / -18% | (Yes) | No | No |
| Sep-Dec 2008 | Sep 2008: Lehman collapse, global financial crisis | -21% / -19% | No | Yes | No |
| Jun-Oct 2016 | Jun 2016: Brexit referendum | -16% / -16% | No | No | (No) |
Sources: Bank of England daily sterling-dollar exchange rates and effective exchange rate indices; Burk and Cairncross, 1992; Cairncross, 1985; Cairncross and Eichengreen, 1983; Cairncross, 1996; Eichengreen, 1995; Hirsch, 1965; Kennedy, 2018; Redmond, 1980; Schenk, 2010; Stephens, 1997.

What makes a currency crisis?   

Theories of currency crises seek to explain the reasons why a successful speculative attack might occur. They focus on inconsistent macroeconomic policies (for example, the crises in Latin America in the 1980s), self-fulfilling crises caused by speculators perceiving policy dilemmas (such as the 1992 European exchange rate mechanism, ERM, crisis) or financial fragility in the banking system or wider economy (for example, the 1997 Asian crisis) (Glick and Hutchison, 2011).

Historic sterling crises have usually been explained as the first type, by government policies that are inconsistent with the prevailing exchange rate. For instance, the sterling crises of the 1960s have been attributed to fiscal deficits leading to excessive public-sector borrowing. The expansion of domestic demand increased the volume of imports and, through higher wage costs, adversely affected the competitiveness of exports. This produced strains in the balance of payments. The Bank of England then did not tighten domestic monetary conditions sufficiently to counteract the resulting loss of reserves (Capie, 2010).

By contrast, banking weakness played a major role only in the significant cases of the UK leaving the gold standard in September 1931 (Accominotti, 2012) and the global financial crisis of 2007-09 (Laeven and Valencia, 2010).

Such ‘twin crises’ – a banking and a currency crisis each exacerbating the other – can have some of the worst consequences for an economy (Kaminsky and Reinhart, 1999). While the British banking system is stronger than in 2007, low interest rates and high levels of debt make firms and households vulnerable to rising interest rates.    

The 1992 ERM crisis is usually presented as a self-fulfilling crisis (further details in the box below). Speculators perceived that the government would not accept the deflationary measures, and unemployment consequences, needed to defeat them (Eichengreen et al, 1994). Whereas inconsistent macroeconomic policies lead inexorably to devaluation, in a self-fulfilling crisis devaluation happens only if speculators make it happen.

These types of crises are by definition hard to spot because common early warning signals, such as a rise in the real exchange rate (prices of UK goods becoming uncompetitive internationally), are not necessarily present. (For early warning signals see Reinhart and Rogoff, 2009; Glick and Hutchison, 2011).

The ERM crisis

The UK’s dramatic exit from the ERM on Black Wednesday, 16 September 1992, was a costly national humiliation and demonstrated how powerful speculators had become in foreign exchange markets.

It also shows the importance of the international context. For 20 years, the European community had been operating currency peg schemes, designed to keep member currencies stable against each other. But after an embarrassing early exit from the European ‘snake’ in 1972, the UK avoided them.

The government’s decision to join the ERM in October 1990 had mainly a counter-inflationary purpose, since its anchor currency was the low-inflation Deutschmark. But this late entry also coincided with the reunification of East and West Germany. By summer 1992, Germany’s independent central bank was keeping its own interest rates high in order to combat the domestic inflationary consequences. Meanwhile, the UK was in recession, with its key short-term interest rate at 10%, and there was concern that the recession could become a slump.

In other words, the UK was not – on this occasion – following inappropriately loose fiscal and monetary policy. What was needed was an orderly realignment of currencies with an increase in the value of the Deutschmark against other leading participants. Most historical accounts indicate that this was diplomatically feasible within the ERM.

But the British government boxed itself into a corner. The Chancellor Norman Lamont refused to devalue unless France did so. He also signalled that the government had little stomach for an increase in interest rates given the state of the economy. And his rhetoric indicated that he would use massive intervention in the currency markets to defend the peg and defeat the speculators.

This was a further invitation to speculators, such as the Hungarian financier George Soros, who could see that an increased volume of intervention by the British, French and German central banks would allow even larger opposing bets to be made at a vulnerable exchange rate, creating the opportunity for an increased volume of profits.

The battle was lost within a few days after the Bank of England’s foreign exchange reserves were exhausted. Black Wednesday itself proved the impotence of the government’s efforts, culminating in a decision to abandon the ERM entirely. In the following months, sterling continued to drift downwards. After its unsuccessful experiments with monetarism and exchange rate targeting, the British government came to concentrate its counter-inflation efforts on targeting inflation directly.


(See Roberts et al, 2017; Schenk, 2010; Stephens, 1997).

The power of speculators increased significantly with the growth of international capital flows from the 1990s (Lane and Milesi-Ferretti, 2007). But policy dilemmas and constraints on government action are also nothing new and have been a feature of most sterling crises since 1931. Unemployment was a particular government concern in 1931, 1972, 1992 and the global financial crisis of 2007-09.

The exchange rate policy context

Implicit in these theoretical ideas is the existence of an exchange rate policy – resisting the speculative pressure – and a monetary regime, such as a fixed or pegged exchange rate. Resistance means that the government makes policy more consistent by increasing interest rates, reducing government spending and raising taxes. Or it employs its international reserves, intervenes in the spot and forward exchange markets and borrows reserves from other countries, hoping to see off the speculators.

Under fixed exchange rate regimes, the interwar gold standard (1925-31) and Bretton Woods (1946-71), the sterling devaluations of 1931, 1949 and 1967 were fiercely resisted, and the capitulation was accordingly traumatic (Cairncross and Eichengreen, 1983).

Under pegged or managed floating regimes, the depreciations of 1972, 1976 and 1992 were also resisted for a time, with international borrowing, but with less commitment (Schenk, 2010). This century, reflecting the reality of large capital flows and the UK’s limited international reserves, sterling has been floating freely.

The sterling exchange rate declines of 2008 and 2016 were not resisted and so did not attract much adverse comment. Nicholas Macpherson argues in the Financial Times that UK citizens should be concerned about sterling weakness, which increases inflation and, post-Brexit, may be becoming less effective at reducing trade deficits, but that UK politicians think they can be indifferent to it.

Fundamentals, the current and capital accounts

Fundamental international changes in capital, human capital, production and productivity lead currencies to fluctuate over time. The long-term trend of sterling has been downward and some devaluations – for example, 1931 and 1949 – partly reflected the consequences of world wars.

In such devaluations, there is ‘reason to doubt whether things would have gone better in the long run…if…devaluation had been delayed or avoided altogether’ (Cairncross and Eichengreen, 1983). The fundamentals are seen in the balance of payments, where problems can lie either in the current or capital accounts, with the latter playing a large role in all the sterling crises listed in Table 1.

Between the 1949 devaluation and the 1972 float, a period littered with sterling crises, the UK had a current account surplus more often than a deficit. Current account deficits also tended to be small in relation to GDP – for example, 1% in the crisis year 1964 (Mitchell, 1988).

But policy-makers argued that UK current account surpluses were simply not large enough given sterling’s use as a reserve currency by the ‘sterling area’, mainly comprising Commonwealth countries, and UK capital exports to those countries.

In 1965, 89% of the sterling area’s foreign exchange reserves were still held in sterling. This meant that sterling’s fortunes rested on the balance of payments of the wider sterling area, not just the UK. Some sterling crises, such as those in 1952 or 1957, coincided with heavy spending of sterling reserves by Australia or India (Kennedy, 2018).

Sterling’s weakness between 1964 and 1976 is also partly attributable to the disintegration of the sterling area, as sterling-holding countries – from Burma in 1964 to Nigeria in 1975-76 – sold sterling for other currencies (De Bromhead et al, 2022; Schenk, 2010).

Today, sterling is still a minor reserve currency, accounting for about 5% of known global foreign exchange reserves (International Monetary Fund). At the same time, the UK has a large current account deficit running in 2022 at more than 7% of GDP, making it reliant on capital inflows (Office for National Statistics).

Monetary and fiscal policy adjustment

Under fixed and pegged exchange rate regimes, the central bank first perceives currency pressure through a loss of reserves. To halt this and stabilise the exchanges, UK authorities usually increased short-term interest rates and tightened lending conditions. The action that governments usually found most difficult and controversial was to reduce public spending (Cairncross and Eichengreen, 1983; Capie, 2010).

In the 30 years following the Second World War, burdened by post-war debt, UK governments were concerned about the country’s slower growth relative to peers. They often delayed taking corrective action, leading to speculative pressure on reserves and sudden interest rate hikes coinciding with crises (see Table 1). The reserve signals were also made more confusing by payments in the wider sterling area, further delaying fiscal adjustment – for example in 1960 and 1964.

The overall result was a pattern of ‘stop-go’ in UK economic activity (Brittan, 1971). Policy-makers even thought that this might be the reason for the UK’s relative underperformance and that it would be better to ignore the signals. In 1963-64 and 1972-73, governments consciously decided to ‘go for growth’. This led to severe pressure on sterling in October-November 1964, and high inflation in 1974-75, after the price of oil more than doubled in late 1973 (Brittan, 1971; Cairncross, 1996).

As seen in past inflationary episodes, persistently high inflation relative to other countries is accompanied by currency depreciation (Reinhart and Rogoff, 2009). To defeat inflation, people’s expectations must be anchored by credible monetary and fiscal policy (Sargent, 1982).

In the 1980s, monetary policy and a strong pound were used forcefully to bear down on inflation, to the cost of the UK’s manufacturing sector (Stephens, 1997). Since 1997, the setting of interest rates has been delegated to the Bank of England’s Monetary Policy Committee (MPC).

But an independent agency targeting 2% inflation two to three years in the future may still suffer from forecasting errors and biases, such as being more concerned about today's weak growth than about above-target inflation.

Changes in expectations

The context for sterling today is very different from that which prevailed during the sterling crises of the 20th century. The days of exchange rate intervention and the sterling area are long past.

Nevertheless, sterling’s effective exchange rate could be vulnerable to changes in expectations about the UK triggered by, for example, fiscal policies lacking credibility or adverse new information.

The government is facing difficult decisions about how to navigate the cost of living crisis. Inflationary episodes and recessions can lead to social and political conflict, as was notable in 1931 and in 1972-79. A new prime minister may be tempted to ‘go for growth’, repeating the approach of 1963-64 and 1972-73. And new information may emerge about the likely longer-term effects of Brexit on UK exports, investment and growth.

Even small changes in long-term expectations can have a large impact on the exchange rate, as currency markets discount the future and recalculate its present value.  

Where can I find out more?

Who are experts on this question?

  • Currency crises: Olivier Accominotti, Guillermo Calvo, Barry Eichengreen, Jeffrey Frankel, Michael Hutchison, Graciela Kaminsky, Paul Krugman, Maurice Obstfeld, Carmen Reinhart, Kenneth Rogoff, Andrew Rose
  • British economic policy: Forrest Capie, Nick Crafts, Barry Eichengreen, Nicholas Macpherson, Roger Middleton, Scott Newton, Catherine Schenk, Jim Tomlinson
  • History of sterling: Maylis Avaro, Benjamin Cohen, Barry Eichengreen, Francis Kennedy, Catherine Schenk, John Singleton
Author: Francis Kennedy
Picture by VM Studio on iStock

The post Sterling crisis: what are the historical precedents? appeared first on Economics Observatory.

Summer of discontent https://www.coronavirusandtheeconomy.com/summer-of-discontent Fri, 24 Jun 2022 09:03:34 +0000

Newsletter from 24 June 2022

Midsummer’s day is here, but the UK economic news is unremittingly wintry. Food and fuel costs continue to push higher. National rail strikes presage a long season of industrial action as workers and their trade union representatives demand pay increases that keep up with rising prices. Higher interest rates, as the Bank of England tries to deliver on its central objective of returning inflation to an annual rate of around 2%, risk weakening a fragile economy and perhaps tipping it into recession. And on top of all that, there is the self-inflicted harm of Brexit – disrupting goods and services trade, business investment, the post-pandemic recovery, and much else.

We have covered many of these challenges at the Economics Observatory in recent weeks and months. Early in the year, Delia Macaluso (University of Oxford) and Michael McMahon (also Oxford and one of our lead editors) explained how Covid-19-induced shortages of goods and increases in labour, energy and transport costs in global supply chains were contributing to initial inflationary pressures around the world. Russia’s invasion of Ukraine and European efforts to move away from dependence on Russian oil and gas have led to even higher energy prices, as discussed in pieces by Erkal Ersoy and Christopher Aitken (Heriot-Watt University) and by Helen Thompson (University of Cambridge).

The war is also having a big impact on global food prices, as explored this week on the Observatory by Lotanna Emediegwu (Manchester Metropolitan University). As he describes in this and an earlier analysis of the effects on global food security, conflict in the ‘breadbasket of Europe’ is driving up food prices for the continent and the wider world. But developing and emerging economies are being hit hardest due to their reliance on the region for fuel and grain imports. Crop shortages and price hikes in these countries could spur further political turbulence and even violence.

Double trouble

Back in the UK, the consumer price index hit an annual inflation rate of 9.1% in May, the highest since March 1982, according to new data from the Office for National Statistics (ONS). Observatory manager Charlie Meyrick considers what this means for the cost of living crisis, particularly for lower-income households, as well as the industrial action by the National Union of Rail, Maritime and Transport Workers and potential strikes in other parts of the economy. These are issues to which we will return in the coming weeks.

Figure 1: CPIH, CPI and OOH inflation rates (May 2012 to May 2022)

Source: ONS
Note: CPIH – consumer price index including owner occupiers’ housing costs; CPI – consumer price index; OOH – owner occupiers’ housing costs

Elsewhere, there’s been some good coverage of the inflation data and their implications for monetary policy and public sector pay. Our colleagues at the National Institute of Economic and Social Research (NIESR) note the key role of rising food prices in keeping inflation at this historic high – and like the Bank of England, they expect the annual rate to reach double digits before turning down. Chris Giles at the Financial Times notes that just a year ago, inflation appeared under control, and blames central bankers for complacency about what was coming. And Soumaya Keynes at The Economist and Paul Johnson at the Institute for Fiscal Studies (IFS) separately anticipate some of the challenges for the government of trying to restrain public sector pay to curb inflation.

Looking much further back in history, another new piece this week gives a perspective on UK inflation over nearly a millennium. Jason Lennard of the London School of Economics (LSE) and Ryland Thomas at the Bank of England outline the challenges of measuring the rate of inflation facing different households, sectors and regions – and how past statisticians and commentators have sought to construct a price index. Gradual progress on measurement over the centuries by contemporaries and economic historians has improved our understanding of inflation over the long run, but much remains to be done.

Figure 2: UK consumer price level, 1086 to 2021

Sources: From 1209, the measure of consumer prices used in both charts includes housing costs (actual and owner occupiers’ rent) and is based on the aggregate expenditure weights of all households. From 1209 to 1830, it is based on the domestic expenditure deflator of Clark, 2015; from 1830 to 1949, it uses the consumers’ expenditure deflator based on Deane, 1968; Feinstein, 1972; and Sefton and Weale, 1995; and from 1949, it uses the long-run CPIH index recently produced by the Office for National Statistics. Between 1086 and 1209, the price index is a trend measure based on a more limited set of commodities based on Barratt, 1996 and 2001, and Mayhew, 2013. Alternative measures of consumer prices such as the CPI and those based on the expenditure weights of the ‘working classes’ or wage earners can be found in Thomas and Dimsdale, 2017.

The final new Observatory contribution this week draws our attention away from rising prices to focus on falling prices – specifically those of cryptocurrencies like Bitcoin and Ethereum, which have been plummeting in recent weeks. William Quinn (Queen’s University Belfast), who late last year wrote a punchy piece for us on why the price of Bitcoin has risen/fallen in the past day/week/month, now clarifies why the latest movements are steeply downwards. While the weak state of the global economy has triggered the crash, its root cause is that cryptocurrencies have always been fundamentally unsound long-term investments. They have no intrinsic value – and like pyramid schemes, they require a continuous flow of new investors to sustain prices. That flow has dried up.

Figure 3: Bitcoin price (dollars), June 2019-June 2022

Source: Yahoo! Finance

Safe European home

Another ill-advised scheme is also revealing serious signs of stress. It is six years this week since the referendum vote for the UK to leave the European Union (EU) – and our colleagues at the Centre for Economic Policy Research (CEPR) have marked the sad occasion with an ebook on what Brexit has meant for the UK economy. Next week here, we’ll have a piece by the publication’s editor Jonathan Portes (King’s College London and UK in a Changing Europe) on the impact of the post-Brexit immigration system. Overall migration numbers have not changed much since when the UK was an EU member, but the provenance, skills and sectoral mix of migrants are starting to look substantially different.

Previous Observatory articles on Brexit include explorations of its impact on Northern Ireland’s economy, Welsh ports, Scotland’s fishing industry, competition policy, Premiership football, hate crime, earnings inequality and the role of sterling in UK trade.

This week, our colleagues at the Centre for Economic Performance (CEP) at LSE have published evidence on the dramatic fall in the value of UK imports from the EU relative to the rest of the world after the Trade and Cooperation Agreement came into effect, as well as the destruction of many smaller trading relationships. A separate CEP study shows that leaving the EU’s single market and customs union has led to a 6% rise in food prices in the UK.

One final big news story this week has been European leaders’ approval of Ukraine as a candidate for EU membership. A recent Observatory piece by Richard Disney and Erika Szyszczak (both University of Sussex) considers the economic differences that EU membership would bring. They note that existing agreements between Ukraine and the EU have already promoted substantial trade flows. Accession would have bigger implications for freedom of movement of capital and workers – investment inflows and migration outflows – and these areas are where negotiations are likely to focus.

Young at heart

The Royal Economic Society (RES) young economist of the year competition closes next month. Essays of up to 1,000 words can address some of the big issues we discuss on the Observatory, including cryptocurrencies, the cost of living crisis and the ‘levelling up’ agenda for tackling regional inequalities – on which we will have a new piece by CEP’s Henry Overman and Helen Simpson (Centre for Evidence-Based Public Services, CEPS, University of Bristol) early next week.

If you have comments on any of the articles published by the Economics Observatory, please get in touch. We also welcome suggestions for questions that our contributors can answer. And do pass on the link to this newsletter to friends and colleagues who might be interested. Anyone can sign up here.

Author: Romesh Vaitilingam
Photo by solarseven from iStock

The post Summer of discontent appeared first on Economics Observatory.

Why are cryptocurrencies crashing? https://www.coronavirusandtheeconomy.com/why-are-cryptocurrencies-crashing Thu, 23 Jun 2022 00:00:00 +0000

Cryptocurrencies are digital assets that purport to be a form of money. Since the beginning of 2017, increasing interest in the concept has caused their prices to soar, making many early adopters rich. As a result, they are now generally promoted as investment assets rather than money assets. But the prices of popular cryptocurrencies have recently collapsed (see Figures 1-3). Why?

Figure 1: Bitcoin price (dollars), June 2019-June 2022

Figure 2: Ethereum price (dollars), June 2019-June 2022

Figure 3: XRP price (dollars), June 2019-June 2022

Source: Yahoo! Finance

To appreciate what is happening, we need to understand the nature of cryptocurrencies as investments. In finance, we typically evaluate investments by assessing their associated future cash flows. For example, if we want to understand how much a share in a company is worth, we try to estimate how much money the business might make in the future.

A small number of investment assets, such as gold, are useful to investors despite producing no cash flows. This is usually because extensive historical data indicate that their price tends to rise when other assets are losing value. Including them in a portfolio of investments can therefore reduce the investor’s level of risk.

Depending on investors’ risk preferences, this may offset the reduced expected return arising from the absence of cash flows. These assets are also commodities: they would still be useful to their holder even if no one was willing to buy them.

When evaluated as an investment, cryptocurrencies are unique – and not in a good way. In the vast majority of cases, they do not entitle the holder to any cash flows; their price does not appear to rise when other investments are falling; and they have no value beyond the willingness of another person to pay for them.

In addition, the most popular cryptocurrencies incur substantial energy costs. Bitcoin’s ‘proof-of-work’ verification system gives investors incentives to use computing power repeatedly to guess a very large number, with the closest guess receiving newly minted Bitcoin. This burns through an estimated $6.5 billion of electricity per year. Since these electricity costs cannot be paid in Bitcoin, the system is negative-sum: more money will be lost by the losers than is gained by the winners.

Figure 4: The negative-sum cash flows of Bitcoin

This means that the cryptocurrency ecosystem requires a continuous inflow of new investment just to keep prices at the same level. Positive returns can only come from future investors who are willing to pay a higher price.
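The arithmetic behind this is simple enough to sketch. In the toy accounting below, only the $6.5 billion electricity estimate comes from the text; the total paid in is a made-up figure for illustration. Because electricity bills leave the ecosystem in dollars, investors as a group can never collectively get back more than they put in.

```python
# Toy accounting of a proof-of-work ecosystem over one year.
# total_paid_in is a made-up figure; the electricity estimate is from the text.
total_paid_in = 100_000_000_000.0   # all dollars investors have ever put in
electricity_cost = 6_500_000_000.0  # dollars burned on mining, leaving the system

# The most that can ever be paid back out to investors collectively:
max_payout = total_paid_in - electricity_cost

# So aggregate investor profit is at best negative: the game is negative-sum.
best_case_aggregate_pnl = max_payout - total_paid_in
```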

The only other class of ‘investment’ for which this is always true is Ponzi or pyramid schemes, two common forms of fraud in which money from late investors is used to pay unsustainably high returns to early investors. Similar to Ponzi schemes, crypto investors often advertise the coins they hold aggressively in an effort to attract newcomers (a practice commonly known as ‘shilling’).

Another way to increase the flow of cash into the system is via the use of debt. For example, rather than buy $1,000 worth of Bitcoin, an investor might use that $1,000 as collateral for a loan of $10,000 and buy $10,000 worth of Bitcoin (this is generally known as margin trading).

Since this generates ten times more demand, margin trading will cause prices to rise more quickly in a bull market – a period when an asset’s price rises continuously. But the lender typically retains the ability to force the borrower to sell if their investments fall below a certain level – the dreaded ‘margin call’. These forced sales mean that prices will also fall more rapidly on the way down.
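As a rough sketch of why leverage accelerates falls, consider a simplified margin rule under which the lender force-sells once the borrower's equity is exhausted. The entry price is a made-up illustration, and real exchanges apply maintenance buffers and fees that this ignores.

```python
def liquidation_price(entry_price, collateral, loan):
    """Price at which the borrower's equity is wiped out and the lender
    force-sells (simplified: no maintenance margin or fees)."""
    units = loan / entry_price  # Bitcoin bought with the loan proceeds
    # Equity = units * price + collateral - loan; it hits zero when:
    return (loan - collateral) / units

# Illustrative numbers from the text: $1,000 collateral, $10,000 loan.
p = liquidation_price(entry_price=20_000.0, collateral=1_000.0, loan=10_000.0)
# A 10% price fall (to $18,000) triggers the forced sale.
```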

Another significant form of debt in the crypto ecosystem involves stablecoins. These are cryptocurrencies that are ‘pegged’ to the value of a traditional currency, usually the dollar. They are often issued without being fully backed by underlying dollars, with the excess stablecoins invested in crypto assets.

Effectively, this means investing depositor money multiple times, in much the same way as a fractional reserve bank (where only a portion of deposits are backed by cash and therefore available for withdrawal). Since it incurs liabilities, it constitutes another form of debt.

At the time of writing, the largest stablecoin, Tether, had 67.9 billion Tethers outstanding. The quantity of dollar equivalents backing these assets is unknown but has previously been at levels as low as 6%. In other words, the Tether Corporation had at that time issued over 16 Tethers for every dollar it actually held in reserve. This represents another significant source of debt, and the potential failure of the Tether peg is widely considered a systemic risk to the crypto sphere.
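The backing arithmetic can be reproduced directly from the figures in the text (the 6% backing ratio is a historical low point, not a current audited figure):

```python
# If only 6% of outstanding Tethers are backed by actual dollars,
# how many Tethers exist per reserve dollar?
backing_ratio = 0.06
tethers_per_reserve_dollar = 1 / backing_ratio  # roughly 16.7

outstanding = 67.9e9                           # Tethers outstanding (from the text)
reserve_dollars = outstanding * backing_ratio  # roughly $4.1 billion of reserves
```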

Partially backed stablecoins can simply be used to buy other cryptocurrencies, increasing their price even when no new dollars are entering the system. More commonly, they are used to collateralise margin trading in the manner outlined above, layering one level of indebtedness on another.

Importantly, the fundamental negative-sum nature of cryptocurrencies is not changed by all of this debt. More leverage can postpone the crash, but it cannot prevent it, and it is likely to make it more sudden and painful. Research shows that debt-fuelled bubbles tend to be much more economically destructive than other bubbles (Quinn and Turner, 2020).

The current drop in the price of cryptocurrencies is simply the result of this debt-fuelled, negative-sum system unwinding. As a result of the increasing cost of living, rising interest rates and post-pandemic return to normality, the flow of new money entering the system has dried up.

Falling prices are leading to margin trader liquidations, financial difficulties at major crypto firms and stablecoin collapses. These events impose losses on other participants in the crypto economy, who may themselves default on debt, creating a vicious downward spiral. This momentum can only be stopped by finding a way to generate a new influx of dollars or to prop up prices temporarily with more debt.

The trigger for the crash was a change in the economic environment, but its root cause is that cryptocurrencies have always been fundamentally unsound long-term investments. History tells us that negative-sum assets with no use value cannot hold their value indefinitely. As is often the case during a bubble, the promise of ‘getting rich quick’ appears to have blinded many participants to the economic reality.

The big question for policy-makers now is whether this poses a threat to financial stability. Fortunately, the extent of institutional investment in crypto has been heavily exaggerated, and systemically important banks are unlikely to have much direct exposure to the crash.

The main risk to stability comes from the possibility that the banks are indirectly exposed in a way that they themselves do not fully understand. This was the case during 2021’s Archegos scandal, when the collapse of an investment company caused banks and other investors to lose billions of dollars.

The bad news is that money lost in crypto investments is almost certainly gone forever. Investments with regulated UK financial services firms are often partially covered by the Financial Services Compensation Scheme, and deposits at banks are covered by deposit insurance. But since crypto is largely unregulated, money held at crypto exchanges or investment platforms is not covered. Although the crypto crash is unlikely to cause the next great recession, the losses to individual investors could be severe.

Where can I find out more?

Who are experts on this question?

  • Frances Coppola
  • John M. Griffin
  • John Paul Koning
  • David Gerard
  • Andrew Urquhart
  • Eric Budish
Author: William Quinn
Photo by solarseven from iStock

The post Why are cryptocurrencies crashing? appeared first on Economics Observatory.

Inflation past and present: how have we measured the rising cost of living? https://www.coronavirusandtheeconomy.com/inflation-past-and-present-how-have-we-measured-the-rising-cost-of-living Mon, 20 Jun 2022 00:00:00 +0000

The post Inflation past and present: how have we measured the rising cost of living? appeared first on Economics Observatory.

Inflation in the UK and elsewhere is rising. This increase in consumer prices is linked to soaring energy costs as a result of the war in Ukraine, as well as the considerable disruptions to global supply chains following the pandemic.

While the fundamental cause of the general increase in prices is debated (see King, 2022, and Capie, 2022, for a monetary interpretation), the rise in oil and gas prices is likely to squeeze real incomes in countries that are net importers of energy. This is resulting in what is now widely described as a cost of living crisis.

In the UK, real household disposable income per person – that is, income per person adjusted for the price of consumer goods – is predicted to fall by just under 2% in 2022, according to the Office for Budget Responsibility (OBR). This is a large fall in the context of the last 100 years and one that is typically only observed in large recessions and financial crises (see Figure 1).

Figure 1: Real household disposable income per person and inflation, 1920 to 2022

Panel A: Contributions of nominal income per person and inflation

Panel B: Real gross disposable income

Sources: Office for National Statistics (ONS); Sefton and Weale, 1995; Thomas and Dimsdale, 2017; OBR March 2022 Economic and Fiscal Outlook. The measure of inflation used here is based on the national accounts consumer expenditure deflator (CED). The 2022 observations are forecasts (shown in pale pink/blue).

Concerns about the cost of living and the need to measure it stretch back to ancient times. Two centuries of research on constructing price indices suggest that measuring the overall rate of inflation is far from straightforward.

So how did contemporaries attempt to measure inflation in the distant past? And how have economic historians attempted to reconstruct retrospective estimates of the cost of living using modern methods and techniques? This article looks at the issue in the context of UK economic history and recent developments in the construction of price indices.

Constructing a price index – what were the difficulties faced in the past?

There is a great deal of research on the appropriate construction of price indices (see Diewert, 1998; Johnson, 2015; Ralph et al, 2018). There are several key steps in constructing a modern-day price index and many difficult practical and conceptual issues arise for statistical agencies. So how did the commentators and statisticians of the past go about it and how successful were they?

Step 1: Which price index to measure?

For an open economy like the UK, there are several indices one might want to construct: for example, for consumer prices, producer prices, import and export prices, house prices and so on. No single measure of inflation is likely to serve all purposes.

A consumer price index – the focus of this article – typically attempts to measure the rate of inflation experienced by households. Such a measure is the basis of the UK’s inflation target that the government expects the Bank of England’s Monetary Policy Committee to achieve – currently set at 2%.

But a measure of consumer prices will include imported goods and when these increase relative to domestic firms’ prices, it means that consumer price inflation will move differently to that of, say, the GDP deflator, which is a common measure of the price of an economy’s output. (The GDP deflator is the price of a country’s output net of its imports, also known as the value-added deflator.)
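A toy example with made-up numbers may help show the divergence: if import prices jump 50% while domestic prices are flat, a fixed-weight consumer price index rises but the value-added (GDP) deflator does not.

```python
# Toy two-good economy: households consume one domestic good and one
# imported good. All figures are made up for illustration.
# Base period prices are 1.0 for both goods; quantities are held fixed.
q_domestic, q_imported = 80.0, 20.0

# New period: import prices jump 50%, domestic prices are unchanged.
p_domestic, p_imported = 1.0, 1.5

# Consumer price index (fixed weights): cost of the basket now vs base.
cpi = (p_domestic * q_domestic + p_imported * q_imported) / (q_domestic + q_imported)

# The GDP deflator prices only domestic value added (output net of imports),
# so the import price rise does not enter it directly.
gdp_deflator = p_domestic / 1.0

# cpi is 1.10 (consumer prices up 10%); gdp_deflator stays at 1.0.
```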

Consumer prices also include taxes and distribution costs, which are passed on to households by producers and retailers. Before the 20th century, data limitations largely restricted commentators to measures of producer or wholesale price inflation.

From the 18th century onwards, these were increasingly available in newspapers and periodicals such as the London Gazette and The Economist. Indices derived from these prices often mixed domestic and imported goods, and the wholesale prices used could move very differently from retail prices.

Choices also arise in the coverage of a consumer price index. One could be constructed for the entire population or for subsets, such as for regions or different types of households (low-income and high-income, retired and non-retired, etc.).

Towards the end of the 19th century there was a growing interest in the living standards of working households, in part due to the increasing industrial unrest in the lead-up to the First World War. By the early 20th century, the UK Board of Trade had begun to attempt to measure the key elements of the cost of living faced by the ‘working classes’.

The imperative of the war saw the Board begin publishing the first ‘cost of living’ index, which combined prices from 5,500 retailers across 620 locations with weights from the Household Expenditure Survey of 1904.

After the Second World War, this developed into a more comprehensive index known as the retail price index or RPI, which, for many decades, was the reference measure of retail price inflation in the UK. The RPI was extended to all wage earners, but it excluded the top 4% of households by income and some of the poorest pensioner households given their very different spending patterns.

Today, the Office for National Statistics (ONS) provides a variety of measures of household inflation for the whole population and for different groups of consumers based on the consumer price index (CPI).

Step 2: Defining the concept – cost of goods versus cost of living?

The second challenge of any price index is to pin down the conceptual target – what exactly are we trying to measure?

Today, many official consumer price indices produced by statistical agencies are defined very precisely as cost of goods indices (COGIs). These attempt to measure the cost of a representative basket of goods typically purchased by households over time.

The composition of the basket and the weights applied to price changes are fixed for a period of time – typically a year for a monthly price index. But they are then updated periodically, reflecting updated spending patterns and new products, to provide what’s known as a chained index. A key challenge for statisticians, both now and in the past, is how to include new products in the index and how to incorporate increases in the quality of goods.
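A minimal sketch of a chained index, with made-up prices and baskets: each annual link is a fixed-basket comparison, and the basket is refreshed between links.

```python
def laspeyres(p0, p1, q0):
    """Fixed-base Laspeyres index: the cost of the base-period basket q0
    at new prices p1, relative to its cost at base prices p0."""
    cost0 = sum(p * q for p, q in zip(p0, q0))
    cost1 = sum(p * q for p, q in zip(p1, q0))
    return cost1 / cost0

def chained(prices, baskets):
    """Multiply annual Laspeyres links, updating the basket each period
    (a simplified version of how statistical agencies chain an index)."""
    index = 1.0
    for t in range(len(prices) - 1):
        index *= laspeyres(prices[t], prices[t + 1], baskets[t])
    return index

# Made-up data: two goods over three periods; the basket is updated once
# as consumers shift toward the relatively cheaper good.
prices = [[1.0, 1.0], [1.1, 1.0], [1.2, 1.1]]
baskets = [[50.0, 50.0], [45.0, 55.0]]
idx = chained(prices, baskets)  # roughly 1.15 over the two links
```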

Many statistical agencies emphasise that the COGIs they construct are not necessarily the same thing as a theoretical cost of living index, which is an economic concept (derived from Konus, 1924). The latter has a precise definition: how has the cost of achieving a given living standard or utility changed over time, allowing for potential substitutability between goods.

A well-known result is that a fixed-base cost of goods index will, in general, overstate the rate of inflation relative to a true cost of living index. The cost of living concept, taken at face value, requires additional knowledge about consumers’ preferences and how they would respond to changes in prices.

This information is difficult or impossible for many statistical agencies to collect in a timely manner even today. Nevertheless, what are called ‘superlative’ price indices do exist – such as the Fisher ideal index (Diewert, 1976). These are practical to construct and consistent with the true cost of living index, provided that some basic assumptions about consumer behaviour hold.
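This result is easy to see numerically. In the made-up example below, one good doubles in price and consumers substitute away from it; the Laspeyres index (base-period basket) overstates inflation, the Paasche index (current basket) understates it, and the Fisher ideal index sits between the two.

```python
from math import sqrt

def basket_cost(p, q):
    return sum(pi * qi for pi, qi in zip(p, q))

def laspeyres(p0, p1, q0):
    return basket_cost(p1, q0) / basket_cost(p0, q0)

def paasche(p0, p1, q1):
    return basket_cost(p1, q1) / basket_cost(p0, q1)

def fisher(p0, p1, q0, q1):
    # Geometric mean of Laspeyres and Paasche: a 'superlative' index.
    return sqrt(laspeyres(p0, p1, q0) * paasche(p0, p1, q1))

# Made-up data: apples double in price, so consumers buy pears instead.
p0, p1 = [1.0, 1.0], [2.0, 1.0]
q0, q1 = [10.0, 10.0], [5.0, 15.0]

L = laspeyres(p0, p1, q0)   # 1.5: ignores substitution, overstates
P = paasche(p0, p1, q1)     # 1.25: assumes full substitution, understates
F = fisher(p0, p1, q0, q1)  # about 1.37, between the two
```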

But in the past, many contemporaries defined the cost of living as the cost to households of purchasing a set of essential goods and services deemed necessary for a subsistence or respectable standard of living. So when they referred to estimating the cost of living they, like modern statistical agencies, were in practice measuring a cost of goods index based on a basket of essentials.

Step 3: Sourcing the prices

The third challenge is sourcing the relevant price quotes for the goods and services in the basket. One of the first recorded attempts in the UK was at the start of the 18th century by Bishop William Fleetwood, who was concerned with a fellowship at an Oxford college where the nominal terms had been unchanged since the mid-15th century.

To explore how prices had changed in the long interval, he collected prices for a small set of goods and services, concluding that many had risen significantly, although he did not try to aggregate them to form an index.

Figure 2: Retrospective annual index of cost of living, monthly wheat prices and the monthly exchange rate on Hamburg

Panel A: Cost of living index

Panel B: Monthly wheat prices

Panel C: Hamburg shillings per £

Source: Allen, 2007; Gayer, Rostow and Schwartz, 1953; London Gazette

Before the mid-19th century, statisticians interested in measuring the rate of inflation faced a trade-off. There was high frequency information on individual prices of goods such as wheat from the Corn Returns published weekly in the London Gazette (see Brunt and Cannon, 2013).

These data were timely and might vary alongside aggregate prices (see Figure 2, Panel A for a comparison with a retrospective estimate of workers’ cost of living between 1790 and 1821). But they could also be subject to idiosyncrasies such as poor harvests and global price pressures.

Exchange rates also provided a timely indicator. Indeed, the rates of exchange on Hamburg and Amsterdam were the focus of contemporaries involved in the Bullionist controversy during the French and Napoleonic Wars, over whether paper notes should return to being convertible into gold on demand.

At that time, Britain was on a paper standard and the inflationary consequences were being debated. A falling rate of exchange was indicative of increasing prices but also subject to idiosyncratic influences (see Figure 2, Panel C).

In contrast, there was more comprehensive but infrequent information on a broader set of prices (such as from Evelyn, 1798, and Young, 1812). These could give a clearer signal of overall inflation but might also be long out of date.

It was not until the second half of the 19th century that statisticians had a better understanding of inflation in real time. From then on, The Economist, The Statist and The Times regularly reported (unweighted or crudely weighted) wholesale or producer price indices (based on the work of Jevons, 1865, and Sauerbeck, 1886).

Better and more timely information on retail prices developed throughout the 20th century, especially after the Second World War. Today, a sample of hundreds of thousands of goods and services prices is collected each month.

Step 4: Aggregating the prices into an index

The fourth and final step is the method of bringing together and averaging price quotes into an overall index. In principle, this seems simple: you just weight the prices of each good by their importance in expenditure.

But when collecting price quotes for a single item or at very low levels of aggregation – known as the elementary level – there are typically no estimates of quantities or expenditure shares, and statisticians need to use unweighted averages. Even when quantities or expenditures at higher levels of aggregation are available, there are a number of methods of weighted averaging that in principle could be used. These issues were increasingly confronted by commentators and statisticians from the 18th century onwards.

On the use of unweighted averages, major breakthroughs were made by Dutot (1738), Carli (1764) and Jevons (1863). In modern indices, Dutot and Jevons have generally been preferred to Carli at the elementary level of aggregation, based on a number of axiomatic, economic and statistical considerations.
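The three elementary formulas can be made concrete with a small Python sketch (the price quotes here are invented purely for illustration):

```python
import math

def dutot(p0, pt):
    # Dutot (1738): ratio of arithmetic mean prices
    return (sum(pt) / len(pt)) / (sum(p0) / len(p0))

def carli(p0, pt):
    # Carli (1764): arithmetic mean of price relatives
    return sum(t / b for b, t in zip(p0, pt)) / len(p0)

def jevons(p0, pt):
    # Jevons (1863): geometric mean of price relatives
    return math.prod(t / b for b, t in zip(p0, pt)) ** (1 / len(p0))

# Hypothetical quotes for one item across three outlets, in two periods
base = [1.00, 2.00, 4.00]
now = [1.10, 2.00, 4.80]

# One axiomatic failing of Carli: chaining it forward and then back
# does not return to 1, so it systematically overstates inflation.
# Jevons, being a geometric mean, reverses exactly.
carli_round_trip = carli(base, now) * carli(now, base)
jevons_round_trip = jevons(base, now) * jevons(now, base)
```

By the arithmetic-geometric mean inequality, Carli always lies at or above Jevons for the same quotes, and its failure of the time-reversal test shown above is one of the axiomatic considerations behind its abandonment in modern indices.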

Such considerations led to the discontinuation of RPI as a national statistic in the UK in 2013, as it relies on Carli averaging at the elementary level. Now, CPI and its variant CPIH (which includes the costs of owner-occupied housing) are more widely used, as they are based exclusively on Jevons and Dutot averaging.

Further conceptual and practical questions developed during the 19th century on how to source and use information on expenditure shares so as to capture the concept being measured as faithfully as possible. For example, should one use expenditure shares at the beginning of the period, the end of the period or some optimal average of the two?

The important theoretical developments were made by Young (1812), Lowe (1823), Laspeyres (1871), Paasche (1874), Fisher (1921), Divisia (1926) and Törnqvist (1936). It was not until relatively recently that many of the axiomatic, economic and statistical concepts were developed and refined for use by statistical agencies in the construction of modern indices – see Diewert (1988).
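The beginning-versus-end-of-period choice can be illustrated with a toy calculation (all prices and quantities are hypothetical). Laspeyres weights prices by base-period quantities, Paasche by current-period quantities, and Fisher's 'ideal' index is the geometric mean of the two:

```python
import math

def laspeyres(p0, pt, q0):
    # Laspeyres (1871): current vs base prices at base-period quantities
    return sum(p * q for p, q in zip(pt, q0)) / sum(p * q for p, q in zip(p0, q0))

def paasche(p0, pt, qt):
    # Paasche (1874): current vs base prices at current-period quantities
    return sum(p * q for p, q in zip(pt, qt)) / sum(p * q for p, q in zip(p0, qt))

def fisher(p0, pt, q0, qt):
    # Fisher (1921): geometric mean of Laspeyres and Paasche
    return math.sqrt(laspeyres(p0, pt, q0) * paasche(p0, pt, qt))

# Two goods: the first doubles in price, so households substitute away from it
p0, pt = [1.0, 1.0], [2.0, 1.0]
q0, qt = [10, 10], [5, 15]
```

Because households substitute away from goods whose prices rise, Laspeyres tends to overstate the rise in the cost of living and Paasche to understate it, which is why Fisher's compromise between the two is often preferred in theory.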

On the practical side, statisticians in the 19th century typically only had limited information on household budgets. What they had came from investigations by social commentators and parliamentary inquiries, which were limited in what they could feasibly collect and process.

Indeed, in the aftermath of the Second World War, the Board of Trade cost of living index was still based on the 1904 survey of households, albeit with minor updates to 1914. As a result, an interim index of retail prices began in 1947, where the weights were taken from a survey in 1938. RPI, which was launched in 1956, used a set of post-war weights that were regularly updated from more comprehensive surveys. The descendants of these are still in use today, for example, the Living Costs and Food Survey (LCF).

The improved coverage of surveys led to additional questions (Prais, 1998). For example, should one use aggregate expenditure shares for each type of good, which implicitly places more weight on the expenditure of richer households (a plutocratic weighting)? Or should one make use of survey-level data and measure the expenditure shares of each household individually and then weight them equally (a democratic index)? This idea has been revisited recently (Aitken and Weale, 2020) and the ONS has begun releasing experimental democratic indices of prices.
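The gap between the two weightings can be sketched with two hypothetical households (the expenditure figures and price relatives below are invented for illustration):

```python
def household_index(shares, relatives):
    # One household's inflation: its own budget shares times price relatives
    return sum(s * r for s, r in zip(shares, relatives))

def plutocratic(expenditures, relatives):
    # Weight each household's index by its share of total spending:
    # equivalent to using economy-wide expenditure shares per good
    total = sum(sum(e) for e in expenditures)
    idx = 0.0
    for e in expenditures:
        shares = [x / sum(e) for x in e]
        idx += (sum(e) / total) * household_index(shares, relatives)
    return idx

def democratic(expenditures, relatives):
    # Give every household's own index equal weight
    indices = []
    for e in expenditures:
        shares = [x / sum(e) for x in e]
        indices.append(household_index(shares, relatives))
    return sum(indices) / len(indices)

# Two goods (say food and travel); a poorer household spends mostly
# on food, a richer one mostly on travel
expenditures = [[80, 20], [100, 400]]
relatives = [1.10, 1.02]  # food inflation outpaces travel inflation
```

With these numbers the democratic index exceeds the plutocratic one, because the poorer household's food-heavy basket faces higher inflation but carries only a small share of aggregate spending.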

How have economic historians measured inflation retrospectively?

Many economic historians have gone back and revisited the prices and budgets collected by contemporaries, including tapping new and previously underused sources to calculate inflation retrospectively. Thanks to their efforts, we now have a relatively continuous record of inflation back to the early 13th century (see Figure 3).

Figure 3: UK consumer price level, 1086 to 2021

Figure 4: Annual rate of UK consumer price inflation (including housing costs), 1209 to 2021

Sources: From 1209, the measure of consumer prices used in both charts includes housing costs (actual and owner occupiers’ rent) and is based on the aggregate expenditure weights of all households. From 1209 to 1830, it is based on the domestic expenditure deflator of Clark, 2015; from 1830 to 1949, it uses the consumers’ expenditure deflator based on Deane, 1968; Feinstein, 1972; and Sefton and Weale, 1995; and from 1949, it uses the long-run CPIH index recently produced by the Office for National Statistics. Between 1086 and 1209, the price index is a trend measure based on a more limited set of commodities based on Barratt, 1996, 2001, and Mayhew, 2013. Alternative measures of consumer prices such as the CPI and those based on the expenditure weights of the ‘working classes’ or wage earners can be found in Thomas and Dimsdale, 2017.

In many ways, retrospective analysis is as challenging as contemporary estimation, even with a broader range of sources to choose from and modern techniques to apply. For example, a key issue was the use of prices collected from institutional sources such as the records of manors, Oxford and Cambridge colleges, hospitals and local government, which might have been subject to stickiness arising from fixed price contracts (Ashton, 1949).

The use of raw material and wholesale prices as a proxy for retail prices and the London-centric nature of many prices also raised concerns (Ashton, 1949). Other research was able to demonstrate that these problems were not as large as feared (Feinstein, 1995, 1998). But the price of items such as clothing proved just as difficult to measure as they do today, given the range and quality of goods available.

An important result of this retrospective analysis is that workers' real wages grew relatively slowly during the Industrial Revolution – what's referred to as 'Engels' pause' – and that it was almost a century before the majority of the working class obtained any economic benefits (Feinstein, 1995, 1998; Allen, 2007); though more optimistic estimates have been constructed (Clark, 2005).

Another good example of the use of modern methods is a recalculation of the early 20th century cost of living index in interwar Britain (Gazeley, 1994). This work updated the expenditure weights from 1914 to 1938 to capture changes in consumption more accurately. It indicated lower inflation during the First World War and lower deflation during the slump of the 1930s than previously thought.

Conclusions

The gradual progress on measurement over the centuries by contemporaries and economic historians has improved our understanding of inflation over the long run. Figure 3 presents almost a millennium of price data in the UK and Figure 4 shows annual inflation from the beginning of the 13th century, using the most consistent available price measures that cover all consumers and include the cost of housing.

Three things stand out. First, there have been several phases with no trend increase in prices, interspersed with periods of large secular increases, such as the early 13th century, the 16th century, the end of the 18th century and the period since the start of the 20th century.

Second, large bursts of inflation have often been associated with internal and external wars, which are then often followed by periods of deflation (with the notable exception of the Second World War).

Third, the positive and persistent rate of inflation that we have observed since 1940 contrasts with the volatility of prices exhibited for centuries before that, when for many periods deflation was as likely as inflation.

That persistence could reflect a fundamental shift in the willingness of society to accept a non-zero and persistent rate of inflation as the optimal state of affairs, given the potential costs of deflation (Groth and Westaway, 2009). But equally, it could reflect difficult measurement issues, such as the improvement in the quality of goods over time, which means that true price stability is associated with positive measured inflation.

The anticipated increase in the cost of living over the next few years, with inflation rates expected to differ across sectors and household groups, is likely to bring many of these deeper questions back into sharp relief.

Where can I find out more?

Who are experts on this question?

  • Erwin Diewert
  • Paul Johnson
  • Robert O’Neill
  • Jeff Ralph
  • Paul A. Smith
  • Ian Gazeley
  • Robert Allen
  • Gregory Clark
  • Stephen Broadberry
  • Nicholas Crafts
  • Jan Tore Klovland
  • Rebecca Searle
  • Sara Horrell
Authors: Jason Lennard and Ryland Thomas
Photo of 1945 UK food shortages from Wikimedia Commons

The post Inflation past and present: how have we measured the rising cost of living? appeared first on Economics Observatory.
