Risks In Financial Markets And Shadow Banks

Andrew Bailey, Deputy Governor for Prudential Regulation and Chief Executive Officer of the Prudential Regulation Authority, gave a speech at Cambridge University – “Financial Markets: identifying risks and appropriate responses” – which discusses important concepts relating to the effective supervision of financial markets, in the context of expanding bond markets and automated electronic trading. There is good evidence that financial market conditions have evolved in ways that reduce the likelihood of continuous market liquidity in all states.

There is a commonly held narrative about the financial crisis: that the banks caused it, and that the solution is more regulation of both an economy-wide (macro-prudential, in the jargon) and a firm-specific (micro-prudential) type. But it isn’t that simple, and tonight I want to outline the role of financial markets and non-bank institutions (which sometimes go under the somewhat pejorative term of shadow banks) within the overall financial system and describe how, with sufficient resilience, they play a number of key roles in the financial system, including offering borrowers alternatives to bank lending. Nevertheless, I also want to explain why there is significant and increasing emphasis on the risks they can pose to financial stability. Put simply, it is quite often said that we are living in unprecedented times in the performance of financial markets.

The simple narrative around banks is that they over-extended themselves (over-leveraged in terms of the ratio of assets to capital and over-extended in terms of the ratio of illiquid to liquid assets) in the run-up to the crisis, and the resulting problems had two closely linked and malign effects: first, the crisis jeopardised the provision of those core financial services which banks provide and on which all of us depend; and second, by so doing – and by being too big or complicated to deal with as failed companies – they required the use of taxpayers’ money to bail them out. That’s the story, and it explains why the public policy actions taken both immediately after the crisis (bail-outs) and the subsequent post-crisis reforms have been directed at protecting those core financial services and seeking to ensure that taxpayers’ money does not need to be put at risk.

There is, however, more to the story than that. In the period between the early 1990s and the onset of the crisis, there was a remarkable and unprecedented evolution of the financial system which involved a major expansion of activity. Banks moved from a traditional model of taking deposits and lending them out to a model that involved far more the origination and distribution of loans – often known as securitisation – in which these loans were substantially distributed to shadow banks. These shadow banks thereby took on more of the traditional core bank functions of credit assessment and maturity transformation (the practice of borrowing at shorter maturities than the maturities of the assets held). And they did so, like the banks, with weak levels of capital.

But it would be a mistake to portray shadow banks as bad. There is good evidence that in the twenty years before the crisis they emerged as a stabilising force (most notably in the US) because they were able to expand their provision of credit at times when traditional bank lending underwent cyclical contractions. That said, there were some troubling properties associated with the growth of shadow banking. For instance, quite a few were sponsored by banks as a means to reduce the amount of capital held against risk exposures. When the crisis hit, in a number of cases those banks found they had to stand behind their offshoots for contractual or reputational reasons, so the separation was illusory and led to greater leverage in the system. Another issue was that the originate and distribute model of securitisation was often opaque and led to insufficient genuine risk transfer away from the banking system, in ways that became very problematic when the crisis hit. Shadow banks also neglected the funding side of their balance sheets, so that they came to depend on using their assets as security to obtain funding, often from banks. This is quite different from the traditional model of deposit-funded banking, in which the assets (loans) are not used as security for raising funds. It must be said, however, that in the run-up to the crisis banks too came to depend overly on such secured funding. When the crisis hit, the value of the assets used as collateral fell, funding conditions tightened and in some instances funding was cut off.

These weaknesses meant that the counterbalancing behaviour of shadow banks vanished. Instead, they contracted just as banks did, but much more violently, which exacerbated the magnitude of the crisis. The result was greater volatility in financial markets, and a dramatic increase in the vulnerability of economies to financial shocks. The contraction in credit supply was thus a powerful channel through which the financial sector hit economies, and the result was the largest contraction in real economic activity since the Great Depression. In the better times, securitisation and the shadow banking system appeared to have reduced the sensitivity of the aggregate supply of lending – and thus the sensitivity of the real economy – to transitions in bank funding conditions. But they did not do so at the point it would have been most valuable, during the global crisis. As Stanley Fischer has recently put it: “when non-banks pulled back, other parts of the system suffered. When non-banks failed other parts of the system failed.”

The originate to distribute model created tradeable assets – the securities in securitisation. The success of the model depended on there being liquid secondary markets for these securities. In its broadest sense, market liquidity refers to the ease with which one asset can be traded for another, and thus different markets can be more or less liquid. The level of liquidity in financial markets depends, among other things, on the amount of arbitrage or market-making capacity and on whether specialised dealers (market makers) will step in as buyers or sellers in response to temporary imbalances in supply and demand (Fender and Lewrick 2015). In what appeared to be normal times before the crisis, there was abundant capacity to maintain liquidity in markets, supported by banks and shadow banks such as hedge funds.

But during the crisis, such capacity became much scarcer, or was simply not deployed, and market liquidity dried up. The key point here is that the originate to distribute approach depended on continuous liquidity in financial markets, and when that liquidity dried up in the crisis the effects were severe.

I want to move on now to what has happened since the crisis. Financial market activity has grown rapidly. There are many statistics that could be quoted; to choose one, over the last 15 years global bond markets have grown from around $30 trillion in 2000 to nearly $90 trillion today. That is a lot, not least because in the middle of that 15-year period came the global financial crisis. Therefore, when it comes to the task of maintaining market liquidity, there is a lot more to hold up. The broad investment or asset management sector is also now much larger, at around $75 trillion at end-2013. Thus, in the wake of the financial crisis there has been a substantial increase in the intermediation of credit via financial markets rather than long term on the balance sheets of banks, involving both the supply of new credit to borrowers and the absorption of assets coming out of the banking system as banks reduce their balance sheets.

Over the same period, there has been a fundamental and rapid change in the microstructure of financial markets – the organisation of how they work. Electronic platforms are increasingly used in a number of major financial markets (notably equity and foreign exchange markets). As part of that change, automated trading – a subset of electronic trading that uses algorithms to determine trading decisions – has become common in those markets. And within automated trading there has been growth in high-frequency trading, which relies on speed of execution to get ahead of other market players. While electronic trading has contributed to increasing market efficiency and probably to reducing transaction costs, there are also risks arising from trading strategies that are flawed, or whose construction did not consider all possible outcomes, including the ability to trade large blocks.

To recap, the last two decades have seen major changes in the financial system. These have, in turn, shaped the impact of the global financial crisis and its aftermath. I want now to look at the aftermath of that crisis and pick out several developments that are important for understanding current and future risks to financial stability.

The first development concerns the overall pattern of activity in financial markets. While the size of global bond markets has grown rapidly, the evidence indicates that trading volumes in a number of markets have declined. Bond inventories held by primary dealers have likewise fallen, bid-ask spreads have risen in the corporate bond markets, and it has become more expensive to hedge single-name credit risk using derivatives. A key point here is that the balance sheets of dealers active in these markets have shrunk markedly, with many fewer firms active in market-making.

Markets have grown, but the capacity to maintain liquidity – as judged by the market-making capacity of the major banks and broker-dealers – has declined. As my colleague Chris Salmon recently put it, this reduction in market-making capacity has been associated with increased concentration in many bond markets, as firms have become more discriminating about the markets they make or the clients they serve. But this trend has gone hand-in-hand with a growth in assets under management, with important implications for the provision of liquidity by market makers in times of stress in those markets.

The second post-crisis development is the natural consequence of the severity of the crisis and its impact on real economies. The extraordinary (by historical standards) degree of monetary policy easing by central banks was followed by a fall in volatility in financial markets. Markets came to take comfort from their own mantra of “low-for-long” rates, which in turn incentivised a “search for yield” (to be clear, “low for long” has not been part of the phraseology of central banks).

Studies of the US Treasury market have indicated that the Federal Reserve’s programme of Quantitative Easing (QE) caused a reduction in the liquidity premium for holding those bonds. Part of the effect of QE programmes is to improve market conditions for the targeted asset classes, and also to allow that improvement to trickle down to other asset classes as market conditions change more generally. To be clear, however, QE asset purchase operations were not designed to tackle a liquidity problem in the financial system. Rather, the impact on liquidity was one of the channels through which QE has affected the real economy and thus has had its intended effect in monetary policy terms. While estimates of the impact of QE are inherently uncertain, one of the desired outcomes of central bank asset purchases is to lower yields, thus affecting longer-term interest rates and creating a positive economic effect. In doing so, QE can improve the functioning of financial markets by reducing liquidity premia.

The third post-crisis development is the impact of the growth of automated trading in financial markets, and the challenges this poses for maintaining continuous market liquidity. Over the last year volatility in many financial markets has picked up from a low base and we have seen some acute but short-lived incidents of extreme volatility and impaired liquidity in secondary markets. On 15 October last year there was unprecedented volatility in the US Treasury market, and on 15 January this year there was substantial volatility in the Swiss franc exchange rate following the unexpected decision by the Swiss National Bank to remove its euro/Swiss franc floor. Now, central banks are known for their powers of understatement, so what do I mean by words like “unprecedented” and “substantial”? On 15 October, 10-year US Treasury yields moved intra-day by around 8 standard deviations of preceding daily changes. On 15 January, the Swiss franc moved by more than 30 standard deviations. For rough scale, an 8 standard deviation move should happen once every three billion years or so for normally distributed data.
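To give a rough sense of where figures of that kind come from, the sketch below computes the tail probability of a large daily move under a normal distribution and converts it into an expected waiting time. The normality assumption, the use of a one-sided tail and the 250 trading days a year are simplifications made purely for this illustration; they are not part of the speech.

```python
# A minimal sketch of the arithmetic behind "once every few billion years".
# Assumptions (for illustration only): daily changes are i.i.d. standard normal,
# a one-sided tail is counted, and there are roughly 250 trading days a year.
import math

def years_between_moves(sigmas: float, trading_days_per_year: int = 250) -> float:
    """Expected wait, in years, for a one-sided move of `sigmas` standard deviations
    if daily changes were normally distributed."""
    tail_prob = 0.5 * math.erfc(sigmas / math.sqrt(2.0))  # P(Z > sigmas)
    return (1.0 / tail_prob) / trading_days_per_year

for s in (6, 7, 8):
    print(f"{s} sigma: roughly {years_between_moves(s):.1e} years between moves")
# Around seven standard deviations the implied wait already runs to billions of
# years, and at eight to trillions; either way, such moves sit far outside what
# a normal model would lead you to expect.
```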

You may at this point recall the saying popularised by Mark Twain, about “lies, damned lies and statistics”. I think I can be reasonably confident in saying that the fact of these events happening does not mean that we should expect low volatility in financial markets for at least the next three billion years.

I am not going to spend time discussing the causes of these events; suffice to say that there was news of an unexpected sort, and the size of the resulting moves points to greater sensitivity in the response of markets. The ability of markets to trade without triggering major price moves was limited. That said, by the end of both days volatility had reduced, prices had retraced a portion of their peak intra-day moves and liquidity had returned. This quick stabilisation helped to limit contagion to other markets, and thus wider effects on the stability of the financial system. Should we therefore be concerned? My answer is that we should certainly be keenly interested. I agree with the conclusion of the Federal Reserve Bank of New York that understanding the manner in which the evolving market structure is affecting market liquidity, efficiency and pricing is highly important. This conclusion has been reinforced in the recent publication of the Senior Supervisors Group (SSG), in which the PRA participates. The SSG has concluded that “key supervisory concerns centre on whether the risks associated with algorithmic trading have outpaced control improvements. The extent to which algorithmic trading activity, including HFT, is adequately captured in banks’ risk management frameworks, and whether standard risk management tools are effective for monitoring the risks associated with this activity, are areas of inquiry that all supervisors need to explore”.

As supervisors of almost all of the world’s major trading banks – through their operations in London – we can provide some helpful assessment of these events. We have observed that the balance between aggregate buy and sell orders submitted to banks’ electronic trading systems can shift instantaneously, and sometimes violently, upon this type of occurrence. The impact is often exacerbated by a simultaneous reduction in order book depth on organised multilateral electronic trading venues. The electronic trading contribution was more evident on 15 January (a foreign exchange market event) than on 15 October (a bond market event), reflecting the different patterns of trading in these markets.

On 15 January, the ability of banks’ e-trading systems to hedge positions consistently through automatic risk management broke down as the necessary reference prices became discontinuous and unreliable. The algorithms used in automated trading have rules embedded in their code such that quotes are immediately pulled if there is a severe market liquidity event. Moreover, the algorithms often have automatic rules that activate circuit breakers or so-called “kill switches” should the aggregate notional risk on a firm’s book exceed programmed limits. On 15 January, the algorithms acted quickly to pull the so-called “streaming prices” when liquidity in the reference market for these prices dried up. Where this did not happen simultaneously, it resulted in large open positions being accumulated by the banks, quite literally within seconds, as an overwhelming balance of client sell orders was automatically executed. Once pre-determined risk accumulation limits had been breached, the algorithms instantaneously shut down. Whilst each algorithm, operating independently, may well have been quite prudently calibrated to protect the bank from building an exposure that exceeded its risk appetite, collectively the impact on market liquidity was akin, albeit temporarily, to a cascading failure across a power grid.
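To make the mechanism concrete, here is a deliberately simplified, hypothetical sketch of the two controls described above: pulling streaming quotes when the reference market becomes unreliable, and a kill switch triggered by accumulated notional risk. The class names, thresholds and structure are illustrative assumptions, not any bank’s actual system.

```python
# A hypothetical, minimal e-trading control: pull quotes on a discontinuous
# reference market, and shut down once aggregate notional risk exceeds a limit.
from dataclasses import dataclass

@dataclass
class RiskLimits:
    max_reference_gap: float   # widest tolerable bid-ask gap in the reference feed
    max_notional_risk: float   # aggregate open notional before shutting down

class QuotingAlgo:
    def __init__(self, limits: RiskLimits):
        self.limits = limits
        self.open_notional = 0.0
        self.quoting = True

    def on_reference_price(self, bid: float, ask: float) -> None:
        # Pull streaming prices if the reference market looks discontinuous.
        if self.quoting and (ask - bid) > self.limits.max_reference_gap:
            self.quoting = False  # stop streaming quotes to clients

    def on_client_fill(self, signed_notional: float) -> None:
        # Client orders executed against our quotes accumulate open risk.
        self.open_notional += signed_notional
        if abs(self.open_notional) > self.limits.max_notional_risk:
            self.quoting = False  # kill switch: shut down automated quoting

# Each algorithm, run independently, is prudent for its own book; if many shut
# down within seconds of one another, market-wide liquidity can vanish at once.
```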

As a consequence, the foreign exchange market reverted to human voice orders as the substitute for automated trading. There were therefore outcomes that appear not to have been expected. So, at the risk of quoting Shakespeare inappropriately, all was well that ended (reasonably) well, but the risk that this would not be the outcome is too great to ignore.

In summary, there is good evidence that financial market conditions have evolved in ways that reduce the likelihood of continuous market liquidity in all states. One element of this is the response of regulators to the financial crisis (to which I will return later), while the other is a product of the rapid development of technology and trading strategies. The effects have probably been offset to some degree by beneficial influences from central bank monetary policy actions which have increased market liquidity. Measures of risk that reflect the overall demand for and supply of financial assets, including liquidity risk premia, remain low by historical standards, notwithstanding recent events. In part, this likely reflects the continued intended effects of monetary policy setting and the communication of policy looking forward. This has, as intended, provided an incentive for risk-taking by investors, and thus the market environment has been conducive to the so-called “search for yield”.

But, as described, underlying conditions in financial markets suggest that the current situation could be fragile. Shocks that might prompt large-scale asset disposals are of particular concern. The global asset management industry is large both in its own right and relative to the size of the commercial banking system.

A key issue is the degree to which asset managers (or shadow banks) typically offer short-term redemptions against potentially illiquid assets. The capacity to realise assets without unwanted disturbance to financial markets is therefore critical and is shaping the work of authorities. The risk is inherently global in nature, suggesting that internationally coordinated policy action is the preferred outcome where necessary. In the rest of my time, I will describe the work that is being done on policy responses.

First, I want to challenge the argument that the issue derives from the re-regulation of the capital and liquidity positions of banks that have in the past acted as market-makers, and thus marginal investors. This argument has a number of strands: capital and funding costs for dealer inventories in banks and broker-dealers have increased; the cost of hedging with single-name credit default swaps has risen, causing availability to drop; proprietary trading restrictions (e.g. the Volcker Rule in the US) limit market making (it is too hard to distinguish prop trading from market making); and increased trade transparency requirements restrict market liquidity.

If we look at the US as the prime example, the evidence indicates that the big run-up in inventories of fixed income securities held by the primary dealers occurred from around 2003-04 onwards, reached a peak in 2008, and has then settled back to around the 2002 level over the last two years, or so.

Source: Federal Reserve Bank of New York, as reproduced in the Bank of England Financial Stability Report – December 2014

Looked at in this light, the increase in inventory capacity in the dealer community was ephemeral, reflecting the underpricing of risk, a weak capital regime and the subsidy provided to the major banks by implicit government guarantees. Dealers de-risked their balance sheets rapidly as the crisis hit, and this reminds us that their capacity and willingness to stand in the way of major market moves (akin to catching a falling knife) was always constrained. And all of this happened before any new regulations were put in place.

Last on this point, it is worth recalling the background to the large increase in inventories from around 2002/04. Here, regulation does appear to have played a role, and not a good one. The first amendment to the Basel I capital standard came in the mid-1990s in the form of the so-called Market Risk Amendment. It enabled a substantial reduction in the capital held against trading book assets such as inventories, to a level that could be less than 1% of those assets. To illustrate this point, here is a quote from the FSA’s report into the failure of RBS.

“The capital regime was more deficient, moreover, in respect of the trading books of the banks ….. the acquisition of ABN AMRO meant that RBS’s trading book assets almost doubled between end 2006 and end 2007. The low risk weights assigned to trading assets suggested that only £2.3 billion of core tier 1 capital was held to cover potential trading losses which might result from assets carried at around £470 billion on the firm’s balance sheet.

In fact, in 2008 losses of £12.2 billion arose in the credit trading area alone (a subset of total trading book assets). A regime which inadequately evaluated trading book risks was, therefore, fundamental to RBS’s failure.”
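As a rough check on the scale of those figures, the snippet below works through the arithmetic implied by the quote; the only inputs are the £2.3 billion of core tier 1 capital, the roughly £470 billion of trading assets and the £12.2 billion of 2008 losses cited above.

```python
# Back-of-the-envelope check of the figures quoted from the FSA report on RBS.
core_tier_1 = 2.3      # £bn of core tier 1 capital held against the trading book
trading_assets = 470.0 # £bn of trading book assets
losses_2008 = 12.2     # £bn of credit trading losses in 2008

print(f"capital as a share of trading assets: {core_tier_1 / trading_assets:.2%}")   # ~0.49%
print(f"2008 losses as a multiple of that capital: {losses_2008 / core_tier_1:.1f}x")  # ~5.3x
```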

I do not doubt that the reversal of this capital treatment of trading books has had an impact on dealer inventory levels by increasing the capital intensity. But I don’t accept that the fairly ephemeral position that emerged shortly before the crisis was fit for purpose or sustainable.

What are we therefore doing about the fragility of market liquidity and the risks to both financial stability and the state of the real economy that arise from it? First, we are working hard to understand better these risks and how they could manifest themselves. As the Bank of England’s Financial Policy Committee stated at the end of March, our concern is that investment allocations and the pricing of some securities “may presume that asset sales can be performed in an environment of continuous market liquidity.” (FPC (2015))

We are: gathering better data and thus building a greater understanding of the channels through which market liquidity can affect financial stability and economic activity; establishing a better understanding of how asset managers form their strategies for managing liquidity in their funds in normal and stressed conditions (taking into account any increase that might have occurred in the correlations between various market participants’ trading activities, such as the use of passive investment strategies); and deepening our knowledge of the contributors to greater fragility of market liquidity. The FPC has asked for a full report on these issues when it meets in September and an interim report in June.

Globally, the Financial Stability Board has also set priorities for its work, with which we are fully engaged. The intention is to understand and address vulnerabilities in capital market and asset management activities, focussing both on near-term risk channels and the options that currently exist to address them, and on the longer-term development of these markets and whether additional policy tools should be applied to asset managers according to the activities they undertake, with the aim of mitigating systemic risks.

The PRA, as the UK’s prudential supervisor of major trading firms, will continue to develop its capacity to assess algorithmic or automated trading, including the governance and controls around the introduction and maintenance of trading algorithms, and the potential system-wide impact of crowded positions on market liquidity. We will assess the adequacy of existing risk measurement and management practices in capturing exposures from the large volume of intraday trading instigated by these algorithms. We will continue to develop our assessment of whether trading controls deployed around algorithmic trading are fit for purpose, and in doing so we will no doubt capture insights on the role of market making on electronic platforms. This is all part of our task of supervising firms’ trading books. It should be assisted by the introduction in Europe of MiFID II (the revised Markets in Financial Instruments Directive), which will impose rules on algorithmic and high-frequency trading, including the introduction of circuit breakers, minimum tick sizes and maximum order-to-trade ratios, thereby seeking to improve the stability of markets.

It might be tempting to conclude that this is all work to understand the problem rather than to fix it. Not so, and I want to end by summarising six areas where action is already under way to reduce impediments to the development of diverse and sustainable market-based finance.

First, maintaining the stability of the financial system means that we have to keep a close watch on how risks that can appear in financial markets and the non-bank financial system may wash back into and affect the critical functions performed by banks; in other words destabilise the core of the system. In order to enhance our protection against this risk, in this year’s Bank of England concurrent stress test, we are taking a substantial step to enhance the coverage of market risks. Our new approach to stress testing trading activities will capture how fast banks could unwind or hedge their trading positions in the stress scenario. This means positions that are less liquid under stress conditions will receive larger shocks. And, we have developed a new approach to stressing counterparty credit risk, which focusses on capturing losses from exposures that would become large under the stress scenario and for counterparties that would be most vulnerable in the stress scenario.
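As a purely illustrative sketch of that idea, the snippet below applies larger shocks to positions that would take longer to unwind under stress; the square-root-of-time scaling rule and all of the figures are assumptions made for the example, not the Bank of England’s stress-test methodology itself.

```python
# Hypothetical illustration: less liquid positions receive larger shocks because
# they take longer to unwind or hedge under stress. All numbers are invented.
positions = [
    # (name, market value £m, assumed days to unwind under stress, base shock)
    ("liquid govt bonds",     500, 1,  0.02),
    ("investment grade corp", 300, 5,  0.05),
    ("illiquid structured",   100, 20, 0.10),
]

for name, value, unwind_days, base_shock in positions:
    # Illustrative rule: the shock grows with the square root of the holding
    # period needed to exit the position under stress.
    stressed_shock = base_shock * unwind_days ** 0.5
    loss = value * stressed_shock
    print(f"{name:22s} shock {stressed_shock:5.1%}  stressed loss £{loss:.0f}m")
```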

Second, the Bank of England, working with the FCA and HM Treasury has set up the Fair and Effective Markets Review to restore trust and confidence in the fixed income, currency and commodity (FICC) markets in the wake of the serious wave of misconduct seen since the height of the financial crisis. The Review is taking a fundamental look at the root causes of these abuses, the steps that have already been taken by firms and regulators to put things right, and what more is needed to deliver less vulnerable market structures and raise standards of behaviour in future. The Review will publish its recommendations in June 2015. Out of this assessment, and based on consultations to date, will I believe come priorities on market structure “standards” and transparency, effective competition, professional culture within firms and effective, pre-emptive supervision which reduces the drama of ex-post enforcement.

The third area of action concerns initiatives to improve the functioning of markets to support activity in real economies. Resilient market-based financing will help to support sustainable economic growth. The aim behind the European Commission initiative on Capital Markets Union is to strengthen markets in the EU to support growth and stability, and sustainable progress on this front will be welcome. Likewise, sound securitisation is a goal of the wider financial reform programme. The Bank of England and the ECB have published a consultation paper on identifying simple, transparent and comparable securitisation techniques, the use of which should be encouraged. This work is now being taken forward in international policymaking bodies.

The fourth area of activity involves so-called securities financing transactions (SFTs), including securities lending and repurchase (repo) agreements. These can have the beneficial effects of supporting price discovery in financial markets and secondary market liquidity, and are important as part of market-making activities by financial firms, as well as their investment and risk management activities. But, as we witnessed in the crisis, they can also be a source of excessive leverage and mismatches in liquidity positions. As a consequence, some of these markets shrank rapidly as the crisis took hold. The Financial Stability Board has taken steps to introduce minimum haircuts on SFTs that are not centrally cleared, with the aim of preventing excessive leverage becoming available to shadow banks in a boom, thereby reducing the procyclicality of that leverage. The haircuts set an upper limit on the amount that banks and broker-dealers can lend against securities of different credit quality.
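The arithmetic of a haircut is simple; the sketch below, with invented figures, shows how a minimum haircut caps the amount that can be lent, and hence the leverage that can be built, against a given pool of collateral.

```python
# A minimal sketch of how a haircut floor caps secured lending: the lender
# advances at most the collateral's market value less the haircut.
def max_loan(collateral_value: float, haircut: float) -> float:
    """Largest loan extendable against collateral under a given haircut."""
    return collateral_value * (1.0 - haircut)

collateral = 100.0  # market value of securities pledged, £m (illustrative)
for haircut in (0.005, 0.03, 0.10):  # e.g. higher floors for lower-quality collateral
    print(f"haircut {haircut:5.1%}: lend up to £{max_loan(collateral, haircut):.1f}m")
# A higher minimum haircut means less borrowing, and so less leverage, can be
# built on the same collateral in a boom.
```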

The fifth area concerns the risk of asset managers offering short-term redemptions to investors against potentially illiquid securities. The proportion of assets held in such structures has increased over the past decade. Given more fragile underlying market liquidity, for the reasons I have described, stressed disposals of assets might be harder to accommodate in an orderly fashion. The international securities regulatory body, IOSCO, issued recommendations in 2012 that provide a basis for common standards for Money Market Funds (MMFs) across jurisdictions, in particular seeking to ensure that MMFs are not susceptible to the risk of runs (in the way that banks can be). More broadly, work continues on putting into practice appropriate policies and standards to prevent the risk of disorderly sales of assets in the face of investor withdrawals. Potential responses (and at this stage we are looking at options in an open way) are to require funds to hold larger liquid asset buffers to facilitate orderly redemption payments to investors, to apply more stringent leverage limits where appropriate, and to require that the redemption terms offered to investors take sufficient account of the risk that secondary market liquidity in the assets they hold could become impaired. These are possibilities, but at this stage very much not policies, for the reason that a lot more work is needed to assess them properly.

Last, central banks can backstop market liquidity by acting as market makers of last resort. The Bank of England has described in its so-called Red Book how it could act in such a way in exceptional circumstances. Here too, there is a lot more to be done to consider the circumstances in which this tool could be used.

Conclusion

The rapid trend towards greater use of market-based financing is one that should be welcomed. But it is important that the accompanying risks to financial stability are well understood and managed. Credit creation since the financial crisis has been heavily reliant on market-based finance in the UK and internationally. We have to be alert to, and ready to handle, the risks and consequences of any reversal in market conditions. Recent incidents of market volatility act as a reminder that market liquidity can disappear very quickly, in more normal as well as stressed times. Moreover, the business models of the broker-dealers that act as market makers are changing in response to the financial crisis and they are becoming reluctant to absorb large positions. In my view those changes are inevitable, because the pre-crisis state of affairs was ephemeral and unsustainable. But the impact of the change is of course important for both monetary policy and financial stability, because it affects the supply of credit to the economy and the stability of the financial system. My assessment is that, in terms of understanding the risks and framing possible mitigating actions, we will fare better if we start by focussing on the activities that create such market risk, and then as appropriate move on to the entities that house those activities.

The policy response from the authorities is by nature an activity that needs to be carried out through close international co-ordination. The Bank of England is committed to playing its part, consistent with the major presence of financial market activity in the UK, alongside and as a part of the work of the G20 under the auspices of the Financial Stability Board.

Bank of England Inflation Report For March – CPI 0%, Please Explain!

In order to maintain price stability, the Government has set the Bank’s Monetary Policy Committee (MPC) a target for the annual inflation rate of the Consumer Prices Index of 2%. Subject to that, the MPC is also required to support the Government’s economic policy, including its objectives for growth and employment. The Inflation Report is produced quarterly by Bank staff under the guidance of the members of the Monetary Policy Committee. It serves two purposes. First, its preparation provides a comprehensive and forward-looking framework for discussion among MPC members as an aid to decision-making. Second, its publication allows the MPC to share its thinking and explain the reasons for its decisions to those whom they affect.

GDP growth was robust in 2014, moderating in the second half of the year. Despite the weakness in 2015 Q1, the outlook for growth remains solid. Household real incomes have been boosted by the fall in food, energy and imported goods prices. The absorption of remaining slack and a pickup in productivity growth are expected to support wage growth in the period ahead. Along with the low cost of finance, that will help maintain domestic demand growth. Activity in the United States and a number of emerging markets has slowed but momentum in the euro area appears to have strengthened over the quarter as a whole.

CPI inflation was 0.0% in March 2015 as falls in food, energy and other import prices continued to weigh on the annual rate. Inflation is likely to rise notably around the turn of the year as those factors begin to drop out. Inflation is then projected to rise further as wage and unit labour cost growth picks up and the effect of sterling’s appreciation dissipates. The MPC judges that it is currently appropriate to set policy so that it is likely inflation will return to the 2% target within two years. Conditional on Bank Rate following the path currently implied by market yields — such that it rises gradually over the forecast period — that is judged likely to be achieved.

CPI inflation was 0.0% in March, triggering a second successive open letter from the Governor to the Chancellor of the Exchequer. Around three quarters of the weakness in inflation relative to target, or 1.5 percentage points, was due to unusually low contributions from food, energy and other goods prices, which are judged largely to reflect non-domestic factors. The biggest single driver has been the large fall in energy prices. Falls in global agricultural prices and the appreciation of sterling have also led to lower retail prices for food and other goods. Absent further developments, these factors will continue to drag on the annual inflation rate before starting to drop out around the end of 2015.

The remaining one quarter of the weakness in inflation relative to target, or 0.5 percentage points, is judged to reflect domestic factors. Wage growth remained subdued in Q1, despite a further fall in the unemployment rate. Part of that weakness is likely to reflect the effects of slack in the labour market, although the concentration of recent employment growth in lower-skilled jobs, which tend to be less well paid, is also likely to account for part of it.
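For clarity, the decomposition above is just the arithmetic of a 2 percentage point shortfall (a 2% target against a 0% outturn) split three quarters to one quarter, as the short sketch below spells out; the split itself is the MPC's judgement, not something derived here.

```python
# Simple check of the decomposition described in the Report summary above.
target, outturn = 2.0, 0.0
shortfall = target - outturn        # 2.0 percentage points below target
non_domestic = 0.75 * shortfall     # ~1.5pp: food, energy and other goods prices
domestic = 0.25 * shortfall         # ~0.5pp: subdued wage growth and slack
print(f"non-domestic: {non_domestic}pp, domestic: {domestic}pp")
```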

Chart 2 shows the Committee’s best collective judgement for the outlook for CPI inflation. In the very near term, inflation is projected to remain close to zero, as the past falls in food, energy and other goods prices continue to drag on the annual rate. Towards the end of 2015, inflation rises notably, as those effects begin to drop out. As the drag from domestic slack continues to fade, inflation is projected to return to target within two years and to move slightly above the target in the third year of the forecast period.

The path for inflation depends crucially on the outlook for domestic cost pressures. A tightening of the labour market and an increase in productivity should underpin wage growth in the period ahead. There is a risk that the temporary period of low inflation may persist for longer — for example, if it affects wage settlements. Alternatively, wages could pick up faster as labour market competition intensifies, which could pose an upside risk to inflation. Inflation will also remain sensitive to further movements in energy and other commodity prices, and the exchange rate.

Another influence on wage and price-setting decisions is inflation expectations. Nearly all measures of inflation expectations have fallen over the past year, with household measures now below pre-crisis average levels. Surveys suggest that employees and firms expect little recovery in pay growth this year. Other measures of inflation expectations are, however, close to historical averages. The MPC judges that inflation expectations remain broadly consistent with the 2% inflation target.

The MPC also noted, however, that, as set out in the February 2014 Report, the interest rate required to keep the economy operating at normal levels of capacity and inflation at the target was likely to continue to rise as the effects of the financial crisis faded further. Despite this, beyond the three-year forecast horizon the yield curve had flattened further over the past year. There was uncertainty about the reasons for this. Given that uncertainty, there was a risk that longer-term yields would move back up over time, for example, in response to a tightening of US monetary policy.

 

Bank of England Maintains Bank Rate at 0.5%

The Bank of England’s Monetary Policy Committee at its meeting on 8 May voted to maintain Bank Rate at 0.5%. The Committee also voted to maintain the stock of purchased assets financed by the issuance of central bank reserves at £375 billion.  The Committee’s latest inflation and output projections will appear in the Inflation Report to be published at 10.30 a.m. on Wednesday 13 May. At the same time, an open letter from the Governor to the Chancellor of the Exchequer will be published, following the release of data for CPI inflation of 0.0% in March.

The previous change in Bank Rate was a reduction of 0.5 percentage points to 0.5% on 5 March 2009. A programme of asset purchases financed by the issuance of central bank reserves was initiated on 5 March 2009. The previous change in the size of that programme was an increase of £50 billion to a total of £375 billion on 5 July 2012.

The Bank will continue to offer to purchase high-quality private sector assets on behalf of the Treasury, financed by the issue of Treasury bills, in line with the arrangements announced on 29 January 2009 and 29 November 2011.

UK Lending Update

The Bank of England has just released its Trends in Lending data covering the first quarter of 2015. Included in the report is some relevant data on investment or buy-to-let (BTL) loans. BTL mortgages accounted for 15% of the total outstanding value of UK-resident mortgages as at end-2014 Q4. The rate of possession of buy-to-let properties was almost twice as high as for owner-occupied ones.

Overall, the rate of growth in some measures of the stock of lending to UK businesses picked up in the three months to February. Net capital market issuance was positive in this period. Mortgage approvals by all UK-resident mortgage lenders for house purchase rose slightly in the three months to February compared to the previous period. The stock of secured lending to households increased, but the pace of growth has slowed since 2014 H1. The annual growth rate in the stock of consumer credit was little changed in recent months.

Pricing on lending to small and medium-sized enterprises was little changed in the three months to February. Respondents to the Bank of England’s 2015 Q1 Credit Conditions Survey reported that spreads on new lending to large businesses fell significantly. The Bank’s series of quoted interest rates on fixed-rate mortgages decreased in 2015 Q1 compared to the previous quarter. Quoted rates on some personal loans continued to fall.

Contacts of the Bank’s network of Agents noted that credit availability had eased further, including for most small and medium-sized companies. Respondents to the Bank of England’s Credit Conditions Survey expected demand for bank lending to increase significantly from small businesses, increase from medium-sized businesses and be unchanged from large businesses in 2015 Q2. Lenders in the survey reported that the availability of secured credit to households was broadly unchanged and that demand for secured lending fell significantly in the three months to early March 2015.

Secured lending to individuals. The number of mortgage approvals by all UK-resident mortgage lenders for house purchase increased slightly in the three months to February compared to the previous period. Approvals for remortgaging also rose slightly. The stock of secured lending to individuals increased, but the pace of growth has slowed since 2014 H1. The monthly net mortgage flow was little changed in recent months.

Overall, gross secured lending was higher in 2014 than in recent years. Within this, the share of gross lending for buy-to-let purposes increased. BTL lending represented 13% of total gross mortgage lending in 2014, with gross advances having recovered from its post-crisis trough though still below its 2007 peak. BTL mortgages accounted for 15% of the total outstanding value of UK-resident mortgages as at end-2014 Q4. A buy-to-let mortgage is a mortgage secured against a residential property that will not be occupied by the owner of that property or a relative, but will instead be occupied on the basis of a rental agreement. In 1996 the Association of Residential Letting Agents, the trade body of estate agents dealing with rental properties, along with four lenders set up its first BTL initiative to encourage private individuals to invest in rental property. This market grew steadily and the share of BTL lending in total gross mortgage lending increased until mid-2008, according to data from the Council of Mortgage Lenders (CML).

After the onset of the financial crisis, gross buy-to-let lending fell more sharply than total mortgage lending. Reflecting discussions with the major UK lenders, the July 2011 Trends in Lending publication noted one reason for this decline in 2008–09 was that the availability of this lending was said to have tightened as some specialist lenders exited this market. Another reason was that wholesale funding markets — often used to fund BTL lending — became impaired.

Gross lending for BTL purposes has grown since 2010, reflecting both supply and demand factors, and was £27.4 billion in 2014. Over the past five years the share of total BTL lending in overall mortgage lending has picked up to 15% in 2014 Q4, higher than in the pre-crisis period, according to data from the CML. Data based on the Bank of England and Financial Conduct Authority’s Mortgage Lenders and Administrators Return (MLAR), derived from a different reporting population and definitions of residency, also show that gross BTL lending grew faster than overall gross mortgage lending in recent years. Contacts of the Bank’s network of Agents noted that the rental market had continued to grow strongly in recent months, supporting continued steady growth in buy-to-let activity.

Gross buy-to-let advances for remortgaging have also increased in recent years. Their share of the total grew from 32% in 2002 to 52% in 2014, with the share of gross advances for house purchase at 45%. The share of BTL mortgages in the total number of house purchases has increased from its trough in 2010 to 13% in 2014, though it remains below its 2008 peak, according to data from the CML. A significant proportion of advertised BTL mortgage products in the four years after the financial crisis were at loan to value (LTV) ratios below 75%. The number of advertised BTL mortgage products at LTV ratios of 75% and above has increased since mid-2013, but most are below an 80% LTV ratio.

Data on quoted rates for fixed-rate BTL mortgages from Moneyfacts Group indicate that they have fallen since the onset of the financial crisis. This follows the same broad pattern as the aggregate measures of quoted rates on fixed-rate mortgages published by the Bank of England. Spreads over reference rates initially widened on fixed-rate BTL products, as mortgage rates fell by less than swap rates. Since 2013, spreads on these products narrowed as relevant reference rates increased. In recent months, spreads ticked up as fixed BTL rates fell by less than these swap rates. Floating BTL mortgage rates have also decreased since the onset of the financial crisis. The decrease was similar to that for rates on fixed BTL products since 2013. With Bank Rate unchanged, spreads over Bank Rate for floating-rate BTL mortgages have narrowed in recent years. Looking ahead, lenders in the Bank of England’s Credit Conditions Survey expected a reduction in spreads on BTL lending in 2015 Q2. Indicative BTL rates by LTV ratio ranges have also decreased over the years. Rates for LTV ratios below 75% have fallen sharply over the past twelve months.
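As a side note, the spread measures referred to above are simple differences between quoted rates and a reference rate: a swap rate of matching maturity for fixed-rate products, and Bank Rate for floating-rate ones. The toy figures below are invented purely to illustrate the calculation.

```python
# Illustrative spread calculations; rates are made up, not actual market data.
def spread(quoted_rate: float, reference_rate: float) -> float:
    return quoted_rate - reference_rate

fixed_btl_rate, swap_rate = 0.035, 0.015      # 3.5% quoted vs 1.5% swap rate
floating_btl_rate, bank_rate = 0.030, 0.005   # 3.0% quoted vs 0.5% Bank Rate
print(f"fixed BTL spread over swaps:        {spread(fixed_btl_rate, swap_rate):.2%}")
print(f"floating BTL spread over Bank Rate: {spread(floating_btl_rate, bank_rate):.2%}")
# If quoted rates fall by less than the reference rate falls, the spread widens;
# with Bank Rate unchanged, falling floating quoted rates narrow the spread.
```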

BTL mortgages as a proportion of the total number of outstanding mortgages more than three months in arrears rose sharply at the start of 2009, around the same time as the overall mortgage arrears rate. The BTL arrears rate then fell back and has been lower than that for all mortgages in recent years. In some contrast, the possessions rate on BTL mortgages peaked much later than that for owner-occupied mortgages and, while it has fallen recently, it still remains higher than that for owner-occupiers. But the CML noted that some of the differences in the path of arrears and possession rates seen when comparing the BTL sector with the wider market reflect the use of receivers of rent in the BTL sector. Other things being equal, the use of receivership may have mitigated some increase in reported BTL arrears and possession rates and delayed the increase in reported BTL possessions.

The rate of possession of buy-to-let properties was almost twice as high as for owner-occupied ones, even though the rate of underlying arrears on buy-to-let lending remained lower in 2014, according to data from the CML. They commented that this was because lenders offer extended forbearance to owner-occupiers to help them get through periods of financial difficulty without losing their home.

Bank of England Maintains Bank Rate at 0.5%

The Bank of England’s Monetary Policy Committee at its meeting today voted to maintain Bank Rate at 0.5%. The Committee also voted to maintain the stock of purchased assets financed by the issuance of central bank reserves at £375 billion.  The minutes of the meeting will be published at 9.30 a.m. on Wednesday 22 April.

The previous change in Bank Rate was a reduction of 0.5 percentage points to 0.5% on 5 March 2009. A programme of asset purchases financed by the issuance of central bank reserves was initiated on 5 March 2009. The previous change in the size of that programme was an increase of £50 billion to a total of £375 billion on 5 July 2012.

Information on the Asset Purchase Facility can be found on the Bank of England website at http://www.bankofengland.co.uk/monetarypolicy/Pages/qe/default.aspx.

The Bank will continue to offer to purchase high-quality private sector assets on behalf of the Treasury, financed by the issue of Treasury bills, in line with the arrangements announced on 29 January 2009 and 29 November 2011.

The Economics Of Deflation

In a speech at Imperial College Business School, Deputy Governor Ben Broadbent explores the economics of deflation. He explains what the theoretical risks are, how the MPC could respond were risks to materialise and why, while the MPC must be ‘watchful’, the chances of sustained deflation are ‘low’.

This week the ONS announced that CPI had fallen to 0%, ‘the first time the UK has failed to record positive annual inflation for over 50 years.’ Some have claimed that ‘deflation is a problem “because it encourages people to postpone consumption”. … and this can lead to a vicious circle in which prices are then depressed further.’ This is ‘at best a partial description of the problem, at worst a bit misleading. It leaves out two important considerations.’

First, any incentive to delay purchases is offset by an increased propensity to consume when, as now, nominal incomes are rising. Secondly, the incentive to save applies whenever nominal interest rates are higher than inflation and doesn’t ‘suddenly appear when expected inflation dips below zero’. Likewise, debt deflation is driven by nominal incomes falling much faster than nominal interest rates and would not occur when incomes are growing. What we have seen in the past few months is, instead, ‘what some have termed “good” deflation’; an improvement in the terms of trade which has led to an increase in nominal income growth.

In theory, even where periods of mild deflation, like that seen in Britain in the second half of the 19th century and up to the First World War, do lead to falls in nominal wages, ‘there doesn’t look to be anything particularly bad about negative inflation’ – so long as the appropriate level of the real interest rate is sufficiently high not to have to contend with the zero lower bound. When that is the case, interest rates can compensate for widespread price falls.
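A stylised way to see the zero-lower-bound point is through the Fisher relation, under which the nominal policy rate is roughly the desired real rate plus expected inflation; the numbers below are invented for illustration only.

```python
# Stylised Fisher-relation illustration of when mild deflation becomes a problem.
def required_nominal_rate(required_real_rate: float, expected_inflation: float) -> float:
    return required_real_rate + expected_inflation

# Mild deflation with a comfortably positive equilibrium real rate: policy can
# still set a positive nominal rate, so deflation need not be a problem.
print(required_nominal_rate(0.03, -0.01))   # 0.02, i.e. 2%, above the zero bound

# The same deflation with a low equilibrium real rate: the required nominal rate
# is negative, and conventional policy runs into the zero lower bound.
print(required_nominal_rate(0.01, -0.02))   # -0.01, i.e. -1%, constrained
```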

‘Clearly, we are not in the same position today’ and given the cuts in interest rates made necessary by the financial crisis ‘there is now less room to deal with any entrenched and protracted deflation, in prices and wages alike’ should that occur. ‘The evidence suggests that unconventional policy is effective: even if they don’t circumvent it entirely, asset purchases help soften the constraint of the zero lower bound. Second, with the financial system gradually improving my guess is that the neutral real rate of interest is more likely to rise than fall over the next couple of years.’

Nonetheless, the constraints of the effective lower bound mean the MPC must be ‘watchful’ for signs that ‘falling prices will beget yet weaker or even negative wage growth – that today’s “good” deflation will metastasise into something a good deal worse’.

‘What are the chances of such a thing?’ There is evidence that price falls have ‘already fed through to spending and household confidence’ and that the propensity to consume has dominated the propensity to save. The historical experience also suggests that low inflation is very unlikely to persist. Looking at 3,300 data points from across the world, in only 70 of them did prices actually fall, and ‘in the majority of those inflation was positive the following year. Of the 24 instances in which prices have fallen for more than one year, all but four – two in Japan, two in Bahrain – have occurred in developing countries with pegged exchange rates.’

The presence of a credible monetary policy framework reduces even further the risk of protracted deflation. ‘If people expect inflation generally to be on target, over the medium term, the central bank has to do less to actively ensure it’ and policy does not need to respond to ‘one-off hits to the price level.’

Ben concludes that over the next couple of years ‘headline inflation will get a sizeable kick upwards, once the falls in food and energy prices drop out of the annual comparison. In the meantime, the UK labour market continues to tighten – low inflation won’t be the only influence on pay growth this year – and the boost imparted by lower commodity prices to real incomes is already apparent in higher consumer spending, here and in other countries too.’

The New Complexity

“On microscopes and telescopes” is a speech given by Andrew G Haldane, Chief Economist at the Bank of England. It contains some important insights into the question of how to model the current state of the financial system. For example, there may be greater scope to co-ordinate macro-prudential tools. One way of doing so is to develop macro-prudential instruments which operate on an asset-class basis, rather than on a national basis. Here are some of his remarks.

At least since the financial crisis, there has been increasing interest in using complexity theory to make sense of the dynamics of economic and financial systems. Particular attention has focussed on the use of network theory to understand the non-linear behaviour of the financial system in situations of stress. The language of complexity theory – tipping points, feedback, discontinuities, fat tails – has entered the financial and regulatory lexicon.

Some progress has also been made in using these models to help design and calibrate post-crisis regulatory policy. As one example, epidemiological models have been used to understand and calibrate regulatory capital standards for the largest, most interconnected banks – the so-called “super-spreaders”. They have also been used to understand the impact of central clearing of derivatives contracts, instabilities in payments systems and policies which set minimum collateral haircuts on securities financing transactions.
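To give a flavour of the analogy (though not of the actual models used for calibration), here is a toy contagion sketch in which the failure of a highly connected “super-spreader” bank cascades through a small interbank network, while the failure of a peripheral one does not; the network, exposures and capital figures are entirely invented.

```python
# Toy "super-spreader" illustration on a four-bank network. All numbers invented.
exposures = {  # lender -> {borrower: exposure, ...}
    "A":   {"HUB": 8.0, "B": 2.0},
    "B":   {"HUB": 7.0, "C": 1.0},
    "C":   {"HUB": 6.0},
    "HUB": {"A": 3.0, "B": 3.0, "C": 3.0},
}
capital = {"A": 5.0, "B": 5.0, "C": 5.0, "HUB": 12.0}

def cascade(initial_failure: str) -> set:
    """Propagate losses: a lender fails if its exposures to failed banks exceed its capital."""
    failed = {initial_failure}
    changed = True
    while changed:
        changed = False
        for bank, book in exposures.items():
            if bank in failed:
                continue
            loss = sum(amount for borrower, amount in book.items() if borrower in failed)
            if loss > capital[bank]:
                failed.add(bank)
                changed = True
    return failed

print("Failure of HUB brings down:", cascade("HUB"))  # the connected node spreads widely
print("Failure of C brings down:  ", cascade("C"))    # the peripheral node does not
```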

Rather less attention so far, however, has been placed on using complexity theory to understand the overall architecture of public policy – how the various pieces of the policy jigsaw fit together as a whole. This is a potentially promising avenue. The financial crisis has led to a fundamental reshaping of the macro-financial policy architecture. In some areas, regulatory foundations have been fortified – for example, in the micro-prudential regulation of individual financial firms. In others, a whole new layer of policy has been added – for example, in macro-prudential regulation to safeguard the financial system as a whole.

This new policy architecture is largely untried, untested and unmodelled. It has spawned a whole raft of new, largely untouched, public policy questions. Why do we need both the micro- and macro-prudential policy layers? How do these regulatory layers interact with each other and with monetary policy? And how do these policies interact at a global level? Answering these questions is a research agenda in its own right. Without attempting to answer them here, I wish to argue that complexity theory might be a useful lens through which to begin exploring them. The architecture of complex systems may be a powerful analytical device for understanding and shaping the new architecture of macro-financial policy.

Modern economic and financial systems are not classic complex, adaptive networks. Rather, they are perhaps better characterised as a complex, adaptive “system of systems”. In other words, global economic and financial systems comprise a nested set of sub-systems, each one themselves a complex web. Understanding these complex sub-systems, and their interaction, is crucial for effective systemic risk monitoring and management.

This “system of systems” perspective is a new way of understanding the multi-layered policy architecture which has emerged since the crisis. Regulating a complex system of systems calls for a multiple set of tools operating at different levels of resolution: on individual entities – the microscopic or micro-prudential layer; on national financial systems and economies – the macroscopic or macro-prudential and monetary policy layer; and on the global financial and economic system – the telescopic or global financial architecture layer.

The architecture of a complex system of systems means that policies with varying degrees of magnification are necessary to understand and moderate fluctuations. It also means that taking account of interactions between these layers is important when gauging risk. For example, the crisis laid bare the costs of ignoring systemic risk when setting micro-prudential policy. It also highlighted the costs of ignoring the role of macro-prudential policy in managing these risks. That is why the post-crisis policy architecture has sought to fill these gaps. New institutional means have also been found to improve the integration of these micro-prudential, macro-prudential, macro-economic and global perspectives. In the UK, the first three are now housed under one roof at the Bank of England.

He concludes:

This time was different: never before has the world suffered a genuinely global financial crisis, with every country on the planet falling off the same cliff-edge at the same time. This fall laid bare the inadequacy of our pre-crisis understanding of the complexities of the financial system and its interaction with the wider economy, locally but in particular globally. It demonstrated why the global macro-financial network is not just a complex adaptive system, but a complex system of systems.

The crisis also revealed gaps and inadequacies in our existing policy frameworks. Many of those gaps have since been filled. Micro-prudential microscopes have had their lens refocused. Macro-prudential macroscopes have been (re)invented. And global telescopes have been strengthened and lengthened. Institutional arrangements have also been adapted, better enabling co-ordination between the micro, macro and global arms of policy. So far, so good.

Clearly, however, this remains unfinished business. The data necessary to understand and model a macro-financial system of systems is still patchy. The models necessary to make behavioural sense of these complexities remain fledgling. And the policy frameworks necessary to defuse these evolving risks are still embryonic. More will need to be done – both research and policy-wise – to prevent next time being the same.

For example,

There may be greater scope to co-ordinate macro-prudential tools. One way of doing so is to develop macro-prudential instruments which operate on an asset-class basis, rather than on a national basis. This would be recognition that asset characteristics, rather than national characteristics, may be the key determinant of portfolio choices and asset price movements, perhaps reflecting the rising role of global asset managers.

There has already been some international progress towards developing asset market specific macro-prudential tools, specifically in the context of securities financing transactions where minimum collateral requirements have been agreed internationally. But there may be scope to widen and deepen the set of financial instruments covered by prudential requirements, to give a richer array of internationally-oriented macro-prudential tools. These would then be better able to lean against global fluctuations in a wider set of asset markets.
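As a stylised illustration of how a minimum collateral (haircut) requirement works in a securities financing transaction, consider the sketch below; the 6% haircut is an invented figure used purely to show the mechanism, not the internationally agreed floor.

```python
# Stylised securities financing transaction subject to a minimum haircut.
# The 6% haircut is a made-up figure used purely to illustrate the mechanism.
collateral_value = 100.0
minimum_haircut = 0.06

max_cash_borrowed = collateral_value * (1 - minimum_haircut)
equity_in_asset = collateral_value - max_cash_borrowed
implied_leverage = collateral_value / equity_in_asset

print(f"Maximum cash borrowed against 100 of collateral: {max_cash_borrowed:.1f}")
print(f"Implied leverage on the borrower's equity: {implied_leverage:.1f}x")
```

Raising the haircut floor lowers the leverage that can be built on a given pool of assets, which is the sense in which such requirements can act as an asset-class-based macro-prudential tool.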

Is Higher Market Volatility The New Normal?

Chris Salmon, Executive Director, Markets, Bank of England, gave a speech, “Financial Market Volatility and Liquidity – a cautionary note”. He suggests that financial market conditions have changed quite noticeably over the past year or so, with significantly higher volatility. For example, on 15 October (in US bond markets) and 15 January (in the Swiss franc) the immediate intraday reaction to news was unprecedented. On 15 October the intraday change in 10-year US bond yields was 37bps, with most of this move happening within just an hour of the data release. The intraday range represented nearly eight standard deviations, exceeding the price moves that happened immediately after the collapse of Lehman Brothers. On 15 January, when the Swiss franc peg was removed, the franc appreciated by 14%; the intraday range was several times that number, and market participants continue to debate the highest traded value of the franc on the day.
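As a rough illustration of what an ‘eight standard deviation’ intraday move means, the sketch below compares a single day’s change in yields with the standard deviation of a history of daily changes. The history is simulated with an assumed 5bp daily standard deviation; it is not the Bank’s data, and the speech may have measured the move differently.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical history of daily changes in the 10-year yield, in basis points.
# Simulated noise with an assumed 5bp daily standard deviation, for illustration only.
daily_changes_bp = rng.normal(loc=0.0, scale=5.0, size=500)

move_bp = 37.0                            # the intraday move cited in the speech
sigma = daily_changes_bp.std(ddof=1)      # historical standard deviation of daily changes
print(f"A {move_bp}bp move is roughly {move_bp / sigma:.1f} standard deviations")
```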

Could such events imply that a number of major asset markets have become more sensitive to news, so that a given shock causes greater volatility? Is this the new normal?

If so, what is the root cause? We know the macroeconomic outlook has changed and perhaps become less certain. Central banks have reacted, and in some cases pushed the innovation envelope further. Unsurprisingly these developments have been associated with more volatile financial markets. Given that policy-makers have previously been concerned that persistently tranquil financial markets could encourage excessive risk-taking, this is not necessarily unwelcome. But recent months have also seen a number of short episodes of sharply-higher volatility which coincided with periods of much impaired market liquidity. There are good reasons to believe that the severity of these events was accentuated by structural change in the markets. There must be a risk that future shocks could have more persistent and more widespread impacts across financial markets than has been the case in the recent past.

Market intelligence suggests that uncertainty surrounding the global outlook has been one factor, itself in no small part a consequence of the unexpected near-halving in the price of oil since last summer. This uncertainty can be seen in the recent increase in the dispersion of economists’ forecasts for inflation in the US, UK and euro area during 2015. Central banks themselves have reacted to the changed global outlook and monetary policy makers have been active in recent months. Indeed, so far this year 24 central banks have cut their policy rates. Moreover, the decisions by the ECB and three other central banks since last summer to set negative policy rates have raised questions about where the lower bound for monetary policy lies.

But he suggests there are two other factors in play, which may indicate a more fundamental change, and lead to continuing higher volatility.

First, market makers have become more reluctant to commit capital to warehousing risk. Some have suggested that this reflects a combination of reduced risk tolerance since the financial crisis, and the impact of regulation designed to improve the resilience of the financial system. This reduction in market making capacity has been associated with increased concentration in many markets, as firms have become more discriminating about the markets they make, or the clients they serve. And this trend has gone hand-in-hand with a growth in assets under management by the buy-side community. The combination has amplified the effect: the intermediary sector’s risk-warehousing capacity has shrunk relative to the assets it is asked to intermediate, so the provision of liquidity by market makers during times of market stress is weaker than in the past.

The second is the evolution of the market micro-structure. Electronic platforms are now increasingly used across the various markets. In some cases regulation has been the cause but in others, such as foreign exchange markets, firms have over a number of years increasingly embraced electronic forms of trading. This includes using ‘request-for-quote’ platforms to automate processes previously carried out by phone. Electronic platforms are effective in pooling liquidity in ‘normal’ times but may have the potential, at least as currently calibrated and given today’s level of competition, to contribute to discontinuous pricing in periods of stress if circuit-breakers result in platforms shutting down. There has been much commentary about the temporary unavailability of a number of electronic trading platforms in the immediate aftermath of the removal of the Swiss franc peg.

Oil Price Falls And Monetary Policy

In a speech at Durham University Business School, MPC member Ian McCafferty considers the factors contributing to the recent fall in the oil price, the impact on inflation and its likely persistence and how, given this analysis, UK monetary policy should respond.

Ian argues that, as with the oil price falls seen in 1985 and 1998, ‘there is merit in examining recent oil price developments, and the implications for the outlook for the oil market, through the prism of hog-cycle theory.’ As in the livestock markets on which hog-cycle theory is based, the short-term elasticity of oil supply is low but the longer-term elasticity is substantially higher. As a result, the main adjustment to price falls comes from changes in investment plans, which in turn affect production capacity and supply in the longer term.
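A minimal cobweb-style sketch (purely illustrative, and not the model set out in the speech) captures the mechanism: supply responds to last period’s price, demand to the current price,

$$
Q^{s}_{t} = a + b\,P_{t-1}, \qquad
Q^{d}_{t} = c - d\,P_{t}, \qquad
Q^{s}_{t} = Q^{d}_{t} \;\Rightarrow\; P_{t} = \frac{c - a - b\,P_{t-1}}{d}.
$$

With a small short-run supply response $b$, prices must move a long way to clear the market after a shock; the larger long-run response arrives only later, through changes in investment, which is why inventories build before supply eventually adjusts and prices recover.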

This analysis shows that ‘the lag in the supply response means that for a while, even after the initial price fall, supply continues to exceed demand, such that inventories continue to build.’ As the market balances and inventory levels fall back ‘the market tightens and prices begin to rise, encouraging supply to recover. But here too, there are noticeable lags – first, it will require a period of higher prices to encourage producers to commit to new investment, and geographical, geological and political issues mean that the lead time to new supply is relatively lengthy.’

Ian suggests that ‘we can expect oil production to ease in the second half of the year’ and that demand for oil will be supported by the net positive boost to global demand, estimated by Bank staff at around 0.8%. ‘Overall, it is reasonable to assume that, by the end of 2015, supply and demand for oil will be coming back into balance, although inventories will remain high for a further period. This should translate into more stable yet still relatively low prices,’ though further out ‘prices might be expected to recover’.

The fall in oil prices, and its predicted persistence, has important implications for both the likely path of inflation and the appropriate response of monetary policy. While the immediate direct effect is clearly disinflationary, detracting ‘a bit more than half a percentage point from headline inflation for the rest of the year’, indirect effects could emerge in both directions. The fall in the oil price could generate inflationary pressure by boosting demand while having little effect on the economy’s potential supply. Conversely, the risk of falling inflation expectations feeding through to lower wage settlements could create further disinflationary pressure.

‘How should monetary policy respond to such a sharp oil price shock? As always in monetary policy, the answer depends on the source of the shock.’ As it is supply rather than demand that has ‘been the dominant factor behind the recent fall in the oil price… it should be treated primarily as a simple cost or price-level shock’. This would mean looking through the temporary impact on inflation, as the MPC has done previously when rising oil prices pushed inflation well above target.

‘But how temporary is temporary? Policymakers need to consider not just the source of the shock but also its persistence.’ In doing so they should, Ian suggests, refer to the ‘optimal policy rule’ which states that ‘looking through an undershoot of inflation, even a prolonged one, is more justified if the real economy is operating at or above full capacity’.

This, combined with the likely path for spare capacity set out in the Inflation Report and Ian’s view that ‘there may be less spare capacity left in the labour market’ than the MPC’s collective judgement, would suggest that it would be right to ‘look through’ the current low level of inflation.

This, however, is complicated by the potential for the persistent, depressing effect on annual inflation to constrain growth in pay by causing inflation expectations to become de-anchored. ‘Judging the scale of this downside risk is difficult. Some measures of inflation expectations have fallen but others suggest that inflation expectations remain well-anchored, and there are no signs at present that anything approaching deflationary psychology is likely to take hold.’ Nonetheless, it is not a risk the MPC can dismiss, ‘at least while inflation remains close to zero’. This, Ian concludes, is why he decided not to vote for an increase in Bank Rate at the January and February policy meetings.

The Post-Crisis Bank Capital Framework

David Rule, Executive Director, Prudential Policy at the Bank of England gave a good summary of the current issues surrounding capital, and commented specifically on issues surrounding internal (advanced) methods.

Six and a half years after the depths of the Great Financial Crisis, we know the shape of the future global bank capital framework. But important questions do remain. Today I want to focus on how regulators should measure risk in order to set capital requirements, with some final remarks on the particular case of securitisation. To start, though, a reminder of the key elements of the post-crisis, internationally-agreed framework (a stylised stack-up of these figures follows the list):

  • Banks have minimum requirements of at least 4.5% of risk-weighted assets (RWAs) in core equity and 6% of RWAs in going-concern Tier 1 capital, with further capital held for the purpose of absorbing losses in insolvency or resolution. Basel III tightened the definition of capital significantly.
  • Systemically-important banks have further loss absorbing capacity so that they can be recapitalised in resolution without taxpayer support, ensuring the continuity of critical functions and minimising damage to financial stability.
  • In November 2014, the Financial Stability Board (FSB) proposed that total loss absorbing capacity (TLAC) for globally systemically-important banks (G-SIBs) should comprise at least 16-20% of RWAs.
  • Core equity buffers sit on top of this TLAC so that the banking system can weather an economic downturn without unduly restricting lending to the real economy; the Basel III capital conservation buffer for all banks is sized at 2.5% of RWAs.
    o Systemically-important banks hold higher buffers; and
    o Buffers can also be increased counter-cyclically when national authorities identify higher systemic risks.
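A stylised stack-up of these layers for a hypothetical G-SIB might look as follows. The minimums, the TLAC range and the conservation buffer are the figures quoted above; the G-SIB and countercyclical buffer settings are assumed values for illustration only.

```python
# Stylised capital stack for a hypothetical G-SIB, as a % of risk-weighted assets.
cet1_minimum = 4.5            # core equity minimum (quoted above)
tier1_minimum = 6.0           # going-concern Tier 1 minimum (quoted above)
tlac = 16.0                   # lower end of the FSB's proposed 16-20% TLAC range
conservation_buffer = 2.5     # Basel III capital conservation buffer (quoted above)
gsib_buffer = 2.0             # assumed systemic buffer add-on (illustrative)
countercyclical_buffer = 1.0  # assumed national setting (illustrative)

buffers = conservation_buffer + gsib_buffer + countercyclical_buffer
print(f"Core equity buffers sitting on top of TLAC: {buffers:.1f}% of RWAs")
print(f"TLAC plus buffers: {tlac + buffers:.1f}% of RWAs")
```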

The new bank capital framework will cause banks to hold significantly more capital than the pre-crisis regime. Major UK bank capital requirements and buffers have increased at least seven-fold once you take account of the higher required quality of capital, regulatory adjustments to asset valuations and higher risk weights as well as the more obvious increases in headline ratio requirements and buffers. Small banks have seen a lesser increase than systemically-important banks, reflecting the important new emphasis since the crisis on setting capital buffers and TLAC in proportion to the impact of a bank’s distress or failure on the wider financial system and economy. In sum, the framework is now impact- as well as risk-adjusted. From a PRA perspective, this is consistent with our secondary objective to facilitate effective competition.

We are currently in transition to the final standards, with full implementation not due until 2019. Although the broad shape is clear, I want to highlight four areas where questions remain:

First, the overall calibration of TLAC. The FSB will finalise its ‘term sheet’ that specifies the TLAC standard for G-SIBs in light of a public consultation and findings from a quantitative impact study and market survey. It will submit a final version to the G-20 by the 2015 Summit. National authorities will also need to consider loss absorbing capacity requirements for banks other than G-SIBs. In the United Kingdom, the Financial Policy Committee (FPC) will this year consider the overall calibration of UK bank capital requirements and gone-concern loss absorbing capacity.

Second, the appropriate level of capital buffers, including how and by how much they increase as banks are more systemically important. The Basel Committee has published a method for bucketing G-SIBs by their global systemic importance, a mapping of buckets to buffer add-ons and a list of G-SIBs by bucket. This will be reviewed in 2017. Separately the US authorities have proposed somewhat higher add-ons. National authorities also have to decide buffer frameworks for domestically systemically-important banks or D-SIBs. In the UK, the FPC plans to consult on a proposal for UK D-SIBs in the second half of this year.

Third, the location of capital buffers, requirements and loss absorbing capacity within international banking groups. A number of such groups are moving towards ‘sibling’ structures in which operating banks are owned by a common holding company. This has advantages for resolution: first, loss absorbing capacity can be issued from a holding company so that statutory resolution tools only have to be applied to this ‘resolution entity’ – the operating subsidiaries that conduct the critical economic functions can be kept as going concerns; and second, the operating banks can be more easily separated in recovery or post-resolution restructuring. It also fits with legislation in countries such as the UK requiring ring-fencing of core retail banking activities and the US requiring a foreign banking organization with a significant US presence to establish an intermediate holding company over its US subsidiaries. A ‘single point of entry’ approach to resolution might involve all external equity to meet buffers, and external equity and debt included in TLAC, being issued from the top-level holding company. An important question then is to what extent and on what terms that equity and debt is downstreamed from the top-level holding company to any intermediate holding companies and the operating subsidiaries. This will also be influenced by the final TLAC standard, which includes requirements on these intragroup arrangements.

Finally, I would like to spend more time on my fourth issue: how to measure a bank’s risk exposures in order to set TLAC and buffers – or, in other words, determining the denominator of the capital ratio. Here regulators have to balance multiple objectives:

  • An approach that is simple and produces consistent outcomes across banks. Basel I, based entirely on standardised regulatory estimates of credit risk, met this test.
  • An approach that is risk sensitive and minimises undesirable incentives that may distort market outcomes. Whether we like it or not, banks will evaluate their activities based on return on regulatory capital. So if those requirements diverge from banks’ own assessments of risk, regulation will change market behaviour. Sometimes that may be intended and desirable. But often it will not be. Basel I, for example, led to distortions in markets such as the growth of commercial paper back-up lines, because under-one-year commitments had a zero capital requirement. Subsequent developments of the Basel capital framework sought to close the gap between regulatory estimates of risk and firms’ estimates of risk by allowing use of internal models for market, operational and credit risk.
  • An approach that is robust in the face of uncertainty about the future. Estimates of risk based on past outcomes may prove unreliable. We should be wary of very low capital requirements on the basis that assets are nearly risk free. And behavioural responses to the capital framework may change relative risks endogenously. For example, before the crisis, banks became dangerously over-exposed to AAA-rated senior tranches of asset backed securities partly because, wrongly, they saw the risks as very low and partly because the capital requirements were vanishingly small.

Ideally regulators would design a framework for measuring risk exposures that maximises each of these objectives. But trade-offs are likely to be necessary and, in my view, the rank ordering of objectives should be robustness followed by risk sensitivity and simplicity. Prioritising robustness points to combining different approaches in case any single one proves to be flawed. So the PRA uses three ways of measuring risk: risk weightings, leverage and stress testing. By weighting all assets equally regardless of risk, the leverage exposure measure provides a cross check on the possibility that risk weights or stress testing require too little capital against risks judged very low but which subsequently materialise.

In the United Kingdom, the FPC’s view is that the leverage ratio should be set at 35% of a bank’s applicable risk-weighted requirements and buffers. This is simple to understand and can be seen as setting a minimum average risk weight of 35%. So, for non-systemic banks with risk-weighted requirements and buffers of 8.5%, the minimum leverage ratio would be 3%. But a G-SIB, with a risk-weighted buffer add-on of, say, two percentage points, would have an additional leverage buffer of 0.7 percentage points. And all firms would be subject to a leverage buffer equal to 35% of any risk-weighted counter-cyclical buffer. Another key advantage of using the same scaling factor and mirroring the different elements of the risk-weighted framework is that it creates consistent incentives for different types of banks and over time. By contrast, for example, setting the same leverage ratio for all firms would amount to setting a lower minimum average risk weight for systemically-important banks than for other banks.
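A short worked example of this scaling, using the figures in the paragraph above (the two-percentage-point G-SIB add-on and one-percentage-point countercyclical buffer are illustrative):

```python
# The FPC's proposed scaling: leverage requirements set at 35% of the
# corresponding risk-weighted requirements and buffers (figures illustrative).
SCALE = 0.35

def leverage_equivalent(risk_weighted_pct: float) -> float:
    """Convert a risk-weighted requirement or buffer (%) into its leverage equivalent (%)."""
    return SCALE * risk_weighted_pct

print(f"{leverage_equivalent(8.5):.1f}%")   # non-systemic bank: ~3% minimum leverage ratio
print(f"{leverage_equivalent(2.0):.2f}pp")  # 2pp G-SIB buffer add-on -> 0.70pp leverage buffer
print(f"{leverage_equivalent(1.0):.2f}pp")  # 1pp countercyclical buffer -> 0.35pp leverage add-on
```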

Stress testing complements risk weighted and leverage approaches by considering the impact of extreme but plausible forward-looking macroeconomic scenarios of current concern to policymakers. Because buffers are intended to absorb losses in an economic downturn, the natural role of stress testing in the capital framework is to assess the adequacy of the buffers based on the Basel risk-weighted and leverage measures. If an individual bank is shown to be an outlier in a stress test, with a particularly large deterioration in its capital position, supervisors may use Pillar II to increase its capital buffers. The PRA is currently consulting on its approach to Pillar II, including a ‘PRA buffer’ that would be used in this way to address individual bank risks. An advantage of concurrent stress testing across major banks is that policymakers can consider the wider systemic impact of the scenario. They can also test whether buffers are sufficient even if regulators prevent banks from modelling management actions that would be harmful to the wider economy: for example, if banks propose to reduce new lending in order to conserve capital. Used in this way, stress testing may inform calibration of the system-wide, countercyclical buffer if macro-prudential policymakers identify elevated systemic risks.

Leverage and stress testing are best seen as complements rather than alternatives to risk-weighted measures of capital, producing a more robust overall framework. Risk weightings will likely remain the binding constraint for most banks most of the time. A central priority of the Basel Committee over the next year or so is to restore confidence in risk weightings by designing a system that balances most effectively the three objectives of robustness, risk sensitivity and simplicity.

Risk sensitivity points to a continuing role for firms’ internal estimates and models. But that depends on finding solutions for problems with them. First, various studies by the Basel Committee have shown material variations in risk weights between banks for reasons other than differences in the riskiness of portfolios. Models appear to be producing excessive variability in capital outputs, undermining confidence in risk-weighted capital ratios and raising questions about gaming. Second, some models may produce low risk weights because the data underpinning them do not include stress events in the tail of the distribution. This is a particular concern in portfolios where the typical level of defaults is low but defaults may correlate in a systemic crisis: for example, exposures to other banks or high quality mortgages. For major global firms, average risk weights fell almost continuously from around 70% in 1993 to below 40% in 2008, since when they have remained around that level. Third, modelled capital requirements can be procyclical. For example, last year’s concurrent stress test of major UK banks by the Bank of England showed that some banks’ mortgage risk weights increased significantly in the test, particularly where banks took a ‘point in time’ approach whereby probability of default was estimated as a function of prevailing economic and financial conditions.
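To illustrate the procyclicality point, the sketch below applies a simplified rendering of the Basel IRB formula for retail mortgage exposures (asset correlation fixed at 0.15, no scaling factor), with a point-in-time PD that is assumed to triple in a stress. The PDs and the LGD are invented for illustration; they are not taken from the Bank’s stress test.

```python
from math import sqrt
from scipy.stats import norm

def irb_mortgage_risk_weight(pd: float, lgd: float, r: float = 0.15) -> float:
    """Simplified Basel IRB risk weight (%) for retail residential mortgages."""
    k = lgd * norm.cdf((norm.ppf(pd) + sqrt(r) * norm.ppf(0.999)) / sqrt(1 - r)) - pd * lgd
    return k * 12.5 * 100

# Invented point-in-time PDs: benign conditions versus a stress in which PDs triple.
print(f"Benign: {irb_mortgage_risk_weight(pd=0.01, lgd=0.2):.0f}% risk weight")
print(f"Stress: {irb_mortgage_risk_weight(pd=0.03, lgd=0.2):.0f}% risk weight")
```

Because a ‘point in time’ PD rises as conditions deteriorate, the modelled risk weight (and hence the capital requirement) increases in exactly the circumstances in which capital is hardest to raise.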

One solution would be to abandon the use of banks’ own estimates and models entirely and use standardised regulatory risk weights. But standardised approaches have their own weaknesses. For example, finding simple and consistent techniques for measuring risk by asset class that work well across countries with different market structures and risk environments is not straightforward. Regulators typically face a trade-off between simplicity and risk sensitivity. An alternative approach is to find solutions for the problems with models. Some possible ideas might include:

  • Requiring banks to provide more transparency about their risk estimates and models. The work of the Enhanced Disclosure Task Force and Basel’s revised Pillar III templates are steps in this direction. Regular hypothetical portfolio exercises by supervisors can identify banks with more aggressive approaches.
  • Being more selective about where it makes sense to allow internal models and where standardised approaches may be more effective. In the case of credit risk, for example, models may be more robust in asset classes with longer and richer histories of default data; and the value-added of models for risk sensitivity is likely to be greater in asset classes where banks have significant private information about differences in risk.
  • Changing the specification of models to take greater account of potential losses if tail risks crystallise. The Basel Committee has already agreed to move from a value-at-risk to an expected shortfall approach to estimating market risk (a small numerical sketch of the difference follows this list). For credit risk, increasing the implied correlation of default in the model might be a simple way to produce higher risk weights in asset classes where banks are estimating low probabilities of default but regulators are concerned about tail risks.
  • Broadening the use of so-called ‘slotting’ approaches in which banks use their own estimates to rank order risks but regulators determine the risk weights for each ‘slot’. Slotting makes use of the better information banks have about relative risk within an asset class. But regulators decide the level of capital requirements. Slotting was one of the options considered when regulators first started thinking about use of internal models in the capital framework in the 1990s.
  • Putting floors on the level of modelled capital requirements. The Basel Committee has recently consulted on the design of a floor based on standardised risk weights to replace the existing transitional capital floor based on the Basel I framework. But it has not taken decisions on calibration: in other words, how often the floors would ‘bite’.
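As a small numerical sketch of the value-at-risk versus expected-shortfall point above (simulated data, purely illustrative): expected shortfall averages the losses beyond the VaR threshold, so it responds to how fat the tail is, which a single VaR quantile does not.

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated daily losses: mostly small, with occasional large losses (a crude fat-tailed mix).
n = 10_000
losses = np.where(rng.random(n) < 0.02,
                  rng.normal(8.0, 3.0, n),    # rare, large losses
                  rng.normal(0.0, 1.0, n))    # ordinary days

var_97_5 = np.quantile(losses, 0.975)           # 97.5% value-at-risk
es_97_5 = losses[losses >= var_97_5].mean()     # 97.5% expected shortfall

print(f"VaR 97.5%: {var_97_5:.2f}    ES 97.5%: {es_97_5:.2f}")
```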

The Basel Committee has said that it will consider the calibration of standardised floors alongside its work on finalising revised standardised approaches to credit risk, market risk and operational risk, and as part of a range of policy and supervisory measures that aim to enhance the reliability and comparability of risk-weighted capital ratios. Restoring confidence in risk weights will form a major part of the Committee’s agenda over the next year or so. Meanwhile, at a national level, supervisors can use Pillar II to address risks not adequately captured under internationally-standardised risk weightings. The PRA uses Pillar II actively to ensure banks have adequate capital to support all the risks in their businesses and has recently set out in a transparent way for consultation the methodologies it proposes using to inform its setting of Pillar II capital requirements.

Finally, I want to speak briefly about securitisation as an example of an area where regulators find it hard to measure risk. One reason is that part of the securitisation market grew up in order to exploit weaknesses in risk weightings, allowing banks to maximise the reduction in capital requirements while minimising the decrease in revenue. A lesson from the past is that the risk of unintended market consequences is high. Risk-weighting approaches for securitisation have relied either on external tranche ratings or on regulatory formulae. Both have problems. Formulae may not include all the key dimensions of risk. Ratings agencies can take a wider range of factors into account, but their track record in the financial crisis was poor and authorities globally are seeking – and in the US case are required by law – to reduce reliance on rating agencies.

As well as the micro-prudential goal to ensure that banks measure securitisation risks appropriately and hold adequate capital against them, we also have a macro-prudential goal that the securitisation market develops in a sustainable way. These goals are aligned because, as we saw in the crisis, a market that develops in an unhealthy way can mean unexpectedly greater risks for banks. What are the characteristics of a sustainable securitisation market? One in which:

  • banks and other issuers can use securitisation to transfer risk and raise funding but not to manage capital requirements artificially;
  • investors are diverse and predominantly ‘real money’ as opposed to the fragile base of leveraged funds and bank treasuries that collapsed in Europe during the crisis;
  • issuers’ incentives are adequately aligned with those of investors; and
  • investors have the information they need to understand the risks they are taking.

If structured soundly in this way, securitisation markets can be an important channel for diversifying funding sources and allocating risk more efficiently. Overall, the development of a carefully structured securitisation market could enable a broader distribution of financial sector risk, allow institutional investors to diversify their portfolios and banks to obtain funding and potentially remove part of the risk from banks’ balance sheets to free up balance sheet capacity for further lending to the economy.

The Basel Committee published a revised securitisation framework in December last year. Jointly with IOSCO, it also published for consultation a set of criteria to help identify simple, transparent and comparable (STC) securitisations. This year, the Committee will consider how to incorporate such criteria into the securitisation capital framework. In my view, incorporating the STC criteria will serve both micro-prudential and macro-prudential objectives. First, it will add a measure of ‘structure’ risk into the capital framework, complementing existing inputs such as the underlying risk weights on the securitised portfolio, maturity and tranche seniority. That should improve risk sensitivity. And more transparency will help regulators as well as investors to measure risk. Second, such criteria will encourage the securitisation market to develop in a healthier and more sustainable way. Finally, and returning to my main theme, I conclude that post-crisis capital regulation for banks globally should be based on different ways of assessing risk, with leverage and stress testing complementing risk-weighted measures within an integrated framework. Such an approach is most likely to achieve the objectives of robustness followed by risk sensitivity and simplicity.