Why the Fed is no longer center of the financial universe

From The Conversation.

Markets have been speculating for months about whether the US Federal Reserve would raise interest rates in September. The day has finally arrived, and interestingly, there’s much less certainty now about which way it will go than there was just a few weeks earlier.

In August, more than three-quarters of economists surveyed by Bloomberg expected a rate hike this month. Now, only about half do. Traders, too, saw a hike as more likely back then, putting the odds at about 50-50. Now the likelihood implied by fed funds futures is about one in four.
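
As an aside, the "one in four" figure is the kind of number that is typically backed out of futures prices with a simple approximation. The sketch below shows one common back-of-envelope approach; the contract price, the prevailing effective rate and the 25-basis-point hike size are illustrative assumptions, not figures from the article.

```python
# Illustrative sketch (not from the article): a common back-of-envelope way traders
# infer hike odds from fed funds futures. All numbers below are assumptions.

def implied_hike_probability(futures_price, current_effective_rate, hike_size=0.25):
    """Estimate the market-implied probability of a single rate hike.

    futures_price: fed funds futures price for the meeting month, quoted as
                   100 minus the expected average funds rate (in percent).
    current_effective_rate: effective funds rate if the Fed does nothing (percent).
    hike_size: assumed size of the hike, in percentage points.
    """
    implied_rate = 100.0 - futures_price
    probability = (implied_rate - current_effective_rate) / hike_size
    return max(0.0, min(1.0, probability))

# Hypothetical inputs: futures implying ~0.20% vs. a ~0.14% effective rate
print(implied_hike_probability(99.80, 0.14))  # roughly 0.24, i.e. about one in four
```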

The Federal Reserve may be on the verge of lifting rates for the first time in more than nine years because unemployment has dropped to pre-crisis levels, the housing market is the healthiest it’s been in 15 years and the economic recovery, while tepid, has continued.

Investors’ and economists’ uncertainty, meanwhile, has been fueled by weak growth in China, Europe and Latin America, giving the Fed pause about whether now’s the right time to start the return to normal.

There is growing alarm that a rate hike will make things even worse for the rest of the world. The Fed risks creating “panic and turmoil” across emerging markets such as China and India and triggering a “global debt crisis.”

The reality, however, will likely be very different. For one, the Fed lacks the power it once did, meaning the actual impact of a rate hike will be more muted than people think. Second, the effect of uncertainty and speculation may be far worse than an actual change in rates, which is why central bankers in emerging markets are pushing their American counterparts to hurry up and raise them already.

The Fed’s waning influence

The Fed, and more accurately the rate-setting Federal Open Market Committee (FOMC), is simply no longer the center of the universe it once was, because the central banks of China, India and the eurozone have all become monetary policy hubs in their own right.

The US central bank may still be preeminent, but the People’s Bank of China, the Reserve Bank of India, and the European Central Bank are all growing more influential all the time. That’s particularly true of China’s central bank, which boasts the world’s largest stash of foreign currency reserves (about US$3.8 trillion) and increasingly hopes to make its influence felt beyond its borders.

Some argue that central banks in general, not just the Fed, are losing their ability to affect financial markets as they intend, especially since the financial crisis depleted their arsenal of tools. Those emergency measures resulted in more than a half-decade of near-zero interest rates and a world awash in US dollars. And that poses another problem.

[Photo caption: All eyes will be on Fed Chair Janet Yellen. Credit: Reuters]

Can the Fed even lift rates anymore?

The Fed’s old toolkit of market levers is losing relevance: the FOMC has not raised rates since 2006, and it has been experimenting, somewhat frantically, with new methods of affecting markets.

The Fed and the market (including companies and customers) are beginning to understand that it’s not a given that the Fed can even practically raise rates any more, at least not without resorting to rarely or never-tried policies. That’s because its primary way to do so, removing dollars from the financial system, has become a lot harder to do.

Normally, one way the Fed affects short-term interest rates is by buying or selling government securities, which respectively increases or decreases the amount of cash in circulation. The more cash in the system, the easier (and cheaper) it becomes to borrow, which reduces interest rates, and vice versa.

And since the crisis, the Fed has added an enormous quantity of cash into the system to keep rates low. Removing enough of that to discourage lending and drive up rates won’t be easy. To get around that problem, it plans essentially to pay banks to hold reserves rather than lend them out, but that’s an unconventional approach that may not work and on some level involves adding more cash into the equation.

Uncertainty and speculation

In addition, the uncertainty and speculation about when the Fed will finally start the inevitable move toward normalization may be worse than the move itself.

As Mirza Adityaswara, senior deputy governor at Indonesia’s central bank, put it:

We think US monetary policymakers have got confused about what to do. The uncertainty has created the turmoil. The situation will recover the sooner the Fed makes a decision and then gives expectation to the market that they [will] increase [rates] one or two times and then stop.

While the timing of its first hike is important – and the sooner the better – the timing of the second is more so. This will signal the Fed’s path to normalization for the market (that is, the end of an era of ultra-low interest rates).

Right now, US companies appear ready for a rate hike because the impact on them will be negligible, and some investors are also betting on it.

That’s no surprise. Companies have borrowed heavily in recent years, locking in record-low rates and causing their balance sheets to bulge. Corporate bond sales are on pace for a third straight record year and currently tally about $1 trillion. Most of that debt carries fixed rates, so even if rates go up, borrowing costs won’t change all that much for some time.

At the end of 2014, non-financial companies held a record $1.73 trillion in cash, double the tally a decade ago, according to Moody’s Investors Service.

Beyond the US, there are reports that Chinese and Indian companies are ready for a rate hike as well.

Foregone conclusion

So for much of the world, a hike in rates is already a foregone conclusion – the risk being only that the Fed doesn’t see it. The important question, then, is how quick or slow the pace of normalization should be. While the Fed may find it difficult to make much of an impact with one move, the pace and totality of the changes in rates will likely make some difference.

But the time to start that process is now. It will end the uncertainty that has embroiled world markets, strengthen the dollar relative to other currencies, add more flexibility to the Fed’s future policy-making and, importantly, mark a return to normality.

Author: Tomas Hult, Byington Endowed Chair and Professor of International Business, Michigan State University

What Drives US Household Debt?

Analysis from the Federal Reserve Bank of St Louis shows that in the US, whilst overall household credit is lower now, this is being driven by reduced credit creation, and not increased credit destruction.  We see a very different profile of debt compared with Australia, where household debt has never been higher. However, our analysis shows that core debt is also being held for longer, so the same effect is in play here, although new debt is also accelerating, driven by housing.

Household debt in the United States has been on a roller coaster since early 2004. As the first figure shows, between the first quarter of 2004 and the fourth quarter of 2008, total household debt increased by about 46 percent—an annual rate of about 8.3 percent. A process of household deleveraging started in 2009 and stabilized at a level 13 percent below the previous peak in the first quarter of 2013. During those four years, the household debt level decreased at a yearly rate of 3 percent. Since then, it has moved only modestly back toward its previous levels.
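
For readers who want to verify the annualized figure, the conversion is simple compounding. The period length below is my reading of the quarters cited (2004Q1 to 2008Q4, roughly 4.75 years), so treat this as a back-of-envelope check rather than the authors' own calculation:

```latex
% Converting the cumulative 46 percent rise in household debt into an annual rate,
% assuming a period of roughly 4.75 years (2004Q1 to 2008Q4):
\[
  (1 + 0.46)^{1/4.75} - 1 \;\approx\; 0.083, \quad \text{i.e. about } 8.3\% \text{ per year.}
\]
```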

This essay provides a simple decomposition of the changes in debt levels to shed light on the sources of those changes. The analysis is similar to the decomposition of labor market flows performed by Haltiwanger (2012) and the decomposition of changes in business credit performed by Herrera, Kolar, and Minetti (2011). We use the term “credit change” to refer to the change in household debt: the difference between household debt (D) in the current period, t, and debt in the previous period, t − 1, divided by debt in the previous period, t − 1:

Credit change(t) = [D(t) − D(t−1)] / D(t−1).

The total household debt is the sum of debt for each household i, D(t) = Σ_i D_i(t), so this can also be written as

Credit change(t) = [Σ_i D_i(t) − Σ_i D_i(t−1)] / D(t−1).

Equivalently, one can add the changes in debt for each household i:

Credit change(t) = Σ_i [D_i(t) − D_i(t−1)] / D(t−1).

The key advantage of using household-level data is that one can separate positive changes (credit creation) from negative changes (credit destruction) and compute the change in debt as

Credit change = Credit creation − Credit destruction,

where

Credit creation(t) = Σ_{i: D_i(t) > D_i(t−1)} [D_i(t) − D_i(t−1)] / D(t−1),

and

Credit destruction(t) = Σ_{i: D_i(t) < D_i(t−1)} [D_i(t−1) − D_i(t)] / D(t−1).

These concepts are interesting because they can be linked to different household financial decisions. Credit creation can be linked to additional credit card debt or a new mortgage and credit destruction can be linked to repaying debt or simply defaulting.

As this decomposition makes clear, a stable level of debt (a net change of 0) could be the result of a large credit creation offset by an equally large credit destruction. Or it could indicate no creation and no destruction at all. To differentiate between these cases, it is useful to consider “credit activity” (also called reallocation), which is defined as

Credit activity = Credit creation + Credit destruction.

This is a useful measure because it captures credit activity ignored by the change in total debt.
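
To make the decomposition concrete, here is a minimal sketch of how creation, destruction, change and activity could be computed from household-level data of the kind the essay describes. The data layout, column names and numbers are assumptions for illustration, not the St. Louis Fed's actual dataset or code.

```python
# Minimal sketch of the credit creation/destruction decomposition described above,
# applied to a hypothetical household-level panel. Inputs and names are assumptions.
import pandas as pd

def credit_flows(debt_prev, debt_curr):
    """Compute credit creation, destruction, change and activity between two periods.

    debt_prev, debt_curr: pandas Series of debt per household, indexed by household id.
    All rates are expressed as a share of total debt in the previous period.
    """
    total_prev = debt_prev.sum()
    delta = debt_curr.sub(debt_prev, fill_value=0.0)      # change in debt per household
    creation = delta[delta > 0].sum() / total_prev         # households adding debt
    destruction = -delta[delta < 0].sum() / total_prev     # households repaying or defaulting
    change = creation - destruction                         # net change in total debt
    activity = creation + destruction                       # gross reallocation of credit
    return {"creation": creation, "destruction": destruction,
            "change": change, "activity": activity}

# Hypothetical example: three households between period t-1 and t
prev = pd.Series({"hh1": 100.0, "hh2": 50.0, "hh3": 80.0})
curr = pd.Series({"hh1": 120.0, "hh2": 30.0, "hh3": 80.0})
print(credit_flows(prev, curr))
# creation = 20/230 ≈ 0.087, destruction = 20/230 ≈ 0.087, change = 0, activity ≈ 0.174
```

The toy example makes the point in the text: total debt is flat (change is zero) even though a fifth of the initial debt stock was reallocated, which only the activity measure picks up.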

The second figure shows credit creation, destruction, change, and overall activity. Recall that credit change is the difference between credit creation and destruction, while credit activity is the sum of credit creation and destruction. The credit change shown in the second figure traces the increase in debt before the 2008 crisis, the deleveraging that followed, and the relative stability of debt over the past 3 years. Analysis of debt creation and destruction shows that the expansion of debt was due to above-average creation of debt before the crisis—not insufficient credit destruction; credit destruction was actually slightly above average. Thus, credit activity was extensive during that period, with large amounts of both destruction and creation.

The deleveraging involved a decrease in creation (or origination) of debt: Creation started at nearly 10 percent in the expansion period but dropped below 5 percent after the financial crisis. Credit destruction was not the main contributor to the deleveraging: Destruction did not grow during the deleveraging period; it was actually slightly lower than during the expansion period. Thus, the deleveraging period of 2009-11 saw a very low level of credit activity, mainly due to the small amount of new credit issuance.

Finally, the stability of debt from 2011 to 2013 masked the increasing credit activity since both destruction and creation increased but offset each other. In sharp contrast, during the past year, the stability of debt has been due to very low levels of creation and destruction. In fact, credit activity is currently as low as it was in the middle of the financial crisis: about 9 percent of total household debt.

Overall, this analysis of household debt suggests that reduced credit creation, and not increased credit destruction, has been the key driver of the recent evolution of U.S. household debt. A topic for future investigation is the fact that U.S. households are currently engaging in record-low levels of financial intermediation, something that is not obvious from simply observing the level of household debt.

Structure and Liquidity in Treasury Markets

Extract from a speech by Governor Powell at the Brookings Institution, Washington. The move to fully electronic trading raises important questions about the benefits of fully automated high-speed trading, which may lead to industry concentration and liquidity fracturing as the arms race continues. So it is a good time for market participants and regulators to collectively consider whether current market structures can be improved for the benefit of all.

Treasury markets have undergone important changes over the years. The footprints of the major dealers, who have long played the role of market makers, are in several respects smaller than they were in the pre-crisis period. Dealers cite a number of reasons for this change, including reductions in their own risk appetite and the effects of post-crisis regulations. At the same time, the Federal Reserve and foreign owners (about half of which are foreign central banks) have increased their ownership to over two-thirds of outstanding Treasuries (up from 61 percent in 2004). Banks have also increased their holdings of Treasuries to meet HQLA requirements. These holdings are less likely to turn over in secondary market trading, as the owners largely follow buy and hold strategies. Another change is the increased presence of asset managers, which now hold a bigger share of Treasuries as well. Mutual fund investors, who are accustomed to daily liquidity, now beneficially own a greater share of Treasuries.

Perhaps the most fundamental change in these markets is the move to electronic trading, which began in earnest about 15 years ago. It is hard to overstate the transformation in these markets. Only two decades ago, the dealers who participated in primary Treasury auctions had to send representatives, in person, to the offices of the Federal Reserve Bank of New York to submit their bids on auction days. They dropped their paper bids into a box. The secondary market was a bit more advanced. There were electronic systems for posting interdealer quotes in the cash market, and the Globex platform had been introduced for futures. Still, most interdealer trades were conducted over the phone and futures trading was primarily conducted in the open pit.

Today these markets are almost fully electronic. Interdealer trading in the cash Treasury market is conducted over electronic trading platforms. Thanks to advances in telecommunications and computing, the speed of trading has increased at least a million-fold. Advances in computing and faster access to trading platforms have also allowed new types of firms and trading strategies to enter the market. Algorithmic and high-frequency trading firms deploy a wide and diverse range of strategies. In particular, the technologies and strategies that people associate with high frequency trading are also regularly employed by broker-dealers, hedge funds, and even individual investors. Compared with the speed of trading 20 years ago, anyone can trade at high frequencies today, and so, to me, this transformation is more about technology than any one particular type of firm.

Given all these changes, we need to have a more nuanced discussion as to the state of the markets. Are there important market failures that are not likely to self-correct? If so, what are the causes, and what are the costs and benefits of potential market-led or regulatory responses?

Some observers point to post-crisis regulation as a key factor driving any decline or change in the nature of liquidity. Although regulation had little to do with the events of October 15, I would agree that it may be one factor driving recent changes in market making. Requiring that banks hold much higher capital and liquidity and rely less on wholesale short-term debt has raised funding costs. Regulation has also raised the cost of funding inventories through repurchase agreement (repo) markets. Thus, regulation may have made market making less attractive to banks. But these same regulations have also materially lowered banks’ probabilities of default and the chances of another financial crisis like the last one, which severely constrained liquidity and did so much damage to our economy. These regulations are new, and we should be willing to learn from experience, but their basic goals–to make the core of the financial system safer and reduce systemic risk–are appropriate, and we should be prepared to accept some increase in the cost of market making in order to meet those goals.

Regulation is only one of the factors–and clearly not the dominant one–behind the evolution in market making. As we have seen, markets were undergoing dramatic change long before the financial crisis. Technological change has allowed new types of trading firms to act as market makers for a large and growing share of transactions, not just in equity and foreign exchange markets but also in Treasury markets. As traditional dealers have lost market share, one way they have sought to remain competitive is by attempting to internalize their customer trades–essentially trying to create their own markets by finding matches between their customers who are seeking to buy and sell. Internalization allows these firms to capture more of the bid-ask spread, but it may also reduce liquidity in the public market. At the same time it does not eliminate the need for a public market, where price discovery mainly occurs, as dealers must place the orders that they cannot internalize into that market.

While the changes I’ve just discussed are unlikely to go away, I believe that markets will adapt to them over time. In the meantime, we have a responsibility to make sure that market and regulatory incentives appropriately encourage an evolution that will sustain market liquidity and functioning.

In thinking about market incentives, one observer has noted that trading rules and structures have grown to matter crucially as trading speeds have increased–in her words, “At very fast speeds, only the [market] microstructure matters.” Trading algorithms are, after all, simply a set of rules, and they will necessarily interact with and optimize against the rules of the trading platforms they operate on. If trading is at nanoseconds, there won’t be a lot of “fundamental” news to trade on or much time to formulate views about the long-run value of an asset; instead, trading at these speeds can become a game played against order books and the market rules. We can complain about certain trading practices in this new environment, but if the market is structured to incentivize those practices, then why should we be surprised if they occur?

The trading platforms in both the interdealer cash and futures markets are based on a central limit order book, in which quotes are executed based on price and the order in which they are posted. A central limit order book provides for continuous trading, but it also provides incentives to be the fastest. A trader that is faster than the others in the market will be able to post and remove orders in reaction to changes in the order book before others can do so, earning profits by hitting out-of-date quotes and avoiding losses by making sure that the trader’s own quotes are up to date.
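
To illustrate the price-time priority the speech describes, here is a deliberately simplified toy order book. It is not any trading platform's actual matching engine; the class and identifiers are invented for the example.

```python
# Toy sketch of price-time priority in a central limit order book, as described above.
# Resting bids are matched against incoming sells by best price first, then arrival time.
from collections import deque

class SimpleLimitOrderBook:
    def __init__(self):
        self.bids = {}  # price -> deque of (order_id, quantity), oldest order first

    def add_bid(self, order_id, price, quantity):
        self.bids.setdefault(price, deque()).append((order_id, quantity))

    def match_sell(self, quantity, limit_price):
        """Fill an incoming sell against the best (highest) bids at or above limit_price."""
        fills = []
        for price in sorted(self.bids, reverse=True):          # best price first
            if price < limit_price or quantity <= 0:
                break
            queue = self.bids[price]
            while queue and quantity > 0:
                order_id, resting_qty = queue[0]                # earliest order at this price
                traded = min(resting_qty, quantity)
                fills.append((order_id, price, traded))
                quantity -= traded
                if traded == resting_qty:
                    queue.popleft()
                else:
                    queue[0] = (order_id, resting_qty - traded)
            if not queue:
                del self.bids[price]
        return fills

book = SimpleLimitOrderBook()
book.add_bid("slow_trader", 99.50, 10)   # posted first
book.add_bid("fast_trader", 99.50, 10)   # same price, posted later
print(book.match_sell(10, 99.00))        # fills the earlier order first: [('slow_trader', 99.5, 10)]
```

At equal prices the earlier order is filled first, which is exactly why shaving microseconds off posting and cancelling orders is worth real money in this structure.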

Technology and greater competition have led to lower costs in many areas of our economy. At the same time, slower traders may be put at a disadvantage in this environment, which could cause them to withdraw from markets or seek other venues, thus fracturing liquidity. And one can certainly question how socially useful it is to build fiber-optic or microwave networks just to trade at microseconds or nanoseconds rather than milliseconds. The cost of these technologies, among other factors, may also be driving greater concentration in markets, which could threaten their resilience. The type of internalization now done by dealers is only really profitable if done on a large scale, and that too has led to greater market concentration.

A number of observers have suggested reforms for consideration. For example, some recent commentators propose frequent batch auctions as an alternative to the central limit order book, and argue that this would lead to greater market liquidity. Others have argued that current market structures may lead to greater volatility, and suggested possible alterations designed to improve the situation. To be clear, I am not embracing any particular one of these ideas. Rather, I am suggesting that now is a good time for market participants and regulators to collectively consider whether current market structures can be improved for the benefit of all.

US to Drive Faster Payments

The Federal Reserve System announced the appointment of Federal Reserve Bank of Chicago Senior Vice President Sean Rodriguez as its Faster Payments Strategy Leader. In this role, Rodriguez will lead activities to identify effective approaches for implementing a safe, ubiquitous, faster payments capability in the United States.

Rodriguez will chair the Federal Reserve’s Faster Payments Task Force, comprising more than 300 payment system stakeholders interested in improving the speed of authorization, clearing, settlement and notification of various types of personal and business payments. In addition to leading faster payments activities, Rodriguez will continue to oversee the Federal Reserve’s Payments Industry Relations Program.

“Sean’s leadership experience across payment operations, customer relations and industry outreach is exactly what we need to successfully advance the vision for a faster payments capability in the United States,” said Gordon Werkema, the Federal Reserve’s payments strategy director to whom Rodriguez will report. “His passion has contributed significantly to the momentum behind our initiative to date and we’re confident in his ability to carry our strategy forward in strong partnership with the industry.”

Rodriguez brings more than 30 years of experience with Federal Reserve Financial Services in operations, product development, sales and marketing.  He helped establish the Federal Reserve’s Customer Relations and Support Office in 2001 and served on the Federal Reserve’s leadership team for implementing the Check 21 initiative.  More recently, Rodriguez was instrumental in the design and launch of the Federal Reserve’s Payments Industry Relations Program charged with engaging a broad range of organizations in efforts to improve the U.S. payment system.

Additional information about the Federal Reserve’s Strategies for Improving the U.S. Payment System, including the Faster Payments Task Force, is available at FedPaymentsImprovement.org.

The Federal Reserve believes that the U.S. payment system is at a critical juncture in its evolution. Technology is rapidly changing many elements that support the payment process. High-speed data networks are becoming ubiquitous, computing devices are becoming more sophisticated and mobile, and information is increasingly processed in real time. These capabilities are changing the nature of commerce and end-user expectations for payment services. Meanwhile, payment security and the protection of sensitive data, which are foundational to public confidence in any payment system, are challenged by dynamic, persistent and rapidly escalating threats. Finally, an increasing number of individuals and businesses routinely transfer value across borders and demand better payment options to swiftly and efficiently do so.

Traditional payment services, often operating on decades-old infrastructure, have adjusted slowly to these developments, while emerging players are coming to market quickly with innovative product offerings. There is an opportunity to act collectively to avoid further fragmentation of payment services in the United States that might otherwise widen the gap between U.S. payment systems and those located abroad.

Collaborative action has the potential to increase convenience, ubiquity, cost effectiveness, security and cross-border interoperability for U.S. consumers and businesses when sending and receiving payments.

Since the Federal Reserve commenced a payment system improvement initiative in 2012, industry dialogue has advanced significantly and momentum toward common goals has increased. Many payment stakeholders are now independently initiating actions to discuss payment system improvements with one another—especially the prospect of increasing end-to-end payment speed and security. Responses to the Federal Reserve’s Consultation Paper indicate broad agreement with the gaps/opportunities and desired outcomes advanced in that paper. Diverse stakeholder groups have initiated efforts to work together to achieve payment system improvements. There is more common ground and shared vision than was previously thought to exist. We believe these developments illustrate a rare confluence of factors that create favorable conditions for change. Through this Strategies to Improve the U.S. Payment System paper, the Federal Reserve calls on all stakeholders to seize this opportunity and join together to improve the payment system.

FED to Modify its Capital Planning and Stress Testing Regulations

The Federal Reserve Board has proposed a rule to modify its capital planning and stress testing regulations.  The proposed changes would take effect for the 2016 capital plan and stress testing cycles.

The proposed rule would modify the timing for several requirements that have yet to be integrated into the stress testing framework.  Banking organizations subject to the supplementary leverage ratio would begin to incorporate that ratio into their stress testing in the 2017 cycle.  The use of advanced approaches risk-weighted assets–which is applicable to banking organizations with more than $250 billion in total consolidated assets or $10 billion in on-balance sheet foreign exposures–in stress testing would be delayed indefinitely, and all banking organizations would continue to use standardized risk-weighted assets.

Banking organizations are currently required to project post-stress regulatory capital ratios in their stress tests.  As the common equity tier 1 capital ratio becomes fully phased in under the Board’s regulatory capital rule, it would generally require more capital than the tier 1 common ratio.  The proposal would remove the requirement that banking organizations calculate a tier 1 common ratio.

The Board is also currently considering a broad range of issues related to its capital plan and stress testing rules.  Any modifications will be undertaken through a separate rulemaking and would take effect no earlier than the 2017 cycle.

Comments on the proposal will be accepted through September 24, 2015.

US Rate Rise Still On The Cards

In her speech “Recent Developments and the Outlook for the Economy”, Fed Chair Janet L. Yellen outlines the current US economic situation and confirms the expectation that interest rates will rise later in the year.

The outlook for the economy and inflation is broadly consistent with the central tendency of the projections submitted by FOMC participants at the time of our June meeting. Based on my outlook, I expect that it will be appropriate at some point later this year to take the first step to raise the federal funds rate and thus begin normalizing monetary policy. But I want to emphasize that the course of the economy and inflation remains highly uncertain, and unanticipated developments could delay or accelerate this first step. We will be watching carefully to see if there is continued improvement in labor market conditions, and we will need to be reasonably confident that inflation will move back to 2 percent in the next few years.

Let me also stress that this initial increase in the federal funds rate, whenever it occurs, will by itself have only a very small effect on the overall level of monetary accommodation provided by the Federal Reserve. Because there are some factors, which I mentioned earlier, that continue to restrain the economic expansion, I currently anticipate that the appropriate pace of normalization will be gradual, and that monetary policy will need to be highly supportive of economic activity for quite some time. The projections of most of my FOMC colleagues indicate that they have similar expectations for the likely path of the federal funds rate. But, again, both the course of the economy and inflation are uncertain. If progress toward our employment and inflation goals is more rapid than expected, it may be appropriate to remove monetary policy accommodation more quickly. However, if progress toward our goals is slower than anticipated, then the Committee may move more slowly in normalizing policy.

Long-Run Economic Growth
Before I conclude, let me very briefly place my discussion of the economic outlook into a longer-term context. The Federal Reserve contributes to the nation’s economic performance in part by using monetary policy to help achieve our mandated goals of maximum employment and price stability. But success in promoting these objectives does not, by itself, ensure a strong pace of long-run economic growth or substantial improvements in future living standards. The most important factor determining continued advances in living standards is productivity growth, defined as the rate of increase in how much a worker can produce in an hour of work. Over time, sustained increases in productivity are necessary to support rising household incomes.

Here the recent data have been disappointing. The growth rate of output per hour worked in the business sector has averaged about 1‑1/4 percent per year since the recession began in late 2007 and has been essentially flat over the past year. In contrast, annual productivity gains averaged 2-3/4 percent over the decade preceding the Great Recession. I mentioned earlier the sluggish pace of wage gains in recent years, and while I do think that this is evidence of some persisting labor market slack, it also may reflect, at least in part, fairly weak productivity growth.

There are many unanswered questions about what has slowed productivity growth in recent years and about the prospects for productivity growth in the longer run. But we do know that productivity ultimately depends on many factors, including our workforce’s knowledge and skills along with the quantity and quality of the capital equipment, technology, and infrastructure that they have to work with. As a general principle, the American people would be well served by the active pursuit of effective policies to support longer-run growth in productivity. Policies to strengthen education and training, to encourage entrepreneurship and innovation, and to promote capital investment, both public and private, could all potentially be of great benefit in improving future living standards in our nation.

Dodd-Frank At Five

Federal Reserve Governor Lael Brainard’s speech “Dodd-Frank at Five: Looking Back and Looking Forward” provides an excellent summary of the state of play of US banking regulation. In short, much done, much still to do.

If there is one simple lesson from the crisis that we all can embrace, it is that no financial institution in America should be so big or complex that its failure would put the financial system at risk. Congress wrote that simple lesson into law as a core principle of the Dodd-Frank Wall Street Reform and Consumer Protection Act of 2010 (Dodd-Frank Act).

Consequently, a fundamental change in our framework of regulation as a result of the crisis is to impose tougher rules on banking organizations that are so big or complex that their risk taking and distress could pose risks to financial stability. Whereas previously, our regulatory framework took a homogeneous approach focused narrowly on the safety and soundness of an institution, the reforms underway take a tailored approach to also address the risks posed by an institution to the safety and soundness of the system.

Five years on, it is an opportune time to ask how far along we are in accomplishing that basic imperative. I would argue we are at a pivotal moment when many of the key requirements that apply differentially to the biggest and most complex institutions will be finalized and their impact will become clear.

In the immediate wake of the crisis, the central focus was to reduce leverage and build capital across the banking system while also addressing risks in derivatives and short-term wholesale funding markets. For instance, considerable effort went into the new Basel III capital framework, whose key elements apply across the entire banking system. With these important foundations laid, attention turned to the tougher standards for institutions whose size and complexity are such that their distress could pose risks to the system as a whole.

Tailoring Standards for Greater Systemic Risk
The Dodd-Frank Act requires the Board to adopt enhanced prudential standards for large banking organizations, as well as for nonbank financial companies that have been designated as systemically important, and to tailor the standards so that their stringency increases in proportion to the systemic footprint of the institutions to which they apply. In addition, rigorous planning and operational readiness for recovery and resolution are required to ensure that big, complex institutions are subject to the same market discipline of failure as other normal companies in America.

Within this framework, the first line of defense is to require big, complex institutions to maintain a very substantial stack of common equity in order to enhance loss absorbency and to induce the institutions to internalize the associated risks to the system. These requirements are designed to lower their probability of “material financial distress or failure” in order “to prevent or mitigate risks to the financial stability of the United States.”

The proposed capital surcharge is the regulatory requirement that is most clearly calibrated to the size and complexity of an institution. Last December, the Board proposed a framework of risk-based capital surcharges for the eight U.S. banking organizations identified as global systemically important banks by the Financial Stability Board. The capital surcharges under the proposal are estimated to range from 1.0 percent to 4.5 percent of risk-weighted assets based on 2013 data. The capital surcharge would be required over and above the 7 percent minimum and capital conservation buffer required for all banking organizations under Basel III, and in addition to any countercyclical capital buffer.

The capital surcharge is designed to build additional resilience and lessen the chances of an institution’s failure in proportion to the risks posed by the institution to the financial system and broader economy. The surcharge is calibrated so that the expected costs to the system from the failure of a systemic banking institution are equal to the expected costs from the failure of a sizeable but not-systemic banking organization. In other words, if the failure of a systemic banking institution would have five times the system-wide costs as the failure of a sizeable but not-systemic banking organization, the systemic banking institution would be required to hold enough additional capital that the probability of its failure would be one-fifth as high. The capital surcharge should help ensure that the senior management and the boards of the largest, most complex institutions take into account the risks their activities pose to the system.
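
The "expected impact" logic in that paragraph can be written as a one-line condition. The notation below is mine, not the Board's, so read it as a sketch of the calibration idea rather than the official formula:

```latex
% Sketch of the expected-impact calibration described above. p denotes a probability
% of failure and C a system-wide cost of failure; GSIB is the systemic institution,
% ref the sizeable but not-systemic reference bank.
\[
  p_{\mathrm{GSIB}} \cdot C_{\mathrm{GSIB}} \;=\; p_{\mathrm{ref}} \cdot C_{\mathrm{ref}},
  \qquad \text{so if } C_{\mathrm{GSIB}} = 5\, C_{\mathrm{ref}}
  \text{ then } p_{\mathrm{GSIB}} = \tfrac{1}{5}\, p_{\mathrm{ref}}.
\]
```

The surcharge is then the amount of extra capital judged sufficient to push the systemic firm's failure probability down to that lower level.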

Importantly, the surcharge is calibrated in proportion to how an institution scores on specific metrics that capture the system-wide costs of its failure–risks associated with size, interconnectedness, complexity, cross-border activities, substitutability, and short-term wholesale funding. With respect to the last, the logic is that greater reliance on short-term wholesale funding increases the risks of creditor runs and asset fire sales that can both erode the institution’s capital and spark contagion. By calibrating the enhanced capital expectation in direct proportion to a set of measures of size, interconnectedness, and complexity, the proposal provides clear and measurable incentives for institutions to simplify and reduce their systemic footprint.

Second, the crisis also provided a stark reminder that what may seem like thick capital cushions in good times may prove dangerously thin at moments of stress, when losses soar and asset valuations plummet. Therefore, in addition to static capital requirements, large banking institutions must undergo the forward-looking Comprehensive Capital Analysis and Review (CCAR) and supervisory stress test each year to assess whether the amount of capital they hold is sufficient to continue operations through periods of economic stress and market turbulence, and whether their capital planning framework is adequate to their risk profile.

While supervisory stress tests with adverse and severely adverse macroeconomic scenarios are required by statute for all bank holding companies with assets over $50 billion, for the eight U.S. systemic banking institutions, the stress tests are tailored to include a counterparty default scenario, and, for the six systemic institutions with significant trading activities, the stress tests also include a global market shock. In significant part as a result of these additional requirements, in 2015, the eight systemic institutions needed to hold common equity worth 4.7 percent of risk-weighted assets on average above the 7 percent minimum and capital conservation buffer in order to meet the CCAR post-stress minimum requirement, given their planned capital distributions. That’s more than twice the average common equity increment above the regulatory capital minimum plus capital conservation buffer required of the next largest group of banks, those with $250 billion or more in assets that are not globally systemic.

In addition to the quantitative assessments, CCAR provides a powerful process for assessing the quality of each institution’s risk modeling and internal controls on a portfolio by portfolio basis. This is particularly important for institutions where the sheer size and complexity of their activities make it very challenging for even the highest-quality senior executives to effectively monitor and control risk.

The CCAR and stress test exercises provide valuable, forward-looking mechanisms to ensure that large banking institutions can meet their minimum capital ratios through the cycle. For the systemic banking institutions, it will be important to assess incorporating the risk-based capital surcharge in some form into the CCAR post-stress minimum in order to ensure these institutions remain sufficiently resilient to reduce the expected losses to the system through periods of financial and economic stress. Conceptually, the stress test and the capital surcharge should work to reinforce each other–not to substitute for each other.

Third, as we learned from the crisis, risk modeling and risk weighting are subject to considerable uncertainty, and stressed financial markets can make even the most rigorous risk assessments look optimistic in hindsight. Thus, the Basel III capital framework includes a simple, non-risk-adjusted ceiling on leverage that is designed not to bind under most circumstances while providing a robust cushion as a backstop. Although all internationally active U.S. banking organizations are subject to a 3 percent leverage standard that takes into account on- and off-balance sheet exposures under Basel III, our systemic banking institutions are required to meet a higher 5 percent leverage standard. The higher leverage standard for the systemic banking institutions is designed as a backstop to the surcharge-enhanced risk-based capital standard, reflecting the higher potential losses to the system from the failure of systemic institutions.

Fourth, in addition to the surcharge, regulatory minimum, and capital conservation buffer, starting in 2016 and phasing in through 2019, the U.S. banking agencies could require the largest, most complex U.S. banking firms to hold a countercyclical capital buffer of up to 2.5 percent of risk-weighted assets when it is warranted by rising macroprudential risks.

In sum, if the tailored capital framework that is under construction had been in place in 2007, the largest, most complex banking institutions could have been required to hold common equity of up to 14 percent of risk-weighted assets on average, which is roughly double the amount of common equity they held at the time.
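
The "up to 14 percent" figure simply stacks the components cited earlier in the speech; a quick check of the arithmetic, using those cited maxima:

```latex
% Decomposition of the "up to 14 percent" requirement: 7% minimum plus capital
% conservation buffer, a GSIB surcharge of up to 4.5%, and a countercyclical
% capital buffer of up to 2.5%.
\[
  7\% \;+\; 4.5\% \;+\; 2.5\% \;=\; 14\%\ \text{of risk-weighted assets.}
\]
```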

Fifth, the crisis shined a harsh light on the severe inadequacies in the banking system not only in capital, but also with respect to liquidity risk management. At key moments of financial stress, run-like behavior in the short-term funding markets threatened the solvency of some large, complex banking organizations and compelled them to engage in asset fire sales. As part of the enhanced prudential standards mandated under the Dodd-Frank Act and Basel III liquidity reforms, large banking organizations are now required to maintain substantial buffers of high-quality liquid assets calibrated to their funding needs in stressed financial conditions. They are also required to maintain certain amounts of stable funding based on the liquidity characteristics of their assets.

As with assessments of capital, supervisors also evaluate liquidity at the largest firms in annual horizontal exercises called the Comprehensive Liquidity Analysis and Review (CLAR). In part because of these measures, the total amount of high-quality liquid assets held by the eight U.S. systemic banking institutions has increased by over 60 percent, or $1 trillion, since 2011 to $2.4 trillion currently. And whereas these institutions were materially more reliant on short-term wholesale funding than deposits before the crisis, now the reverse is the case.

Finally, the structure of incentive compensation also came under scrutiny post-crisis with the recognition that the heavy emphasis on stock options and bonuses created skewed incentives that provided substantial rewards for short-term risk taking going into the crisis. The logic of imposing tougher standards on large and complex institutions whose activities could pose risks to the broader financial system extends to requiring better alignment of the incentives of senior executives and senior risk managers with the longer-term fortunes of their banking institutions. Most simply, this calls for a greater share of compensation to be deferred for several years. Under the proposal issued by the Board and other federal financial regulatory agencies in 2011 to implement section 956 of the Dodd-Frank Act, at least 50 percent of incentive compensation of certain executive officers at financial institutions with total consolidated assets of $50 billion or more would have to be deferred over a period of at least three years, and the deferred amounts would need to be adjusted for actual losses that are realized during the deferral period.
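
To make the deferral mechanics concrete, here is a toy sketch of the structure summarized above: at least half of incentive compensation deferred over at least three years, with deferred amounts reduced by losses realized during the deferral period. The schedule, amounts and loss-adjustment rule are illustrative assumptions, not the proposal's actual terms.

```python
# Toy sketch of the deferred-compensation structure described above. The equal-tranche
# schedule and the simple loss offset are illustrative assumptions.

def deferral_schedule(incentive_comp, deferred_share=0.5, years=3):
    """Split incentive compensation into an upfront payment and equal deferred tranches."""
    deferred = incentive_comp * deferred_share
    upfront = incentive_comp - deferred
    tranches = [deferred / years] * years
    return upfront, tranches

def adjust_for_losses(tranches, realized_losses_by_year):
    """Reduce each year's deferred tranche by losses realized that year (never below zero)."""
    return [max(0.0, t - loss) for t, loss in zip(tranches, realized_losses_by_year)]

upfront, tranches = deferral_schedule(1_200_000)          # hypothetical bonus
print(upfront, tranches)                                   # 600000.0 [200000.0, 200000.0, 200000.0]
print(adjust_for_losses(tranches, [0, 250_000, 50_000]))   # [200000.0, 0.0, 150000.0]
```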

Beyond this, for systemic banking institutions, I would like to see consideration given to changing the structure of deferred compensation so that it better balances the interests of the full set of the firm’s stakeholders over the longer term. In particular, when evaluating risky activities, senior executives should internalize not only the upside risk faced by stockholders, but also the downside risk borne by bondholders, especially as that better aligns with the public interest in reducing the likelihood of material financial distress or failure at the systemic banking institutions. This set of considerations should help to inform ongoing deliberations regarding implementation of the Dodd-Frank Act incentive compensation provisions.

Making Failure Safe
You can see now why I argue we are reaching a key moment in our efforts to build a more resilient financial system. In combination, these more stringent standards, several of which are still in train, should prove powerful in inducing systemic banking institutions to reduce the risks they pose to the system. Beyond this, Congress sought to address too big to fail by requiring systemic institutions to plan and prepare for failure, and by creating a new “orderly liquidation authority.” Under section 165(d) of the Dodd-Frank Act, large bank holding companies are required to submit credible plans for their rapid and orderly resolution under the U.S. Bankruptcy Code. In addition, the orderly liquidation authority created under title II of the Dodd-Frank Act empowers the U.S. government to put a failing systemic banking institution into a governmental resolution procedure as an alternative to resolution under the Bankruptcy Code.

The resolution planning process provides regulators with an important tool to address too big to fail. And we have set the bar realistically high, reflecting lessons from the crisis in the requirements that large banking institutions must meet to ensure their plans and preparations are not deemed to be deficient by the regulators.

Earlier this month, the eight U.S. systemic banking institutions submitted their most recent resolution plans, which are currently under review. Each of the submissions must provide detailed work plans in several specific areas that have been found to be critical for orderly resolution.

First, an orderly resolution requires that the large, complex firms simplify and rationalize their structures to align their legal entities with business lines and reduce the web of interdependencies among them to ensure separability along business lines. As the crisis made clear, the tangled web of thousands of interconnected legal entities that were allowed to proliferate in the run up to the crisis stymied orderly wind down and contributed to uncertainty and contagion.

Second, the largest, most complex banking organizations must demonstrate operational capabilities for resolution preparedness.  These capabilities include maintaining an ongoing, comprehensive understanding of the obligations and exposures associated with payment, clearing, and settlement activities across all the material legal entities and developing strong processes for managing, identifying, and valuing collateral across all the material legal entities. Capabilities for resolution preparedness also include establishing mechanisms to ensure that there would be adequate capital, liquidity, and funding available to each material legal entity under stressed market conditions to facilitate orderly resolution.

Third, these steps, in turn, hinge on each institution demonstrating the requisite management information systems capabilities to ensure that key data related to each material legal entity’s financial condition, financial and operational interconnectedness, and third-party commitments is readily accessible on a real-time basis.

Fourth, the largest, most complex banking organizations are required to develop robust operational and legal frameworks to ensure continuity in the provision of shared or outsourced services to maintain critical operations during the resolution process.

Fifth, the largest, most complex banking organizations are in the process of amending financial contracts to provide for a stay of early termination rights of external counterparties, recognizing that the triggering of cross-default provisions proved to be a major accelerant of contagion at the height of the crisis and greatly impeded cross border cooperation.

Sixth, the largest, most complex banking organizations are required to develop a clean top-tier holding company structure, in which the parent’s obligations are not supported by guarantees provided by operating subsidiaries, to support resolvability. This will be critical for any institution pursuing the single point of entry strategy.

In addition, the publicly disclosed summary of each institution’s plan is required to include information on the strategy for resolving each material legal entity and what an institution would look like following resolution in order to bolster public and market confidence that resolution would be orderly.

We look forward to assessing the plans submitted earlier this month, which we expect to demonstrate concrete progress on the detailed feedback that was provided by the regulators over the past year. In parallel, Board supervision staff have been engaged in an extensive horizontal review of the operational readiness of the systemic banking institutions on several dimensions of the resolution planning that were detailed in earlier supervisory guidance. Together, the annual plan submissions along with the ongoing supervisory examination of operational readiness provide potent, complementary mechanisms in addressing too big to fail.

Finally, in order to make the firms resolvable, it will be necessary for the largest, most complex firms to maintain enough long-term debt at the top-tier holding company that could be converted into equity to recapitalize the institution’s critical operating subsidiaries so as to prevent contagion. The availability of sufficient capacity at the parent to both absorb losses and recapitalize the critical operating subsidiaries is designed to provide comfort to other creditors of the firm and thereby forestall destructive runs, since the long-term unsecured debt issued by the parent holding company would be structurally subordinate to the claims on the operating subsidiaries. We are in the process of developing a proposal for a long-term debt requirement that would fully address the estimated capital needs of each institution in a gone-concern scenario.

Scale and Scope
Having provided a detailed assessment of the measures Congress chose to require in order to address too big to fail, it is worth spending a minute reflecting on what Congress chose not to require in the Dodd-Frank Act. In particular, it is noteworthy that Congress did not prescribe major changes to scope or scale of systemic institutions in the too-big-to-fail toolkit.

One rationale is that the public sector on its own is unlikely to be the best judge of the optimal scope and scale of financial institutions. While the private sector may be in a better position to judge the market benefits associated with economies of scope and scale and business models associated with particular banking organizations, the public sector is likely to be a better judge of the risks that their size, interconnectedness, and complexity pose to the financial system. Accordingly, the Dodd-Frank Act assigns regulators the responsibility for calibrating requirements such that investors, senior executives, and board members internalize those risks.

Notwithstanding the fact that the law does not prescribe broad structural changes, some observers may judge whether reform has gone far enough based on the extent of changes in the scope or scale of the U.S. systemic banking institutions relative to the crisis. These eight banking institutions now hold $10.6 trillion in total assets and account for 57 percent of total assets in the U.S. banking system today–not materially different from the $9.4 trillion and 60 percent of total assets in 2009. And while some of the U.S. systemic banking institutions have reduced their capital markets activity, they remain the largest dealers in those markets.

To be fair, we are entering an important period when the more stringent standards that we are putting in place to reduce expected losses to the system should inform the cost-benefit analysis of these institutions’ size and structure. As standards for systemically important firms tighten, some institutions may determine that it is in the best interest of their stakeholders to reduce their systemic footprint. Indeed, there already have been some notable structural changes at a few of the largest institutions over the past few years that are not readily apparent from looking at the aggregate assets across the systemic institutions. But it is also possible that some may judge that the economies of scale and scope are such that it makes sense to maintain their systemic footprint, even at the expense of the greater regulatory burdens necessary to protect the system relative to those faced by their non-systemic competitors.

One thing we can all agree on is that we have a more resilient and dynamic financial system as a result of having a very large number of banking organizations, in different size classes, pursuing different business models. Indeed, that diversity is one of the hallmarks of the U.S. system, which distinguishes it from many other advanced economies. Accordingly, we want to make sure that our regulatory framework supports banks in the middle of the size spectrum, as well as community banks, and the customers they serve. Thus, by the same rationale that argues for the greater stringency of the standards associated with greater systemic risk at the top end of the scale and complexity spectrum, we will carefully examine opportunities to ease burdens at the lower end of the spectrum. And we will want to continue to refine our regulatory standards, using the authorities under Dodd-Frank to make sure they are tailored to be commensurate with the risk to the system.

Supervisory Stress Testing of Large Systemic Financial Institutions – The Fed

Interesting speech by Fed Vice Chairman Fischer on supervisory stress testing of large systemic financial institutions.

Stress testing has become a cornerstone of a new approach to regulation and supervision of the largest financial institutions in the United States. The Federal Reserve’s first supervisory stress test was the Supervisory Capital Assessment Program, known as the SCAP. Conducted in 2009 during the depths of the financial crisis, the SCAP marked the first time the U.S. bank regulatory agencies had conducted a supervisory stress test simultaneously across the largest banking firms. The results clearly demonstrated the value of simultaneous, forward-looking supervisory assessments of capital adequacy under stressed conditions. The SCAP was also a key contributor to the relatively rapid restoration of the financial health of the U.S. banking system.

The Fed’s approach to stress testing of the largest and most systemic financial institutions has evolved since the SCAP, but several key elements persist to this day. These elements include, first, supervisory stress scenarios applicable to all firms; second, defined consequences for firms deemed to be insufficiently capitalized; and third, public disclosure of the results.

The Fed has subsequently conducted five stress test exercises that built on the success of SCAP, while making some important improvements to the stress test processes. The first key innovation was the development of supervisory models and processes that allow the Fed to evaluate independently whether banks are sufficiently resilient to continue to lend to consumers and to businesses under adverse economic and financial conditions. This innovation took place over the course of several exercises and was made possible by the extensive collection of data from the banks. These data have allowed supervisors to build models that are more sensitive to stress scenarios and better define the riskiness of the firms’ different businesses and exposures.

The second innovation since the SCAP was the use of the supervisory stress test as a key input into the annual supervisory evaluation of capital adequacy at the largest bank holding companies. The crisis demonstrated the importance of forward-looking supervision that accounted for the possibility of negative outcomes. By focusing on forward-looking post-stress capital ratios, stress testing provides an assessment of a firm’s capital adequacy that is complementary to regulatory capital ratios, which reflect the firm’s performance to date. Although we view this new approach to capital assessment as a significant improvement over previous practices, we are aware that the true test of this new regime will come only if another period of significant financial or economic stress were to materialize–which is to say that we will not have a strong test of the effectiveness of stress testing until the stress tests undergo a real world stress test. The same comment, mutatis mutandis, applies to the overall changes in methods of bank regulation and supervision made since September 15, 2008.

Third, supervisory stress testing has been on the leading edge of a movement toward greater supervisory transparency. Since the SCAP, the Fed has steadily increased the transparency around its stress testing processes, methodologies, and results. Before the crisis, releasing unfavorable supervisory information about particular firms was unthinkable–for fear of setting off runs on banks. However, the release of the SCAP results helped to calm markets during the crisis by reducing uncertainty about firm solvency. Indeed, only one of the 10 firms deemed to have a capital shortfall was unable to close the identified gap on the private markets. Our experience to date has been that transparency around the stress testing exercise improves the credibility of the exercise and creates accountability both for firms and supervisors. That said, too much transparency can also have potentially negative consequences, an idea to which I will turn shortly.

With the benefit of five years of experience, the Fed is continuing to assess its stress testing program and to make appropriate changes. Examples of such changes to date include the assumption of default by each firm’s largest counterparty and the assumption that firms would not curtail lending to consumers and businesses, even under severely adverse conditions. As part of that assessment process, we are also currently seeking feedback from the industry, market analysts, and academics about the program.

Supervisory stress testing is not a static exercise and must adapt to a changing economic and financial environment and must incorporate innovations in modeling technology. Work is currently underway on adapting the stress testing framework to accommodate firms that have not traditionally been subject to these tests. The Dodd-Frank Act requires the Fed to conduct stress tests on non-bank financial institutions that have been designated as systemically important by the FSOC–the Financial Stability Oversight Council. Three of the currently designated financial institutions are global insurance companies. While distress at these firms poses risks to financial stability, particularly during a stressful period, certain sources of risk to these firms are distinct from the risks banking organizations face. A key aspect of this ongoing work includes adapting our current stress testing framework and scenarios to ensure that the tests for non-bank SIFIs–systemically important financial institutions–are appropriate.

Another area where work continues, and will likely always continue, is the Fed’s ongoing research aimed at improving our ability to estimate losses and revenues under stress. Supervisors must both develop new approaches that push the state of the art in stress testing and respond as new modeling techniques are developed or as firm activities and risk concentrations evolve over time. For example, forecasting how a particular bank’s revenue may respond to a severe macroeconomic recession can be challenging, and we continue to seek ways to enhance our ability to do so.

Supervisory stress testing models and methodologies have to evolve over time in order to better capture salient emerging risks to financial firms and the system as a whole. However, the framework cannot simply be expanded to include more and more aspects of reality. For example, incorporating feedback from financial system distress to the real economy is a complex and difficult modeling challenge. Whether we recognize it or not, the standard solution to a complex modeling challenge is to simplify, to the minimum extent possible, aspects of the overall framework. However, incorporating feedback into the stress test framework may require simplifying other aspects of the framework to the point where it is less able to capture the risks to individual institutions. Even so, one can imagine substantial gains from continued research on stress testing’s role in macroprudential supervision and our understanding of risks to the financial system, such as knock-on effects, contagion, fire sales, and the interaction between capital and liquidity during a crisis.

Finally, let me close by addressing a question that often arises about the use of a supervisory stress test, such as those conducted by the Fed, with common scenarios and models. Such a test may create the possibility of, in former Chairman Bernanke’s words, a “model monoculture,” in which all models are similar and all miss the same key risks. Such a monoculture could create vulnerabilities in the financial system. At the Fed we try to address this issue, in part, through appropriate disclosure about the supervisory stress test. We have published information about the overall framework employed in various aspects of the supervisory stress test, but not the full details that banks could use to manage to the test. This risk, of making it easier to game the test, is the potential negative consequence of transparency that I alluded to earlier.

We also value different approaches to designing scenarios and conducting stress tests. In the United States, in addition to supervisory stress testing, large financial firms are required to conduct their own stress tests, using their own models and stress scenarios that capture their unique risks. In evaluating each bank’s capital planning process, supervisors focus on how well banks’ internal scenarios and models capture their unique risks and business models. We expect firms to identify the risks inherent in their businesses, define their risk appetite, and make business decisions on that basis.

US Industrial Production Wobbles

According to the Fed, industrial production decreased 0.2 percent in May after falling 0.5 percent in April. The decline in April was larger than previously reported, but the rates of change for previous months were generally revised higher, leaving the level of the index in April slightly above its initial estimate. Manufacturing output decreased 0.2 percent in May and was little changed, on net, from its level in January. In May, the index for mining moved down 0.3 percent after declining more than 1 percent per month, on average, in the previous four months. The slower rate of decrease for mining output last month was due in part to a reduced pace of decline in the index for oil and gas well drilling and servicing. The output of utilities increased 0.2 percent in May. At 105.1 percent of its 2007 average, total industrial production in May was 1.4 percent above its year-earlier level. Capacity utilization for the industrial sector decreased 0.2 percentage point in May to 78.1 percent, a rate that is 2.0 percentage points below its long-run (1972–2014) average.
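
As a quick, purely arithmetical check on the figures quoted above, the following sketch backs out the year-earlier level of the index implied by the 1.4 percent gain and the long-run average utilization rate implied by the 2.0-point gap; it uses only the numbers in the release.

# Back-of-the-envelope checks on the May industrial production figures
may_index = 105.1            # percent of the 2007 average
yoy_growth = 0.014           # 1.4 percent above the year-earlier level
year_earlier = may_index / (1 + yoy_growth)
print(f"implied year-earlier level: {year_earlier:.1f}")   # roughly 103.6

may_utilization = 78.1       # capacity utilization, percent
gap_to_average = 2.0         # percentage points below the 1972-2014 average
print(f"implied long-run average: {may_utilization + gap_to_average:.1f}")  # 80.1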

This is probably not enough negative news to hold off interest rate rises in the US later in the year, but it was enough to drive markets lower overnight.

US Economic Outlook and Monetary Policy

A speech by Governor Lael Brainard at the Center for Strategic and International Studies, Washington, D.C. on “The U.S. Economic Outlook and Implications for Monetary Policy” suggests that whilst the US economic outlook is patchy, interest rates will rise.

This spring marks the end of the Federal Reserve’s calendar-based forward guidance and the return to full data dependency in the setting of the federal funds rate. So it is notable that just as policymaking is becoming more anchored in meeting-by-meeting assessments of the data, the data are presenting a mixed picture that lends itself to materially different readings.

No doubt, bad weather, port disruptions, and statistical issues are responsible for some of the softness in first-quarter indicators of aggregate spending. Indeed, it may be that the dismal estimate by the Bureau of Economic Analysis of the annualized change in first-quarter gross domestic product (GDP), negative 0.7 percent, is principally an extension of the pattern, seen for several years, of significantly slower measured GDP growth in the first quarter followed by considerably stronger readings during the remainder of the year. In that case, it would be appropriate to minimize the importance of the first-quarter estimate in judging the likely path of the economy over the remainder of the year.
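
Note that the negative 0.7 percent figure is an annualized rate, so the underlying quarter-on-quarter change is much smaller. The sketch below shows the standard compounding conversion; it is a generic formula, not BEA code.

# Convert between an annualized growth rate and the underlying quarterly change
annualized = -0.007                      # -0.7 percent at an annual rate
quarterly = (1 + annualized) ** 0.25 - 1
print(f"implied quarterly change: {quarterly:.2%}")   # about -0.18%

# Round-trip check: compounding the quarterly change over four quarters
assert abs((1 + quarterly) ** 4 - 1 - annualized) < 1e-12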

But there may be reasons not to ignore the recent readings entirely. First, the limited data in hand pertaining to the second quarter do not suggest a significant bounceback in aggregate spending, which we would expect if all of the weakness in the first quarter were due to transitory factors. Private-sector forecasts of second-quarter growth are centered around 2-1/2 percent, while the Federal Reserve Bank of Atlanta’s GDPNow forecast, which was quite accurate in its prediction of the first estimate of first-quarter GDP growth, is projecting second-quarter GDP growth of only 0.8 percent.

Second, it would not be the first time this recovery has proceeded in fits and starts. The underlying momentum of the recovery has proven relatively susceptible to successive headwinds, which have kept overall economic growth well below the average pace of previous upturns.

My own reading is that earlier, more optimistic growth projections may have placed too much weight on the boost to spending from lower energy prices and too little weight on the negative implications for aggregate demand of the significant increase in the foreign exchange value of the dollar and large decline in the price of crude oil.

Based on today’s picture of moderate underlying momentum in the domestic economy and the likelihood of continued crosscurrents from abroad, the process of normalizing monetary policy is likely to be gradual. It is also important to remember that the stance of monetary policy will remain highly accommodative even after the federal funds rate moves off the effective lower bound, because the real federal funds rate will initially still be low and because of the elevated size of the Federal Reserve’s balance sheet and the associated downward pressure on long-term rates. Moreover, the FOMC has stated clearly that it will reduce the size of the balance sheet in a gradual and predictable manner starting at an appropriate time after liftoff, which will depend on how economic and financial conditions evolve.
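
One way to see why the stance would remain accommodative even after liftoff is to look at the real federal funds rate, that is, the nominal rate less expected inflation. The numbers below are hypothetical and chosen only to illustrate the sign of the result, not to represent any official projection.

# Real federal funds rate under a hypothetical first rate increase
nominal_ffr = 0.375        # percent: midpoint of an assumed 0.25-0.50% target range
expected_inflation = 1.5   # percent: an assumed near-term inflation expectation

real_ffr = nominal_ffr - expected_inflation
print(f"real federal funds rate: {real_ffr:+.2f} percent")  # about -1.1 percent, still accommodative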

In summary, the string of soft data in the first quarter raises some questions about the contours of the outlook. While it is possible that residual seasonality and temporary factors were responsible, it would be difficult, based on the data available today, to dismiss the possibility of a more significant drag on the economy than anticipated from foreign crosscurrents and the negative effects of the oil price decline, along with a more cautious U.S. consumer. This possibility argues for giving the data some more time to confirm further improvement in the labor market and firming of inflation toward our 2 percent target. But while the case for liftoff may not be immediate, it is coming into clearer view. When that time comes, the policy path will be highly attuned to incoming data and not on a preset course, and it is important to be mindful of the possibility of volatility as markets adjust to a change in the stance of policy. Thus, the FOMC will continue communicating as clearly as possible regarding the outlook and the factors underlying its policy determinations.