Federal Reserve Announces Sixth Triennial Study to Examine U.S. Payments Usage

The Federal Reserve today announced plans to conduct its sixth triennial study to determine the current aggregate volume and composition of electronic and check payments in the United States. The study builds upon research begun by the Federal Reserve in 2001 to provide the public and the payments industry with estimates and trend information about the evolving nature of the nation’s payments system. A public report containing initial topline estimates is expected to be published in December 2016.

“Over the 15-year life of the study, the survey instruments have been adapted and updated to keep pace with the dynamic change in the U.S. payments system,” said Mary Kepler, senior vice president of the Federal Reserve Bank of Atlanta and the study’s executive sponsor. “Not surprisingly, the 2016 study will incorporate a number of significant enhancements, including an expansion of fraud-related information and an increase in the number of depository and financial institutions sampled. These improvements will strengthen the value of the trend information and insights to be presented with the study’s findings,” Kepler said.

The 2016 Federal Reserve Payments Study consists of three complementary survey efforts commissioned to estimate the number, dollar value and composition of retail noncash payments in the United States for calendar year 2015. The study will request full-year 2015 payments data for various payment types from respondents to two of the three survey components; the third component involves a random sampling of checks processed in 2015 to determine distribution of party, counterparty and purpose. Results from all three survey components will be used to estimate current trends in the use of payment instruments by U.S. consumers and businesses. Previous studies have revealed significant changes in the U.S. payments system over time, most recently the increasing preference for debit, credit and stored-value cards among consumers and a leveling in growth of other electronic payment types such as the Automated Clearing House network. The Federal Reserve will work with McKinsey & Company and Blueflame Consulting, LLC to conduct this research study.

Additionally, the Federal Reserve plans to supplement its triennial research with two smaller annual research efforts to provide key payments volume and trends estimates in 2017 and 2018. “The industry’s participation and willingness to provide the full scope of the data requested is paramount to our ability to publish the timely and relevant results the industry has come to rely on to help objectively evaluate changes in the nation’s payments landscape,” Kepler said.

More information about Federal Reserve Financial Services can be found at www.frbservices.org. The website also contains links to the five previous Payments Studies.

The Financial Services Policy Committee (FSPC) is responsible for the overall direction of financial services and related support functions for the Federal Reserve Banks, as well as for providing Federal Reserve leadership in dealing with the evolving U.S. payments system. The FSPC is composed of three Reserve Bank presidents and two Reserve Bank first vice presidents.

Latest US Labor Data May Delay Fed Interest Rate Rise

Data from the US Bureau of Labor Statistics for September suggest that the Fed may delay its much-anticipated, and continually postponed, interest rate rise. This is a reaction to slowing world trade, weakness in China, and financial market uncertainty, as well as a series of downward revisions to earlier months' data.

The September data showed that total nonfarm payroll employment increased by 142,000, and the unemployment rate was unchanged at 5.1 percent. Job gains occurred in health care and information, while mining employment fell. Wage growth was essentially flat.

In September, the unemployment rate held at 5.1 percent, and the number of unemployed persons (7.9 million) changed little. Over the year, the unemployment rate and the number of unemployed persons were down by 0.8 percentage point and 1.3 million, respectively.

The number of persons unemployed for less than 5 weeks increased by 268,000 to 2.4 million in September, partially offsetting a decline in August. The number of long-term unemployed (those jobless for 27 weeks or more) was little changed at 2.1 million in September and accounted for 26.6 percent of the unemployed.

The civilian labor force participation rate declined to 62.4 percent in September; the rate had been 62.6 percent for the prior 3 months. This level of participation has not been seen since the 1970s. The employment-population ratio edged down to 59.2 percent in September, after showing little movement for the first 8 months of the year.

The number of persons employed part time for economic reasons (sometimes referred to as involuntary part-time workers) declined by 447,000 to 6.0 million in September. These individuals, who would have preferred full-time employment, were working part time because their hours had been cut back or because they were unable to find a full-time job. Over the past 12 months, the number of persons employed part time for economic reasons declined by 1.0 million.

In September, average hourly earnings for all employees on private nonfarm payrolls, at $25.09, changed little (-1 cent), following a 9-cent gain in August. Hourly earnings have risen by 2.2 percent over the year. Average hourly earnings of private-sector production and nonsupervisory employees were unchanged at $21.08 in September.

Macroprudential Policy in the U.S. Economy

Fed Vice Chairman Stanley Fischer spoke at the “Macroprudential Monetary Policy” conference. He remains concerned that the U.S. macroprudential toolkit is not large and is not yet battle tested. The contention that macroprudential measures would be a better approach to managing asset price bubbles than monetary policy, he says, is persuasive, except when there are no relevant macroprudential measures available. It also seems likely that monetary policy should be used for macroprudential purposes with an eye to the tradeoffs between reduced financial imbalances, price stability, and maximum employment.

This afternoon I would like to discuss the challenges to formulating macroprudential policy for the U.S. financial system.

The U.S. financial system is extremely complex. We have one of the largest nonbank sectors as a percentage of the overall financial system among advanced market economies. Since the crisis, changes in the regulation and supervision of the financial sector, most significantly those related to the Dodd-Frank Wall Street Reform and Consumer Protection Act of 2010 (Dodd-Frank Act) and the Basel III process, have addressed many of the weaknesses revealed by the crisis. Nonetheless, challenges to our efforts to preserve financial stability remain.

The Structure, Vulnerabilities, and Regulation of the U.S. Financial System
To set the stage, it is useful to start with a brief overview of the structure of the U.S. financial system. A diverse set of institutions provides credit to households and businesses, and others provide deposit-like services and facilitate transactions across the financial system. As can be seen from panel A of figure 1, banks currently supply about one-third of the credit in the U.S. system. In addition to banks, institutions thought of as long-term investors, such as insurance companies, pension funds, and mutual funds, provide another one-third of credit within the system, while the government-sponsored enterprises (GSEs), primarily Fannie Mae and Freddie Mac, supply 20 percent of credit. A final group, which I will refer to as other nonbanks and is often associated with substantial reliance on short-term wholesale funding, consists of broker-dealers, money market mutual funds (MMFs), finance companies, issuers of asset-backed securities, and mortgage real estate investment trusts, which together provide 14 percent of credit.

In the first quarter of this year, U.S. financial firms held credit market debt equal to $38 trillion, or 2.2 times the gross domestic product (GDP) of the United States. As the figure shows, the size of the financial sector relative to GDP grew for nearly 50 years but declined after the financial crisis and has only started increasing again this year.

From the perspective of financial stability, there are two important dimensions along which the categories of institutions in figure 1 differ. First, banks, the GSEs, and most of what I have called other nonbanks tend to be more leveraged than other institutions. Second, some institutions are more reliant on short-term funding and hence vulnerable to runs. For example, MMFs were pressured during the recent crisis, as their deposit-like liabilities–held as assets by highly risk-averse investors and not backstopped by a deposit insurance system–led to a run dynamic after a large fund broke the buck. In addition, nearly half of the liabilities of broker-dealers consists–and consisted then–of short-term wholesale funding, which proved to be unstable in the crisis.

The pros and cons of a multifaceted financial system
The significant role of nonbanks in the U.S. financial system and the associated complex web of interconnections bring both advantages and challenges relative to the more bank-dependent systems of other advanced economies. A potential advantage of lower bank dependence is the possibility that a contraction in credit supply from banks can be offset by credit supply from other institutions or capital markets, thereby acting as a spare tire for credit supply. Historical evidence suggests that the credit provided by what I termed long-term investors–that is, insurance companies, pension funds, and mutual funds–has tended to offset movements in bank credit relative to GDP, as indicated by the strong negative correlation of credit held by these institutions with bank credit during recessions. In other words, these institutions have acted as a spare tire for the banking sector.

However, complexity also poses challenges. While the financial crisis arguably started in the nonbank sector, it quickly spread to the banking sector because of interconnections that were hard for regulators to detect and greatly underappreciated by investors and risk managers in the private sector. For example, when banks provide loans directly to households and businesses, the chain of intermediation is short and simple; in the nonbank sector, intermediation chains are long and often involve a multitude of both banks and other nonbank financial institutions.

Regulatory, supervisory, and financial industry reforms since the crisis
U.S. regulators have undertaken a number of reforms to address weaknesses revealed by the crisis. The most significant set of reforms has focused on the banking sector and, in particular, on regulation and supervision of the largest, most interconnected firms. Changes include significantly higher capital requirements, additional capital charges for global systemically important banks, macro-based stress testing, and requirements that improve the resilience of banks’ liquidity risk profile.

Changes for the nonbank sector have been more limited, but steps have been taken, including the final rule on risk retention in securitization, issued jointly by the Federal Reserve and five other agencies in October of last year, and the new MMF rules issued by the Securities and Exchange Commission (SEC) in July of last year, following a Section 120 recommendation by the Financial Stability Oversight Council (FSOC). More recently, the SEC has also proposed rules to modernize data reporting by investment companies and advisers, as well as to enhance liquidity risk management and disclosure by open-end mutual funds, including exchange-traded funds. Other provisions include the central clearing requirement for standardized over-the-counter derivatives and the designation by the FSOC of four nonbanks as systemically important financial institutions. The industry has also undertaken important changes to bolster the resilience of its practices, including notable improvements to internal risk-management processes.

Some challenges to macroprudential policy
The steps taken since the crisis have almost certainly improved the resilience of the U.S. financial system, but I would like to highlight two significant challenges that remain.

First, new regulations may lead to shifts in the institutional location of particular financial activities, which can potentially offset the expected effects of the regulatory reforms. The most significant changes in regulation have focused on large banks. This focus has been appropriate, as large banks are the most interconnected and complex institutions. Nonetheless, potential shifts of activity away from more regulated to less regulated institutions could lead to new risks.

It is still too early to gauge the degree to which such adaptations to regulatory changes may occur, although there are tentative signs. For example, we have seen notable growth in mortgage originations at independent mortgage companies as reflected in the striking increase in the share of home-purchase originations by independent mortgage companies from 35 percent in 2010 to 47 percent in 2014. This growth coincides with the timing of Basel III, stress testing, and banks’ renewed appreciation of the legal risks in mortgage originations. As another example, there have also been many reports of diminished liquidity in fixed-income markets. Some observers have linked this shift to new regulations that have raised the costs of market making, although the evidence for changes in market liquidity is far from conclusive and a range of factors related to market structure may have contributed to the reporting of such shifts.

Despite limited evidence to date, the possibility of activity relocating in response to regulation is a potential impediment to the effectiveness of macroprudential policy. This is clearly the case when activity moves from a regulated to an unregulated institution. But it may also be relevant even when activity moves from one regulated institution to an institution regulated by a different authority. This scenario can occur in the United States because different regulators are responsible for different institutions, and financial stability traditionally has not been, and in a number of cases is still not, a central component of these regulators’ mandates. To be sure, the situation has improved since the crisis, as the FSOC facilitates interagency dialogue and has a shared responsibility for identifying risks and reporting on these findings and actions taken in its annual report submitted to the Congress. In addition, FSOC members jointly identify systemically important nonbank financial institutions. Despite these improvements, it remains possible that the FSOC members’ different mandates, some of which do not include macroprudential regulation, may hinder coordination. By contrast, in the United Kingdom, fewer member agencies are represented on the Financial Policy Committee at the Bank of England, and each agency has an explicit macroprudential mandate. The committee has a number of tools to carry out this mandate, which currently are sectoral capital requirements, the countercyclical capital buffer, and limits on loan-to-value and debt-to-income ratios for mortgage lending.

A second significant challenge to macroprudential policy remains the relative lack of measures in the U.S. macroeconomic toolkit to address a cyclical buildup of financial stability risks. Since the crisis, frameworks have been or are currently being developed to deploy some countercyclical tools during periods when risks escalate, including the analysis of salient risks in annual stress tests for banks, the Basel III countercyclical capital buffer, and the Financial Stability Board (FSB) proposal for minimum margins on securities financing transactions. But the FSB proposal is far from being implemented, and a number of tools used in other countries are either not available to U.S. regulators or very far from being implemented. For example, several other countries have used tools such as time-varying risk weights and time-varying loan-to-value and debt-to-income caps on mortgages. Indeed, international experience points to the usefulness of these tools, whereas the efficacy of new tools in the United States, such as the countercyclical capital buffer, remains untested.

In considering the difficulties caused by the relative unavailability of macroprudential tools in the United States, we need to recognize that there may well be an interaction between the extent to which the entire financial system can be strengthened and made more robust through structural measures–such as those imposed on the banking system since the Dodd-Frank Act–and the extent to which a country needs to rely more on macroprudential measures. Inter alia, this recognition could provide an ex post rationalization for the United States having imposed stronger capital and other charges than most foreign countries.

Implications for monetary policy
Though I remain concerned that the U.S. macroprudential toolkit is not large and not yet battle tested, that does not imply that I see acute risks to financial stability in the near term. Indeed, banks are well capitalized and have sizable liquidity buffers, the housing market is not overheated, and borrowing by households and businesses has only begun to pick up after years of decline or very slow growth. Further, I believe that the careful monitoring of the financial system now carried out by Fed staff members, particularly those in the Office of Financial Stability Policy and Research, and by the FSOC contributes to the stability of the U.S. financial system–though we have always to remind ourselves that, historically, not even the best intelligence services have succeeded in identifying every significant potential threat accurately and in a timely manner. This is another reminder of the importance of building resilience in the financial system.

Nonetheless, the limited macroprudential toolkit in the United States leads me to conclude that there may be times when adjustments in monetary policy should be discussed as a means to curb risks to financial stability. The deployment of monetary policy comes with significant costs. A more restrictive monetary policy would, all else being equal, lead to deviations from price stability and full employment. Moreover, financial stability considerations can sometimes point to the need for accommodative monetary policy. For example, the accommodative U.S. monetary policy since 2008 has helped repair the balance sheets of households, nonfinancial firms, and the financial sector.

Given these considerations, how should monetary policy be deployed to foster financial stability? This topic is a matter for further research, some of which will look similar to the analysis in an earlier time of whether and how monetary policy should react to rapidly rising asset prices. That discussion reached the conclusion that monetary policy should be deployed to deal with errant asset prices (assuming, of course, that they could be identified) only to the extent that not doing so would result in a worse outcome for current and future output and inflation.

There are some calculations–for example, by Lars Svensson–that suggest it would hardly ever make sense to deploy monetary policy to deal with potential financial instability. The contention that macroprudential measures would be a better approach is persuasive, except when there are no relevant macroprudential measures available. I believe we need more research into the question. I also struggle in trying to find consistency between the certainty that many have that higher interest rates would have prevented the Global Financial Crisis and the view that the interest rate should not be used to deal with potential financial instabilities. Perhaps that problem can be solved by seeking to distinguish between a situation in which the interest rate is not at its short-run natural rate and one in which asset-pricing problems are sector specific.

Of course, we should not exaggerate. It is one thing to say we have no macroprudential tools and another to say that having more macroprudential measures–particularly in the area of housing finance–could provide major financial stability benefits. It also seems likely that monetary policy should be used for macroprudential purposes with an eye to the tradeoffs between reduced financial imbalances, price stability, and maximum employment. In this regard, a number of recent research papers have begun to frame the issue in terms of such tradeoffs, although this is a new area that deserves further research.

It may also be fruitful for researchers to continue investigating the deployment of new or little-used monetary policy tools. For example, it is arguable that reserve requirements–a traditional monetary policy instrument–can be viewed as a macroprudential tool. In addition, some research has begun to ask important questions about the size and structure of monetary authority liabilities in fostering financial stability.

Conclusion
To sum up: The need for coordination across different regulators with distinct mandates creates challenges to the timely deployment of macroprudential measures in the United States. Further, the toolkit to act countercyclically in the face of building financial stability risks is limited, requires more research on its efficacy, and may need to be enhanced. Given these challenges, we need to consider the potential role of monetary policy in fostering financial stability while recognizing that there is more research to be done in clarifying the potential costs and benefits of doing so when conditions appear to warrant it.

After all of the successful work that has been done to reform the financial system since the Global Financial Crisis, this summary may appear daunting and disappointing. But it is important to highlight these challenges now. Currently, the U.S. financial system appears resilient, reflecting the impressive progress made since the crisis. We need to address these questions now, before new risks emerge.

Fed Still Expecting To Lift Rates

In a wide-ranging speech, at the Philip Gamble Memorial Lecture, Fed Chair Yellen discussed inflation in the US and monetary policy. The net summary is that the more prudent strategy is to begin tightening in a timely fashion and at a gradual pace, adjusting policy as needed in light of incoming data.

Consistent with the inflation framework I have outlined, the medians of the projections provided by FOMC participants at our recent meeting show inflation gradually moving back to 2 percent, accompanied by a temporary decline in unemployment slightly below the median estimate of the rate expected to prevail in the longer run. These projections embody two key judgments regarding the projected relationship between real activity and interest rates. First, the real federal funds rate is currently somewhat below the level that would be consistent with real GDP expanding in line with potential, which implies that the unemployment rate is likely to continue to fall in the absence of some tightening. Second, participants implicitly expect that the various headwinds to economic growth that I mentioned earlier will continue to fade, thereby boosting the economy’s underlying strength. Combined, these two judgments imply that the real interest rate consistent with achieving and then maintaining full employment in the medium run should rise gradually over time. This expectation, coupled with inherent lags in the response of real activity and inflation to changes in monetary policy, are the key reasons that most of my colleagues and I anticipate that it will likely be appropriate to raise the target range for the federal funds rate sometime later this year and to continue boosting short-term rates at a gradual pace thereafter as the labor market improves further and inflation moves back to our 2 percent objective.

By itself, the precise timing of the first increase in our target for the federal funds rate should have only minor implications for financial conditions and the general economy. What matters for overall financial conditions is the entire trajectory of short-term interest rates that is anticipated by markets and the public. As I noted, most of my colleagues and I anticipate that economic conditions are likely to warrant raising short-term interest rates at a quite gradual pace over the next few years. It’s important to emphasize, however, that both the timing of the first rate increase and any subsequent adjustments to our federal funds rate target will depend on how developments in the economy influence the Committee’s outlook for progress toward maximum employment and 2 percent inflation.

The economic outlook, of course, is highly uncertain and it is conceivable, for example, that inflation could remain appreciably below our 2 percent target despite the apparent anchoring of inflation expectations. Here, Japan’s recent history may be instructive: survey measures of longer-term expected inflation in that country remained positive and stable even as the country experienced many years of persistent, mild deflation. The explanation for the persistent divergence between actual and expected inflation in Japan is not clear, but I believe that it illustrates a problem faced by all central banks: Economists’ understanding of the dynamics of inflation is far from perfect. Reflecting that limited understanding, the predictions of our models often err, sometimes significantly so. Accordingly, inflation may rise more slowly or rapidly than the Committee currently anticipates; should such a development occur, we would need to adjust the stance of policy in response.

Considerable uncertainties also surround the outlook for economic activity. For example, we cannot be certain about the pace at which the headwinds still restraining the domestic economy will continue to fade. Moreover, net exports have served as a significant drag on growth over the past year and recent global economic and financial developments highlight the risk that a slowdown in foreign growth might restrain U.S. economic activity somewhat further. The Committee is monitoring developments abroad, but we do not currently anticipate that the effects of these recent developments on the U.S. economy will prove to be large enough to have a significant effect on the path for policy. That said, in response to surprises affecting the outlook for economic activity, as with those affecting inflation, the FOMC would need to adjust the stance of policy so that our actions remain consistent with inflation returning to our 2 percent objective over the medium term in the context of maximum employment.

Given the highly uncertain nature of the outlook, one might ask: Why not hold off raising the federal funds rate until the economy has reached full employment and inflation is actually back at 2 percent? The difficulty with this strategy is that monetary policy affects real activity and inflation with a substantial lag. If the FOMC were to delay the start of the policy normalization process for too long, we would likely end up having to tighten policy relatively abruptly to keep the economy from significantly overshooting both of our goals. Such an abrupt tightening would risk disrupting financial markets and perhaps even inadvertently push the economy into recession. In addition, continuing to hold short-term interest rates near zero well after real activity has returned to normal and headwinds have faded could encourage excessive leverage and other forms of inappropriate risk-taking that might undermine financial stability. For these reasons, the more prudent strategy is to begin tightening in a timely fashion and at a gradual pace, adjusting policy as needed in light of incoming data.

The Federal Reserve is losing credibility by not raising rates now

From The Conversation.

So the results are in: the Federal Reserve decided to keep interest rates at around zero, delaying any increase in its target for at least six more weeks.

The move did not come as a surprise to Wall Street – which was betting 3-to-1 against the hike. But that’s not because investors didn’t think the US economy was ready for a rates “liftoff.” Rather, it shows that markets did not believe the Fed had the will and power to raise rates for the first time since June 2006.

Unfortunately, they guessed right.

The economy is ready if not eager for a liftoff and a return to a normal rates environment. Investors and businesses know this. It’s time the Fed recognized this too.

Ready for liftoff

The data clearly show that the US economy hasn’t looked stronger in a very long time – a sharp improvement from earlier this year when I wrote that it wasn’t ready for an increase in interest rates.

While the labor market may not have experienced strong growth in wages yet, joblessness has plunged to 5.1%, reaching what is known as the “natural rate” of unemployment (also called “full employment”). That’s significant because achieving maximum employment is one of the Fed’s two primary mandates, and anything below the natural rate risks fueling inflation.

And inflation, its other main policy goal, is also in range of its target of 2%. Indexes of consumer prices, both including and excluding volatile energy prices, and personal spending are forecast to be right in that sweet spot of 1.5% to 2% next quarter.

Furthermore, the US economy grew a stronger-than-forecast 3.7% in the second quarter, much better than the previous three-month period and signaling the recovery is on a pretty sound footing.

The output gap – or difference between what an economy is producing and what it is capable of – remains negative at about 3%, and deflation is still a threat.

But regardless of what the Fed does now and in coming months, its target short-term rate will remain well below the long-term “normal” level of about 4% for years to come, so there is little risk a small increase will drag down growth.

Why the Fed didn’t act

According to the Federal Open Market Committee statement, the main factors that persuaded the Fed to delay liftoff are the weakening global economy, “soft” net exports and subdued inflation.

Granted, developing economies, especially China and Russia, are indeed weak, as are global financial markets, and that could spill over into the US. And the devaluation of the yuan in China and the recession in Canada (the US’ two largest individual trading partners) – coupled with loose monetary policy in Europe – are causing the dollar to appreciate, making US exports decline and imports rise.

It is important to understand that all of these factors except inflation are outside the Fed’s jurisdiction and its dual mandate of maintaining full employment and stable prices. If these factors matter at all to US monetary policymakers, it should only be through their effects on the US economy, in terms of inflation, labor markets and GDP.

And while an appreciating dollar and low oil prices can indeed create deflationary pressures (and reduce US GDP), the data indicate that US prices nevertheless continue to rise, if slowly.

Furthermore, a higher interest rate and stronger dollar make US assets even more attractive to global investors, thus spurring more investment, while low oil prices stimulate consumer spending. Both of these factors boost economic activity and at least partially offset any decline due to lower net exports caused by a strong dollar.

What’s at stake

What’s more important is that the impact of a small rate hike has been with us for some time. Capital is already fleeing developing economies, and the dollar has been strong for a while. Hence, the direct marginal economic effects of a 0.25 percentage point increase in the target rate on the US economy would be negligible.

What was really at stake was repairing the Fed’s credibility in terms of successfully shaping US monetary policy and sending a powerful signal that the US economy is in strong shape.

Hoping to avoid a repeat of the bungled attempts to adjust monetary policy in recent years that led to significant market volatility, this time the Fed spent at least half of the year updating the language in its statements and gradually preparing the world for a hike. And since it did not deliver, this tells the world that the Fed is unable or unwilling to go against market expectations.

As a result, the central bank will have to either delay the liftoff until the next meeting, slowly reshaping market expectations to be consistent with a hike at that point, or risk a financial panic if it decides on an unexpected policy shift sooner. Delaying the timing further would mean losing precious time in normalizing monetary policy, necessary so that the Fed again has the tools it needs to fight future economic downturns. There’s also the increased risk that the economy will overheat and cause inflation to spiral out of control.

There is never a perfect time to start down this path; it is always possible to find reasons to delay. But each postponement requires even stronger data to justify an eventual liftoff the next time. The problem is that with the hesitant Fed sending mixed signals to the economy, that imaginary perfect day might not ever come.

Author: Alex Nikolsko-Rzhevskyy, Associate Professor of Economics, Lehigh University

 

The Fed Holds Rates

Just released by the Fed: rates are on hold, because of concerns about global growth and current levels of employment and inflation. In addition, the statement hints at lower rates for longer.

Information received since the Federal Open Market Committee met in July suggests that economic activity is expanding at a moderate pace. Household spending and business fixed investment have been increasing moderately, and the housing sector has improved further; however, net exports have been soft. The labor market continued to improve, with solid job gains and declining unemployment. On balance, labor market indicators show that underutilization of labor resources has diminished since early this year. Inflation has continued to run below the Committee’s longer-run objective, partly reflecting declines in energy prices and in prices of non-energy imports. Market-based measures of inflation compensation moved lower; survey-based measures of longer-term inflation expectations have remained stable.

Consistent with its statutory mandate, the Committee seeks to foster maximum employment and price stability. Recent global economic and financial developments may restrain economic activity somewhat and are likely to put further downward pressure on inflation in the near term. Nonetheless, the Committee expects that, with appropriate policy accommodation, economic activity will expand at a moderate pace, with labor market indicators continuing to move toward levels the Committee judges consistent with its dual mandate. The Committee continues to see the risks to the outlook for economic activity and the labor market as nearly balanced but is monitoring developments abroad. Inflation is anticipated to remain near its recent low level in the near term but the Committee expects inflation to rise gradually toward 2 percent over the medium term as the labor market improves further and the transitory effects of declines in energy and import prices dissipate. The Committee continues to monitor inflation developments closely.

To support continued progress toward maximum employment and price stability, the Committee today reaffirmed its view that the current 0 to 1/4 percent target range for the federal funds rate remains appropriate. In determining how long to maintain this target range, the Committee will assess progress–both realized and expected–toward its objectives of maximum employment and 2 percent inflation. This assessment will take into account a wide range of information, including measures of labor market conditions, indicators of inflation pressures and inflation expectations, and readings on financial and international developments. The Committee anticipates that it will be appropriate to raise the target range for the federal funds rate when it has seen some further improvement in the labor market and is reasonably confident that inflation will move back to its 2 percent objective over the medium term.

The Committee is maintaining its existing policy of reinvesting principal payments from its holdings of agency debt and agency mortgage-backed securities in agency mortgage-backed securities and of rolling over maturing Treasury securities at auction. This policy, by keeping the Committee’s holdings of longer-term securities at sizable levels, should help maintain accommodative financial conditions.

When the Committee decides to begin to remove policy accommodation, it will take a balanced approach consistent with its longer-run goals of maximum employment and inflation of 2 percent. The Committee currently anticipates that, even after employment and inflation are near mandate-consistent levels, economic conditions may, for some time, warrant keeping the target federal funds rate below levels the Committee views as normal in the longer run.

Voting for the FOMC monetary policy action were: Janet L. Yellen, Chair; William C. Dudley, Vice Chairman; Lael Brainard; Charles L. Evans; Stanley Fischer; Dennis P. Lockhart; Jerome H. Powell; Daniel K. Tarullo; and John C. Williams. Voting against the action was Jeffrey M. Lacker, who preferred to raise the target range for the federal funds rate by 25 basis points at this meeting.

The released economic data included a chart of members' future rate expectations. Each shaded circle indicates the value (rounded to the nearest 1/8 percentage point) of an individual participant's judgment of the midpoint of the appropriate target range for the federal funds rate or the appropriate target level for the federal funds rate at the end of the specified calendar year or over the longer run.

Why the Fed is no longer center of the financial universe

From The Conversation.

Markets have been speculating for months about whether the US Federal Reserve would raise interest rates in September. The day has finally arrived, and interestingly, there’s much less certainty now about which way it will go than there was just a few weeks earlier.

In August, more than three-quarters of economists surveyed by Bloomberg expected a rate hike this month. Now, only about half do. Traders also put higher odds on a hike back then, at about 50-50. Now the likelihood of a rate hike based on Fed Funds Futures is about one in four.
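To give a sense of where such probabilities come from, the usual back-of-envelope reading of a 30-day fed funds futures contract backs out the expected average funds rate from the quoted price and scales the gap against the size of a single 25 basis point hike. The sketch below uses illustrative figures only (not the actual September 2015 quotes) and ignores the refinement of weighting by where the FOMC meeting falls in the contract month:

```python
# Back-of-envelope implied probability of a 25 bp hike from a 30-day
# fed funds futures quote. All figures are illustrative assumptions,
# not the actual September 2015 market quotes.

def implied_hike_probability(futures_price, rate_if_no_hike, rate_if_hike):
    """Futures settle at 100 minus the average funds rate for the month;
    back out the probability that the post-hike rate prevails."""
    expected_rate = 100.0 - futures_price
    return (expected_rate - rate_if_no_hike) / (rate_if_hike - rate_if_no_hike)

# Assumed inputs: funds rate near 0.14% with no hike, about 0.39% after a
# 25 bp move, and a futures price implying an average rate of about 0.20%.
print(round(implied_hike_probability(99.80, 0.14, 0.39), 2))  # 0.24, i.e. roughly one in four
```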

The Federal Reserve may be on the verge of lifting rates for the first time in more than nine years because unemployment has dropped to pre-crisis levels, the housing market is the healthiest it’s been in 15 years and the economic recovery, while tepid, has continued.

Investors’ and economists’ uncertainty, meanwhile, has been fueled by weak growth in China, Europe and Latin America, giving the Fed pause about whether now’s the right time to start the return to normal.

There is growing alarm that a rate hike will make things even worse for the rest of the world. The Fed risks creating “panic and turmoil” across emerging markets such as China and India and triggering a “global debt crisis.”

The reality, however, will likely be very different. For one, the Fed lacks the power it once did, meaning the actual impact of a rate hike will be more muted than people think. Second, the effect of uncertainty and speculation may be far worse than an actual change in rates, which is why central bankers in emerging markets are pushing their American counterparts to hurry up and raise them already.

The Fed’s waning influence

The Fed, and more accurately the rate-setting Federal Open Market Committee (FOMC), is simply no longer the center of the universe it once was, because the central banks of China, India and the eurozone have all become monetary policy hubs in their own right.

The US central bank may still be preeminent, but the People’s Bank of China, the Reserve Bank of India, and the European Central Bank are all growing more influential all the time. That’s particularly true of China’s central bank, which boasts the world’s largest stash of foreign currency reserves (about US$3.8 trillion) and increasingly hopes to make its influence felt beyond its borders.

Some argue that central banks in general, not just the Fed, are losing their ability to affect financial markets as they intend, especially since the financial crisis depleted their arsenal of tools. Those emergency measures resulted in more than a half-decade of near-zero interest rates and a world awash in US dollars. And that poses another problem.


Can the Fed even lift rates anymore?

The old toolkit of leverage over markets that the Fed once relied on is losing relevance, since the FOMC has not raised rates since 2006, and it has rather frantically been experimenting with new methods to affect markets.

The Fed and the market (including companies and customers) are beginning to understand that it’s not a given that the Fed can even practically raise rates any more, at least not without resorting to rarely or never-tried policies. That’s because its primary way to do so, removing dollars from the financial system, has become a lot harder to do.

Normally, one way the Fed affects short-term interest rates is by buying or selling government securities, which decreases or increases the amount of cash in circulation. The more cash in the system, the easier (and cheaper) it becomes to borrow, thus reducing interest rates, and vice versa.

And since the crisis, the Fed has added an enormous quantity of cash into the system to keep rates low. Removing enough of that to discourage lending and drive up rates won’t be easy. To get around that problem, it plans to essentially pay lenders to make loans, but that’s an unconventional approach that may not work and on some level involves adding more cash into the equation.

Uncertainty and speculation

In addition, the uncertainty and speculation about when the Fed will finally start the inevitable move toward normalization may be worse than the move itself.

As Mirza Adityaswara, senior deputy governor at Indonesia’s central bank, put it:

We think US monetary policymakers have got confused about what to do. The uncertainty has created the turmoil. The situation will recover the sooner the Fed makes a decision and then gives expectation to the market that they [will] increase [rates] one or two times and then stop.

While the timing of its first hike is important – and the sooner the better – the timing of the second is more so. This will signal the Fed’s path to normalization for the market (that is, the end of an era of ultra-low interest rates).

Right now, US companies appear ready for a rate hike because the impact on them will be negligible, and some investors are also betting on it.

That’s no surprise. Companies have borrowed heavily in recent years, allowing them to lock in record-low rates and causing their balance sheets to bulge. This year, corporate bond sales are on pace to have a third-straight record year, and currently tally about $1 trillion. Most of that’s fixed, so even if rates go up, their borrowing costs won’t change all that much for some time.

At the end of 2014, non-financial companies held a record $1.73 trillion in cash, double the tally a decade ago, according to Moody’s Investors Service.

Beyond the US, there are reports that Chinese and Indian companies are ready for a rate hike as well.

Foregone conclusion

So for much of the world, a hike in rates is already a foregone conclusion – the risk being only that the Fed doesn’t see it. The important question, then, is how quick, or slow, the pace of normalization should be. While the Fed may find it difficult to make much of an impact with one move, the pace and totality of the changes in rates will likely make some difference.

But the time to start that process is now. It will end the uncertainty that has embroiled world markets, strengthen the dollar relative to other currencies, add more flexibility to the Fed’s future policy-making and, importantly, mark a return to normality.

Author: Tomas Hult, Byington Endowed Chair and Professor of International Business, Michigan State University

What Drives US Household Debt?

Analysis from the Federal Reserve Bank of St Louis shows that in the US, whilst overall household credit is lower now, this is being driven by reduced credit creation, and not increased credit destruction.  We see a very different profile of debt compared with Australia, where household debt has never been higher. However, our analysis shows that core debt is also being held for longer, so the same effect is in play here, although new debt is also accelerating, driven by housing.

Household debt in the United States has been on a roller coaster since early 2004. As the first figure shows, between the first quarter of 2004 and the fourth quarter of 2008, total household debt increased by about 46 percent—an annual rate of about 8.3 percent. A process of household deleveraging started in 2009 and stabilized at a level 13 percent below the previous peak in the first quarter of 2013. During those four years, the household debt level decreased at a yearly rate of 3 percent. Since then, it has moved only modestly back toward its previous levels.
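As a quick check on that arithmetic, the annual rates follow from compounding the cumulative changes over the approximate length of each episode (about 4.75 years for the run-up and 4 years for the deleveraging; both period lengths are assumptions read off the dates above). A minimal sketch:

```python
# Convert the cumulative changes in household debt quoted above into
# compound annual growth rates. Period lengths are approximate.

def annualised(total_growth, years):
    """Cumulative growth (e.g. 0.46 for +46%) to a compound annual rate."""
    return (1.0 + total_growth) ** (1.0 / years) - 1.0

print(round(annualised(0.46, 4.75) * 100, 1))   # ~8.3% a year, 2004Q1 to 2008Q4
print(round(annualised(-0.13, 4.0) * 100, 1))   # ~-3.4% a year over the four-year deleveraging
                                                # (the essay rounds this to roughly 3 percent)
```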

This essay provides a simple decomposition of the changes in debt levels to shed light on the sources of those changes. The analysis is similar to the decomposition of labor market flows performed by Haltiwanger (2012) and the decomposition of changes in business credit performed by Herrera, Kolar, and Minetti (2011). We use the term “credit change” to refer to the change in household debt: the difference between household debt (D) in the current period, t, and debt in the previous period, t-1, divided by debt in the previous period, t-1:

Credit change(t) = [D(t) - D(t-1)] / D(t-1).

The total household debt is the sum of debt for each household i, so this can also be written as

Credit change(t) = [Σ_i D(i,t) - Σ_i D(i,t-1)] / D(t-1).

Equivalently, one can add the changes in debt for each household i:

Credit change(t) = Σ_i [D(i,t) - D(i,t-1)] / D(t-1).

The key advantage of using household-level data is that one can separate positive changes (credit creation) from negative changes (credit destruction) and compute the change in debt as

Credit change = Credit creation - Credit destruction,

where

Credit creation(t) = Σ_i max[D(i,t) - D(i,t-1), 0] / D(t-1)

and

Credit destruction(t) = Σ_i max[D(i,t-1) - D(i,t), 0] / D(t-1).
These concepts are interesting because they can be linked to different household financial decisions. Credit creation can be linked to additional credit card debt or a new mortgage and credit destruction can be linked to repaying debt or simply defaulting.

As this decomposition makes clear, a stable level of debt (a net change of 0) could be the result of a large credit creation offset by an equally large credit destruction. Or it could indicate no creation and no destruction at all. To differentiate between these cases, it is useful to consider “credit activity” (also called reallocation), which is defined as

Credit activity = Credit creation + Credit destruction.

This is a useful measure because it captures credit activity ignored by the change in total debt.
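To make the decomposition concrete, the sketch below applies the definitions above to two snapshots of household-level debt. The household figures are hypothetical and chosen only to illustrate the mechanics; the essay itself works from a large household credit panel:

```python
# Decompose the change in aggregate household debt into credit creation
# (households whose debt rose) and credit destruction (households whose
# debt fell), each scaled by the previous period's aggregate debt.
# Hypothetical figures for illustration only.

def decompose(debt_prev, debt_curr):
    total_prev = sum(debt_prev.values())
    creation = sum(max(debt_curr[i] - debt_prev[i], 0.0) for i in debt_prev) / total_prev
    destruction = sum(max(debt_prev[i] - debt_curr[i], 0.0) for i in debt_prev) / total_prev
    return {
        "creation": creation,
        "destruction": destruction,
        "change": creation - destruction,     # equals [D(t) - D(t-1)] / D(t-1)
        "activity": creation + destruction,   # reallocation
    }

# Three households: one takes out a new mortgage, one pays debt down, one defaults.
# Households entering or leaving the panel would be recorded with zero debt
# in the period in which they are absent.
prev = {"A": 100.0, "B": 80.0, "C": 60.0}
curr = {"A": 150.0, "B": 70.0, "C": 0.0}
print(decompose(prev, curr))
# creation ~0.21, destruction ~0.29, change ~-0.08, activity 0.50
```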

The second figure shows credit creation, destruction, change, and overall activity. Recall that credit change is the difference between credit creation and destruction, while credit activity is the sum of credit creation and destruction. The credit change shown in the second figure traces the increase in debt before the 2008 crisis, the deleveraging that followed, and the relative stability of debt over the past 3 years. Analysis of debt creation and destruction shows that the expansion of debt was due to above-average creation of debt before the crisis—not insufficient credit destruction; credit destruction was actually slightly above average. Thus, credit activity was extensive during that period, with large amounts of both destruction and creation.

The deleveraging involved a decrease in creation (or origination) of debt: Creation started at nearly 10 percent in the expansion period but dropped below 5 percent after the financial crisis. Credit destruction was not the main contributor to the deleveraging: Destruction did not grow during the deleveraging period; it was actually slightly lower than during the expansion period. Thus, the deleveraging period of 2009-11 saw a very low level of credit activity, mainly due to the small amount of new credit issuance.

Finally, the stability of debt from 2011 to 2013 masked the increasing credit activity since both destruction and creation increased but offset each other. In sharp contrast, during the past year, the stability of debt has been due to very low levels of creation and destruction. In fact, credit activity is currently as low as it was in the middle of the financial crisis: about 9 percent of total household debt.

Overall, this analysis of household debt suggests that reduced credit creation, and not increased credit destruction, has been the key driver of the recent evolution of U.S. household debt. A topic for future investigation is the finding that U.S. households are currently engaging in record low levels of financial intermediation, which is not obvious from simply observing the level of household debt.

Structure and Liquidity in Treasury Markets

Extract from a speech by Governor Powell at the Brookings Institution, Washington. The move to fully electronic trading raises important questions about the benefits of fully automated high-speed trading, which may lead to industry concentration and liquidity fracturing as the arms race continues. So it is a good time for market participants and regulators to collectively consider whether current market structures can be improved for the benefit of all.

Treasury markets have undergone important changes over the years. The footprints of the major dealers, who have long played the role of market makers, are in several respects smaller than they were in the pre-crisis period. Dealers cite a number of reasons for this change, including reductions in their own risk appetite and the effects of post-crisis regulations. At the same time, the Federal Reserve and foreign owners (about half of which are foreign central banks) have increased their ownership to over two-thirds of outstanding Treasuries (up from 61 percent in 2004). Banks have also increased their holdings of Treasuries to meet high-quality liquid asset (HQLA) requirements. These holdings are less likely to turn over in secondary market trading, as the owners largely follow buy and hold strategies. Another change is the increased presence of asset managers, which now hold a bigger share of Treasuries as well. Mutual fund investors, who are accustomed to daily liquidity, now beneficially own a greater share of Treasuries.

Perhaps the most fundamental change in these markets is the move to electronic trading, which began in earnest about 15 years ago. It is hard to overstate the transformation in these markets. Only two decades ago, the dealers who participated in primary Treasury auctions had to send representatives, in person, to the offices of the Federal Reserve Bank of New York to submit their bids on auction days. They dropped their paper bids into a box. The secondary market was a bit more advanced. There were electronic systems for posting interdealer quotes in the cash market, and the Globex platform had been introduced for futures. Still, most interdealer trades were conducted over the phone and futures trading was primarily conducted in the open pit.

Today these markets are almost fully electronic. Interdealer trading in the cash Treasury market is conducted over electronic trading platforms. Thanks to advances in telecommunications and computing, the speed of trading has increased at least a million-fold. Advances in computing and faster access to trading platforms have also allowed new types of firms and trading strategies to enter the market. Algorithmic and high-frequency trading firms deploy a wide and diverse range of strategies. In particular, the technologies and strategies that people associate with high frequency trading are also regularly employed by broker-dealers, hedge funds, and even individual investors. Compared with the speed of trading 20 years ago, anyone can trade at high frequencies today, and so, to me, this transformation is more about technology than any one particular type of firm.

Given all these changes, we need to have a more nuanced discussion as to the state of the markets. Are there important market failures that are not likely to self-correct? If so, what are the causes, and what are the costs and benefits of potential market-led or regulatory responses?

Some observers point to post-crisis regulation as a key factor driving any decline or change in the nature of liquidity. Although regulation had little to do with the events of October 15, I would agree that it may be one factor driving recent changes in market making. Requiring that banks hold much higher capital and liquidity and rely less on wholesale short-term debt has raised funding costs. Regulation has also raised the cost of funding inventories through repurchase agreement (repo) markets. Thus, regulation may have made market making less attractive to banks. But these same regulations have also materially lowered banks’ probabilities of default and the chances of another financial crisis like the last one, which severely constrained liquidity and did so much damage to our economy. These regulations are new, and we should be willing to learn from experience, but their basic goals–to make the core of the financial system safer and reduce systemic risk–are appropriate, and we should be prepared to accept some increase in the cost of market making in order to meet those goals.

Regulation is only one of the factors–and clearly not the dominant one–behind the evolution in market making. As we have seen, markets were undergoing dramatic change long before the financial crisis. Technological change has allowed new types of trading firms to act as market makers for a large and growing share of transactions, not just in equity and foreign exchange markets but also in Treasury markets. As traditional dealers have lost market share, one way they have sought to remain competitive is by attempting to internalize their customer trades–essentially trying to create their own markets by finding matches between their customers who are seeking to buy and sell. Internalization allows these firms to capture more of the bid-ask spread, but it may also reduce liquidity in the public market. At the same time it does not eliminate the need for a public market, where price discovery mainly occurs, as dealers must place the orders that they cannot internalize into that market.

While the changes I’ve just discussed are unlikely to go away, I believe that markets will adapt to them over time. In the meantime, we have a responsibility to make sure that market and regulatory incentives appropriately encourage an evolution that will sustain market liquidity and functioning.

In thinking about market incentives, one observer has noted that trading rules and structures have grown to matter crucially as trading speeds have increased–in her words, “At very fast speeds, only the [market] microstructure matters.” Trading algorithms are, after all, simply a set of rules, and they will necessarily interact with and optimize against the rules of the trading platforms they operate on. If trading is at nanoseconds, there won’t be a lot of “fundamental” news to trade on or much time to formulate views about the long-run value of an asset; instead, trading at these speeds can become a game played against order books and the market rules. We can complain about certain trading practices in this new environment, but if the market is structured to incentivize those practices, then why should we be surprised if they occur?

The trading platforms in both the interdealer cash and futures markets are based on a central limit order book, in which quotes are executed based on price and the order in which they are posted. A central limit order book provides for continuous trading, but it also provides incentives to be the fastest. A trader that is faster than the others in the market will be able to post and remove orders in reaction to changes in the order book before others can do so, earning profits by hitting out-of-date quotes and avoiding losses by making sure that the trader’s own quotes are up to date.
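As a rough illustration of that price-time priority, the toy order book below fills an incoming marketable buy against the lowest-priced resting offers first and, within a price level, against the quote that was posted earliest. It is a simplified sketch of the general mechanism, not a model of any particular trading platform:

```python
# Toy central limit order book with price-time priority: incoming buy
# orders match the lowest-priced resting offers first, and within a
# price level the quote posted earliest trades first.
# Simplified illustration only; real platforms add many more rules.
from collections import deque

class ToyOrderBook:
    def __init__(self):
        self.offers = {}  # price -> deque of (order_id, size); FIFO queue encodes time priority

    def post_offer(self, order_id, price, size):
        self.offers.setdefault(price, deque()).append((order_id, size))

    def buy(self, size):
        """Match a marketable buy against resting offers; return the fills."""
        fills = []
        for price in sorted(self.offers):              # best (lowest) offer price first
            queue = self.offers[price]
            while queue and size > 0:
                oid, avail = queue[0]
                traded = min(size, avail)
                fills.append((oid, price, traded))
                size -= traded
                if traded == avail:
                    queue.popleft()                    # fully filled, leaves the queue
                else:
                    queue[0] = (oid, avail - traded)   # partial fill keeps its place in line
            if not queue:
                del self.offers[price]
            if size == 0:
                break
        return fills

book = ToyOrderBook()
book.post_offer("slow", 100.02, 5)    # posted first at 100.02
book.post_offer("fast", 100.02, 5)    # posted later at the same price
book.post_offer("deep", 100.03, 10)
print(book.buy(8))   # [('slow', 100.02, 5), ('fast', 100.02, 3)]
```

In this setup, a faster trader's edge comes from being able to cancel or reprice a resting quote before an incoming order reaches it, which is exactly the incentive to be fastest described above.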

Technology and greater competition have led to lower costs in many areas of our economy. At the same time, slower traders may be put at a disadvantage in this environment, which could cause them to withdraw from markets or seek other venues, thus fracturing liquidity. And one can certainly question how socially useful it is to build optic fiber or microwave networks just to trade at microseconds or nanoseconds rather than milliseconds. The cost of these technologies, among other factors, may also be driving greater concentration in markets, which could threaten their resilience. The type of internalization now done by dealers is only really profitable if done on a large scale, and that too has led to greater market concentration.

A number of observers have suggested reforms for consideration. For example, some recent commentators propose frequent batch auctions as an alternative to the central limit order book, and argue that this would lead to greater market liquidity. Others have argued that current market structures may lead to greater volatility, and suggested possible alterations designed to improve the situation. To be clear, I am not embracing any particular one of these ideas. Rather, I am suggesting that now is a good time for market participants and regulators to collectively consider whether current market structures can be improved for the benefit of all.

US to Drive Faster Payments

The Federal Reserve System announced the appointment of Federal Reserve Bank of Chicago Senior Vice President Sean Rodriguez as its Faster Payments Strategy Leader. In this role, Rodriguez will lead activities to identify effective approaches for implementing a safe, ubiquitous, faster payments capability in the United States.

Rodriguez will chair the Federal Reserve’s Faster Payments Task Force, composed of more than 300 payment system stakeholders interested in improving the speed of authorization, clearing, settlement and notification of various types of personal and business payments. In addition to leading faster payments activities, Rodriguez will continue to oversee the Federal Reserve’s Payments Industry Relations Program.

“Sean’s leadership experience across payment operations, customer relations and industry outreach is exactly what we need to successfully advance the vision for a faster payments capability in the United States,” said Gordon Werkema, the Federal Reserve’s payments strategy director to whom Rodriguez will report. “His passion has contributed significantly to the momentum behind our initiative to date and we’re confident in his ability to carry our strategy forward in strong partnership with the industry.”

Rodriguez brings more than 30 years of experience with Federal Reserve Financial Services in operations, product development, sales and marketing.  He helped establish the Federal Reserve’s Customer Relations and Support Office in 2001 and served on the Federal Reserve’s leadership team for implementing the Check 21 initiative.  More recently, Rodriguez was instrumental in the design and launch of the Federal Reserve’s Payments Industry Relations Program charged with engaging a broad range of organizations in efforts to improve the U.S. payment system.

Additional information about the Federal Reserve’s Strategies for Improving the U.S. Payment System, including the Faster Payments Task Force, is available at FedPaymentsImprovement.org.

The Federal Reserve believes that the U.S. payment system is at a critical juncture in its evolution. Technology is rapidly changing many elements that support the payment process. High-speed data networks are becoming ubiquitous, computing devices are becoming more sophisticated and mobile, and information is increasingly processed in real time. These capabilities are changing the nature of commerce and end-user expectations for payment services. Meanwhile, payment security and the protection of sensitive data, which are foundational to public confidence in any payment system, are challenged by dynamic, persistent and rapidly escalating threats. Finally, an increasing number of individuals and businesses routinely transfer value across borders and demand better payment options to swiftly and efficiently do so.

Considering these developments, traditional payment services, often operating on decades-old infrastructure, have adjusted slowly to these changes, while emerging players are coming to market quickly with innovative product offerings. There is opportunity to act collectively to avoid further fragmentation of payment services in the United States that might otherwise widen the gap between U.S. payment systems and those located abroad.

Collaborative action has the potential to increase convenience, ubiquity, cost effectiveness, security and cross-border interoperability for U.S. consumers and businesses when sending and receiving payments.

Since the Federal Reserve commenced a payment system improvement initiative in 2012, industry dialogue has advanced significantly and momentum toward common goals has increased. Many payment stakeholders are now independently initiating actions to discuss payment system improvements with one another—especially the prospect of increasing end-to-end payment speed and security. Responses to the Federal Reserve’s Consultation Paper indicate broad agreement with the gaps/opportunities and desired outcomes advanced in that paper. Diverse stakeholder groups have initiated efforts to work together to achieve payment system improvements. There is more common ground and shared vision than was previously thought to exist. We believe these developments illustrate a rare confluence of factors that create favorable conditions for change. Through this Strategies to Improve the U.S. Payment System paper, the Federal Reserve calls on all stakeholders to seize this opportunity and join together to improve the payment system.