The Government Needs Fast Data: Why is the Federal Reserve … – insideBIGDATA

Back in May of this year, the Federal Reserve was deciding whether to hike interest rates yet again. Evercore ISI strategists said in a note: "The absence of any such preparation [for a raise] is the signal and gives us additional confidence that the Fed is not going to hike in June absent a very big surprise in the remaining data, though we should expect a hawkish pause."

Well, they were right. The Federal Reserve ultimately decided to keep its key interest rate at about 5% after ten consecutive meetings during which it was hiked. This raises an important question: Should there ever be very big surprises (or any surprises, for that matter) in the data on which the Fed bases these critical decisions?

In my opinion, the answer is no. There shouldn't ever be a question of making an incorrect economic decision, because the right data is indeed available. But the truth is, the Federal Reserve has been basing most of its decisions on stale, outdated data.

Why? The Fed uses a measure of core inflation to make its most important decisions, and that measure is derived from surveys conducted by the Bureau of Labor Statistics. While the Fed may also have some privileged information the public isn't privy to, surveys by nature take a while to administer. By the time the data is processed and cleaned up, it's essentially already a month old.

Everyone can agree that having faster, more up-to-date data would be ideal in this situation. But the path to getting there isn't linear: It'll require some tradeoffs, a hard look at flaws in current processes, and a significant shift in mindset that the Fed may not be ready for.

Here are some things to consider:

Fast vs accurate: We need to find a happy medium

At some point, the Fed will need to decide whether it's worth trying a new strategy of using fast, imperfect data in place of the data generated by traditional survey methods. The latter may offer more statistical control, but it becomes stale quickly.

Making the switch to using faster data will require a paradigm shift: Survey data has been the gold standard for decades at this point, and many people find comfort in its perceived accuracy. However, any data can fall prey to biases.
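
To make this tradeoff concrete, here's a minimal, purely illustrative Python sketch. It's a toy example of my own, with hypothetical noise levels, publication lag, and threshold; it isn't drawn from any Fed or BLS methodology. It compares a low-noise estimate published with a one-month lag against a noisier estimate available immediately, and reports when each would first show a shift in the underlying signal.

```python
# Toy illustration of the fast-vs-accurate tradeoff. All numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
months = 24

# A "true" monthly inflation signal that shifts upward at month 12.
true_signal = np.where(np.arange(months) < 12, 0.2, 0.5)

# Survey-style estimate: low noise, but published with a one-month lag.
survey_estimate = true_signal + rng.normal(0, 0.02, months)
survey_available = np.arange(months) + 1

# Fast digital estimate: noisier, but available within the same month.
fast_estimate = true_signal + rng.normal(0, 0.05, months)
fast_available = np.arange(months)

def first_reading_above(estimates, available_at, threshold=0.35):
    """Return the month in which a reading above the threshold first becomes visible."""
    for value, month in zip(estimates, available_at):
        if value > threshold:
            return month
    return None

print("Survey data shows the shift in month:", first_reading_above(survey_estimate, survey_available))
print("Fast data shows the shift in month:  ", first_reading_above(fast_estimate, fast_available))
```

The noisier series risks the occasional false signal, but it surfaces the turning point a month earlier. That, in miniature, is the tradeoff the Fed would have to weigh.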

Survey data isn't a silver bullet

There's a commonly held belief that surveys are conducted very carefully and adjusted for biases, while fast data from digital sources can never be truly representative. While this may be the case some of the time, survey biases are a well-documented phenomenon. No solution is perfect, but the difference is that the problems associated with survey data have existed for decades, and people have become comfortable with them. When confronted with the issues posed by modern methods, they are much more risk-averse.

In my mind, the Fed's proclivity toward survey data has a lot to do with the fact that most people working within the organization are economists, not computer scientists, developers, or data scientists (who are more accustomed to working with other data sources). While there's a wealth of theoretical knowledge in this space, there's also a lack of data engineering and data science talent, which may soon need to change.

A cultural shift needs to occur

We need a way to balance both accuracy and forward momentum. What might this look like? To start, it would be great to see organizations like the U.S. Census Bureau, the Bureau of Labor Statistics, and the Bureau of Economic Analysis (BEA) release more experimental economic trackers. We're already starting to see this here and there: For example, the BEA released a tracker that monitors consumer spending.

Traditionally, these agencies have been very conservative in their approach to data, understandably shying away from methods that might produce inaccurate results. But in doing so, they've been holding themselves to an impossibly high bar at the cost of speed. They may be forced to reconsider this approach soon, though. For years, there's been a steady decline in federal survey response rates. How can the government collect accurate economic data if businesses and other entities aren't readily providing it?

When it comes down to it, we've become accustomed to methodologies that have existed for decades because we're comfortable with their level of error. But by continuing to rely solely on these methods, we may actually end up incurring more error as response rates continue to fall. We need to stay open to the possibility that relying on faster, external data sources might be the necessary next step toward making sounder economic decisions.

About the Author

Alex Izydorczyk is the founder and CEO of Cybersyn, the data-as-a-service company making the world's economic data available to businesses, governments, and entrepreneurs on Snowflake Marketplace. With more than seven years of experience leading the data science team at Coatue, a $70 billion investment manager, Alex brings a wealth of knowledge and expertise to the table. As the architect of Coatue's data science practice, he led a team of over 40 people in leveraging external data to drive investment decisions. Alex's background in private equity data infrastructure also includes an investment in Snowflake. His passion for real-time economic data led him to start Cybersyn in 2022.
