I just finished another stop on my “Stress-testing Dos and Don’ts” road show. This week I was in San Diego at the California DFI’s Annual Staff Conference. Inevitably, during the Q&A I was asked for my opinion on the Fed’s latest round of stress-tests. Here are four things I don’t like:
1) The metric under the microscope was a risk-based capital measurement and not just a plain leverage ratio.
It seems to me that since 2008 there’s been enough debate about the appropriate risk weights for various asset categories to cast doubt on the usefulness of established risk-based capital levels. If the Fed’s severe stress scenario is going to change a myriad of other intangible economic variables, shouldn’t they question the validity of the risk-weightings too? Using a simpler equity-to-assets (un-risk-weighted) ratio would have been a better alternative.
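To make the point concrete, here is a toy sketch with an invented balance sheet and assumed Basel-style risk weights (the numbers and weights are mine, for illustration only, not from the Fed’s scenario). Notice that the risk-based ratio moves whenever the weights are debated, while the plain leverage ratio does not:

```python
# Hypothetical balance sheet (illustrative numbers, not real data).
assets = {
    "cash":       100.0,
    "treasuries": 200.0,
    "mortgages":  400.0,
    "comm_loans": 300.0,
}
# Assumed risk weights in the Basel style -- exactly the inputs under debate.
risk_weights = {"cash": 0.0, "treasuries": 0.0, "mortgages": 0.5, "comm_loans": 1.0}

equity = 60.0
total_assets = sum(assets.values())                       # 1000.0
rwa = sum(assets[k] * risk_weights[k] for k in assets)    # 500.0 of risk-weighted assets

leverage_ratio = equity / total_assets   # 6.0% -- immune to the risk-weight debate
risk_based_ratio = equity / rwa          # 12.0% -- doubles or halves with the weights
print(f"leverage {leverage_ratio:.1%}, risk-based {risk_based_ratio:.1%}")
```

Change the mortgage weight from 50% to 100% and the risk-based ratio drops from 12% to 8.6% while the leverage ratio sits still at 6% — which is the whole argument for the simpler measure.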
2) The test appears to only measure credit risk with no apparent consideration given to interest rate risk.
At first this seemed odd, because examiners have recently been pressuring banks of all sizes to run an unprecedented number of additional interest rate stress-tests (higher shocks, twists, steepeners, flatteners, ramps, etc.). Ever since the 2010 IRR advisory, the number of examiner-motivated requests for additional stress-tests has skyrocketed. Yet the Supervisory Stress Scenario barely changes rates or the shape of the curve at all (see graph). Then I remembered that it’s the Fed running these stress-tests, and according to their predictions rates aren’t going anywhere for a while. They may be right, they may be wrong. But aren’t these stress-tests supposed to give us a glimpse of what might happen under circumstances we can’t really foresee? It seems a little bold to predict things like 13% unemployment and a dive in the Dow to 5,668 without any noticeable change in interest rates. I know I would predict bigger changes in rates. But then again, I’m no economist. However, I’m certain I’m just as bad at predicting things as they are.
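For readers who haven’t sat through one of these exams: the rate scenarios examiners ask for are mechanically simple. Here is a minimal sketch (with a made-up yield curve and made-up shock sizes) of two of the scenario types mentioned above, a parallel shock and a steepener:

```python
# Made-up starting curve: tenor in years -> yield in percent.
curve = {0.25: 0.10, 2: 0.30, 5: 0.90, 10: 2.00, 30: 3.10}

def parallel_shock(curve, bp):
    """Shift every point on the curve by the same number of basis points."""
    return {t: y + bp / 100.0 for t, y in curve.items()}

def steepener(curve, bp_long, pivot=2):
    """Leave tenors at or below the pivot alone; push longer tenors up,
    scaling linearly so the longest tenor gets the full shock."""
    longest = max(curve)
    return {t: y + (bp_long / 100.0) * max(0.0, t - pivot) / (longest - pivot)
            for t, y in curve.items()}

up200 = parallel_shock(curve, 200)  # every point +2.00%
steep = steepener(curve, 300)       # 30-year point +3.00%, 2-year unchanged
```

The point isn’t the arithmetic; it’s that banks are routinely asked to run many such scenarios while the Supervisory Stress Scenario itself barely moves the curve.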
3) We only know the macro level variables.
Perhaps I’m among the few who care, since it’s my job to run an A/L model for over 250 community banks. Experience has shown me that the most critical estimates made in A/L modeling include (in no particular order) interest rates, prepayment speeds, credit spreads, default rates, future volume/mix changes, decay rates, beta factors, and early withdrawal rates. Obviously someone can make the case that macro variables like Real GDP Growth, Disposable Income, Unemployment, and the Commercial Real Estate Price Index drive these critical estimates – but how, exactly? There is a HUGE black box between the macro parameters shared by the Fed and the more tangible parameters necessary to actually run a cash-flow model. These black-box relationships are the things that need to be vetted and validated, because last time I checked (in fact, every time I check) there is very little agreement among economists about what these macro parameters really tell us.
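To show what I mean by the black box, here is a deliberately naive sketch of one such bridge: a hypothetical linear mapping from a single macro variable (unemployment) to a single model input (an annual default rate). Every number in it — the base rate, the slope, the 5% kink — is invented. That arbitrariness is exactly the problem:

```python
def default_rate(unemployment_pct, base=0.010, sensitivity=0.004):
    """Hypothetical bridge from a macro variable to a cash-flow model input.
    Assumes defaults start at 1% and rise 0.4% for each point of
    unemployment above 5% -- an invented relationship, not the Fed's."""
    return base + sensitivity * max(0.0, unemployment_pct - 5.0)

baseline = default_rate(5.0)    # 1.0% at "normal" unemployment
stressed = default_rate(13.0)   # 4.2% at the scenario's 13% unemployment
```

Why linear? Why that slope? Why unemployment alone and not CRE prices or disposable income? Until relationships like this are disclosed and validated, the published macro variables don’t tell a modeler much.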
4) The “results” were made public.
Don’t get me wrong, I’m a big believer in making the regulatory process as transparent as possible. But regulators already share more data and financial information about banks than about any other industry I can think of. You could easily bury yourself in data simply by downloading it from the FDIC, the Fed, the FFIEC, or the SEC. If investors, depositors, or taxpayers (or whatever audience this stress-test was trying to speak to) really wanted to learn something about these 19 largest banks, there is already a mound of data ready and waiting for them to dig into. Unfortunately, your average Joe won’t dig through data like that. He’s far more likely to want to know which banks “passed” or “failed” some sort of test – which is exactly how these results are being interpreted by the average Joe and by the popular press. Just look at the defensive responses coming from the banks that “failed”: they are trying to educate everyone about how much more complicated the process is, and that a pass/fail grade really isn’t reasonable (and I agree, it isn’t).
Sadly, I think the Fed did recognize the potential for these results to be misinterpreted. Right in the introduction they say, “The Federal Reserve’s projections for the 19 BHCs under the Supervisory Stress Scenario should not be interpreted as expected or likely outcomes for these firms, but rather as possible results under hypothetical, highly adverse conditions.” But I think there was concern about more than just a mere “chance” of misinterpretation. Here are three similar statements that appear in the document (presented in order; any bolding or underlining reproduces the original document’s own emphasis, not mine):
- From page 3:
…it is important to recall that the Federal Reserve’s stress scenario projections…are not forecasts… (no emphasis in the original)
- From page 8:
It is important to note that the Supervisory Stress Scenario is not a forecast… (bolding is original, not mine)
- Then finally from page 43:
It is important to note that the Supervisory Stress Scenario is not a forecast… (bolding and underlining are original, not mine)
Nice try, but unless you’re a real data geek like me, or someone with an above-average financial background, all of this is just “blah, blah, blah”. No amount of bolding or underlining is going to change that. Sadly, the only thing most people want to know is, “who passed and who failed?”