Every year at this time we’re inundated with articles that either offer predictions for the year to come or look back at past predictions. The look back is often especially entertaining to data geeks like me. More often than not the predictions are wrong (some more wrong than others). The fact is we’re generally terrible at predicting the future. However, every so often someone gets it right and we believe for a moment that this person has found the “right” model to successfully predict future events. That moment in the sun usually doesn’t last very long.
In the world of bank risk management, the fascination with modeling exposures is no different. I’ve written about this frequently over the past few years, including my series on “Seven ways to back-test your model”. There is a constant quest to back-test and improve the predictive capabilities of our models. Recently the standard-bearers for this quest have been regulators, accountants, and auditors. They have intensified their focus on back-testing, and their attention has turned to the comparison back-test in particular.
The comparison back-test can be a fantastic way to learn more about the usefulness of a risk model. Sadly, however, most reviews of such comparisons boil down to whether the model was “right” or “wrong”. That’s too bad, because most of the time (if not all the time) the model is wrong, and there usually isn’t a way to change it so that it will be “right”. The best we can do is learn and improve, but we have to toss aside the notion that we’ll somehow fix our risk models and get the “correct” answer.
In our quarterly A/L BENCHMARKS report we deliver to clients a short report that shows the comparison back-test. We’re often asked, “How close should the forecast be?” Reviewers want to know if there is some sort of tolerance limit that will allow us to judge the forecast to be “right” or “wrong”. My first response to such questions has always been no. The forecast will almost always be wrong. The value of the comparison back-test lies in the process, not in calculating the final difference. Still, I have to admit I’m intrigued by the differences. How far off are the forecasts that clients submit? Each quarter they use either a static or dynamic forecast to model their interest rate risk sensitivity…how accurate is the forecast? I know they’ll be off, but by how much? I’m most curious about differences when market conditions are changing versus when they are rather stable. After all, we’d like our models to be most accurate when times are changing.
I collected the forecast projection data from over 200 of our bank clients. Each quarter we use either a static (flat balance sheet) or dynamic forecast to run their earnings stress-test. For each bank I compared the projected Total Assets amount (from one year ago) to the actual Total Assets amount, then calculated the difference as a percentage of the projected amount. Overall the differences are smaller than you might expect, but remember this is a projection of Total Assets. It is probably the easiest overall target to hit compared to more volatile performance measures like margin and net income, or more specific balance sheet categories like total loans and total deposits (even the mix within those categories would be harder to project accurately).
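To make the calculation concrete, here is a minimal sketch of how a projection difference like this could be computed. The data and column names are hypothetical, purely for illustration; they aren’t drawn from our actual report.

```python
import pandas as pd

# Hypothetical data: one row per bank, with the Total Assets projection
# made a year ago and the actual Total Assets reported today ($000s).
banks = pd.DataFrame({
    "bank": ["Bank A", "Bank B", "Bank C"],
    "projected_total_assets": [500_000, 1_200_000, 850_000],
    "actual_total_assets":    [520_000, 1_150_000, 880_000],
})

# Difference as a percentage of the projected amount. A positive value
# means the projection came in too low (actual exceeded the forecast).
banks["diff_pct"] = (
    (banks["actual_total_assets"] - banks["projected_total_assets"])
    / banks["projected_total_assets"] * 100
)

print(banks[["bank", "diff_pct"]])
```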
Here is a quick overview of each environment. Note that when I talk about the “average” projection difference I’m referring to the median, which is technically not the average, but for this analysis using the median makes things a little clearer.
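If you want to reproduce this kind of summary for your own back-test results, a sketch like the one below would do it. One assumption to flag: I’m describing the “typical range” here with 10th and 90th percentile bounds, which is an illustrative choice on my part, not a prescribed cut-off.

```python
import numpy as np

# Projection differences (in %) for a group of banks, e.g. all
# static-forecast banks in a given rate environment (made-up values).
diff_pct = np.array([3.1, -0.8, 5.2, 2.7, 9.4, 1.9, 4.4, -1.2, 6.3, 3.5])

# Use the median rather than the mean so a few outlier banks
# don't drag the "average" difference around.
median_diff = np.median(diff_pct)

# One way to describe a "typical range": trim the extremes with
# percentiles (the 10th/90th bounds here are an assumption).
low, high = np.percentile(diff_pct, [10, 90])

print(f"median: {median_diff:.2f}%, typical range: {low:.2f}% to {high:.2f}%")
```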
When rates were rising
In the second quarter of 2005 the average Fed Funds rate was 2.94%. By the second quarter of 2006 the average was 4.91%, an increase of nearly +200bp. On average, banks using a static forecast were 3.20% too low on their projection, with a typical range from –1.40% up to 9.81%. Banks using a dynamic forecast were an average of 2.06% off, with a typical range of –1.75% to 6.80%.
When rates were falling
Between the second quarter of 2007 and the second quarter of 2008 the average Fed Funds rate fell 316bp, from 5.25% down to 2.09%. On average, projections were slightly farther off. Banks using a static forecast were 4.70% too low, with a typical range between 0.09% and 11.45%. The average projection difference for banks using a dynamic forecast was 2.34%, with typical differences ranging from –3.01% up to 7.15%.
When rates were stable
From the second quarter of 2011 to the second quarter of 2012, while long-term rates dropped quite a bit, short-term rates remained quite stable (and historically low). The average Fed Funds rate was at or below 0.25%. The average projection difference for banks using a static forecast was 2.41%. For banks using a dynamic forecast the average difference was slightly closer at 1.73%.
What’s the takeaway?
As I mentioned before, we’re looking at projections of Total Assets, which tend to be a little easier to predict. For most banks, even in the best of times, Total Assets doesn’t grow by more than 10% to 15%. I think the takeaway here is that if you are consistently running a comparison back-test on your model’s one-year projection, you should aim for a Total Assets difference of no more than 4% to 6%. If you are outside of that range on a consistent basis, there’s probably some important balance sheet activity that your model is missing. Occasional quarters with a difference higher than that will happen, but don’t sweat them. Every now and then there’s bound to be an unusual quarter or two.
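If you want to turn that rule of thumb into something a reviewer can apply each quarter, a check like the following would flag consistent misses while letting the occasional unusual quarter slide. The 6% tolerance and the “more than two of the last four quarters” trigger below are my illustrative assumptions, not a regulatory or industry standard.

```python
# Recent one-year Total Assets projection differences (in %), newest last.
recent_diffs = [3.8, 7.2, 6.9, 8.1]

TOLERANCE = 6.0   # upper end of the 4%-6% rule of thumb (assumption)
LOOKBACK = 4      # quarters to review (assumption)
MAX_MISSES = 2    # more misses than this counts as "consistent" (assumption)

misses = sum(abs(d) > TOLERANCE for d in recent_diffs[-LOOKBACK:])
if misses > MAX_MISSES:
    print("Consistently outside tolerance: look for balance sheet "
          "activity the model may be missing.")
else:
    print("Occasional misses only: no need to sweat them.")
```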