The OCC recently published its Fall 2014 Semiannual Risk Perspective. One highlight in the document was information the OCC compiled from supervisory reviews performed in 4Q 2013 and 1Q 2014. The OCC's goal was to collect and analyze the range of practices used to measure IRR. The data collected included repricing assumptions for five deposit categories in a +/-100 bp shock analysis. A link to their data is shown to the right.
Over a year and a half ago we collected the same type of data from over 200 of our client banks (here's a link to that post). It's interesting to compare the two data sets, as they look very similar. The only noticeable difference is in the medians for Interest Checking and Savings: the A/L BENCHMARKS data is lower than the OCC data for both deposit types. That difference likely reflects the fact that the data was gathered in two different time frames almost two years apart.
Ever since 2012Q4 there has been increased industry and regulatory scrutiny of core deposit beta factors. The perception now is that these deposits are much more sensitive than originally believed. Since beta factors are largely subjective assumptions, I think banks have been, on average, modeling them as more sensitive, and I think this is wise. Many banks aren't as asset sensitive as they think (that was the subject of another previous post...here).
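To make the mechanics concrete: a beta factor is simply the fraction of a market rate shock assumed to pass through to a deposit rate. Here's a minimal sketch of that arithmetic in Python; the deposit categories and beta values are hypothetical placeholders, not the OCC's or the A/L BENCHMARKS figures.

```python
def repriced_rate_change_bp(shock_bp: float, beta: float) -> float:
    """Portion of a market rate shock (in basis points) assumed to
    pass through to a deposit rate, given a beta factor."""
    return shock_bp * beta

# Hypothetical beta assumptions for a +100 bp shock -- illustrative only
betas = {
    "Interest Checking": 0.25,
    "Savings": 0.30,
    "Money Market": 0.50,
}

for category, beta in betas.items():
    change = repriced_rate_change_bp(100, beta)
    print(f"{category}: beta {beta:.2f} -> +{change:.0f} bp deposit rate change")
```

A bank assuming a 0.25 beta on interest checking would model only a 25 bp rate increase in a +100 bp shock; raising that beta assumption toward 0.50 roughly doubles the modeled interest expense impact, which is why higher betas make a bank look less asset sensitive.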
How do your current core deposit beta factors compare?