It may be particularly passé to jump on the “the Internet is making us stupider” bandwagon, but I have to admit, I think there’s some truth to it. I thoroughly enjoyed this book. There is no shortage of experts and studies highlighting the negative effects of the deluge of information found on the web. The author does a great job of giving us a behind-the-scenes look at how our brains process information. He devotes a good deal of discussion to information overload, including this passage:
“…development of personal digital information systems and global hypertext seems not to have solved the problem identified but exacerbated it.” In retrospect, the reason for the failure seems obvious. By dramatically reducing the cost of creating, storing, and sharing information, computer networks have placed far more information within our reach than we ever had access to before. And the powerful tools for discovering, filtering, and distributing information developed by companies like Google ensure that we are forever inundated by information of immediate interest to us—and in quantities well beyond what our brains can handle. As the technologies for data processing improve, as our tools for searching and filtering become more precise, the flood of relevant information only intensifies. More of what is of interest to us becomes visible to us. Information overload has become a permanent affliction, and our attempts to cure it just make it worse.
Carr, Nicholas (2011). The Shallows: What the Internet Is Doing to Our Brains. W. W. Norton & Company, Kindle Edition, p. 170.
I find the whole topic particularly interesting because I think our A/L modeling niche is a microcosm of society’s experience with the web. Increasingly, in the ALM world for community banks, more and more experts are telling us about new and interesting stress tests to run, and technology is making it easier and easier to produce the data and information. Sadly, all this information yields little new knowledge. In fact, we’re running so many stress tests that we’re becoming numb to them and starting to ignore the results altogether. When running interest rate risk stress tests, for example, we’re asked to focus on so many different potential changes in rates that we often forget to focus our attention on the key assumptions that drive the analysis. Our understanding of the resulting information is limited as the lines blur between assumptions based on data and assumptions based on estimates. Which is which? Do we understand the measurement process deeply enough to appreciate the difference? Or are we just hanging out in the shallows?
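To make the point about assumptions concrete, here is a deliberately over-simplified sketch, with made-up numbers of my own choosing, not anything from Carr’s book or from an actual ALM model. It runs the familiar parallel rate-shock ladder against a toy balance sheet under two different deposit-beta assumptions:

```python
# Hypothetical toy balance sheet -- illustrative numbers only, not a real bank.
ASSETS = 500_000_000          # earning assets, assumed to reprice fully with market rates
DEPOSITS = 450_000_000        # non-maturity deposits
BASE_ASSET_YIELD = 0.045      # 4.5% starting asset yield
BASE_DEPOSIT_COST = 0.010     # 1.0% starting cost of deposits

def projected_nii(rate_shock, deposit_beta):
    """One-year net interest income under a parallel rate shock (decimal form).

    deposit_beta is the assumed share of the shock passed through to deposit
    rates -- exactly the kind of estimate that tends to drive the result.
    """
    asset_income = ASSETS * (BASE_ASSET_YIELD + rate_shock)
    deposit_expense = DEPOSITS * (BASE_DEPOSIT_COST + rate_shock * deposit_beta)
    return asset_income - deposit_expense

shocks_bp = [0, 100, 200, 300, 400]        # the familiar parallel-shock ladder
for beta in (0.25, 0.75):                   # two plausible deposit-beta estimates
    print(f"deposit beta = {beta:.2f}")
    for bp in shocks_bp:
        nii = projected_nii(bp / 10000, beta)
        print(f"  +{bp:>3} bp shock -> NII ≈ ${nii:,.0f}")
```

Under either beta, each additional 100 bp scenario nudges the projection by a predictable increment, while moving the beta assumption from 0.25 to 0.75 changes the answer at the far end of the ladder by far more than any single added scenario does. That estimate, not the fifth or sixth rate shock, is where our attention belongs.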