Banks have long been built on massive amounts of data. But as "big data" gets even bigger, legacy storage systems are growing increasingly inefficient and even obsolete, according to industry experts.
In fact, financial institutions will have to completely rethink and rebuild the way they store data to deal effectively with the crush of information they possess, said Barbara Murphy, CMO of Panasas, a Sunnyvale, Calif.-based data storage provider. "The challenge that the banking industry has in consolidating different data types is, can you take all the different file types and have them rest in a single location?" she said. "You need the scalability that can handle that, which traditional systems don't have. Infinite scale is now a requirement. There needs to be an entirely different architecture."
Financial services firms now recognize the emergent need for quick access to centralized data, and they are not content to propagate the traditional model of storing data in multiple silos, Murphy added. "Our customers are asking for one huge, big system where all the data they could ever want is on that system," she said. "That's a very different model [from] the traditional storage system model."
John Macaluso, SVP of bank solutions for Fiserv (Brookfield, Wis.), said it is the need for data access that compounds the big-data storage problem. "Historically, data was accumulated and stored, but now it needs to be accessible," he explained. "The biggest problem banks have is that data, in many cases, is siloed and in disparate locations. The magic is being able to bring that together and allow that data to become useful for them."