Researchers at Rice University in Houston have come up with an approach to networking that may hold implications for enterprise uses of big data.
Backed by a grant from the National Science Foundation, computer networking researchers at the school are creating a customized, energy-efficient optical network they say can feed rivers of data into Rice’s supercomputers.
"Advances in computing and sensing technologies have led to a similar problem across many disciplines in science and engineering today," T.S. Eugene Ng, associate professor of computer science and principal investigator on the project, said in a statement announcing the project. "Experiments produce mountains of data, and there is often no efficient way to process that data to make discoveries and solve problems."
It’s a problem many enterprises face as they try to jury-rig their networks to cope with massive, ever-growing data flows. Ng said via email that he envisions the new network as a potential boon for researchers and enterprises that use technologies like Hadoop to process big data but struggle to keep up with mushrooming data volumes.
The new network, dubbed “big data and optical lightpaths-driven networked systems research infrastructure,” or BOLD for short, will combine electronic and optical switches. It also will feature a new type of optical switch that has none of the moving parts associated with traditional optical switches. Qianfan Xu, an assistant professor of electrical and computer engineering who specializes in creating ultra-compact optical devices on chips, plans to build the new silicon-photonic switches in his laboratory.
[Read about the critical role of text analytics in big data's transformation of business in "Text Analytics Key To Unlocking Big Data Value."]
While Xu works on the new switch, Ng’s team is working on related tools that will allow networks to take advantage of the combination of electronic, optical and silicon-photonic switches.
"To make use of these three types of technology, we need an intelligent layer that can analyze data flow and demand, all the way up to the application layer, and dynamically allocate network resources in the most efficient way," Ng said in the announcement.
Ng is working with two co-principal investigators on optimizing network design and performance. Meanwhile, another co-principal investigator, Bill Symes, who figures to benefit greatly from the new technology, is helping with algorithm design and testing BOLD’s impact on tackling big data problems.
Symes, a professor of earth science, directs an industry-funded consortium that solves complex seismic data processing challenges. One such operation, used in 3-D seismic analysis, requires two time-dependent simulations--one running forward in time, the other backward. The resulting computation can generate hundreds of terabytes of intermediate data that has to be loaded, cached, recalled, modified and saved repeatedly.
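To see why this pattern strains storage and networks, consider a toy sketch of the forward/backward pairing. This is a hypothetical illustration only, not Symes’ actual seismic code: the update rules are placeholders standing in for wave-equation time steps, but the access pattern is the point. Every forward state must be saved, and the backward sweep then recalls those states in reverse order, which is what generates the hundreds of terabytes of intermediate data in real 3-D runs.

```python
import numpy as np

def forward_simulation(u0, n_steps, dt=0.01):
    """Toy forward time-stepping: each state depends on the previous one.
    Saving every intermediate state is what balloons storage in real runs."""
    states = [u0]
    for _ in range(n_steps):
        u = states[-1]
        # placeholder update rule standing in for a wave-equation step
        states.append(u + dt * np.roll(u, 1))
    return states

def backward_pass(states, dt=0.01):
    """Toy backward sweep: walks the saved forward states in reverse,
    correlating each with an adjoint field, the access pattern that
    forces intermediate data to be cached and recalled repeatedly."""
    adj = np.zeros_like(states[-1])
    image = np.zeros_like(states[-1])
    for u in reversed(states[:-1]):
        adj = adj + dt * np.roll(adj, -1) + u  # placeholder adjoint step
        image += adj * u                       # cross-correlation imaging
    return image

grid = np.ones(64)                      # tiny 1-D "grid" for illustration
saved = forward_simulation(grid, n_steps=100)
result = backward_pass(saved)
print(len(saved), result.shape)
```

In a production seismic workload each saved state is a full 3-D volume, so the `states` list becomes terabytes of data that must move between compute nodes and storage on every pass, exactly the traffic BOLD’s optical paths are meant to carry.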
Ng said Symes’ work is a good example of the kind of data-intensive computations that BOLD can streamline. He envisions BOLD improving the performance of computationally intensive research at Rice for many years.
Meanwhile, there are numerous applications for large corporations. For instance, Ng said via email that large Internet service providers rely on Hadoop for big data management tasks involving log processing, business intelligence apps and the deployment of new products and platforms. Those tasks, he said, will be made quicker and simpler by the BOLD network.
[Get deep insight into how big data is managed, including governance and analytics, in "Navigating the Big Spectrum of Big Data's Solutions" at Interop New York, Sept. 30-Oct. 4.]