Kurt Marko

Contributing Editor


Intel's Data Center Plan: More Than Chips

For networks, Intel preached the gospel of SDN, OpenFlow and NFV (using Vyatta as an example) and showed the Intel Open Network Platform, a reference server chassis for building OpenFlow switches and controllers.

Essentially a 2U rack-mount server optimized for network applications, it uses Xeon processors, the FM6700 switch silicon that came with Intel's acquisition of Fulcrum Microsystems, and Intel's 89xx communications chipset, all controlled by Wind River's embedded OS running an OpenFlow software stack, complete with a Quantum (OpenStack networking) plugin. The platform already has one design win, from Quanta, with more on the way.
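
For context on what an "OpenFlow software stack" does in practice, the control model is the same one used by Open vSwitch, one of the open source projects Intel backs: the switch hands its forwarding decisions to an external controller over a TCP session. A minimal sketch, assuming a Linux host with Open vSwitch installed and a controller already listening; the bridge, NIC and controller address below are placeholders for illustration, not details of Intel's platform:

```python
import subprocess

def ovs(*args):
    """Run an ovs-vsctl subcommand and fail loudly on error."""
    subprocess.run(["ovs-vsctl"] + list(args), check=True)

# Create a software bridge and attach a physical NIC to it
# (br0 and eth1 are placeholder names for this sketch).
ovs("add-br", "br0")
ovs("add-port", "br0", "eth1")

# Delegate br0's forwarding decisions to an external OpenFlow
# controller; the address and port are purely illustrative.
ovs("set-controller", "br0", "tcp:192.0.2.10:6633")

# Print bridge and controller status for verification.
ovs("show")
```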

Intel had few concrete details to share about storage and big data, although it did reveal impressive results on a data-sorting benchmark: the Intel-optimized Hadoop distribution (yes, Intel participates in Apache projects), running on E5-series Xeons with 10 GbE NICs and SSDs, cut the time to sort 1TB of data from more than four hours to seven minutes.
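
The article doesn't name the benchmark, but the standard way to time a 1TB Hadoop sort is the TeraSort suite that ships with the stock MapReduce examples, so results like Intel's are typically reproduced along these lines (the jar path and HDFS directories below are placeholders that vary by distribution):

```python
import subprocess

# Path to the stock MapReduce examples jar; the exact location
# differs by Hadoop distribution and version (placeholder here).
EXAMPLES_JAR = "/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar"

def hadoop(*args):
    """Invoke the hadoop CLI and fail loudly on error."""
    subprocess.run(["hadoop"] + list(args), check=True)

# 1. Generate 1TB of input: 10 billion rows of 100 bytes each.
hadoop("jar", EXAMPLES_JAR, "teragen", "10000000000", "/benchmarks/tera-in")

# 2. Sort it; the elapsed wall-clock time is the headline number.
hadoop("jar", EXAMPLES_JAR, "terasort", "/benchmarks/tera-in", "/benchmarks/tera-out")

# 3. Confirm the output is globally sorted.
hadoop("jar", EXAMPLES_JAR, "teravalidate", "/benchmarks/tera-out", "/benchmarks/tera-report")
```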

Of course, this is still Intel, so there was plenty of news about processor roadmaps and new SoCs optimized for everything from microservers and storage arrays to network appliances and HPC grids. But the takeaway after a day of briefings is that Intel wants a greater hand in defining how its components are used within hyperscale microservers, network switches, SDN controllers, HPC appliances using MIC (many integrated cores, aka Xeon Phi), and storage arrays -- that is, devices at every level of the data center technology stack. Furthermore, Intel wants to exert more influence over, and contribute to, application architectures and technology to ensure that they run best on Intel hardware, obviously in hopes that software performance will drive hardware sales.

It's an extremely aggressive agenda, and one Intel isn't hubristic enough to tackle on its own; hence its participation in so many open source projects and industry consortia, including Open Compute, OpenFlow, OVS, OpenStack and OpenDaylight.

It's clear that Intel sees both an opportunity and a threat as the data center makes a generational change to cloud-like, massively distributed, software-controlled infrastructure, and it doesn't want to miss this one the way it missed the mobile client transition. But we're still early in this cycle. I am encouraged by Intel's direction, but the company has plenty of powerful competitors, such as Cisco and EMC, pushing their own agendas in areas where Intel has never been a force, so it's unlikely Intel's dream will play out exactly as scripted.

Still, I think the company is pointed in the right direction and seems willing to make major changes that could undermine some of its cash-cow businesses, such as Xeon CPUs and chipsets, to ensure long-term success in the data center. And if it scrapes a few elbows with other big IT vendors, so much the better for IT customers.

Full disclosure: The event was sponsored by Intel and the vendor paid for all travel and accommodations.

Kurt Marko is an IT pro with broad experience, from chip design to IT systems.

