Greg Ferro


Packets Are the Past, Flows Are the Future

Speeds and feeds have been the networking story for the last two decades. Vendors have battled to see which device can push the most packets per second, shave the most off Ethernet frame latency, or offer up the biggest MAC tables. And as Ethernet speeds increased, so did feeds: 100-Mbps Ethernet is now 10 Gbps, while port counts have jumped from tens to hundreds per chassis.

However, networking is changing because the future is about flows. Packets and ports are no longer the metrics by which the network will be judged. Application clients and servers aren't aware of packets or ports. The OS opens a session with a remote host, makes the connection via the transport layer using TCP, and dispatches a stream of data over the network. (That's why TCP is defined as a stream protocol: it propagates a stream of data.)
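The point that applications see streams, not packets, can be illustrated with a short sketch. This toy example (hypothetical addresses and payload, loopback only) opens a TCP connection and writes bytes; the kernel's TCP/IP stack, not the application, decides how that stream is cut into packets.

```python
import socket

# A minimal sketch: the application sees only a byte stream, never packets.
# The kernel's TCP/IP stack segments the stream into packets behind the scenes.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # ephemeral port on loopback
server.listen(1)
host, port = server.getsockname()

client = socket.create_connection((host, port))
conn, _ = server.accept()

# One write of one stream; nowhere does the code mention a packet.
client.sendall(b"GET / HTTP/1.1\r\nHost: example\r\n\r\n")
data = conn.recv(4096)                 # the peer reads bytes, not packets
print(data.decode())

for s in (client, conn, server):
    s.close()
```

Neither endpoint ever handles a packet boundary; that abstraction lives entirely below the socket API.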


That stream, or flow, runs through the IP process, which segments the stream into a consistent packet data structure that is easily routed through network paths. Packets enable resilient network designs by creating uniform, stateless, independent data chunks. Packet networking has been the preferred method for networking for the last 20 years (but it's not the only one). The use of packets remains a sound technical concept for abstracting the forwarding path from the payload: Focus on the network, don't be concerned about the data.
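The chunking described above can be sketched in a few lines. This is a toy illustration, not a real TCP/IP implementation: a stream is chopped into fixed-size pieces, each tagged with a sequence number so it can travel independently and still be reassembled in order (the 8-byte segment size is an arbitrary assumption).

```python
MSS = 8  # hypothetical maximum segment size, in bytes

def packetize(stream: bytes, mss: int = MSS) -> list[tuple[int, bytes]]:
    """Split a byte stream into (sequence_number, payload) chunks."""
    return [(seq, stream[seq:seq + mss]) for seq in range(0, len(stream), mss)]

def reassemble(packets: list[tuple[int, bytes]]) -> bytes:
    """Order chunks by sequence number and rebuild the original stream."""
    return b"".join(payload for _, payload in sorted(packets))

flow = b"a stream of application data"
packets = packetize(flow)
# Chunks may arrive out of order; each one carries enough state
# (its sequence number) to be reassembled independently.
assert reassemble(list(reversed(packets))) == flow
```

Each chunk is self-describing, which is exactly what makes packet networks resilient: no path through the network needs to carry per-flow state.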

But consider common use cases for the network. The user wants to see a Web page load smoothly and quickly. The Web developer wants the Web browser to request an image, an HTML file or a script file from the Web server and have it load rapidly. For a sys admin, the OS opens a socket to manage a connection to a remote system and to stream data through it. These use cases are all about flows of data, but the network supports the delivery as a series of packets.

And consider the hosts at either end of the connection. In the past, servers were always located on a single port of a switch. Clients were located at a specific WAN connection or on a specific switch in the campus LAN. Packets ingressed and egressed the network at known locations--the ports.

The time of packets and ports is passing. In virtualized server environments, the network ingress for a given application isn't physical, can change regularly, and is configured by the VM administrator. Users are moving from desktops to handsets and laptops. The network must now deliver data flows from server to client. That's a fundamental change in network services. Clearly, networking by packets doesn't match the use cases, yet today's network engineer is concerned only with packets and ports.

Communication between application clients and servers has always been about flows, not packets, but the technical challenges of configuring the network have hidden that issue behind the technical abstraction of packets. As technology has improved, packet forwarding has become commoditized. The port density and performance of the current generation of network devices is good enough for the majority of the market.

And that's why emerging technologies such as software-defined networking (SDN) and OpenFlow are gaining acceptance. They are designed to enable applications and data flows, not just to move packets from point A to point B. In a subsequent post, I'll look at how SDN is a business technology while OpenFlow is a networking technology.
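To make the flow-oriented model concrete, here is a toy flow table in the spirit of OpenFlow (this is not the actual OpenFlow protocol or any controller's API; all addresses, ports and action names are hypothetical). A controller installs match-to-action entries keyed on flow fields, and the device forwards whole flows rather than reasoning packet by packet.

```python
from collections import namedtuple

# A flow is identified by fields of the classic tuple, not by any one packet.
FlowMatch = namedtuple("FlowMatch", "src_ip dst_ip dst_port proto")

# Hypothetical entries a controller might install: match -> forwarding action.
flow_table = {
    FlowMatch("10.0.0.5", "93.184.216.34", 80, "TCP"): "output:port2",
    FlowMatch("10.0.0.7", "10.0.0.9", 5432, "TCP"): "output:port7",
}

def forward(match: FlowMatch) -> str:
    """Look up the action for a flow; unmatched flows go to the controller."""
    return flow_table.get(match, "send-to-controller")

print(forward(FlowMatch("10.0.0.5", "93.184.216.34", 80, "TCP")))
```

The lookup happens once per flow rather than being recomputed for every packet, which is the shift in mindset the article describes.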
