
IBM Pulse 2013: Lessons Learned

Even if you are not an IBM customer, there are lessons to be learned from the company's recent Pulse 2013 conference in Las Vegas. From my perspective, three subjects stood out: the continued diffusion of the center of computing, the need for open standards in cloud computing and the spread of data-driven operational IT intelligence. Before we explore what they are and why they are important, a little orientation is in order.

Although IBM's Smarter Planet initiative was not emphasized (and I cannot recall it even being mentioned), it was there in spirit. The company also continued to build on the "three I's" theme of prior years: Everything will be Instrumented, Interconnected and Intelligent. To enable this, IT infrastructures need to have visibility, control and automation.

Among other things, the "three I's" mean that more physical infrastructures will join the digital information world. Visibility (seeing what is happening), control (managing IT services as they are provided) and automation (eliminating routine and repetitive tasks) are critical as the three I's become ubiquitous. IBM has been consistent in emphasizing those concepts for years, and rightly so: frankly, they are mantras that all IT shops (not just IBM customers) should internalize.

At Pulse 2013, IBM linked cloud computing, the mobile enterprise and the smarter physical infrastructure with the need for security intelligence (to protect against threats) and operational big data analytics (to uncover new insights).

Continued Dispersion of Enterprise Computing

When we think of the center of computing, we still tend to picture the corporate data center. That may remain true for legacy, mission-critical enterprise systems, but the heliocentric view of the data center as the sun of the computing universe stopped being accurate long ago and grows less accurate every year. Even though traditional data centers will remain important, they are only part of the computing universe, and IT has to contend with a growing range of related solutions and services.

Take access, for example. User access to online applications was once provided by dumb terminals hardwired to a computer in the same (or a nearby) building. The replacement of the dumb terminal with the PC changed the locus of computing a little. The advent of laptops moved part of computing outside corporately controlled boundaries, but usually still required a plug-in connection (such as in a hotel room). Recent moves to even greater mobility with tablets and smartphones mean untethered access over Wi-Fi or cellular networks. Commingle personal and enterprise data on the same device, add access to SaaS and other applications in the public cloud, and IT faces both security and management control challenges.

Take the sharing of applications. Server virtualization not only enables the consolidation of many applications onto the same physical server, but also makes it easier for applications to move from one physical server to another. That move may even be between data centers, for workload balancing, disaster recovery or cost reasons. This application-mobility shell game means there is no longer one single, static locus of computing. We are now in a dynamic world that provides many benefits, but also ups the ante on what is needed to manage it.
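To make the consolidation half of that concrete, here is a minimal sketch in Python of first-fit placement of virtual machines onto physical hosts. It is purely illustrative; the names and the capacity unit are hypothetical, not any vendor's actual placement algorithm, but it shows the basic packing idea that lets many applications share fewer physical servers:

    # Minimal first-fit packing of VM demands onto identical hosts.
    # All names and the capacity unit are hypothetical illustrations.
    def place_vms(vm_demands, host_capacity):
        hosts = []  # each entry: (free_capacity, [vm names])
        for name, demand in vm_demands.items():
            for i, (free, vms) in enumerate(hosts):
                if demand <= free:
                    hosts[i] = (free - demand, vms + [name])
                    break
            else:  # no existing host fits: bring another server into play
                hosts.append((host_capacity - demand, [name]))
        return {f"host-{i}": vms for i, (_, vms) in enumerate(hosts)}

    # Ten small workloads consolidate onto four hosts of capacity 8;
    # live migration is what lets this mapping change while VMs run.
    print(place_vms({f"vm-{n}": 2 + n % 3 for n in range(10)}, host_capacity=8))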

Now, add in a smarter physical infrastructure. The Cartagena, Colombia ocean container shipping terminals (Manga and Contecar) had a very interesting story to tell at IBM Pulse 2013. Container ships (such as those originating from Asia) use the Panama Canal to carry huge containers to, say, the East Coast of the United States and to Europe.


Containers that arrive in Cartagena via the Panama Canal may take a different ship to their final destination. This is a many-to-many problem in which containers from many arriving ships have to be moved to many departing ships (although each container goes only from one ship to another).
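The shape of that problem is easy to sketch in code. Assuming, hypothetically, that each inbound container's manifest records the vessel it must depart on, planning starts by grouping arrivals by departure:

    from collections import defaultdict

    # Hypothetical manifest rows: (container id, arriving vessel, departing vessel).
    manifest = [
        ("CNT-001", "asia-express", "eu-star"),
        ("CNT-002", "asia-express", "us-coastal"),
        ("CNT-003", "pacific-run",  "eu-star"),
        ("CNT-004", "pacific-run",  "us-coastal"),
    ]

    # Many arriving ships feed many departing ships, but each container
    # appears exactly once: it moves from one ship to one other ship.
    moves = defaultdict(list)
    for container, arriving, departing in manifest:
        moves[departing].append((container, arriving))

    for vessel, loads in sorted(moves.items()):
        print(vessel, "will load", loads)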

But the intricate dance of ships, cranes, trucks and storage slots is more than an algorithmic exercise in cost optimization. It is also about reliability, safety and customer service. Physical-infrastructure data, such as the container number, the truck's license plate and gate video, is captured automatically whenever a container enters or leaves the port, and GPS underpins a smart rail system. More traditional IT information, such as work orders for each vessel arrival, also feeds the system.
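As a rough illustration of what such gate capture might look like in software (the field names below are my assumptions, not the terminals' actual schema), each arrival or departure becomes a small structured event that can later be matched against work orders:

    from dataclasses import dataclass
    from datetime import datetime, timezone

    # Hypothetical gate-event record: container number, truck plate and a
    # pointer to gate video. The schema itself is invented for illustration.
    @dataclass
    class GateEvent:
        container_id: str
        truck_plate: str
        direction: str        # "in" or "out" of the port
        timestamp: datetime
        video_uri: str        # pointer to the gate camera footage

    event = GateEvent(
        container_id="CNT-001",
        truck_plate="ABC-123",
        direction="in",
        timestamp=datetime.now(timezone.utc),
        video_uri="s3://gate-cam/2013-03-07/clip-0001.mp4",
    )
    print(event)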

In Cartagena, this mix of traditional data center and distributed capabilities leads to a new breed of enterprise mission-critical system, and the most critical hardware component is the Wi-Fi network. Think of the management issues that arise, including cloud computing (now having to manage Wi-Fi and sensor-based information with the necessary reliability), security (now encompassing a non-wired network) and mobility (machine-generated, sensor-based data from devices that may be fixed in place but sit outside the data center).

This is just one example. Is it any wonder that IBM (as well as many other vendors) recognizes that the journey to the cloud will be a long one, and that virtualization alone does not make a cloud?
