IBM Pulse 2013: Lessons Learned

Three compelling themes emerged from IBM’s annual gathering: the continued diffusion of the data center, the need for open cloud standards and the rise of data-driven IT intelligence.

David Hill

March 27, 2013


Even if you are not an IBM customer, there are lessons to be learned from the company's recent Pulse 2013 conference in Las Vegas. From my perspective, three subjects stood out: the continued diffusion of the center of computing, the need for open standards in cloud computing and the spread of data-driven operational IT intelligence. Before we explore what they are and why they are important, a little orientation is in order.

Although IBM's Smarter Planet initiative was not emphasized (and I cannot recall its being mentioned), it was there in spirit. The company also continued to build on the "three I's" theme from prior years: Everything will be Instrumented, Interconnected and Intelligent. To enable this, IT infrastructures need visibility, control and automation.

Among other things, the "three I's" mean that more physical infrastructures will join the digital information world. Visibility (seeing what is happening), control (managing IT services as they are provided) and automation (eliminating routine and repetitive tasks) are critical as the three I's become ubiquitous. IBM's consistency in emphasizing those concepts for years is resonant because, frankly, they are mantras that all IT organizations (not just IBM customers) should internalize.

At Pulse 2013, IBM linked cloud computing, the mobile enterprise and the smarter physical infrastructure with the need for security intelligence (to protect against threats) and operational big data analytics (to uncover new insights).

Continued Dispersion of Enterprise Computing

When we think of the center of computing, we still think of the corporate data center. Although that may still be true for legacy enterprise mission-critical systems, the heliocentric view of the data center as the sun of the computing universe has not been true for a long time and is becoming less so. Even though traditional data centers will remain important, they are only part of the computing universe, and IT has to contend with a growing range of related solutions and services.

Take access, for example. User access to online applications was once provided by dumb terminals hardwired to a computer in the same (or a nearby) building. The replacement of the dumb terminal with the PC changed the locus of computing a little. The advent of laptops moved part of computing outside corporately controlled boundaries, but usually still required a plug-in connection (such as in a hotel room). The recent move to even greater mobility with tablets and smartphones means untethered access over Wi-Fi or cellular networks. Co-mingle personal and enterprise data on the same device with access to SaaS and other applications in the public cloud, and IT faces both security and management control challenges.

Take the sharing of applications. Server virtualization not only enables the consolidation of many applications on the same physical server, but also makes it easier for applications to move from one physical server to another. That move may even be between data centers for workload balancing, disaster recovery or cost reasons. This application mobility shell game means that there is no longer one single, static locus of computing. We are now in a dynamic world that provides many benefits, but also ups the ante on what is needed to manage it.
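To make the workload-balancing idea concrete, here is a minimal Python sketch of the kind of placement decision a virtualization manager makes. All names (Host, place_workload, the host and application labels) are hypothetical; real virtualization managers weigh CPU, network, affinity rules and much more.

```python
from dataclasses import dataclass, field

@dataclass
class Host:
    """A physical server that can run virtualized workloads."""
    name: str
    capacity_gb: int                               # memory available for guests
    workloads: dict = field(default_factory=dict)  # workload name -> GB used

    @property
    def used_gb(self) -> int:
        return sum(self.workloads.values())

def place_workload(hosts, name, needed_gb):
    """Naive balancer: pick the least-utilized host with enough headroom."""
    candidates = [h for h in hosts if h.capacity_gb - h.used_gb >= needed_gb]
    if not candidates:
        raise RuntimeError(f"no host can fit {name} ({needed_gb} GB)")
    target = min(candidates, key=lambda h: h.used_gb / h.capacity_gb)
    target.workloads[name] = needed_gb
    return target.name

# Hosts may sit in different data centers; the workload does not care.
hosts = [Host("dc1-esx01", 256), Host("dc1-esx02", 256), Host("dc2-esx01", 512)]
for app, mem in [("crm", 64), ("mail", 32), ("analytics", 128)]:
    print(app, "->", place_workload(hosts, app, mem))
```

Even this toy version shows why management gets harder: the answer to "where is the application running?" changes every time the balancer runs.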

Now, add in a smarter physical infrastructure. The Cartagena, Colombia, ocean container shipping terminals (Manga and Contecar) had a very interesting story to tell at IBM Pulse 2013. Container ships originating in, say, Asia pass through the Panama Canal carrying huge numbers of containers bound for the East Coast of the United States and for Europe.


Containers that arrive in Cartagena via the Panama Canal may take a different ship to their final destination. This is a many-to-many problem in which containers from many arriving ships have to be moved to many departing ships (although each container goes only from one ship to another).
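For flavor, here is a minimal Python sketch of that many-to-many bookkeeping: inbound containers are grouped by the vessel they must depart on. The manifest data and function name are hypothetical, and a real terminal operating system would also schedule cranes, trucks and yard slots.

```python
from collections import defaultdict

# Hypothetical inbound manifest: (container_id, arriving_vessel, departing_vessel)
arrivals = [
    ("MSCU1001", "Asia Star",   "Euro Queen"),
    ("MSCU1002", "Asia Star",   "Coastal Run"),
    ("TGHU2001", "Pacific Sun", "Euro Queen"),
    ("TGHU2002", "Pacific Sun", "Coastal Run"),
]

def build_move_plan(arrivals):
    """Group containers by the vessel they must depart on: many arriving ships
    feed many departing ships, but each container makes exactly one move."""
    plan = defaultdict(list)
    for container, _inbound, outbound in arrivals:
        plan[outbound].append(container)
    return dict(plan)

for vessel, boxes in build_move_plan(arrivals).items():
    print(f"{vessel}: load {boxes}")
```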

But the intricate dance of ships, cranes, trucks and storage slots is more than an algorithmic exercise in cost optimization. It is also about reliability, safety and customer service. Physical infrastructure data--such as container numbers, truck license plates and container video--are gathered automatically at the gate where a container enters or leaves the port, and GPS underpins a smart rail system. More traditional IT information, such as work orders for each vessel arrival, is also part of the system.

In Cartagena, a mix of traditional data center and distributed capabilities leads to a new breed of enterprise mission-critical system, and the most critical hardware component is the Wi-Fi network. Think of the management issues that arise, including cloud computing (such as having to manage Wi-Fi and sensor-based information with the necessary reliability), security (which now must cover a non-wired network) and mobility (such as machine-generated, sensor-based data from devices that may be fixed in place but sit outside the data center).

This is just one example. Is it any wonder that IBM (as well as many other vendors) recognizes that the journey to the cloud will be a long one, and that virtualization alone does not make a cloud?

Open Cloud Standards

At Pulse, IBM emphasized its support of open cloud standards, including OpenStack, a collaborative effort to build an open source cloud computing platform (IaaS). It also backs TOSCA (Topology and Orchestration Specification for Cloud Applications), a PaaS-oriented standard developed under OASIS (the Organization for the Advancement of Structured Information Standards).

Why are cloud standards important? Let's look at what TOSCA is attempting. Recall that the movement to the cloud is really a codename for IT as a service, which is more complicated than it sounds. For application and infrastructure cloud services, TOSCA will enable the description of a service, the relationships among its parts and the operational behaviors of the service (such as deploying, patching and shutting down)--all on an interoperable basis.

Interoperability means that the service description is independent of the supplier that created it--that is, anyone, including any particular cloud provider, can use the service. For example, a customer that uses SAP CRM will be able to leverage TOSCA in a cloud environment, running SAP CRM without having to worry about the underlying hardware.
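TOSCA defines its own template format (XML in the 1.0 specification), so the following Python sketch is purely conceptual. It models the three ideas above--a service's parts, the relationships among them and their lifecycle operations--using hypothetical class names; an actual TOSCA template would look quite different.

```python
from dataclasses import dataclass, field

@dataclass
class NodeTemplate:
    """One component of a cloud service (roughly, a TOSCA node template)."""
    name: str
    node_type: str
    operations: dict = field(default_factory=dict)  # lifecycle op -> artifact

@dataclass
class Relationship:
    """How two components relate (e.g., an app 'connects_to' a database)."""
    source: str
    target: str
    rel_type: str

@dataclass
class ServiceTemplate:
    nodes: list
    relationships: list

    def run(self, operation):
        """Invoke one lifecycle operation on every node that defines it,
        without caring which provider's infrastructure executes it."""
        for node in self.nodes:
            artifact = node.operations.get(operation)
            if artifact:
                print(f"{node.name}: {operation} via {artifact}")

crm = ServiceTemplate(
    nodes=[
        NodeTemplate("crm_app", "ApplicationServer",
                     {"deploy": "install_crm.sh", "patch": "patch_crm.sh"}),
        NodeTemplate("crm_db", "Database",
                     {"deploy": "create_schema.sql"}),
    ],
    relationships=[Relationship("crm_app", "crm_db", "connects_to")],
)
crm.run("deploy")
```

The point of the standard is that a template like this, once written, could be handed to any compliant cloud for deployment.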

These standards are needed because an enterprise's cloud will not only be its private cloud, but will also use public cloud services (becoming a so-called hybrid cloud). Without open standards, a vendor-neutral cloud ecosystem that supports portable deployment of new applications and smoother migration of existing applications to one or more compliant clouds would not be feasible.

Please note that an enterprise may use multiple public clouds to serve different purposes, such as running different applications, providing a backup or disaster recovery service, or simply as an alternative source to a current service provider. Standards make this open cloud world feasible (as a necessary, but not sufficient, condition, as mathematicians are wont to say).

Why do IBM and many others endorse such standards? Because they enable a dramatic cloud market expansion and greater business opportunities. IBM feels comfortable that, with its breadth of products and services, the opportunities to win business will far outweigh any losses.

Note that the dispersion of computing requirements within an enterprise discussed earlier is only the internal dimension--i.e., the computing that IT has direct contact with. The external dimension--computing resources not directly under IT's control--will continue to expand beyond where it already is, thanks to open cloud standards. The four walls of the traditional data center have been blown away (logically, not physically) to embrace a dispersed and distributed perspective. No wonder efforts such as IBM's SmartCloud portfolio have to be so broad, as there are a lot of issues to be tackled.

IT Operational Analytics

IBM has invested $17 billion in analytics in recent years. A lot of this effort has gone into customer-facing solutions, but key efforts have also addressed how internal IT organizations can better manage their application and information infrastructures. Proactive management--including preventing problems before they occur and delivering faster, less intrusive, lower-impact fixes when problems do occur--is necessary to truly offer IT as a service. Running the IT train with fingers crossed daily, hoping that nothing will fail and providing "emergency room" responses to the problems that do occur, is unacceptable in an IT-as-a-service world.

IBM is working to provide analytics for successful management. For example, one demo at Pulse 2013 focused on how integrated network, customer and endpoint information can be used to manage the customer mobile experience through the use of IBM's Netcool Network Analytics, Cognos and PureData. Another demo showed how IBM analytics software can provide early problem detection and problem isolation to resolve issues before end users are impacted.
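IBM did not expose the algorithms behind those demos, but the core idea of early problem detection can be illustrated with a toy example: flag metric samples that fall far outside a rolling statistical baseline. Everything below (the function name, the latency data) is hypothetical.

```python
from statistics import mean, stdev

def detect_anomalies(samples, window=10, threshold=3.0):
    """Flag points far outside a rolling baseline -- a toy stand-in for the
    statistical early-warning analytics described above."""
    alerts = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma and abs(samples[i] - mu) > threshold * sigma:
            alerts.append((i, samples[i]))
    return alerts

# Hypothetical response-time samples (ms); the spike should be caught
# before end users start calling the help desk.
latencies = [102, 99, 101, 98, 103, 100, 97, 104, 101, 99, 100, 250, 102]
print(detect_anomalies(latencies))  # -> [(11, 250)]
```

Production tools layer problem isolation on top of detection--correlating the anomalous metric with topology to point at a likely root cause--but the detection step is where "proactive" begins.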

Note that enabling IT to manage assets more efficiently and effectively is a key component in the move to true global computing. The tools may be internally facing and used by IT, but they support IT's ability to deliver stronger and better SLAs as part of the move to IT as a service.

Mesabi Musings

Country singer Carrie Underwood was the headliner at an IBM-sponsored concert for Pulse 2013 attendees. Her signature song from her new album is "Blown Away." Perhaps IBM should adopt it as a theme song for what needs to be done in the data center. The old way of thinking of IT as a self-contained, monolithic entity that will be the single focus of the cloud needs to be blown away.

Expanding the IT computing viewpoint both internally (such as incorporating smart physical infrastructures) and externally (such as being able to incorporate multiple clouds where appropriate) can lead to IT as a service. That will hopefully fulfill the twin objectives of greater efficiency in IT delivery and a business empowered to innovate more effectively for better outcomes. If so, the focus IBM placed on creating opportunities that lead to better enterprise outcomes would be well on its way to being fulfilled.

IBM is a client of David Hill and the Mesabi Group.

