Mike Fratto

Network Computing Editor



Standardized Cloud APIs Aren't Possible

Rackspace President Lew Moorman drew a line in the sand for cloud standards: On one side, he put those companies and commenters that think cloning Amazon's APIs is the way forward. On the other side are those that think standards need to be open and developed independently of any particular vendor. I'm definitely in the latter camp, so I'm keeping good company, but the real question is: What, exactly, needs to be standardized?

The discussion seems to be around cloud APIs. Many want to standardize the semantics, headers, method calls, and so on. That would make integrators' jobs easier: they could write to a single API and have it work anywhere. But you don't have to peek too far under the covers to see that isn't possible or even desirable.
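To see why, consider a rough sketch of what the "same" action--launching a compute instance--looks like against two invented providers. The endpoints, headers and field names below are made up for illustration and aren't drawn from any real cloud API:

```python
# Hypothetical sketch: the "same" action -- launch a compute instance --
# expressed against two made-up providers. Endpoints, headers and field
# names are illustrative only, not real provider APIs.
import json
import urllib.request


def launch_on_provider_a(token: str) -> str:
    """Provider A wants a flat JSON body and a custom auth header."""
    body = json.dumps({"flavor": "m1.small", "image": "ubuntu-12.04", "count": 1})
    req = urllib.request.Request(
        "https://api.provider-a.example/v2/servers",
        data=body.encode(),
        headers={"X-Auth-Token": token, "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["server"]["id"]


def launch_on_provider_b(api_key: str) -> str:
    """Provider B nests the request differently and signs with a query parameter."""
    body = json.dumps({"instance": {"size_slug": "1gb", "os": "ubuntu", "region": "us-east"}})
    req = urllib.request.Request(
        f"https://cloud.provider-b.example/instances?apikey={api_key}",
        data=body.encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["id"]
```

Same intent, different vocabulary, different authentication, different payload shape--and that's before either provider adds a feature the other doesn't have.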


Standards create their own special form of lock-in. Yes, I said it, and I will say it again: Standards equal lock-in. Standards define a set of mutually agreed-upon ways of doing something. For standards to be useful, they have to remain fairly static and unchanged for long periods of time. Imagine what would happen if standards changed rapidly--developers would always be writing to a moving target. Think about what happened with HTML and the number of versions that were published, all of which browser vendors had to support. It was and is a mess. The basic protocols that power the Internet--IP, TCP, UDP, DNS and so on--are valuable because they've remained largely unchanged for years and years. As a result, we're locked into them. They're handcuffs with a soft edge.

Think about it--we're so locked into IPv4 that moving to IPv6 is going to be a huge challenge for vendors, service providers, application developers and users--basically, anyone who uses the Internet. We aren't going to move to IPv6 until forced to do so, and it will be a painful process. Isn't that one of the lock-in boogeymen?

That's OK. We accept that lock-in because of the enormous benefits we gain: a stable, reliable and widely adopted set of interfaces and protocols upon which other things, such as HTTP, can be developed and standardized. If the industry hadn't willingly locked into the IP/TCP/UDP standards, we wouldn't have a global Internet--at least, not one where you can go anywhere and get online.

So what in cloud needs to be standardized? The APIs, method calls, formats and other application-layer stuff? No. That's too high level and too service-specific, and prohibitively limits what commercial or open-source developers can do. Even if such standards could be defined, they wouldn't help because each vendor has its own features that it wants to provide. Integrators would still create service-specific integration.

Standards set a low bar that everyone has to meet: You must be this tall to ride this ride. Table stakes, call it what you want. Sure, there are common things--actions--that everyone wants to perform on a cloud, like spinning up a new instance, defining a network and provisioning storage. But a standardized interface limits what vendors can offer within the action or the methods used to complete a request. That's not useful. I don't think there are building blocks simple enough to standardize at that level that would still provide any real value.

If standards include the ability to extend the standardized methods to perform proprietary things like provisioning storage with or without thin provisioning--an interesting option--then the service API is still going to be custom for each service. If each cloud provider implemented a standard API in addition to its own API for its own features, the result would be exactly where cloud APIs are today. Integrators would still have to support each provider individually or support only the minimal set of functions defined in the standard, which is suboptimal. Nothing changes. What do you gain? Nothing. Nada. Zip.
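Here's a rough sketch, with invented extension names, of why a standard-plus-extensions API still leaves integrators writing provider-specific code:

```python
# Hypothetical sketch of a "standard" provisioning call that allows vendor
# extensions. The extension keys are invented; the point is that the caller
# still has to know which provider it is talking to.
def build_volume_request(provider: str, size_gb: int) -> dict:
    request = {"action": "create_volume", "size_gb": size_gb}  # the standardized part
    if provider == "provider-a":
        # Provider A exposes thin provisioning through its extension namespace.
        request["ext:provider-a"] = {"thin_provisioned": True}
    elif provider == "provider-b":
        # Provider B has no thin provisioning, but offers replication instead.
        request["ext:provider-b"] = {"replica_count": 2}
    return request
```

The "standard" part is the least interesting part of the request; the differentiation all lives in the extensions, which is exactly where it lives today.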

I think what needs to be standardized are universal agreements on the fundamental building blocks: communications protocols such as WS-* services, container formats like JSON or XML, response formats and so on. That will make integration easier for everyone and provide a good foundation for providers to innovate on top of. Perhaps a quality standard is also needed--one that defines the behaviors cloud providers need to embrace. Practices such as never deprecating an API, supporting all API versions, and returning quality feedback and response codes are--dare I say it--best practices. At the very least, those practices define expected behavior and will separate good API stewards from those that aren't. API stability is far more important than common method calls.
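As a rough illustration of that kind of low-level agreement--the field names here are assumptions, not a proposed spec--a shared response envelope might look something like this, with the payload left entirely to the provider:

```python
# Hypothetical sketch of a common response envelope with explicit versioning.
# Only the envelope is shared; the payload is provider-specific.
import json


def make_response(api_version: str, status: str, code: int, payload: dict) -> str:
    """Wrap any provider-specific payload in a shared, stable envelope."""
    envelope = {
        "api_version": api_version,  # never removed; old versions stay supported
        "status": status,            # "ok" or "error", from an agreed vocabulary
        "code": code,                # numeric code drawn from a shared registry
        "data": payload,             # provider-specific content, not standardized
    }
    return json.dumps(envelope)


# Two providers can return very different payloads inside the same envelope.
print(make_response("2013-07", "ok", 200, {"server": {"id": "abc123"}}))
print(make_response("2013-07", "error", 429, {"retry_after_seconds": 30}))
```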

Guess what, kids--integration is hard. Get over it. It doesn't matter too much how many developers are on a project or how many lines of code there are. It matters that a cloud service provides value to customers. If cloud software providers make integration difficult or apply unacceptable (legal) restrictions on API use, then developers will flee and the cloud service provider will wither.

Here's what I expect integrators will do: They'll create an abstraction layer that sits between the cloud services and the application stack. It will define a common interface facing outward and translate that interface to each respective cloud provider, keeping the provider-specific details hidden from view. That abstraction layer will either be written by the integrator or lean on a public library to handle the details.
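A minimal sketch of that kind of abstraction layer, with invented class and provider names, might look like this:

```python
# Minimal sketch of an abstraction layer: a common interface facing the
# application, with per-provider adapters hidden behind it. Class, method
# and provider names are invented for illustration.
from abc import ABC, abstractmethod


class CloudProvider(ABC):
    """The common, outward-facing interface the application codes against."""

    @abstractmethod
    def create_instance(self, size: str, image: str) -> str: ...

    @abstractmethod
    def delete_instance(self, instance_id: str) -> None: ...


class ProviderAAdapter(CloudProvider):
    def create_instance(self, size: str, image: str) -> str:
        # Translate the generic request into Provider A's flavor/image
        # vocabulary and call its API here (omitted).
        return "provider-a-instance-id"

    def delete_instance(self, instance_id: str) -> None:
        pass  # call Provider A's terminate endpoint (omitted)


class ProviderBAdapter(CloudProvider):
    def create_instance(self, size: str, image: str) -> str:
        # Map "size" onto Provider B's slug names before calling its API (omitted).
        return "provider-b-instance-id"

    def delete_instance(self, instance_id: str) -> None:
        pass  # call Provider B's destroy endpoint (omitted)


def get_provider(name: str) -> CloudProvider:
    """The application asks for a provider by name; the details stay hidden."""
    adapters = {"provider-a": ProviderAAdapter, "provider-b": ProviderBAdapter}
    return adapters[name]()
```

The application codes against the common interface; swapping providers means swapping adapters, not rewriting the stack.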

Standardize cloud APIs? Nah, that's not possible.

Mike Fratto is editor of Network Computing. You can email him, follow him on Twitter, or join the Network Computing group on LinkedIn. He's not as grumpy as he seems.

