Joe Onisick

Keep Old Apps Private, Make New Apps Public

Private and public clouds have many elements in common, including high degrees of virtualization and automation and a usage-based service model. But that doesn't mean an enterprise application will run equally well in either environment. Most legacy applications are better off staying in your data center. At the same time, the public cloud should be the first place you consider deploying new applications. Here's why.

Traditional enterprise applications are better suited to the private cloud. These applications are typically built in a silo-centric fashion and are tightly coupled to the underlying hardware, particularly homegrown applications. Don't let virtualization fool you; even if the applications are virtualized, they're still silo-centric. The only difference is that the hardware is now virtual. You're still working on the outdated model of one application, one OS.

Cloud promoters tout the ability to migrate applications from a private to a public cloud to improve resiliency and business continuity, but the fact is, the technical requirements for such migrations are steep. You have to address Layer 2 adjacency, latency and storage considerations. Cold migrations (that is, with the virtual machine powered down) will require compatible infrastructure and configuration locally and in the chosen cloud, which presents its own set of requirements and complexities.

Legacy applications will require heavy lifting in the form of architectural redesign and code customization, if not a complete rewrite, to be migrated to the public cloud. In a legacy data center architecture, applications get high availability through hardware redundancy and software features like clustering and fault tolerance. This can provide uptime levels of 99.999% or better for critical apps. In a public cloud, the infrastructure itself typically provides 99.95% uptime if design requirements are met. To extend that uptime to match what you'd get in your own data center, you'd have to redesign the application to fit the architectural requirements of your chosen public cloud.
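The gap between those two availability figures is easy to underestimate until you translate it into downtime. A quick calculation makes the point:

```python
# Downtime budget per year implied by an availability percentage.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def downtime_minutes(availability_pct: float) -> float:
    """Minutes of allowed downtime per year at the given availability."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

# Five nines (traditional enterprise HA): about 5.3 minutes per year.
print(round(downtime_minutes(99.999), 1))  # 5.3

# 99.95% (a typical public cloud SLA): about 262.8 minutes per year.
print(round(downtime_minutes(99.95), 1))   # 262.8
```

In other words, a 99.95% SLA permits roughly fifty times the annual downtime of five nines, which is why matching data center uptime in a public cloud requires redesigning the application for redundancy rather than relying on the provider's infrastructure alone.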

Beyond uptime, applications will need to be reworked to take advantage of the elastic scale of cloud. Traditional enterprise applications aren't built to expand and contract based on real-time demand. The fluid nature of cloud computing provides inherent advantages for scale, reliability and flexibility, but those are not plug-and-play abilities for legacy code.
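To make the elasticity point concrete, here is a minimal sketch of the kind of demand-driven scaling decision a cloud-native application is built around; the function and parameter names are hypothetical, and a real deployment would delegate this to the cloud provider's autoscaling service:

```python
import math

def desired_instances(current_load: float,
                      capacity_per_instance: float,
                      min_instances: int = 1,
                      max_instances: int = 10) -> int:
    """Return the instance count needed to serve current_load,
    clamped to an allowed range. Legacy apps assume a fixed
    footprint; cloud-native apps expect this number to change."""
    needed = math.ceil(current_load / capacity_per_instance)
    return max(min_instances, min(max_instances, needed))

# 450 requests/sec against 100 requests/sec per instance -> 5 instances.
print(desired_instances(450, 100))  # 5

# Demand drops off overnight -> scale back to the floor.
print(desired_instances(30, 100))   # 1
```

The code itself is trivial; the hard part the article describes is that legacy applications often cannot tolerate instances appearing and disappearing underneath them, because state, sessions, and licensing were all designed for a fixed set of servers.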

Write for Public

The flip side of this coin is the applications that are ideal for public cloud environments. New applications with little to no dependencies on legacy code or data are ideal. If the service is still to be defined, or being written from scratch, you should be looking for reasons not to place it in the public cloud rather than vice versa. Apps that are purpose-built for the cloud can gain a great deal from public cloud scale and geographical diversity. If you do build it on local infrastructure, that decision should be driven by solid reasons such as security and compliance or cost--or because you've already built a large-scale private cloud that offers many of the public cloud's benefits.

End users are accustomed to Web-based latency with many of the applications and services they use for things like photos and email. This means that low to moderate latency shouldn't cause user experience to suffer--as long as the application itself does not add unnecessary drag. As long as the data can be stored in close proximity to users to avoid unnecessary data access latency, most apps can be written to perform well in the cloud. These applications will be more flexible, scalable and accessible if designed originally for cloud deployment.

Deploying applications in a public cloud environment with a service-centric view will also make it easier to address bring-your-own-device initiatives. The ubiquitous access of a public cloud and a service delivery model make it easier for users to access information from any device. This can be either a supplement to or a replacement for application and desktop virtualization for BYOD purposes. With services deployed in an accessible public cloud, native apps can be written for mobile devices and browsers.

Applications are not created equal, and neither are the underlying platforms that run them. While private and public clouds share common features and characteristics, they are not identical. Weigh the requirements of each application to ensure you choose the correct deployment model.

Joe Onisick is the Founder of Define the Cloud. You can follow his angry rants on Twitter @jonisick.
