ONUG conference highlights the tension between innovation and interoperability with existing infrastructure.
At this week's Open Networking Users Group conference, IT leaders from large organizations talked about updating their infrastructure with automation and new technologies. At the same time, they stressed the importance of preserving existing applications and infrastructure. Much of this reflects a desire to have it all, which may be impossible to achieve.
ONUG is heavily represented by large financial institutions, which are well known for aspiring to mimic cloud service providers. That ambition is driven mostly by their large scale and their interest in building networks from disaggregated ODM “white-box” switches that run software from third parties.
However, a common lament expressed at the conference by infrastructure leaders at these companies was that they want to make “new IT” work with the IT infrastructure they have today, and also to have interoperability between vendors.
Are innovation and interoperability compatible?
Being innovative while working with existing systems may be very hard to achieve. Standards bodies such as the IEEE and IETF have enabled a large amount of interoperability and helped make the Internet what it is. But the IT organizations involved in ONUG want even more: interoperability between hardware and software from the various vendors supplying technologies such as software-defined WAN and SDN controllers. They want to avoid lock-in, but they also want a rich set of features.
A variety of ONUG working groups have defined use cases that lay out their interoperability requirements. That's a worthy goal, but buyers need to recognize the tradeoff: interoperability comes from products that have settled on a stable set of features, so demanding it may mean accepting less innovation.
Many end users at both ONUG and last week's Interop 2016 in Las Vegas said that when adopting new computing models, the hard part is changing people and processes, not technology.
Reluctance to adopt public cloud illustrates this issue. Large financial firms cite regulatory compliance, security, costs, and compatibility with existing apps. There's some truth to this, but the difficulty of changing internal processes likely plays a large role as well. At Interop, IT pros at small and midsize organizations also expressed reluctance to adopt public cloud services, even though their firms face fewer regulatory restrictions and scale requirements.
The difficulty in adopting innovative technologies, and the related people-and-process issues, may be the result of over-managing risk. Is acting too conservatively actually hurting IT shops?
An example is the question of multi-cloud versus betting on a single cloud provider. A rational person, acting to manage risk, wants to rely on multiple cloud providers. This seems to make vendors compete on terms, which benefits the buyer. But in practice, according to Tayloe Stansbury, CTO at Intuit, it imposes additional costs: a customer must understand multiple security models (AWS, Google, and Microsoft), programming models, management models, and contracts, he said during an ONUG session.
What if you just bet on the one provider that best fits your needs? If you're right, you run your organization efficiently and reap the benefits. There's a chance you may be wrong, but that's unlikely if you prepared properly. And if you are wrong, you incur the cost of porting to another platform, which is acceptable, since you saved all the resources that would have been wasted dealing with three providers. Preparing for too many contingencies exacts a price. One should bet on good outcomes instead of merely minimizing bad ones.
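This bet can be framed as back-of-envelope expected-cost arithmetic. The sketch below is illustrative only; all figures are hypothetical assumptions, not numbers from the conference:

```python
# Hypothetical expected-cost comparison: betting on a single cloud provider
# (accepting some chance of a costly migration later) versus carrying the
# ongoing overhead of three providers. All numbers are made up for illustration.

def expected_cost(base, overhead_per_provider, providers,
                  p_wrong=0.0, porting_cost=0.0):
    """Annualized cost: base cloud spend, plus per-provider overhead for
    maintaining separate security/programming/management models, plus the
    probability-weighted cost of porting if a single-provider bet goes wrong."""
    return base + overhead_per_provider * providers + p_wrong * porting_cost

# Single provider: 20% chance the choice is wrong and a migration costs 50 units.
single = expected_cost(base=100, overhead_per_provider=10, providers=1,
                       p_wrong=0.2, porting_cost=50)   # 100 + 10 + 10 = 120

# Three providers: no migration risk, but triple the operational overhead.
multi = expected_cost(base=100, overhead_per_provider=10, providers=3)  # 130

print(single, multi)  # under these assumptions, the single bet wins
```

The point is not the specific numbers but the shape of the comparison: multi-cloud overhead is certain and recurring, while the porting cost is uncertain and one-time, so a well-prepared single bet can come out ahead.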
Ultimately, enterprises have to figure out what's most important for their business goals and understand the tradeoffs. There's the old IT adage of three constraints: among speed, reliability, and cost, choose two. If you want everything, you end up with nothing. I spoke with an IT manager at one of the largest money management firms in the US whose experience illustrates this. Management asks him to make an IT decision based on lowering costs, so he optimizes the deployment on price, but he is later asked to make it faster. He may be able to make it cheap and fast. But later still, he's asked to make it reliable. Now he's out of choices, since he can't make it cheap, fast, and reliable.
At the end of the conference, a poll showed just how conservative the audience was. The panelists asked which attendees were adopting 25 Gigabit Ethernet in the data center. Few hands went up. This speed, supported by switches such as the newest Cisco Nexus 9000, is being adopted by many innovative cloud-native infrastructure organizations.
I thought attendees would be at the forefront of adopting these new Ethernet speeds, since they talk about embracing the modern technologies used by web-scale companies. Instead, a surprising number have stuck with their existing port speeds and equipment. Although web companies such as Yahoo have saturated their network ports running modern workloads like Hadoop and are often first to upgrade network speeds, large enterprises like the financial firms don't have the same requirements, at least not yet. Thus, they are choosing to keep their older network devices. They may aspire to behave like web-scale companies, but in practice they are not quite there.
So perhaps the “pick two of speed, cost, or reliability” adage can be supplemented with “pick two of innovation, legacy compatibility, or risk management.” That's the reality of managing a large, complex IT environment with legacy equipment and software. It's great to aspire to adopt new, innovative technologies, but real-life requirements and budgets always set the agenda. Pockets of innovation are always possible, but adopting change across the board is a challenge.