Open Standards Vs. Open Source

The process for creating new networking standards, as exemplified by the IETF, is more complex and challenging than that of open source projects.

Russ White

September 2, 2015

In my last blog post, I examined the recent commercialization and hence rapid growth of the open source software movement. This trend provides some stark contrasts to the open standards movement, particularly as embodied in the Internet Engineering Task Force.

While there is a strong argument to be made that the IETF began on a more commercial footing than the open source movement, it has evolved into a community movement as well as a commercial one. One specific reason for this moderation is how immediately the work translates into commercial application for those actually doing it.

While the folks working on open source projects quite often have a direct opportunity to profit from their work, open standards are often once removed from any commercial activity. A Border Gateway Protocol (BGP) developer working on some new extension to the protocol, for instance, will have some commercial interest in the changes (especially if he or she is directly tied to marketing and sales, which is not necessarily the case in a larger company). Operators may have some commercial interest in simplifying their network operations or perhaps in deploying a new feature.

The BGP feature, in both cases, is the means to an end, rather than a sellable end in itself; the profit is “once removed” from the work of designing the feature itself. Further, while there is some chance that interoperability will increase the profit potential, there’s an offsetting value being given to competitors “for free” through the open standards process itself.

The IETF: A slow-moving monster?

The “once removed” nature of the open standards process appears, from the outside, to result in a slow process that is slowing down further. While an open source project can kick off, grab some funding, and produce software that is deployed in the real world in what seems to be a short period of time, the process of producing a new standard seems to take years. The days of small, lightweight, easy-to-read standards, like the original BGP and Open Shortest Path First (OSPF) specifications, seem to be gone forever.

Any given protocol requires a plethora of drafts (for instance, at the time of this writing there are some 18 drafts related to Ethernet Virtual Private Networks outstanding). Because of the scope of some of these systems, one person can just barely follow all of any single set of drafts, much less understand all the implications and interactions between any set of drafts for one system and another set of drafts about another system.

The impossibility of actually reading all the drafts -- for one area, or for many -- can lead to the reinvention of different mechanisms to solve the same problem, or several parts of the same or overlapping systems not working together correctly. It also encourages a level of specialization not seen in the open standards world before; someone who knows everything there is to know about one particular protocol (or set of overlapping standards) isn’t likely to know the rest of the protocols or systems.

All of this makes the jobs of those who are trying to manage the process, such as area directors and IAB members, difficult, at best. Even if the process is followed to the best of everyone’s ability, the more complex the standards are, the more likely it is that something “big” is going to slip through that doesn’t work right, or has some very bad unintended consequences.

Many contributing factors

What is at the root of these problems? More than one cause seems apparent.

The problems are more complex. As the networking world moves from what might now be considered simple problems into more complex problems, the solutions must ramp to an equivalent level of complexity. Complexity isn’t, of course, just the scale of modern networks, which have reached sizes very few people in the early days of connecting T1s and Frame Relay circuits could have imagined. Complexity has grown in the applications and information load, as well.

The scope of work is expansive. The scope of the problem set has become much larger. Rather than connecting a few supercomputers, the scope runs from sensors to supercomputers, from moon shots to personal area networks.

The tendency to boil the ocean. Most modern protocols aren’t aimed at simple problems any longer. The use cases and requirements become book length as the problem scope is expanded beyond what any solution can effectively resolve.

The long tail tends to dominate. In any community, whether open source or open standards, the long tail tends to dominate over time. As the “simpler” problems are solved, the more difficult ones tend to be tabled. For instance, BGP, OSPF, and Intermediate System to Intermediate System (IS-IS) solved a “simple” set of problems (well, not really, as routing is still a difficult problem in the real world, but nonetheless the perception is there) for a large user base. Now, with the solutions in place for the larger user bases, the IETF tends to move into smaller user bases. At some point, the user base potentially becomes a single set of researchers involved in some interesting work with a very narrow range of commercial applications.

This tends to fragment the work done in the IETF, making the workload of a single person trying to understand the complete picture much harder. It also tends to lead to some of the logical fallacies that surface in the IETF process, such as ad hominem attacks (you just don’t understand the problem, because you don’t live in the narrow bit of the world I do), straw man (you’re addressing a problem that doesn’t really exist anyway), etc. Fragmentation is very difficult to manage; “herding cats” is just the beginning of the analogy. It’s more like herding cats, elephants, and aardvarks towards the same watering hole.

So with these challenges, should the networking community move towards open source and abandon the open standards model? I'll discuss the issue in my next blog post.

Alvaro Retana, Distinguished Engineer at Cisco and IETF Routing Area Director, and Alia Atlas, Distinguished Engineer at Juniper and IETF Routing Area Director, provided input to Russ White for this article.

About the Author

Russ White

Architect, LinkedIn

Russ White is an architect at LinkedIn who writes regularly here and at 'net Work. Russ is CCIE #2635 and CCDE 2007:001, is a Cisco Certified Architect, and has earned an MSIT from Capella University and an MACM from Shepherds Theological Seminary. He is currently working toward a PhD in philosophy at Southeastern Baptist Theological Seminary.
