
9 Immutable Laws of Network Design

Follow these simple rules to ensure your network is stable, secure and built to last as you overlay new services and applications.

Each year, my company has the opportunity to work with many clients on their network architectures, designs and configurations. We also work with clients when they have network issues and need troubleshooting assistance. Based on those many years of experience with a variety of environments and customers, I've developed this list of nine immutable laws of network design.

Following these simple rules helps you create and maintain a stable, long-lasting network infrastructure that will be invaluable as your organization begins to overlay additional services or applications. Whether you’re redesigning for wireless, preparing for software-defined networking (SDN) or simply expanding your virtualized environments, designing by these rules will increase the stability, manageability and security of your network.

1. Know, Don’t Guess

Two phrases uttered frequently during network design are “I’m pretty sure” and “I think.” As a professional tasked with discovering, researching and documenting client networks, I can tell you those phrases don’t cut it in our organization, and they shouldn’t be accepted in yours. There’s a better-than-even chance that what you think you know is wrong. Networks are inherited, many admins may touch them, and they’re frequently changed in a fit of fury, troubleshooting or testing. When documenting a network or committing even a minor change, always look, verify and know; never guess. The mantra in our office is, “No information is better than wrong information.”

2. Avoid Dangling Networks

As SDN, virtualization and application-based technologies creep into our networks, we need to take a hard look at configuration sprawl and prepare for a massive cleanup. Avoid dangling and mismatched networks and VLANs throughout the infrastructure. It’s not unusual to see VLANs tagged where they should be untagged, or a VLAN that dead-ends into an untagged VLAN. There are think-outside-the-box moments when a configuration like this is needed, either for a transition period or to work around a specific situation, but the practice should be the exception, not the rule.
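A dangling or mismatched VLAN is easy to catch programmatically once you can export each side of a link. The sketch below is a hedged illustration, not a tool: the `{vlan: tagging}` data model and the sample switches are hypothetical, and real input would come from parsed configs, SNMP or a switch API.

```python
# Hypothetical sketch: flag VLAN tagging mismatches between the two ends of a
# link. In practice, the per-side VLAN maps would be pulled from your switches
# (parsed configs, SNMP, or a vendor API); the data below is illustrative.

def find_vlan_mismatches(side_a, side_b):
    """Compare {vlan_id: 'tagged'|'untagged'} maps for the two ends of a link.

    Returns a list of (vlan, state_on_a, state_on_b) tuples for every VLAN
    that is tagged/untagged inconsistently or present on only one side.
    """
    mismatches = []
    for vlan in sorted(set(side_a) | set(side_b)):
        a = side_a.get(vlan, "absent")
        b = side_b.get(vlan, "absent")
        if a != b:
            mismatches.append((vlan, a, b))
    return mismatches

# Example: VLAN 20 is tagged on one switch but untagged on its neighbor,
# and VLAN 30 dead-ends (it exists on only one side of the link).
switch_a = {10: "tagged", 20: "tagged", 30: "tagged"}
switch_b = {10: "tagged", 20: "untagged"}

for vlan, a, b in find_vlan_mismatches(switch_a, switch_b):
    print(f"VLAN {vlan}: side A is {a}, side B is {b}")
```

Run against every inter-switch link, a check like this turns "I'm pretty sure the trunks match" into something you actually know.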

3. Route Where Needed, Not Where Possible

Routing at the edge sounds like an advanced approach to network architecture, but it can cause more problems than it solves. Sure, you may get some additional speed, but in most networks, that speed will never be measurable, and the complexities of overly distributed routing lead to management and security headaches.

4. See All, Manage All

You certainly can’t manage what you can’t see. Visibility into the network has always been important, and it’s going to be even more essential as networks evolve to solve the demands of virtualization and applications. Know what you have, where it is, and monitor it constantly.
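As a toy illustration of "know what you have, where it is, and monitor it constantly," the sketch below sweeps a small inventory with plain TCP probes. Everything here is hypothetical: the device names, the documentation-range addresses and the ports. Real visibility comes from SNMP, flow data and a proper NMS, not ad hoc probes, but the principle is the same: an explicit inventory, checked continuously.

```python
# Minimal reachability sweep over an explicit device inventory.
# Hosts and ports below are illustrative (RFC 5737 documentation addresses);
# production monitoring would use SNMP/NetFlow/an NMS rather than TCP probes.

import socket

def is_reachable(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

inventory = {
    "core-switch": ("192.0.2.1", 22),
    "edge-router": ("192.0.2.2", 22),
}

for name, (host, port) in inventory.items():
    state = "up" if is_reachable(host, port) else "DOWN"
    print(f"{name:12} {host}:{port} {state}")
```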

5. Know When To Standardize

There are times when standardizing offers great advantages, and other times when it works at cross-purposes to your objectives. This might mean standardizing on a single vendor for interoperability, or it may mean standardizing on configurations, security settings and management. Either way, make sure your choice serves a purpose and provides flexibility as your network grows. Don’t get locked into a single-vendor solution when the costs outweigh the benefits, and don’t miss opportunities to standardize on platforms that can increase the effectiveness of management and security.


6. Layer 1 Is King

Your sleek new infrastructure of VLANs and virtual devices is worthless if the foundation of your network is faulty. Layer 1 is king, and Layer 1 disruptions still account for a huge share of damaging network outages. As network capabilities develop and grow, Layer 1 requirements will evolve, and they will remain the most critical consideration.

7. Simple Always Wins

Just because you can do it doesn’t mean you should. Labs and test environments are the place to play and think outside the box with your configurations. In an enterprise production environment, you’re best served following the K.I.S.S. model, and keeping your network as simple as it can be while maintaining the required connectivity and security.

8. Power Is Important

To say we’ve been spoiled in recent decades with our power sources seems strange, but it’s true. As power demands increase with newer technology, the availability and consistency of power are more critical than ever. The addition of virtual machines and software-based appliances that are more sensitive to power issues compounds the problem. Often, power issues cause widespread network disruptions without ever triggering an alert. Clean, conditioned, consistent power used to be a luxury; now it’s a necessity in the network.

9. Embrace Documentation

You may have flashbacks of writing book reports in high school, but maintaining documentation on your network is the easiest way to ensure you’re following best practices, tracking changes and creating the means to troubleshoot effectively. As we layer on more technology and applications, documentation will increase in significance. Embrace it, live it, love it, do it. Twenty minutes of documentation now, even if you feel you don’t have 20 minutes to spare, may save you 20 hours down the road.

Do these rules resonate with you? Are there any you would add to the list? Please add your comments below.


Rob Parten, User Rank: Apprentice
9/18/2013 | 9:54:04 PM
re: 9 Immutable Laws of Network Design
Actually, I am not "wrapped around Cisco pretty tightly" - I have promoted the use of the lower-cost 8-port PoE ProCurves in situations where I only need fewer than four connections (e.g., PoE cameras). I am also an advocate for Juniper where a physical router is needed (e.g., serial handoffs, full-table BGP and other multiprotocol environments). However, the majority of the corporate world is clearly running on Cisco, or they're ripping out equipment (Extreme, Foundry, 3Com, for the most part) and replacing it with Cisco.

I don't understand why the extra processing at the route layer is a big concern, because we're no longer process switching/routing traffic. Back in the 6500 Sup2 days EVERY packet was process switched, but that was before ASICs were put into play.

The newest ASIC technology, at least with Cisco, Juniper and Brocade, is where the decisions are made, and they're happening at line rate. For example, I run multiple sites advertising numerous /16 (yes, I still advertise one of "those"), /20, /22 and /24 subnets in a dual multihomed environment. With correctly sized RAM to hold the tables, the routers only put the successor route into the FIB, stored in ASICs. Even with two versions of full BGP tables on 200Mbps links, utilizing about 60% of each of them, the routers never skip a beat, and they're only 3725 routers, nothing to write home about.

In all honesty, an L3 design makes more sense when using, say, OSPF. Area 0 runs from the core out to the edge of each remote location. From there, you make each remote office a totally stubby area (if you have Cisco, or another platform with the extensions to do so) and connect those remote offices (areas) to the backbone. As long as the area is configured totally stubby on the ABR (area # stub no-summary), the ABR will only inject a default route into its respective non-backbone area, and all internal routers and routing devices use it as a gateway. From there, just configure an area range into the backbone to consolidate routes. Heck, you can even get away with a half-baked IP subnet plan; as long as nothing overlaps, OSPF handles the rest!
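For readers following along, the totally stubby setup described above might look roughly like this in Cisco IOS syntax. This is a hedged sketch, not taken from the comment: the process ID, area number and prefixes are all illustrative.

```
! --- Sketch only: process ID, area number and prefixes are illustrative ---
!
! On the ABR between the backbone and remote area 10:
router ospf 1
 network 10.0.0.0 0.0.0.255 area 0
 network 10.10.0.0 0.0.255.255 area 10
 area 10 stub no-summary
 area 10 range 10.10.0.0 255.255.0.0
!
! On internal routers inside area 10, only the stub flag is needed;
! they learn a single default route from the ABR:
router ospf 1
 network 10.10.0.0 0.0.255.255 area 10
 area 10 stub
```

The "no-summary" keyword on the ABR is what makes the area totally stubby (default route only, no inter-area summaries), and "area 10 range" consolidates the area's routes before they reach the backbone.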

To introduce redundancy, just create an L3 port-channel from the remote office (non-backbone area) to the core. Or, for a much more intelligent design, just use it as a separate link to your redundant routing device and let ECMP handle the rest.

Side by side, my L3 edge networks operate seamlessly. Once they're deployed and my MPLS tunnels are up, I let OSPF handle the rest, and within 10 seconds of the site going live I can route traffic to it. No tagging, no worrying about MTU values, and my routes are instantly propagated throughout the entire enterprise network with no user intervention.

I'll pit my design against anyone's L2 design on overall resiliency, reliability and speed, especially in a data center with VXLAN deployed.
User Rank: Apprentice
9/18/2013 | 5:55:45 PM
re: 9 Immutable Laws of Network Design
I understand your point of view, and what you're suggesting is not incorrect, however I'll remind you these are the "immutable" laws of network design that should apply across the board, and across protocol fads (for the most part).
There are certainly a lot of ways to skin a cat, but careful planning of a network would mean not every packet needs to be routed, thereby reducing the load you're alluding to. I know you're wrapped around Cisco pretty tightly, and it's great that they (and other vendors) have solutions to address immediate needs. I certainly suggest using those where appropriate, but I will still maintain that you should route "where needed." If it's needed at the edge in a more distributed enterprise environment, then absolutely route there, but don't do it because you CAN; do it for the need.
Rob Parten, User Rank: Apprentice
9/18/2013 | 4:57:01 PM
re: 9 Immutable Laws of Network Design
#3 I strongly disagree. Careful planning of the IP network helps push L3 to the edge. Cisco introduced IP Lite in the 2960-X series to promote this behavior, because it is widely recognized that ECMP is a more predictable, better-understood open standard for load balancing across links, compared to LACP or EtherChannel, which doesn't always hit the mark and has proven less than reliable because the hashing algorithms, for lack of a better word, suck.

Ever see a four-port 1G bundle with one port saturated while the rest remain silent and other traffic drops to the floor? Even with tuning the algorithm, you're stuck with the issue if you have one single source of traffic from many clients. This is creeping fast into networks with SAN architectures that have fast cache and disk arrays with read/write so fast they're no longer the bottleneck. L3 at the edge is a proven design in campus architectures; it just requires a little more planning and a greater-than-basic understanding of how to subnet.
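The failure mode described here, one saturated member link while the rest of the bundle sits idle, falls straight out of per-flow hashing. The simulation below is purely illustrative: the hash function and addresses are made up for the sketch and are not any vendor's actual algorithm, but the behavior is the same in spirit: a single elephant flow always lands on one link.

```python
# Illustrative sketch of why per-flow hashing (LACP/EtherChannel style) can
# leave a bundle unbalanced: one elephant flow always hashes to a single
# member link, regardless of how many links the bundle has. The hash used
# here is NOT any vendor's real algorithm; it just stands in for one.

import hashlib

LINKS = 4  # member links in the bundle

def pick_link(src, dst):
    """Hash a (src, dst) flow onto one member link. Per-flow, so packets of
    a single flow never spread across links (which preserves ordering)."""
    digest = hashlib.md5(f"{src}->{dst}".encode()).digest()
    return digest[0] % LINKS

# One backup server pushing a huge flow to one target: always the same link.
elephant_link = pick_link("10.0.0.5", "10.0.1.9")

# Many small flows spread roughly evenly, but they can't borrow capacity
# from the link the elephant flow is saturating.
small_flows = [pick_link(f"10.0.2.{i}", "10.0.1.9") for i in range(100)]

print("elephant flow pinned to link", elephant_link)
print("small-flow distribution:",
      {link: small_flows.count(link) for link in range(LINKS)})
```

ECMP at L3 has the same per-flow property, but spreading load across routed paths with a routing protocol is generally more predictable and observable than tuning a bundle's hash inputs.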

This also means having a better understanding of tried-and-true routing protocols; OSPF and IS-IS are two commonly used IGPs for achieving an L3 edge design. Once again, a better-than-basic understanding of networking and the behavior of a routing protocol will be needed to facilitate propagating routes from the edge to the core.

I know the argument of "K.I.S.S" comes to mind; however, complexity is relative to each person. I see OSPF as a simple protocol with a clearly defined set of operations which take place; however, those who choose the "when in doubt, static route" or L2 everywhere approach will disagree primarily out of a lack of understanding.

I wouldn't be so confident with #3 with SDN and overlay networking on the rise, making how we configure the "physical" network more of a commodity service than a primary means of interconnecting virtualized resources.
User Rank: Apprentice
9/18/2013 | 2:50:02 PM
re: 9 Immutable Laws of Network Design
Uh oh! Imitation, in network equipment or in the article? Did I miss something someone else posted?
User Rank: Apprentice
9/18/2013 | 6:29:56 AM
re: 9 Immutable Laws of Network Design
10. Imitation is the sincerest form of flattery.

That said, nice article :)