Networking Pros Will Need These Skills
Recently, I’ve come across a number of blogs about networking skills. I have some thoughts on the subject, which may be a good bit different from what you’ve been reading. I’m curious whether people agree with me or I’m just being contrary — so leave a polite comment either way!
Programming/scripting for networks
Let’s start with programming and scripting. Cisco was pushing hard for learning coding, then eased up a bit. Several people I highly respect are enthusiastic about programming, or at least scripting. For that matter, I like coding. Getting someone to pay me to do it is the problem!
Concerning tools like Python, Puppet, Chef, Ansible, etc. — knock yourself out. Having some idea what they can do and how to use them might be helpful. I agree with Cisco’s toned-down version of “all networkers must learn programming,” namely, “must be able to credibly talk to programmers” — although I might quibble about which programmers, i.e. not just tools developers. (How many firms will be building their own tools, instead of DevOps teams? More on this below.)
What concerns me about lots of people scripting is the amount of bad coding that might happen. I’ve programmed in 15-20 different languages, developed one big GUI-based program, and had my share of humbling bugs in all of them. Perl is especially handy if you like obscure bugs; regular expressions, ditto. Another thing I’ve noticed over the years is how much bad, sloppy, under-documented code is out there, including my own. I’ve worked at it, and now I write carefully indented, self-documenting C or Perl code — just because I know I’m going to have to fix it at some point.
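To make the "obscure regex bugs" point concrete, here is a toy Python sketch (the CLI output line is illustrative, not from any particular product): an unanchored pattern that passed casual testing but silently captures the wrong token on slightly different input.

```python
import re

# Hypothetical "show ip interface brief"-style line; real output will vary.
line = "GigabitEthernet0/1 is administratively down, line protocol is down"

# Intent: capture the interface state. The naive pattern grabs only the first
# word after "is", so "administratively down" and plain "down" get confused.
naive = re.search(r"is (\w+)", line).group(1)
# naive == "administratively" -- not a state at all

# Matching the full phrase up to the comma avoids the silent mis-parse.
careful = re.search(r"is ((?:administratively )?\w+),", line).group(1)
# careful == "administratively down"
```

Nothing crashed in the naive version; it just quietly produced the wrong answer, which is exactly the kind of bug that survives light testing and hits production.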
Think about it: Someone succeeds in automating some aspect of your network, then leaves. Who maintains it? Or your attempt at scripting breaks in some obscure way — worst case, after breaking routing on numerous routers because of mildly different CLI syntax across platforms. From experience, the testing one does is seldom as thorough as the testing that should have been done, and bugs do reach production.
In summary: One’s odds of avoiding a CLM (career-limiting move) might be better with supported code from a vendor, be it Cisco, Apstra, or whomever.
For reporting, OK, that’s less dangerous. An API is useful while a product is still maturing, in case its canned reports don’t do what’s needed. Having said that, in the last year or two I’ve used Python to probe some APIs. My conclusion: badly under-documented. Telling me the syntax and a table schema does not give me context. For instance, one network management product gives me performance data. I couldn’t readily determine what time period the data was rolled up over, in part because I had something wrong with my query and no good example of accessing that particular API element.
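When the docs won’t tell you the rollup period, one pragmatic probe is to pull a few adjacent samples and diff their timestamps. A minimal sketch, assuming a hypothetical JSON payload shape (the real product’s schema will differ):

```python
import json
from datetime import datetime, timezone

# Hypothetical API response -- field names are illustrative only.
payload = json.loads("""
{
  "interface": "Gi0/1",
  "samples": [
    {"ts": "2019-06-01T00:00:00Z", "in_bps": 120000},
    {"ts": "2019-06-01T00:05:00Z", "in_bps": 98000},
    {"ts": "2019-06-01T00:10:00Z", "in_bps": 143000}
  ]
}
""")

def parse_ts(s):
    # strptime rather than fromisoformat(), which rejects a trailing "Z"
    # on older Python versions.
    return datetime.strptime(s, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)

# Infer the rollup interval from consecutive sample timestamps,
# since the documentation doesn't state it.
times = [parse_ts(s["ts"]) for s in payload["samples"]]
intervals = {int((b - a).total_seconds()) for a, b in zip(times, times[1:])}
print(intervals)  # {300} -> the samples appear to be 5-minute rollups
```

It’s a workaround, not a substitute for documentation — which is rather the point.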
Is scripted automation and reporting really all there is? It’s useful, and bigger shops might pay you to do it. Smaller ones, I doubt it.
Yes, we probably need to be able to script to get more from network management and automation tools. Network management is a relatively small market, and just doesn’t seem to generate the R&D to automatically pull data together into a network model and provide correlated results, let alone root cause. Just getting good graphs out of present tools can be a bit of a hassle.
In fact, that’s one of my gripes. Without naming vendors (and there are several), some seem to think I should use their API to make up for their immature product’s lack of a good set of canned reports. That just doesn’t work for me. If I sit through a sales pitch and find out the product is rather incomplete, I “enable product dampening” — as in you lose points with me for years.
While we’re at it, too many APIs, inconsistent across products from a single vendor — that redefines inefficient use of my coding time. Put differently, APIs are an enabler; too many APIs, a disabler; inconsistent semantics, a disabler. Sure, I can write code that does different things based on device model — but having to do that is a waste of my time.
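If you do end up papering over per-platform differences, at least keep them in one place. A minimal dispatch-table sketch in Python (the platform names and command strings are illustrative, not an exhaustive or authoritative mapping):

```python
# Map platform to the right command syntax in one table,
# instead of scattering if/else checks through every script.
SHOW_ROUTE = {
    "ios":    "show ip route",
    "ios-xr": "show route",
    "nxos":   "show ip route vrf all",
}

def route_command(platform: str) -> str:
    try:
        return SHOW_ROUTE[platform]
    except KeyError:
        # Fail loudly rather than sending a guessed command to a production box.
        raise ValueError(f"unsupported platform: {platform}")

print(route_command("ios-xr"))  # show route
```

One table to maintain, and an unknown platform stops the script instead of half-configuring a router.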
Catching bigger fish
Maybe we should be widening our skills perspective.
To catch bigger fish, you might use a bigger net.
It is useful to understand how ACI, VMware NSX, and containers do networking. Try to figure out or learn the best practices around hierarchy of deployment, manageable addressing and routing, manageable and secure networking, etc.
Example: ACI can control NSX security. I’m not sure I like the approach Cisco recommends. For one mostly-NSX deployment (80 to 90 percent virtualized), I plan to use ACI as a fabric automation and management tool, and use NSX management natively. That way we won’t be adding troubleshooting complexity due to the control-plane interaction between the two. Learning one tool well: good. Learning the two or three competing tools in an area: better. Learning how to make them interoperate: probably a small market for the skill, with high risk and complexity.
Another example might be more storage-related. I’ve been wondering where hyper-converged systems are appropriate, and where they’re not. As you scale them up, you add network traffic and latency. From a Google search, I see that “scaling hyperconverged systems” is a thing — but there are no clear answers. Detecting that applications are slow due to storage IOPS is something we’ve run into a few times lately, as part of proving it’s not the network.
The implication here: perhaps some server/virtualization/storage skills are relevant for in-house datacenter networking teams.
The meta skills point here is: Don’t just learn one way of doing things; learn to be able to discuss the alternatives, their pros/cons, and where they fit or don’t fit.