In my last blog post, I described how I built a lab for testing network functions virtualization (NFV), and the pros and cons of various configuration options. In this post, I'll discuss issues to consider in your NFV lab -- licensing, storage and networking -- and provide tips based on what I found worked best in mine. I'll also provide a short list of NFV appliances to get started in your lab.
Licensing virtual appliances
NFV is in its early stages. Some vendors are giving away virtual editions of their software just to get people to try it. Other vendors are already monetizing their virtual editions, and offer free product downloads as time-limited evaluations. In some cases, there are lab-specific editions that are cheap to buy and fully featured, but might be throughput limited.
My point is: Know what you’re getting when you download and install these various pieces of software. Licensing is a potential issue. Cost is a potential issue. That said, when using time-limited evaluation editions, you might be able to have your evaluations extended. Oftentimes, you can simply ask your vendor rep for the extension.
Another concern with virtual appliance licenses is that they may not be portable. Therefore, if you license your virtual appliance on your desktop VMware Workstation and then move that appliance to a dedicated ESXi host, the license might break. This is not an insurmountable issue, but it likely means a discussion with your vendor to resolve.
Storage considerations
Assuming you go the route of a dedicated server or two, there are two major storage considerations for an NFV home lab. One is speed when powering up and shutting down virtual machines. The other is capacity for logs, virtual machine images, and the like. Depending on how complicated your lab scenarios get, you can really eat up some terabytes.
To accomplish speedy power up and shutdown of virtual machines, I recommend an SSD in your ESXi host. To be sure, SSDs are pricier per gigabyte, but the performance improvement is substantial over traditional spinning rust.
ESXi also supports thin provisioning, which means you don't immediately allocate all of the disk space you've assigned to a VM -- a useful feature when trying to conserve a low-capacity SSD. Space is allocated only when the VM actually requires it. Practically speaking, this means you can stretch an SSD a long way in an NFV environment, since most NFV images have relatively small storage requirements and don't change much over time.
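As a rough sketch of what thin provisioning looks like from the ESXi host shell -- the datastore and VM names here are illustrative, not from any particular lab:

```shell
# On the ESXi host shell. Create a 40 GB virtual disk in thin format:
# the .vmdk only consumes datastore space as the guest actually writes data.
vmkfstools -c 40G -d thin /vmfs/volumes/datastore1/nfv-vm/nfv-vm.vmdk

# Compare allocated vs. actual usage to see how much SSD you're really spending.
du -h /vmfs/volumes/datastore1/nfv-vm/nfv-vm.vmdk
```

You can also simply pick "Thin Provision" in the vSphere Client when creating the VM's disk; the CLI route is just handy for scripting lab builds.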
Now, is it that important to have fast disk just to power up and shut down lab machines? I suppose it isn’t. But I can promise you that once you’ve experienced SSD performance in your lab, you’ll be happy.
For big, slow storage with high capacity, you could use a multi-terabyte HDD inside the ESXi host. If you’d like to get fancy (and spendy), you could consider an external drive array from a vendor like Synology. ESXi can mount external disk volumes via iSCSI and NFS over the network and present them as local disks to virtual machines. I have followed this strategy, and will typically use a thin-provisioned SSD volume to load the virtual machine and then present it with a second storage volume mapped to an external disk array if I’m looking to collect a bunch of data from the NFV VM.
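To give a flavor of wiring up that second, external volume, here's a hedged sketch of mounting an NFS export from a home array -- the hostname, share path, and volume name are hypothetical:

```shell
# Mount an NFS export from an external array so ESXi presents it as a datastore.
# Hostname, share, and volume name are made up for illustration.
esxcli storage nfs add --host=nas.lab.local --share=/volume1/nfvdata --volume-name=nfv-bulk

# Confirm the datastore mounted.
esxcli storage nfs list
```

From there, you can attach a second virtual disk backed by that datastore to any NFV VM that needs bulk capacity.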
Networking considerations
When testing NFV, I have focused on functionality rather than throughput, so I've been quite happy with a 1 Gbps switch that supports VLAN tagging (802.1Q). That lets me create as complex a set of network segments as I might desire and interact with another ESXi host or other physical machines on the network. VLAN tags identify the network segment between the virtual machines and the vSwitch in the hypervisor, and between the hypervisor and the physical switch.
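The tagged segments can be sketched with a couple of `esxcli` commands on the host -- the port group name and VLAN ID below are just examples:

```shell
# Create a port group on the standard vSwitch for one lab segment.
# Port group name, vSwitch name, and VLAN ID are illustrative.
esxcli network vswitch standard portgroup add --portgroup-name=NFV-VLAN10 --vswitch-name=vSwitch0

# Tag the port group so its traffic carries 802.1Q tag 10 out the uplink
# to the physical switch's trunk port.
esxcli network vswitch standard portgroup set --portgroup-name=NFV-VLAN10 --vlan-id=10
```

Repeat with a different VLAN ID per segment, and make sure the physical switch port facing the ESXi host is configured as a trunk carrying those VLANs.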
It is helpful if the physical switch can perform L3 routing for certain scenarios, but even if your home lab switch is only L2 capable, you can route between segments with one of those fancy NFV virtual machines you're testing. Your lab switch can also serve as a DHCP server, a handy function to have in an NFV lab.
I happen to be using a Cisco SG300 in my home lab, part of Cisco's small business line. The switch is reasonably priced, and capable of VLAN tagging and enough L3 routing to be adequate for my home lab use. The SG300 line is not quite like Cisco's Catalyst line, for those familiar with that product. The SG300 is more like the Catalyst's cousin; there's a sort of family resemblance at the CLI, but many differences.
Network device images
Once you've built your lab, you'll want to start working with NFV. There are many different NFV appliances available. In the table below, I've listed a few that I've successfully run on VMware vSphere ESXi v5.5.
A10’s vThunder and Silver Peak’s VRX virtual appliances are also available for a free trial, according to their websites, but I haven’t gotten so far as to download them and spin them up myself. Other VMs I run include an open source SNMP monitoring package called Observium, which is conveniently packaged at TurnkeyLinux. Observium is not an NFV appliance, but it's a useful tool to monitor NFV VMs I’m working with. I also run some small virtual Linux servers for testing purposes; they generate traffic and act as servers in some testing scenarios.
Are there other NFV virtual appliances you’re using? Do you have other home lab scenarios that your peers might be interested in? Please let us know in the comments. We’d like to hear all about the home lab you’ve built.
Attend Ethan Banks' live session, IT Infrastructure in 2025: What Will The Future Look Like? It's one of the dozens of learning opportunities at Interop Las Vegas this spring. Don't miss out! Register now for Interop, April 27 to May 1, and receive $200 off.