When using a network protocol analyzer, you will eventually run into a situation where you have to work with a large trace file. My definition of a large trace file is anything over 1 gigabyte. With 1 Gigabit Ethernet, 10 Gigabit Ethernet, and faster networks, 1 GB trace files are becoming more common in network analysis and troubleshooting.
There are many products specifically designed to process, report on, and help analyze large trace files. Unfortunately, there will be situations when you are in the field and can't access your fancy tools, or you simply can't justify purchasing these products because you don't run into large trace files often enough.
In this video, I cover the most common network analysis techniques for working with large trace files. My demonstration uses Wireshark, but these techniques can be used with any protocol analyzer.
Specifically, I cover packet slicing with the editcap utility, applying a read filter in the Wireshark GUI, and using TShark. Note that TShark can only perform packet slicing on live captures, not on existing trace files.
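To make those techniques concrete, here are rough command-line equivalents. The interface names, file names, and snaplen values below are only placeholders for illustration, so adjust them for your environment:

    # Packet slicing with editcap: keep only the first 128 bytes of each packet
    editcap -s 128 big_capture.pcapng sliced.pcapng

    # editcap can also split one large file into smaller pieces, here 100,000 packets each
    editcap -c 100000 big_capture.pcapng chunk.pcapng

    # TShark packet slicing on a live capture (the snaplen only applies at capture time)
    tshark -i eth0 -s 128 -w live_sliced.pcapng

For the read filter technique, enter a filter such as tcp.port == 443 in the Read filter field of Wireshark's File > Open dialog, so that only the matching packets are loaded into memory.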
It is worth mentioning that another technique is to capture using a ring buffer, which creates several smaller trace files as you go. The big difference between chopping up a large trace file and creating many smaller files is that you might miss some packets when using a ring buffer, as your system stops the capture, saves the trace file, and restarts the capture. For this reason, I prefer creating larger trace files and chopping them up later.
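If you do want to try the ring buffer approach, something along the following lines with dumpcap (which ships with Wireshark) creates a rotating set of smaller files; the interface name and sizes are only examples:

    # Ring buffer capture: switch to a new file every ~100 MB (value is in kB), keep at most 10 files
    dumpcap -i eth0 -b filesize:102400 -b files:10 -w ring_capture.pcapng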
Each technique has its pros and cons, and I encourage you to give them all a try. In some scenarios, you might use a combination of these techniques. For example, I once used packet slicing, then a read filter, and finally exported the data in CSV format for analysis in Excel.
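As a rough sketch of that combined workflow, assuming the sliced file from the earlier example and using retransmissions as the example filter criteria:

    # Keep only the packets of interest from the sliced file (-Y applies a display filter)
    tshark -r sliced.pcapng -Y "tcp.analysis.retransmission" -w retrans.pcapng

    # Export selected fields as comma-separated values for analysis in Excel
    tshark -r retrans.pcapng -T fields -E header=y -E separator=, \
        -e frame.time_relative -e ip.src -e ip.dst -e tcp.stream > retrans.csv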