The Network Forensics (Full Packet Capture) Revival Tour
By Rich
I hate to admit that of all the various technology areas, I’m probably best known for my work covering DLP. What few people know is that I ‘fell’ into DLP – one of my first analyst assignments at Gartner was network forensics. Yep – the good old-fashioned “network VCRs”, as we liked to call them in those pre-TiVo days.
My assessment at the time was that network forensics tools like Niksun, InfiniStream, and SilentRunner were interesting, but really only viable in certain niche organizations. These vendors usually had a couple of really big clients, but were never able to grow adoption in the broader market. The early DLP tools were lumped into this monitoring category, which is how I first started covering them (long before the term DLP was in use).
Full packet capture devices haven’t really done that well since my early analysis. SilentRunner and Infinistream both bounced around various acquisitions and re-spin-offs, and some even tried to rebrand themselves as something like DLP. Many organizations decided to rely on IDS as their primary network forensics tool, mostly because they already had the devices. We also saw Network Behavior Analysis, SIEM, and deep packet inspection firewalls offer some of the value of full capture, but focused more on analysis to provide actionable information to operations teams. This offered a clearer value proposition than capturing all your network data just to hold onto it.
Now the timing might be right for full packet capture to make a comeback, for a few reasons. First, Mike mentioned full packet capture in Low Hanging Fruit: Network Security, and underscored the need to figure out how to deal with these newer, more subtle, targeted attacks. Full packet capture is one of the only ways we can prove some of these intrusions even happened, given the patience and skills of the attackers and their ability to prey on the gaps in existing SIEM and IPS tools. Second, the barriers between inside and outside aren’t nearly as clean as they were 5+ years ago, especially once the bad guys get their initial foothold inside our ‘walls’. Where we once were able to focus on gateway and perimeter monitoring, we now need ever greater ability to track internal traffic.
Additionally, given the increase in processing power (thank you, Moore!), improvement in algorithms, and decreasing price of storage, we can actually leverage the value of the full captured stream. Finally, the packet capture tools are also playing better with existing enterprise capabilities. For instance, SIEM tools can analyze content from the capture tool, using the packet captures as a secondary source if a behavioral analysis tool, DLP, or even a ping off a server’s firewall from another internal system kicks off an investigation. This dramatically improves the value proposition.
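The integration described above can be sketched as a lookback query against a capture store: once a behavioral tool or DLP alert kicks off an investigation, the SIEM pulls the relevant packets for a host and time window. Everything here (the `CaptureStore` class and its field names) is a hypothetical in-memory stand-in, not any vendor's actual API:

```python
from dataclasses import dataclass

# Hypothetical stand-in for a full packet capture store.
@dataclass
class CapturedPacket:
    ts: float        # capture timestamp (epoch seconds)
    src: str
    dst: str
    payload: bytes

class CaptureStore:
    def __init__(self):
        self.packets = []

    def record(self, pkt: CapturedPacket):
        self.packets.append(pkt)

    def query(self, host: str, start: float, end: float):
        """Pull every packet touching `host` in a time window --
        what a SIEM would request once an alert opens an investigation."""
        return [p for p in self.packets
                if start <= p.ts <= end and host in (p.src, p.dst)]

store = CaptureStore()
store.record(CapturedPacket(100.0, "10.0.0.5", "10.0.0.9", b"GET /"))
store.record(CapturedPacket(205.0, "10.0.0.7", "10.0.0.9", b"SSH-2.0"))
store.record(CapturedPacket(210.0, "10.0.0.9", "10.0.0.7", b"SSH-2.0"))

# An alert at t=205 against 10.0.0.7 triggers a lookback query.
evidence = store.query("10.0.0.7", start=180.0, end=240.0)
print(len(evidence))  # 2 packets of supporting evidence
```

The point of the sketch is the secondary-source pattern: capture runs continuously, and other tools query it only when something else raises a flag.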
I’m not claiming that every organization needs, or has sufficient resources to take advantage of, full packet capture network forensics – especially those on the smaller side. Realistically, even large organizations only have a select few segments (with critical/sensitive data) where full packet capture makes sense. But driven by APT hype, I strongly suspect we’ll see adoption start to rise again, along with a ton of vendors of parallel technologies, such as NBA and network monitoring, starting to market their tools in this space.
I loved the SilentRunner product. They had a rather brassy, novel approach that required a prospect to attend a three-day course… just to evaluate! “Yeah, Baby, Yeah!”
Full packet capture is a bit like trying to eat a whale in one bite. Clearly, it’s only for those with a really big appetite, and a bigger wallet/purse to pay the pricey tech monger’s bill.
For those who are watching their network, I do like the idea of packet-based monitoring and reporting because it provides the greatest amount of useful management information, especially compared to control device logs and log scrapers. Our approach at Congruity Technologies tracks every flow in and out, sampling every packet at the network level just inside the gateway and culling salient data from each packet to produce a comprehensive and digestible profile of performance and usage information. When an admin, engineer, or analyst wants more detailed information, they can select the network element (port, protocol, device, user, etc.) from the Congruity Inspector report interface and initiate one or more full packet traces, generating files which can be imported separately into Wireshark for deep analysis. All this can be done quickly and efficiently from anywhere on the network via a Web interface.
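The sample-and-cull idea above can be sketched roughly as follows, under the assumption of a simple every-Nth-packet sampler; this is purely illustrative, not Congruity's actual implementation:

```python
from collections import defaultdict

def profile_flows(packets, sample_rate=4):
    """Sample every Nth packet and cull salient fields into per-flow
    counters -- a rough sketch of flow-level profiling from sampled
    packets, with an assumed (src, dst, proto, port) flow key."""
    flows = defaultdict(lambda: {"sampled_pkts": 0, "sampled_bytes": 0})
    for i, (src, dst, proto, port, size) in enumerate(packets):
        if i % sample_rate:          # keep only every Nth packet
            continue
        key = (src, dst, proto, port)
        flows[key]["sampled_pkts"] += 1
        flows[key]["sampled_bytes"] += size
    return dict(flows)

# Eight 1500-byte packets in one flow; at 1-in-4 sampling, two survive.
packets = [("10.0.0.1", "10.0.0.2", "tcp", 80, 1500)] * 8
summary = profile_flows(packets, sample_rate=4)
print(summary[("10.0.0.1", "10.0.0.2", "tcp", 80)]["sampled_pkts"])  # 2
```

The trade-off this illustrates is exactly the one in the comment: you keep only a profile rather than the full stream, then fall back to a targeted full trace when depth is needed.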
We saw a need for a synthesized packet-capture approach, creating our own unique data source and eliminating the overhead and complexity of full-packet gear and SIM/SIEM products. This is reflected in our pricing, which is well within the discretionary budgets of most organizations, even in these tough economic times. It also installs in mere minutes and doesn’t require a huge honking piece of iron to operate (a late-model Windows box is fine).
It offers fast 3-click end-to-end navigation, with features found in many other full-packet/SIEM appliances and applications. Full packet capture products can dig deeper, but they aren’t as easy to use and come at a considerable premium in resources and cost.
We keep information in our integrated database, aging out the oldest data after a user-definable period. DBA features also let users off-load information from the IBM DB2 database for longer-term storage as they see fit.
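The age-out behavior can be sketched like this (an in-memory stand-in with an assumed retention parameter, not the DB2-backed implementation described above):

```python
from collections import deque

class AgingStore:
    """Keep records in arrival order; evict anything older than a
    user-definable retention period. An illustrative sketch of
    age-out semantics only."""
    def __init__(self, retention_seconds):
        self.retention = retention_seconds
        self.records = deque()   # (timestamp, record), oldest first

    def add(self, record, now):
        self.records.append((now, record))
        self._age_out(now)

    def _age_out(self, now):
        # Oldest data ages out first; newer records are untouched.
        while self.records and now - self.records[0][0] > self.retention:
            self.records.popleft()

store = AgingStore(retention_seconds=60)
store.add("flow-a", now=0)
store.add("flow-b", now=30)
store.add("flow-c", now=90)   # flow-a is now 90s old, past retention
print([r for _, r in store.records])  # ['flow-b', 'flow-c']
```

A real deployment would off-load records to secondary storage before eviction rather than simply dropping them.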
Our goal from the outset was to create a comprehensive and user-friendly program that anybody can sit in front of, with modest direction, and grasp what’s happening on the network. We focus on the here and now, delivering a consistent process with information to optimize the network, bandwidth, systems, and the business bottom line. Inspector is on-demand, featuring an objective data source, operational transparency (no one has to be tasked to get the information), and 100% network visibility for all levels of management, wherever and whenever they want it.
Full packet capture may have its niche, but our full-flow/packet-sample technique offers a more practical, affordable, and broad-based monitoring and management solution for the data-starved, packet-yearning masses.
Thanks for posting. A couple of comments re: your comments.
1/2. The line rate issue has been resolved in a number of ways, and I’d be glad to discuss the merits (or not) of single-appliance versus parallel architectures with you some time. The issue these days is not really the ability to capture at line rate; several companies, including NetWitness, have figured that part out and do it every day on networks exceeding 10G. The issue is being able to do something OTHER than simply write packets to disk. Many companies write packets to disk, then expect you to perform post facto analysis on literally hundreds of terabytes of barely indexed or un-indexed captured packets. What you get is essentially a very large PCAP library, where searching for anything beyond some basic Wireshark-like indices is a time-consuming, after-the-fact problem. That approach is designed to fail and is not very useful.

The key to usability today is to create metadata (indexes) at line speed, at capture time. This metadata, which should consist of critical elements from layer 2 to layer 7, also must be aligned with the problem set of the end analyst, e.g., the SOC analyst or other security-focused consumer. Ultimately, if the metadata is robust enough, and essentially represents indices into a large set of captured sessions, you have a solution that lends itself to many interesting situations. That’s how we do it.
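The index-at-capture-time idea can be sketched minimally as follows; the field names, session-key scheme, and storage model are illustrative assumptions, not NetWitness's actual design:

```python
# Sketch: record layer 2-7 metadata at capture time so the raw packets
# become an indexed session library rather than a flat, unsearchable
# PCAP pile. All names here are hypothetical.

def index_at_capture(packet, metadata_index, packet_log):
    """Write the raw packet to bulk storage AND record a small metadata
    row keyed by session, so later queries hit the index, not the pcaps."""
    offset = len(packet_log)
    packet_log.append(packet["raw"])           # cheap bulk storage
    key = (packet["src"], packet["dst"], packet["proto"])
    metadata_index.setdefault(key, []).append({
        "offset": offset,                      # where the raw bytes live
        "l7_app": packet["l7_app"],            # e.g. 'http', 'ssh'
        "ts": packet["ts"],
    })

index, log = {}, []
index_at_capture({"raw": b"...", "src": "a", "dst": "b",
                  "proto": "tcp", "l7_app": "http", "ts": 1.0}, index, log)
index_at_capture({"raw": b"...", "src": "a", "dst": "b",
                  "proto": "tcp", "l7_app": "http", "ts": 2.0}, index, log)

# An analyst's query touches only the metadata, then dereferences offsets.
sessions = index[("a", "b", "tcp")]
print(len(sessions), sessions[0]["offset"])  # 2 0
```

The design point is that the expensive work (parsing up to layer 7) happens once, at capture time, so analysis becomes an index lookup instead of a scan over terabytes of raw packets.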
3. I would argue that storage runs across a range of cost structures, from expensive to relatively cheap, because there are different classes of storage and storage requirements. If you need the kind of storage you would buy to house the customer/transaction database for your online business, then the TCO of storage is high. But storage for full packet capture can be a different class of commodity storage, which is reasonably priced at this point.
To your last point, the real key to full packet capture is to NEVER turn it off. It’s when you turn it off that you miss something important and you can’t do the kind of real-time analysis and alerting that you describe. You can’t turn it on after the compromise. You need to have it running all the time.
Would love to discuss with you in detail. Drop me a line.
eddie at netwitness dot com
By Eddie Schwartz
There are several challenges with full packet capture:
1. Ensuring line rate capture: bigger pipes carry a lot of data that needs to be collected fast and then copied to a storage device. Overall I/O, including writing to disk, is slower than network activity.
2. When you use parallel computing to solve problem #1, you need to solve issues like packet ordering, time stamping, etc. Not as simple as it sounds.
3. Storage is still expensive…
Having said that, I think the bigger challenges are identifying when to turn full packet capture on and off so it can actually be used for forensics, and finding the right way to analyze the captured data in real time, fast enough that it can trigger a security policy.
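The packet-ordering problem in #2 can be illustrated with a timestamp-ordered merge of per-appliance capture streams. This sketch assumes each stream is already sorted by its own clock and that clocks are synchronized, which is itself the hard part:

```python
import heapq

def merge_parallel_captures(*streams):
    """Merge per-appliance capture streams into one timestamp-ordered
    sequence -- the reordering step needed once capture is parallelized.
    Each stream is a list of (timestamp, payload) pairs, pre-sorted."""
    return list(heapq.merge(*streams, key=lambda item: item[0]))

# Two capture appliances each see half the traffic.
capture_a = [(1.0, "pkt-a1"), (3.0, "pkt-a2")]
capture_b = [(2.0, "pkt-b1"), (4.0, "pkt-b2")]
merged = merge_parallel_captures(capture_a, capture_b)
print([p for _, p in merged])  # ['pkt-a1', 'pkt-b1', 'pkt-a2', 'pkt-b2']
```

In practice hardware timestamping and clock discipline (e.g. PTP) determine how trustworthy that merge order actually is.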
By Sharon Besser
I agree, Rich – it is time for network forensics as the next generation of network security monitoring. For years after IDS, security operations teams waited for something better in terms of network visibility and situational awareness. NBAD offered some interesting data, but the product category suffered from incomplete content and context, and from a basis in statistical baselines with errors programmed in by design. DLP also made some sense, but it was a compliance-focused initiative that was not designed with the intense and agile needs of network security in mind. So, as you say, all hail the return of network forensics…
Some of us never went away. At NetWitness, we’ve been defining network forensics for over a decade. Brian Girardi, our Director of Product Management, just blogged this evening about network forensics ca. 1999. (See: http://www.networkforensics.com) Some of the companies you mentioned above, and some that are laying claim to network forensics today, are still doing what NetWitness pioneered in 2002. In 2010, network forensics is about a lot more than cheap hard drives, 10G interfaces, and the ability to reconstruct Web pages. If anyone wants to see what state-of-the-art real-time network forensics really means and how it can address the toughest network visibility issues, take a look at NetWitness Investigator Freeware as a starting point, and the extensibility in our enterprise framework. Or drop me a line and I’ll be glad to explain.
Thanks Rich for bringing this issue forward on your blog.
eddie at netwitness dot com
By Eddie Schwartz
I’ve used both Silent Runner (back when it was a Raytheon product running on SGI) and Shadow, and they shared a fundamental problem that seems to have gone away somewhat: storage.
Silent Runner was a neat idea, but we could never afford enough disk space to capture and store off of enough of the network to make it of value (we were using it to detect insider abuse). I think that today’s relatively low cost storage makes these products and their need for masses of data far more realistic.
Having been a (mostly unhappy) NBA proponent for a very long time, I can say those products never really matured enough to supplant full packet capture. For me, their adoption was a compromise, trading fidelity of the data stream for the breadth of coverage that NetFlow enabled.
Nice article Rich,
I think you can add a great Network Forensics tool to your list:
NetWitness (there are free and commercial grade versions of it). More info at http://www.netwitness.com
By Sandro Suffert