Applied Network Security Analysis: The Advanced Security Use Case

The forensics use case we discussed previously is about investigating something that has already happened. You presume the data is already lost, the horse is out of the barn, and Pandora’s Box is open. But what if we used some of these additional data types to make security alerts better, with the clear goal of reducing the window between exploit and detection: reacting faster? Can we leverage something like network full packet capture to learn sooner that something is amiss? Yes, but this presents many of the same challenges as log-based analysis. You still need to know what you are looking for, and you need an analysis engine that can not only correlate behavior across multiple types of logs, but also analyze a massive amount of network traffic for signs of attack. So when we made the point in Collection and Analysis that these Network Security Analysis platforms need to be better SIEMs than a SIEM, this is what we were talking about.

Pattern Matching and Correlation

Assuming you are collecting some of these additional data sources, the next step is to turn that data into actionable information, which means some kind of alerting and correlation. We need to be careful using the ‘C’ word (correlation), given the nightmare most organizations endure when they try to correlate data on SIEM platforms. Unfortunately the job doesn’t get any easier when you extend the data types to include network traffic, network flow records, and so on, so we continue to advocate a realistic and incremental approach to analysis. Much of this approach was presented (in gory detail) in our Network Security Operations Quant project.

  • Identify high-value data: This is key – you probably cannot collect from every network, nor should you. So figure out the highest-profile targets and start with them.
  • Build a realistic threat model: Next, put on your hacker hat and build a threat model for how you would attack that high-value data. It won’t be comprehensive, but that’s okay – you need to start somewhere. Figure out how you would attack the data if you needed to.
  • Enumerate those threats in the tool: With the threat models in hand, design rules that trigger on the specific attacks you are looking for.
  • Refine the rules and thresholds: The only thing we know for certain is that your rules will be wrong at first, so you will go through a tuning process to home in on the types of attacks you are looking for.
  • Wash, rinse, repeat: Add another target or threat and build more rules, as above.

With the additional traffic analysis you can look for specific attacks. Whether it’s known malware (which we will talk about in the next post), traffic destined for a known command and control network, or a buffer overflow targeted at an application in the DMZ, you get a lot more precision when refining rules to identify what you are looking for. Done correctly, this reduces false positives and helps zero in on specific attacks. Of course the magic words are “done correctly”. It is essential to build the rule base incrementally – test the rules and keep refining the alerting thresholds – especially given the more granular attacks you can now look for.
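To make the “enumerate those threats in the tool” step concrete, here is a minimal sketch in Python, assuming flow records arrive as simple dictionaries. The addresses, field names, and threshold are hypothetical, and a real platform would express these checks in its own rule engine rather than in application code.

```python
# Hypothetical enumerated threats: a short list of known C&C hosts and a
# tunable egress-volume threshold. All values are illustrative only.
KNOWN_C2_HOSTS = {"203.0.113.7", "198.51.100.23"}
EGRESS_BYTES_THRESHOLD = 50_000_000

def evaluate_flow(flow):
    """Check one NetFlow-style record against the enumerated threats."""
    alerts = []
    if flow["dst_ip"] in KNOWN_C2_HOSTS:
        alerts.append(f"{flow['src_ip']} contacted known C&C host {flow['dst_ip']}")
    if flow["bytes_out"] > EGRESS_BYTES_THRESHOLD:
        alerts.append(f"{flow['src_ip']} sent {flow['bytes_out']} bytes outbound")
    return alerts

# Example records, in the shape a flow collector might emit.
flows = [
    {"src_ip": "10.0.0.5", "dst_ip": "198.51.100.23", "bytes_out": 1_200},
    {"src_ip": "10.0.0.8", "dst_ip": "192.0.2.44", "bytes_out": 75_000_000},
]
for flow in flows:
    for alert in evaluate_flow(flow):
        print("ALERT:", alert)
```

The tuning process described above then amounts to adjusting the enumerated list and the threshold as false positives (and misses) show up in day-to-day operations.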
Baselining

The other key aspect of leveraging this broader data collection capability is understanding how baselines change from what you may be used to with SIEM. Using logs (or more likely NetFlow), you can get a feel for normal behavior and use that to kickstart your rule building. Basically, you assume that what is happening when you first implement the system is what should be happening, and alert when something varies too far from that normal. That is not actually a safe assumption, but you need to start somewhere. As with correlation, this process is incremental: your baselines will be wrong when you start, and you adjust them over time based on operational experience responding to alerts. But the most important step is the start, and baselines help get things going.

Revisiting the Scenario

Getting back to the scenario presented in the Forensics use case, how would this kind of pseudo-real-time analysis help reduce the window between attack and detection? To recap briefly: a friend at the FBI informed you that some of your customer data had shown up in a cybercrime investigation. Of course, by the time you get that call it is too late. The forensic analysis revealed an injection attack enabled by faulty field validation on a public-facing web app. If you had been looking at network full packet capture, you might have found that attack by creating a rule to look for executables entered into the form fields of POST transactions, or some other characteristic signature of the attack. Since you are capturing the traffic on the key database segment, you could establish a content rule looking for content strings you know are important (a poor man’s DLP), and alert when you see that type of data sent anywhere but the application servers that should have access to it. You could also, for instance, alert on an encrypted RAR file appearing on an egress network path. There are multiple places you could detect the attack, if you know what to look for.

Of course that example is contrived, and depends on your ability to predict the future: figuring out the vectors before the attack hits. But a lot of this discipline rests on a basic concept: “Fool me once, shame on you. Fool me twice, shame on me.” Once you have seen this kind of attack – especially if it succeeded – make sure it doesn’t work again. It is a bit like solving yesterday’s problems tomorrow, but many attacks use very similar tactics, so if you can enumerate a specific attack vector based on what you saw, there is an excellent chance you can catch it the next time it is tried.
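As a deliberately simplified illustration of the content rules in that scenario, here is a short Python sketch run against already-reassembled payloads. The payloads, signatures, and function names are assumptions for illustration; a real full packet capture platform would apply this kind of signature in its own inspection engine, against traffic it reassembles itself.

```python
from urllib.parse import parse_qs

PE_MAGIC = "MZ"              # header of Windows/DOS executables
RAR_MAGIC = b"Rar!\x1a\x07"  # RAR marker block, typically visible even when
                             # the archive contents are encrypted

def exe_in_form_fields(post_body: bytes) -> bool:
    """True if any form field in a reassembled POST body starts with an executable header."""
    fields = parse_qs(post_body.decode("latin-1"))
    return any(value.startswith(PE_MAGIC)
               for values in fields.values() for value in values)

def rar_on_egress(payload: bytes) -> bool:
    """True if an outbound payload contains a RAR archive signature."""
    return RAR_MAGIC in payload

# Hypothetical reassembled payloads, standing in for output from a capture tool.
post_body = b"name=alice&upload=MZ\x90\x00\x03"  # executable bytes in a form field
egress = b"\x17\x03Rar!\x1a\x07\x00"             # archive headed out of the network

if exe_in_form_fields(post_body):
    print("ALERT: executable content in a POST form field")
if rar_on_egress(egress):
    print("ALERT: RAR archive observed on an egress path")
```

Both checks fire here; the point is not these specific signatures, but that once a vector is understood it can be expressed as a precise rule rather than a generic correlation.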

Virtual USB? Not.

Secure USB devices – ain’t they great? They offer us the ability to bring trusted devices into insecure networks, and to perform trusted operations on untrusted computers: services like cryptographic key management, identity certificates and mutual authentication, sensitive document storage, and a pr0n-safe web browser platform. If I could drink out of one, maybe it would be the holy grail.

But the longer I look at the mobile computing space – the place where people will most want to use secure USB features – the more I think the secure USB market is in trouble. How many of you connect a USB stick to your Droid phone? How about your iPad? My point is that when you carry your smart device with you, you are unlikely to carry a secure USB device as well. The security services mentioned above are necessary, but there has been little integration of these functions into the devices we actually carry. USB hardware does offer some security advantages, but USB sticks belong to the laptop model (era) of mobile computing, which is being marginalized by smartphones. Secure online banking, go-anywhere data security, and “The Key to the Cloud” are clever marketing slogans. Each attempts to reposition the technology to win user preference – and fails. USB sticks are going the way of the Zip drive and the CD: the need remains, but they are rapidly being marginalized by more convenient media. That’s really the key: the security functions are strategic, but the medium is tactical.

So where does the secure USB market segment go? It should go where the users are: embrace the new platforms. And smart device users should look for these security features embedded in their mobile platforms. Just because the medium is fading does not mean the security features aren’t just as important as we move on to the next big thing. These things all run in cycles, but the current strong fashion is to get “an app for that” rather than carry another device. Lack of strong authentication won’t make users carry and use laptops rather than phones. It is unclear why USB vendors have been so slow to react, but they need to untie themselves from their fading medium to support user demand. I am not saying secure USB is dead – just that the vendors need to deliver their core value on today’s relevant platforms.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input factor into the research where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless they are used to provide context or contrast, or to make a point (which is very, very rare).
  • Although quotes from published primary research (and published primary research only) may be used in press releases, such quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote that appears in vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, at the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models unsuited to a series of blog posts, the content will be chunked up and posted at or before release of the paper, to solicit public feedback and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.