FireStarter: Risk Metrics Are Crap

I recently got into a debate with someone about cyber-insurance. I know some companies are buying insurance to protect against a breach, to contain risk, or for some other reason. In reality, these folks are flushing money down the toilet. Why? Because the insurance companies are charging too much. We’ve already had some brave soul admit that the insurers have no idea how to price these policies because they have no data, and as such they are making up the numbers. And I assure you, they are not going to put themselves at risk, so they are erring on the side of charging too much. Which means buyers of these policies are flushing money down the loo.

Of course, cyber-insurance is just one example of trying to quantify risk. And taking the chance that the ALE heads and my FAIR-weather friends will jump on my ass, let me bait the trolls and see what happens. I still hold that risk metrics are crap. Plenty of folks make up analyses in attempts to quantify something we really can’t. Risk means something different to everyone – even within your organization. I know FAIR attempts to standardize vernacular and get everyone on the same page (which is critical), but I still don’t see the value of actually building the models and plugging made-up numbers in.

I’m pretty sure modeling risk has failed miserably over time. Yet lots of folks continue to do so, with catastrophic results. They think generating a number makes them right. It doesn’t. If you don’t believe me, I have a tranche of sub-prime mortgages to sell you. There may be examples of risk quantification wins in security, but they are hard to find. Jack is right: the cost of non-compliance is zero* (*unless something goes wrong). I just snicker at the futility of trying to estimate the chance of something going wrong. And if a bean counter has ever torn apart your fancy spreadsheet estimating such risk, you know exactly what I’m talking about.
That said, I do think it’s very important to assess risk, as opposed to trying to quantify it. No, I’m not talking out of both sides of my mouth. We need to be able to categorize every decision into a number of risk buckets, which let us compare the relative risk of any decision against the other choices we could make. For example, we should be able to evaluate the risk of firing our trusted admin (probably pretty risky, unless your de-provisioning processes kick ass) versus not upgrading your perimeter with a fancy application-aware box (not as risky, because you already block Facebook and do network layer DLP). But you don’t need to be able to say the risk of firing the admin is 92 and the risk of not upgrading the perimeter is 25. Those numbers are crap, and they smell as bad as the vendors who try to tie their security products to a specific ROI.

BTW, I’m not taking a dump on all quantification. I have always been a big fan of security (as opposed to risk) metrics. From an operational standpoint, we need to measure our activity and work to improve it. I have been an outspoken proponent of benchmarking, which requires sharing data (h/t to New School), and I expect to kick off a research project digging into security benchmarking within the next few weeks. And we can always default to Shrdlu’s next-generation security metrics, which are awesome.

But I think spending a lot of time trying to quantify risk continues to be a waste. I know you all make decisions every day because Symantec thinks today’s CyberCrime Index is 64, and that’s down 6%. Huh? WTF? I mean, that’s just making sh*t up.

So fire away, risk quantifiers. Why am I wrong? What am I missing? How have you achieved success quantifying risk? Or am I just picking on the short bus this morning?

Photo credits: “Smoking pile of sh*t – cropped” originally uploaded by David T Jones
React Faster and Better: Piecing It Together

We have been through all the pieces of our advanced incident response method, React Faster and Better, so it is time to wrap up this series. The best way to do that is to actually run through a sample incident with some commentary to provide the context you need to apply the method to something tangible. It’s a bit like watching a movie while listening to the director’s commentary. But those guys are actually talented. For brevity we will use an extremely simple, high-level example of how the three response tiers evaluate, escalate, and manage incidents.

The alert

It’s Wednesday morning and the network analyst has already handled a dozen or so network/IDS/SIEM alerts. Most indicate probing from standard network script-kiddie tools and are quickly blocked and closed (often automatically). He handles those himself, just another day in the office.

Then the network monitoring tool pings an alert for an outbound request on a high port to an IP range located in a country known for intellectual property theft. The analyst needs to validate the origin of the packet, so he looks and sees the source IP is in Engineering. Ruh-roh.

The tier 1 analyst passes the information along to a tier 2 responder. Important intellectual property may be involved and he suspects malicious activity, so he also phones the on-call handler to confirm the potential seriousness of the incident. Tier 2 takes over, and the tier 1 analyst goes back to his normal duties.

This is the first indication that something may be funky. Probing is nothing new, and tier 1 needs to handle that kind of activity itself. But the outbound request very well may indicate an exfiltration attempt. And tracing it back to a device that does have access to sensitive data means it’s definitely something to investigate more closely. This kind of situation is why we believe egress monitoring and filtering are so important. Monitoring is generally the only way you can tell if data is actually leaking.
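To make the tier 1 triage step concrete, here is a minimal sketch of the kind of rule that would fire on this alert: flag outbound high-port traffic to a watchlisted destination range, and escalate when the source sits in a sensitive subnet. All IP ranges, subnet labels, and the port threshold below are made-up illustrations, not values from the incident or from any Securosis tooling.

```python
import ipaddress

# Hypothetical watchlist of destination ranges tied to IP-theft activity,
# and the internal subnet housing Engineering. Illustrative values only.
WATCHLIST = [ipaddress.ip_network("203.0.113.0/24")]
SENSITIVE_SUBNETS = {"engineering": ipaddress.ip_network("10.20.0.0/16")}
HIGH_PORT = 1024

def triage_egress(src_ip, dst_ip, dst_port):
    """Return a triage verdict for one outbound connection record."""
    dst = ipaddress.ip_address(dst_ip)
    # Only care about high-port traffic to watchlisted ranges.
    if dst_port <= HIGH_PORT or not any(dst in net for net in WATCHLIST):
        return "ignore"
    src = ipaddress.ip_address(src_ip)
    for label, net in SENSITIVE_SUBNETS.items():
        if src in net:
            # Source can reach sensitive data: hand off to tier 2.
            return f"escalate:{label}"
    return "investigate"

print(triage_egress("10.20.5.7", "203.0.113.99", 50123))  # escalate:engineering
```

The point of the sketch is the decision order: destination and port first (cheap checks that close most alerts automatically), source sensitivity second (the check that drives escalation).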
At this point the tier 1 analyst should know he is in deep water. He has confirmed the issue and pinpointed the device in question. Now it’s time to hand it off to tier 2. Note that the tier 1 analyst follows up with a phone call to ensure the hand-off happens and that there is no confusion.

How bad is bad?

The tier 2 analyst opens an investigation and begins a full analysis of network communications from the system in question. The system is no longer actively leaking data, but she blocks any traffic to that destination on the perimeter firewall by submitting a high-priority request to the firewall management team. After that change is made, she verifies that traffic is in fact being blocked.

She sets an alert for any other network traffic from that system and calls or visits the user, who predictably denies knowing anything about it. She also learns that system normally doesn’t have access to sensitive intellectual property, which may indicate privilege escalation – another bad sign. Endpoint protection platform (EPP) logs for that system don’t indicate any known malware.

She notifies her tier 3 manager of the incident and begins a deeper investigation of previous network traffic from the network forensics data. She also starts looking into system logs to begin isolating the root cause. Once the responder notices outbound requests to a similar destination from other systems on the same subnet, she informs incident response leadership that they may be experiencing a serious compromise. Then she finds that the system in question connected to a sensitive file server it normally doesn’t access, and transferred/copied some entire directories. It’s going to be a long night.

As we have been discussing, tier 2 tends to focus on network forensics, because it’s usually the quickest way to pinpoint attack proliferation and severity. The first step is to contain the issue, which entails blocking traffic to the external IP – this should temporarily eliminate any data leakage.
Remember, you might not actually know the extent of the compromise, but that shouldn’t stop you from taking decisive action to contain the damage as quickly as possible. At this point tier 3 is notified – not necessarily to take action, but so they are aware there might be a more serious issue. This kind of proactive communication streamlines escalation between response tiers.

Next, the tier 2 analyst needs to determine how far the issue has spread within the environment. So she searches through the logs and finds a similar source, which is not good. That means more than one device is compromised, and it could represent a major breach. Worse yet, she sees that at least one of the involved systems purposely connected to a sensitive file store and removed a big chunk of content. So it’s time to escalate and fully engage tier 3. Not that it hasn’t been fun thus far, but now the fun really begins.

Bring in the big guns

Tier 3 steps in and begins in-depth analysis of the involved endpoints and associated network activity. They identify the involvement of custom malware that initially infected a user’s system via drive-by download after the user clicked a phishing link. No wonder the user didn’t know anything – they didn’t have a chance against this kind of attack.

An endpoint forensics analyst then discovers what appears to be the remains of an encrypted RAR file on one of the affected systems. The network analysis shows no evidence the file was transferred out. It seems they dodged a bullet and detected the command and control traffic before the data exfiltration took place.

The decision is made to allow what appears to be encrypted command and control traffic over a non-standard port, while blocking all outbound file transfers (except those known to be part of normal business processes). Yes, they run the risk of blocking something legit, but senior management is now involved and has decided this is a worthwhile risk, given the breach in progress.
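The scoping step the tier 2 analyst performed – searching flow logs for other internal hosts talking to the same destination range – can be sketched as a simple grouping pass. The flow records and address ranges below are toy data for illustration; in practice these come out of a network forensics or flow collection tool.

```python
import ipaddress
from collections import defaultdict

# Toy flow records as (src_ip, dst_ip) pairs. Made-up data.
flows = [
    ("10.20.5.7",  "203.0.113.99"),
    ("10.20.5.9",  "203.0.113.41"),
    ("10.20.7.30", "198.51.100.8"),
]

# The destination range tied to the original alert (illustrative).
suspect_range = ipaddress.ip_network("203.0.113.0/24")

def scope_compromise(flows, suspect_range):
    """Group internal sources by their contacts with the suspect range."""
    hits = defaultdict(list)
    for src, dst in flows:
        if ipaddress.ip_address(dst) in suspect_range:
            hits[src].append(dst)
    return dict(hits)

print(scope_compromise(flows, suspect_range))
```

More than one source IP in the result is exactly the "similar source" finding above: the compromise has spread, and it’s time to engage tier 3.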
To limit potential data loss through the C&C channels left open, they
Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
  • Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote that appears in vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, at the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models unsuited to a series of blog posts, the content will be chunked up and posted at or before release of the paper, to solicit public feedback and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside the primary research agenda, but only if the end result can be unbiased and valuable to the user community, supplementing industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.