
Friday Summary, April 26, 2013: Birthday Edition

On March 13th I received a birthday card. It was from my dad. It was a nice card, it was clear he had put some thought into the selection, and I was genuinely touched by the gesture. On the Ides of March I received a birthday card from my grandmother. Another nice card, and it was thoughtful that she remembered my birthday. Two weeks later a birthday gift arrived from my mother. Not for me, mind you, but for my wife. It was a beautiful gift, obviously expensive, and again a superbly wonderful gesture. We don't get to keep in close contact, so I was both surprised and appreciative. On April 1st a gift card arrived, this time for me, again from my mom.

There is not much to this story unless you know a couple of additional facts. First, all three of the aforementioned blood relatives live under the same roof. Second, my birthday is in April; this week, in fact. My wife's is another month away. And they have not sent my wife a birthday gift in, well, at least 20 years. As it is with human nature, gifts and cards arriving on seemingly random dates make you wonder what's up. You question motivation. Are they OK? And for the first time I started to worry about my parents' health and well-being. Were they forgetting the date? Did they know what date it was? Jokingly, my wife has said 'Happy Birthday' to me each day since March 13th. To make a long story short, a phone call cleared up the situation and all is well. I think my parents just happened to find gifts they liked and sent them, dates be damned. Which is what you do when you think the person will really like the gift and you can't wait to give it to them.

Given my profession – it's certainly not a job – where segregation between work and … well, that's the point. My life and my work are not separate. The two are fully merged. There is no such thing as a work day, and there is no such thing as a day off. I work weekends, I don't really do vacations, but on the plus side I do try to make the best of every day. When I want to do something I do it, and adjust work/life accordingly. All of which made me realize that the gifts and cards from my relatives were nice, but I was ambivalent about the timing. The idea that a specific date did not matter struck me as profound. Why limit your ability to celebrate?

In that spirit I decided, what the heck, my birthday would not be a single day. I declared the entire week birthday week, with one fun birthday-related event every day. Birthday cake each and every day. An over-the-top dinner each night. One outing every day. One thing I have wanted to accomplish, every day this week. And because work/life does not go away, each day I have averaged 4-5 hours of work, as evidenced by my writing this post, and why a couple of you got wine-infused replies to various emails and phone calls last night (you know who you are). The experiment is thus far a success, and each day has offered extra time away from the computer to have some fun. This is working so well that I will do it every year going forward. Happy Birthweek!

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian's Dark Reading post on Database Blocking.

Favorite Securosis Posts

  • Adrian Lane: How to Use the 2013 Verizon Data Breach Investigations Report. Rich has put a lot of thought into his analysis and offers a unique perspective.
  • David Mortman: Big Data Security Jazz.
  • Mike Rothman: CipherCloud Loses Argument with Internet.
  • Rich: Teaching Updated Cloud Security Class at Black Hat USA.
    Jamie and I are working on additional material to make the class truly worthy of Black Hat.

Other Securosis Posts

  • Incite 4/24/2013: F Perfect.
  • Question everything, including the data.
  • The CISO's Guide to Advanced Attackers: Verify the Alert.
  • Security Analytics with Big Data [New Series].
  • The CISO's Guide to Advanced Attackers: Mining for Indicators.
  • Token Vaults and Token Storage Tradeoffs.
  • No news is just plain good: Friday Summary, April 18, 2013.

Favorite Outside Posts

  • David Mortman: Cryptography is a systems problem (or) 'Should we deploy TLS'.
  • Adrian Lane: Why You Should Overload WebSite Errors. Are you paying attention, developers? This is not security through obscurity – it's about not handing data to adversaries so they can hack your site.
  • James Arlen: How I Got Here: Chris Hoff.
  • Mike Rothman: Sriracha hot sauce purveyor turns up the heat.
  • Rich: Just How Did Apple "Journalism" Get This Bad? While Ian writes this specifically about Apple, it also applies to a lot of security writing.

Project Quant Posts

  • Email-based Threat Intelligence: To Catch a Phish.
  • Network-based Threat Intelligence: Searching for the Smoking Gun.
  • Understanding and Selecting a Key Management Solution.
  • Building an Early Warning System.
  • Implementing and Managing Patch and Configuration Management.
  • Defending Against Denial of Service (DoS) Attacks.
  • Securing Big Data: Security Recommendations for Hadoop and NoSQL Environments.
  • Tokenization vs. Encryption: Options for Compliance.

Top News and Posts

  • PC owners have to watch 24 sources for fixes.
  • CISPA cybersecurity bill.
  • Privacy advocates warn about coming tsunami of surveillance cameras. London already knows the result – cameras don't deliver.
  • Silicon Valley companies quietly try to kill Internet privacy bill.
  • Twitter has 2-factor authentication.
  • Brad Arkin promoted to CSO of Adobe. Brad is as good as they get – this is great news for all of us.

Blog Comment of the Week

This week's best comment goes to @VZDBIR, in response to How to Use the 2013 Verizon Data Breach Investigations Report. I am breaking with tradition this week to favorite a tweet:

@VZDBIR: Sometimes it's scary how @securosis gets all up in my brain. Those guys are smart. #Dbir https://t.co/kV995yrxUX

I would bet that Twitter account, like the Associated Press, was hacked.


Incite 4/24/2013: F Perfect

Perfect is my least favorite word in the English language. Nothing is perfect. There are always things that can be improved upon, no matter how good they are. And striving for perfection is an express train to disappointment and unhappiness. I'm a card-carrying disciple of "good enough". It doesn't need to be perfect to add value. So I don't obsess about typos, misplaced pixels, or any other such nonsense. Which can irritate certain business partners [and editors] at times. But I'm not going to change it. If I do work (or anything else), I get it to a point where I'm happy with it and move on.

That doesn't mean I strive to be mediocre. Or that I accept subpar effort from myself or anyone else. I do my best. I focus on consistent effort, not super-human perfection. Some folks believe you need to push beyond your self-imposed mental limits to achieve truly great things. I get that. I have tried that. It made me unhappy because I found I had a high bar for what I expected to achieve. I have the hyper self-motivation gene. I didn't need an external party to push me. What I needed was to get comfortable with good enough. In hindsight, it's sad that I felt failure even in the face of significant accomplishment. That's no way to go through life. At least not for me – you can do what you want.

This is a hard lesson to teach your kids, especially when the bar is set by someone else. The Boss and I expect our kids to work hard and achieve to their level of ability. XX2 has a large personality. She is passionate and talented and has tremendous potential. We see that potential, and so do her teachers. Unfortunately her teacher this year is a perfectionist who thinks all the kids should be perfect. A few months ago her teacher had beaten her down and we saw it. She stopped trying because she knew she couldn't achieve the perfection her teacher expected. Her behavior and grades slipped a little because she didn't care anymore. It was time to intervene.

So the Boss sat down with the teacher and they worked out a set of criteria that represents a good day for XX2. We thought some of the criteria were stupid, but they were based on stuff that irritates the teacher. She gets check marks every day based on the criteria and we sign off daily. She gets a prize from the teacher at the end of the week if she gets all positive check marks. Right – she needs to be perfect to get her prize from the teacher. Back to Square 1.

Clearly we weren't going to move the teacher off her perfection fixation. So we went around the teacher. We made it clear to XX2 that we don't expect perfection. F Perfect. F that teacher too. We put an alternative incentive plan in place. If XX2 gets 5 of 6 checks every day for the week, she gets something from us. And her success is now measured by how she did in our eyes, not the teacher's. Win! Of course we also talk about what she did that day and what she can do better the next day. We push her to be her best. But not to be perfect. To be human – perfectly imperfect – and we want her to be comfortable with that.

–Mike

Photo credits: 19. originally uploaded by silangel

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.
Security Analytics with Big Data

  • Introduction

The CISO's Guide to Advanced Attackers

  • Verify the Alert
  • Mining for Indicators
  • Intelligence, the Crystal Ball of Security
  • Sizing up the Adversary

Newly Published Papers

  • Email-based Threat Intelligence: To Catch a Phish
  • Network-based Threat Intelligence: Searching for the Smoking Gun
  • Understanding and Selecting a Key Management Solution
  • Building an Early Warning System
  • Implementing and Managing Patch and Configuration Management

Incite 4 U

You! Yes, you! You're a target: Most folks who are compromised spend their days blissfully unaware. They figure, who would be interested in what they have? As this post on DealBook shows, every company with any kind of intellectual property is a target for these cyber attacks. DRINK! Yeah, the article gets a 15-yard penalty for excessive use of 'cyber'. But their point is reasonable: start-up tech companies, who may think they know everything, have no specific mandate or requirement to do security. The authors put the impetus on investors to make sure the management team is challenged to ensure proper intellectual property protections are in place. But good luck with that. That's like the blind asking the blind whether the moon is out. – MR

Break the abuse cycle: It is well known that human behavior favors certainty over novelty. It varies based on our genes, but in general we like things to stay the same – it's an inertia thing. That makes sense, considering that for many years change signified impending death, so you might as well sprinkle a few red shirts with the explorer gene, but keep the rest of us safe at home (and no, I promise I didn't learn all this watching The Croods with my kids). So it comes as no surprise that, almost 13 years on, Windows XP is still used in many organizations. To be honest, I think Gartner's 10% estimate is low, especially if you count the entire retail and hospitality industry that runs its point of sale systems on XP. Really. Not only is it time to get off XP, because security support ends next year, but it is time to break the abuse cycle. We can't afford to lock ourselves into 10+ year old operating systems in today's threat environment. We need to architect systems and operational processes (such as user training) to allow more frequent upgrades.


Question everything, including the data

The good news about being in security is that you don't have to look too far for criticism of your work. Most of the time it's constructive criticism, so overall interaction with the security community makes your work markedly better. Which is why we live by the Totally Transparent Research process. It makes our work better.

But when our pals at Verizon clogged up my Twitter timeline this morning with their annual DBIR masterpiece (you can also check out our guidance on the DBIR), they dragged my attention back to a post by Jericho from Attrition: "Threat Intelligence", not always that intelligent, prompted by Symantec's most recent security trends report. Jericho summed up the value of security trend reports as only he can, and explained why folks tend not to challenge them often. The reason? Security companies, professionals, and journalists are complacent. They are happy to get the numbers that help them. For some, it sells copy. For others, it gets security budget. Since it helps them, their motivation to question or challenge the data goes away. They never realize that their "threat intelligence" source is stale and serving up bad data. It's not in the machine's best interest to question the data.

That's why most folks (besides me, I guess) don't poke at the vendor-sponsored survey data or other similar nonsense put forth as gospel in the security business. Anything that helps sell security is good, right? Well, no. Decisions based on faulty data tend to be faulty decisions. So Jericho presents a number of inconsistencies between Symantec's vulnerability data and the OSVDB dataset he contributes to. It's pretty compelling stuff. But we shouldn't minimize either the effort involved in building these reports or the value they do provide. There is a lot of value in these threat and data breach reports, if the data is reasonably accurate. We're security people. We question everything, so it's reasonable to question the data you use to make the case for your existence.

Photo credit: "Question" originally uploaded by ACU Library


Teaching Updated Cloud Security Class at Black Hat USA

This summer James Arlen and I are teaching the recently updated cloud security class we developed for the Cloud Security Alliance (CCSK Plus). We are pretty excited to teach this at Black Hat, and will be bringing a few extra tricks to handle the more advanced audience we expect. The class runs two days and covers a huge amount of material.

The first day is mostly lecture, covering:

  • Introduction to cloud computing and cloud architectures.
  • Securing cloud infrastructure (public and private).
  • Governing and managing risk in cloud computing (yep, we have to cover compliance, but we also include incident response).
  • Securing cloud data.
  • Application security and identity management for cloud.
  • Selecting and managing cloud providers.

This gives you everything you need to take the CCSK test if you want. The second day is where the real fun starts – we spend pretty much the entire time in labs, including:

  • Assessing cloud risk. This is a tabletop risk management exercise focused on practical scenarios.
  • Launching and securing public cloud instances. You'll learn the ins and outs of Amazon EC2 as you launch and secure your first instance. This includes a deep dive into security groups, picking AMIs, and using initialization scripts to auto-update and configure instances.
  • Encrypting cloud data. We encrypt a storage volume using dm-crypt and dig into different key management scenarios and encryption options. We may have some new demos here of products just hitting the market.
  • Building secure cloud applications. We expand on what we have created to build a multi-tier secure application, focusing on proper use of hypersegregation by splitting application components.
  • Federated identity and using IAM to harden the management plane. We add a little OpenID to our application. Up to this point everything builds out into a complete stack and all the exercises tie together. We also work with AWS IAM and how to use different kinds of credentials and templates to segregate things at the management plane.
  • Securing a private cloud. Using your laptops and our virtual machines we build a running OpenStack cloud in the classroom and run through the security essentials.

But here is the trick for Black Hat. Aside from teaching a very recently updated version of the class, we are preparing for a more technical audience. We will be bringing more advanced exercise options (on top of the basics, so people with less experience can still get something out of the class), and even a demo attack tool PoC. We will feel the audience out, but we already have some advanced (self-guided) exercises together.

If you're interested you can sign up now. Also, although this isn't an instructor class, anyone who takes this (and contacts us ahead of time) will be eligible to complete additional, web-based instructor training free of charge after Black Hat. We aren't a training organization, and we care more about getting more teachers out there than keeping it all to ourselves. Hope to see you in Vegas!
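For readers wondering what the instance-launch lab involves, here is a minimal sketch using boto (the Python AWS library of that era); the AMI ID, key pair, CIDR range, and security group name are placeholders, not actual class material:

import boto.ec2

# Connect to a region; credentials come from the environment or ~/.boto.
conn = boto.ec2.connect_to_region("us-east-1")

# Create a security group that only allows SSH from a management network.
sg = conn.create_security_group("ccsk-lab", "CCSK lab instances")
sg.authorize(ip_protocol="tcp", from_port=22, to_port=22, cidr_ip="203.0.113.0/24")

# Initialization (user-data) script: patch the instance on first boot.
user_data = """#!/bin/bash
yum update -y
"""

# Launch a single instance from a placeholder AMI with our group and key pair.
reservation = conn.run_instances(
    "ami-xxxxxxxx",              # placeholder AMI ID
    key_name="ccsk-lab-key",     # placeholder key pair
    instance_type="t1.micro",
    security_groups=["ccsk-lab"],
    user_data=user_data,
)
print(reservation.instances[0].id)

The actual exercises go further – IAM credentials, templates, and hardening steps – but this is the general shape of scripting against the EC2 API.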


CipherCloud Loses Argument with Internet

There are two ways to respond to criticism of your security product, especially when encryption is involved. One: respond cautiously, openly, and positively, as demonstrated last week by AgileBits, the folks behind 1Password. Two: do what CipherCloud did.

The TL;DR is that some people over on StackExchange were trying to figure out how CipherCloud works (specifically its homomorphic encryption, which CipherCloud states isn't actually part of the product). Some public materials were posted, and then the CipherCloud legal team smacked StackExchange with a DMCA takedown notice over screenshots of the product as people tried to figure out how it works. They also issued a takedown request based on "false and misleading statements", which does little more than fully engage the Streisand effect. CipherCloud has since issued a kinda-sorta apology and an update that, judging from the few comments, doesn't satisfy anyone. They apologize for the takedown requests and blame their legal department, but barely address the actual issue.

First of all, from what I have seen they have a good product which does what they claim it does. I have been briefed and know some large organizations evaluating or using it. The problem here isn't the product – it's their approach. When someone posts potentially unfavorable information about you on the Internet, trying to squash it always backfires. Also, if the posts are mostly trying to cut through your marketing material to see how the product works, that means people are interested in your product and you should treat them with respect.

CipherCloud's response to the DMCA takedown criticism is to state that the conclusions coming out of StackExchange were wrong and based on an older video demo. That's totally fine, but they fail to actually fill the information gap with accurate information. There is a little about what they don't do, the usual platitudes about FIPS-140, and that's about it. They say they will provide this information to customers, prospects, and partners, but want to keep their IP otherwise out of the public eye:

I understand and appreciate the interest in the market to better understand our technology, and I am happy to discuss additional details around our encryption implementation with our customers, prospects and partners. If you are interested in learning more, please contact CipherCloud directly via our website at info@ciphercloud.com

This isn't how to respond. I know their competitors, and trust me, they all have a good idea of how CipherCloud works. The ones who care set up straw buyers/prospects to get their hands on demos, however unethical that is. I don't think they need to reveal everything, but this was a great opportunity to get some additional attention, explain why they feel they are better than the competition, and generate some goodwill among those interested in the product. Instead they look like they are hiding something. 1Password nailed it with their reasoned response to a security concern, and the industry is well trained to be skeptical of security vendors – especially in encryption – who aren't transparent about their technology. Also, when you make a mistake like letting loose the legal dogs, you need to sound truly apologetic, not defensive.

Anyway, big companies can get away with this, but now CipherCloud has to deal with negative coverage as the second result in a Google search for their name. I am not a marketing exec, but that coverage is not good, and they will have to live with it for a while.


Big Data Security Jazz

I tend to avoid "security jazz" blog posts – esoteric arguments contrasting what we should be doing in security against what we do today. These rants don't really help IT professionals get their jobs done, so I skip them. But this is going to be such a post, because I need to talk about big data security approaches. Many of you will stop reading at this point. But for you data architects, CISOs, and security product development teams learning how to plan for big data security (particularly those of you who have been asking me lately) and wanting to understand the arcane research that influences my recommendations, read on.

I got started on this topic by considering what big data security will look like in coming years. I was reacting to the apparently random recommendations in the general security press. I eventually decided that this is simply unknown. I can't fairly slam the press for their apparently moronic recommendations, because I cannot be sure they will not be correct in the future. Stock-picking monkeys have made fools of professional traders, and it is likely to happen again with big data security predictions. As big data continues its metamorphosis – in data storage, data and node management, system orchestration, and query methods – the ways we secure these clusters will change. A series of industry research papers (PDF), blog posts, and academic research projects on big data convince me that we are still very early in big data's evolution. In each case we see some evolutionary changes (such as the Berkeley AMPLab's Spark product), as well as some total rethinks of how to do analysis with big data (such as Google's Pregel).

I am raising this topic here because I think it merits an open discussion. I am frequently asked how to approach big data security, and given that big data currently looks like Hadoop and Cassandra, there are specific actionable steps that make sense for these types of clusters. But for someone architecting security products, this model might well be obsolete by the time the product goes live. Based upon research findings from last year, things like masking, encryption, tokenization, identity management, and API security all make sense in Hadoop. When I speak with vendors who are looking to design big-data-specific security products, I need to caveat all recommendations with "as far as we know today". I certainly cannot say that in 5 years anyone will still be using Hadoop. I guess Hadoop will still be a big player, but who knows? It could be Dremel, a SQL-like system, in which case we will be able to apply many techniques we have already evolved for relational stores. If fashion dictates a Pregel-like ant swarm of worker threads, not so much.

Here is where I come to the predictions and recommendations. I recommend that you embed as much security into the application layer as you can. That's the best place to control access and determine who can see what. The application is the gateway to the data, where you can abstract away many underlying data management layer complexities to focus on user rights and business logic enforcement. Application-layer controls also scale security with the application. These are reasons I think (Updated) Intel Mashery, Axway Vordel, and CA Layer7 are important.
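To make the application-layer point concrete, here is a hypothetical sketch of field-level access control enforced in the application before anything is returned from the cluster; the roles and field names are invented for illustration and not tied to any particular product:

# Hypothetical application-layer control: filter results by role before
# anything is returned to the caller. Roles and fields are illustrative only.

ROLE_VISIBLE_FIELDS = {
    "analyst": {"timestamp", "event_type", "src_ip"},
    "auditor": {"timestamp", "event_type", "src_ip", "user_id"},
    "admin":   {"timestamp", "event_type", "src_ip", "user_id", "payload"},
}

def filter_record(record, role):
    """Return only the fields this role is allowed to see."""
    allowed = ROLE_VISIBLE_FIELDS.get(role, set())
    return {k: v for k, v in record.items() if k in allowed}

def run_query(records, role):
    # 'records' stands in for whatever the underlying cluster returns.
    return [filter_record(r, role) for r in records]

if __name__ == "__main__":
    sample = [{"timestamp": "2013-04-24T10:00:00", "event_type": "login",
               "src_ip": "198.51.100.7", "user_id": "jdoe", "payload": "..."}]
    print(run_query(sample, "analyst"))

An API gateway can enforce the same decision centrally, which is the role products like the ones named above play.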
But we cannot yet tell where big data is going – we don't know what applications, orchestration, queries, data storage, or even architectures will look like going forward – so it is impossible to know whether any security model we design will be absurd in a few years. The safe approach, given the uncertainty of big data environments, would be to protect at the data layer. That means using encryption, masking, and tokenization technologies that don't expose sensitive data to big data environments. Making that work currently requires big data security clusters fronting big data analytics clusters – not terribly efficient, and you need another cluster (perhaps twice as many, depending on scale).

Then I realize that IT folks, trying to get their jobs done, will ignore all this overly abstract mumbo-jumbo and fall back on what they are comfortable with: the encapsulation/walled garden model of security. Put a firewall "in front" of the cluster, sealing it off (virtually) from the rest of IT. Hard firewall shell on the outside, chewy lack of security on the inside. At this point we appreciate the Jacquith/Hoff Security Hamster Sine Wave of Pain model as a useful tool: you can show how each of these choices is right … and wrong. We will play catch-up because we have no choice in the matter.


The CISO’s Guide to Advanced Attackers: Verify the Alert

All the discussion so far in our CISO's Guide to Advanced Attackers has been preparation for the main event. The bell rings when an alert fires and it's time for your incident response process to kick in. But as we have seen through our adversary analysis and intelligence gathering, "advanced attackers" present some unique challenges. In particular, they have significant resources and time, which makes them difficult to deter – even if you successfully block one attack or stop a specific exfiltration, there will be more. A lot more. As usual we depend on process as the key to dealing with advanced attackers. But this class of adversaries requires you to put a premium on analyzing malware to isolate the root cause of the attack, looking for indicators to identify additional compromised devices, and then trying to piece together the bigger picture of the attack.

React Faster and Better, CISO Style

Let's turn back the clock and review some of the Incident Response Fundamentals we introduced a few years ago. The process remains largely the same, but you are likely to need some of the data sources covered in React Faster and Better and some of the analysis techniques presented in the Malware Analysis Quant process maps to deal with advanced attackers' tactics. If you weren't worried enough about this, remember that your perceived success as CISO is directly correlated with your ability to respond effectively to incidents and keep your organization out of the headlines. You don't need a SIEM to do that correlation, by the way.

During the Attack

Once the alert sounds it is time to figure out whether the attack is legitimate, what it looks like, and the proper escalation path (if necessary). Here are the general steps in that effort:

  • Gather information: For an investigator to make heads or tails of anything, your first tier needs to collect some information. Things like who triggered the alert and what systems and devices were involved. Were you notified by a third party (not a good sign)? Could you find an alert (perhaps one that was ignored) around the time period of the attack? You are trying to get a feel for whether this is an operational failure or something designed to evade your defenses.
  • Escalate: Next you decide how far up the chain of command this needs to go. If critical systems are involved (those on your list of things where compromise would be bad), then your spidey senses need to start tingling and you need the big guns involved. The escalation scenarios must be defined and agreed on ahead of time, so your first tier responders know what to do and when.
  • Size up: Once your second tier (or even third tier) responders are involved, the key is determining the scope of the situation. Was this a total compromise? Does extensive lateral movement indicate potential exfiltration? You need to know what you might be dealing with, and to assemble a list of the stuff you really need in order to investigate the incident.
  • Initial Containment: Depending on your initial assessment of the situation you may need to quarantine devices, step up monitoring, or remove the device's access to sensitive data. As with escalation, the initial set of containment actions should be documented in a playbook, with documented approval from all stakeholders, to ensure containment steps are not held up by bureaucracy.

At this point you should have initial defenses in place and a feel for whether you are dealing with folks who know what they're doing.
If the attack doesn't seem sophisticated or coordinated you can probably just wipe the machine and move on, hopefully using it as a teaching moment so the user doesn't do something stupid again. Is it a risk to just wipe and move on? You bet! You lose any ability to seriously analyze the attack, but part of the CISO's job is to allocate resources to the stuff that matters. Being able to tell the difference between an advanced attacker, an operational failure, and a stupid user error becomes a key determinant of success in the job, along with resource allocation.

If there is a chance that you are dealing with an advanced attacker (or something else is pushing you to do a broader investigation), you will start working through a more detailed forensics process. That means quarantining the affected devices, taking forensic images, and working to determine the root cause of the attack. That requires you to dig into the malware and determine how the devices were compromised, then assess the extent of the damage.

Digging for the Root (Cause)

Malware analysis is a discipline all its own. We have documented the entire process in Malware Analysis Quant, but CISO types rarely fire up BackTrack or ship files up to malware sandboxes, so here is what you need to make sure the right stuff is happening to identify the root cause of a compromise.

  • Build Testbed: It is rarely a good idea to analyze malware on production devices connected to production networks. So your first step is to build a testbed to analyze what you found. This is mostly a one-time effort, but you will always be adding to the testbed as your attack surface evolves. There are services that can do this as well, without the hardware investment.
  • Static Analysis: The first actual analysis step is static analysis of the malware file to identify things like packers, compile dates, and functions used by the program.
  • Dynamic Analysis: There are three aspects of what we call Dynamic Analysis: device analysis, network analysis, and proliferation analysis. To dig a layer deeper, first observe the impact of the malware on the specific device, dynamically analyzing the program to figure out what it actually does. Here you seek insight into memory usage, configuration, persistence, new executables, and anything else interesting associated with execution of the malware. This is managed by running the malware in a sandbox. Once you understand what the malware does to a device you can begin to figure out its communications paths. This includes command and control traffic, DNS tactics, exfiltration
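To give a sense of what the Static Analysis step looks like in practice, here is a minimal sketch that computes the file hashes an analyst would feed to intelligence lookups and, if the sample is a Windows PE, pulls the compile date and flags high-entropy (possibly packed) sections. The file path is a placeholder and pefile is an assumed third-party library, not part of any specific product mentioned here:

import hashlib
import datetime

def file_hashes(path):
    """Compute the hashes typically submitted to threat intelligence lookups."""
    md5, sha1, sha256 = hashlib.md5(), hashlib.sha1(), hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            for h in (md5, sha1, sha256):
                h.update(chunk)
    return {"md5": md5.hexdigest(), "sha1": sha1.hexdigest(),
            "sha256": sha256.hexdigest()}

def pe_basics(path):
    """Pull compile date and high-entropy (likely packed) sections, if a PE file."""
    try:
        import pefile  # third-party library; only needed for Windows binaries
    except ImportError:
        return None
    pe = pefile.PE(path)
    return {
        "compile_date": datetime.datetime.utcfromtimestamp(
            pe.FILE_HEADER.TimeDateStamp).isoformat(),
        "suspicious_sections": [
            s.Name.rstrip(b"\x00").decode(errors="replace")
            for s in pe.sections if s.get_entropy() > 7.0  # crude packer hint
        ],
    }

if __name__ == "__main__":
    sample = "suspect.exe"  # placeholder path to the captured file
    print(file_hashes(sample))
    print(pe_basics(sample))

The hashes and packer hints from a pass like this are what get matched against threat intelligence and fed into the dynamic analysis sandbox.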


How to Use the 2013 Verizon Data Breach Investigations Report

A few hours after this post goes live, the Verizon Enterprise risk team will release their 2013 Data Breach Investigations Report. This is a watershed year for the report: they are now up to 19 contributing organizations, including law enforcement agencies, multiple emergency response teams (CERTs), and even potential competitors. The report covers 47,000 incidents, among which there were 621 confirmed data disclosures. This is the best data set since the start of the report, so it provides the best insight into what is going on out there. We were fortunate enough to get a preview of the report and permission to post this a few hours before the report is released. In the next 24-72 hours you will see a ton of articles; as analysts we aren't here to make a story or nab a headline, but to help you get your job done. We offer a very brief overview of the interesting things we saw in the report, but our main focus for this post is to save you a little time in using the results to improve security in your own organization.

The best part this year is that the data reflects a more balanced demographic than in the past, and the Verizon risk team did a great job of breaking things out so you can focus on the pieces that matter to you. The report does an excellent job of showing how different demographics face different security risks, from different attackers, using different attack techniques. Instead of a bunch of numbers jumbled together, you can focus on the incidents most likely to affect your organization based on your size and industry. You probably knew that already, but now you have numbers to back you up.

But first: if you are an information security professional, you must read this report. Don't make decisions based on news articles, this post, or any other secondary analysis. It's a quick read, and well worth your time, even if you only skim it. Got it? There is a ton of good analysis in the report, and no outside summary will cover the important things you need for making your own risk decisions. Not even ours. We could easily write a longer analysis of the DBIR than the DBIR itself.

Key context

Before we get any deeper, Verizon made two laudable decisions when compiling the report that might cause some hand wringing among those who don't understand why:

  • They almost completely removed references to lost record counts, such as the number of credit card numbers lost. The report is much more diverse this year, and record counts (which are never particularly useful in breach analysis) were just being misused and misunderstood. Only 15% of confirmed incidents had anything close to a measurable lost records count, so it made no sense to mention counts.
  • The report focuses on the 621 confirmed data loss incidents, not the 47,000 total incidents. Another great decision – most organizations have different definitions of 'incident', which made data normalization a nightmare. This is the Data Breach Investigations Report, not an analysis of every infected desktop on your network.

These two great decisions make the report much more focused and useful for making risk decisions. A third piece of context is usually lost in much of the press coverage: when the DBIR says something like "password misuse was involved in an incident", it means it was one of multiple factors in the incident – not necessarily the root cause.
Later in the report they tie in the first step of the attack chains used, but you can't read "76% of network intrusions exploited weak or stolen credentials" as "76% of incidents were the result of weak or stolen credentials". Attacks use chains of techniques, and these are only one factor. Context really is king, because your goal is to break the attack chain at the most efficient and cost-effective point.

The last piece of context is an understanding of what happens when 19 organizations participate. Some use VERIS (the open incident recording methodology published by Verizon) and others use their own frameworks. The Verizon risk team converts between methodologies as needed, and usually excludes data if there isn't enough to cover the core needed to merge the data sets. This means they sometimes have more or less detail on incidents, and they are clear about this in the report. There is no way to completely avoid survey bias in a sample set like this – incidents must be detected to be reported, and a third party response team or law enforcement must be engaged for Verizon to get the data. This is why, for example, lost and stolen devices are practically nonexistent in this report. You don't call Verizon or Deloitte for a forensics investigation when a salesperson loses a laptop. Then again, we know of approximately zero cases where a lost device resulted in fraud. They definitely incur costs due to loss reporting and customer notification, but we can't find any ties to fraud.

There is one choice we disagree with, and one area we hope they will drop but probably have to keep:

  • The DBIR includes many incidents of ATM skimming and other physical attacks that don't involve network intrusion. These are less useful to the infosec audience, and we believe the banking community already has these numbers from other places. Tampering with ATMs in order to install skimmers is the vast majority of the 'Physical' threat action, which represents 35% of the breaches in the DBIR.
  • Year-over-year trends are nearly worthless now, due to the variety of contributors. It is a very different sample set from last year, the year before, or previous years. Perhaps if they filtered out only Verizon incidents, they could offer more useful trends. But people love these trend charts, despite the big changes in the sample set.

ATM skimming attacks are still data breaches, but the security controls to mitigate them are managed outside information security in most financial institutions. For the most part this doesn't negatively affect the data too much, but


Security Analytics with Big Data [New Series]

Big Data is being touted as a 'transformative' technology for security event analysis – promised to detect threats in the ever-increasing volume of event data generated from in-house, mobile, and cloud-based services. But a combination of PR hype, vendor positioning, and customer questions has pushed it to the top of my research agenda. Many customers are asking, "Wait, don't I already have SIEM for event analysis?" Yes, you do. And SIEM was designed and built to solve the same problems – but 7-8 years ago – and it is failing to keep up with current needs. It's not just that we're trying to scale up to a much larger set of data, but we also need to react to events an order of magnitude faster than before. Still more troubling, we are collecting multiple types of data, each requiring new and different analysis techniques to detect advanced attacks. Oh, and while all that slows down SIEM and log management systems, you are under the gun to identify attacks faster than before. This trifecta of issues limits the usefulness of SIEM and Log Management – and makes customers cranky.

Many SIEM platforms can't scale to the quantity of data they need to manage. Some are incapable of even storing basic data as fast as it comes in – forget about storing and analyzing non-standard data types. 'Real-time' analysis is a commonly cited SIEM feature, but after collection, storage, normalization, correlation, and enrichment, you are lucky to access new events within an hour – much less within a minute. The good news is that big data, correctly deployed, can solve these issues. In this paper we will examine how big data addresses scalability and performance, improves analysis, accommodates multiple data types, and can be leveraged alongside existing environments. Our goal is to help users differentiate reality from wishful thinking, and to provide enough information to make informed purchasing decisions.

To do this we need to demystify big data and contrast how it differs from traditional data management systems. We will offer a clear and unique definition of big data and explain how it helps overcome current technical limitations. We will offer a pragmatic way for customers to leverage big data, enabling them to select a solution strategically. We will highlight the limitations of SIEM and Log Management, key areas of customer dissatisfaction, and areas where big data excels in comparison. We will also discuss some changes required for big data analysis and data management, as well as the change in mindset necessary to take full advantage. This is not all theory and speculation – big data is currently being employed to detect security threats, address new requirements for IT security, and even help gauge the effectiveness of other security investments.

Big data natively addresses ever-increasing event volume and the rate at which we need to examine new events. There is no question that it holds promise for security intelligence, both in the numerous ways it can parse information and through its native capabilities to sift proverbial needles from monstrous haystacks. Cloud and mobile architectures force us to reexamine how we manage security data, and to scale across broader sets of systems and events – neither of which meshes with the structured data repositories most organizations rely on. But most IT and security practitioners do not yet fully understand big data or how to employ it, so they are unable to weed through all the hype, FUD, and hyperbole.
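As a toy illustration of the map/reduce pattern that underlies these platforms, here is a single-process sketch that counts failed logins per source IP; the CSV layout and file name are invented, and a real cluster would run the map phase in parallel across many nodes rather than in one loop:

from collections import Counter

def map_phase(lines):
    """Map: emit (source_ip, 1) for every failed login event."""
    for line in lines:
        fields = line.split(",")  # assumed layout: timestamp,src_ip,user,result
        if len(fields) >= 4 and fields[3].strip() == "FAILED":
            yield fields[1].strip(), 1

def reduce_phase(pairs):
    """Reduce: sum the counts per source IP."""
    totals = Counter()
    for key, value in pairs:
        totals[key] += value
    return totals

if __name__ == "__main__":
    with open("auth_events.csv") as f:  # placeholder input file
        counts = reduce_phase(map_phase(f))
    for ip, n in counts.most_common(10):
        print(ip, n)

The point is not the handful of lines of Python, but that the same map and reduce functions can be distributed across a cluster, which is how these platforms absorb data volumes that choke a single SIEM appliance.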
To take full advantage, however, requires both a deeper understanding of the technology and a subtle shift in mindset, to enable informed decisions on incorporating big data into existing IT systems, perhaps by shifting to newer big data platforms. This research paper will highlight several areas:

  • Use Cases: We will discuss issues customers cite with performance and scalability, particularly for security event analysis. We will discuss in detail how SIEM, Log Management, and event-centric systems struggle under new requirements for data velocity and data management, and why existing technologies aren't cutting it. We will also discuss the inflexibility of pre-big-data analysis, alerting, and reporting – and how they demand a new approach to security and forensics as we struggle to keep pace with the evolution of IT.
  • New Events and Approaches: This post will explain why we need to consider additional data types that go beyond events. Existing technologies struggle to meet emerging needs because threat data does not conform to traditional syslog and NetFlow event types. There is a clear trend toward broader data analysis to detect advanced attacks and better understand risks.
  • What is Big Data and how does it work? This post will offer a basic definition of big data, along with a discussion of the native capabilities that make big data different from traditional analysis tools. We will discuss how features like HDFS, MapReduce, Hive, and Pig work together to address issues of scale, velocity, performance, and multiple data types.
  • The promise of big data: We will explain why big data is viewed as a disruptive technology for security analytics. We will show how big data solutions mitigate current problems and change security and event analysis. We will discuss how big data platforms handle collecting and parsing event data, and cover different queries and reports that support new threat analyses.
  • How big data changes security platforms: This post will discuss how to supplement existing systems – through standalone instances, partial integration of big data with existing systems, systems that natively leverage big data infrastructure, or fully integrated systems that run atop NoSQL structures. We will also discuss operational changes to SIEM usage, including the growing importance of data scientists to security.
  • Integration roadmap and planning: In this section we will address the common concerns, limitations, and realities of merging big data into your IT systems. Specifically, we will discuss:
    Integration and deployment issues
    Platform selection (diversity of platforms and data)
    Policy and report development
    Data privacy and sharing
    Big data platform security basics

Our next post will cover use cases, the key areas where SIEM needs to improve,


The CISO’s Guide to Advanced Attackers: Mining for Indicators

The key to dealing with advanced attackers is not closing off every window of vulnerability. As we have discussed throughout this series, advanced attackers will figure out a way to gain a foothold in your environment. Actually they will find multiple ways into your environment. So if you hope for any semblance of success, your goal cannot be to stop them – instead you need to work on shortening the window between compromise and detection. We have called that Reacting Faster and Better for years. 5 years to be exact, but who's counting? The general concept is that you want to monitor your environment, gathering key security information that can either identify typical attack patterns as they are happening (yes, a SIEM-like capability), or, more likely, be searched for indicators identified via intelligence activities.

Collecting All the Security Data

We say "all the security data" a bit tongue-in-cheek, but not too much. We have been saying Monitor Everything almost as long as we have been talking about Reacting Faster, because if you fail to collect the data, you won't have an opportunity to get it later. Unfortunately most organizations don't realize their security data collection leaves huge gaps until the high-priced forensics folks let you know they can't truly isolate the attack, or the perpetrator, or the malware, or much of anything, because you just don't have the data. Most folks only need to learn that lesson once. So the first order of business is to lay down a collection infrastructure to store all your security data. The good news is that you have likely been collecting security data for quite some time, and your existing investment and infrastructure should be directly useful for dealing with advanced attackers. This means your existing log management system may be useful after all. But perhaps not – you might have tools that aren't at all suited to helping you find advanced attackers in your midst. One step at a time – now let's delve into the data you need to collect.

  • Network Security Devices: Your firewalls and IPS devices generate huge logs of what's blocked, what's not, and which rules are effective. You will receive intelligence that typically involves port/protocol/destination combinations, or application identifiers for next-generation firewalls, which can identify potential attack traffic.
  • Configuration Data: One key area to mine for indicators is the configuration data from your devices. It enables you to look for very specific files and/or configurations that have been identified as indicators of compromise.
  • Identity: Similarly, information about logins, authentication failures, and other identity-related data is useful for matching against attack profiles from third-party threat intelligence providers.
  • NetFlow: This is another data type commonly used in SIEM environments; it provides information on protocols, sources, and destinations for network traffic as it traverses devices. NetFlow records are similar to firewall logs but far smaller, making them more useful for high-speed networks. Flows can identify lateral movement by attackers, as well as large exfiltration file transfers.
  • Network Packet Capture: The next frontier for security data collection is actually to capture all network traffic on key segments. Forensics folks have been doing this for years during investigations, but proactive continuous full packet capture – for the inevitable incident responses which haven't even started yet – is still an early market. For more detail on how full packet capture impacts security operations, check out our Network Security Analytics research.
  • Application/Database Logs: Application and database logs are generally less relevant, unless they come from standard applications or components likely to be specifically targeted by attackers. But you might be able to discover unusual application and/or database transactions – which might represent bulk data removal, injection attempts, or efforts to attack your critical data.
  • Vulnerability Scans: This is another information source with limited value, detailing which devices are vulnerable to specific attacks. They help eliminate devices from your search criteria to streamline search activities.

Of course this isn't an exhaustive list, and you are likely already capturing much of this data. That's a good thing, but capturing and analyzing data within the context of a compliance audit is fundamentally different from trying to detect advanced attacker activity. We are sticking to the CISO view for this series, so we won't dig into the technical nuances of the collection infrastructure. But it must be built on a strong analytical foundation which provides a threat-centric view of the world, rather than one focused on compliance reporting. More advanced organizations may already have a Security Operations Center (SOC) leveraging a SIEM platform for more security-oriented correlation and forensics to pinpoint and investigate attacks. That's a start, but you will likely require some kind of Big Data thing, which should be clear after we discuss what we need this detection platform to do.

Attack Patterns FTW

As much as we have talked about the futility of blocking every advanced attack, that doesn't mean we shouldn't learn from both the past and the misfortune of others. We spent time early in this process sizing up the adversary for some insight into what is likely to be attacked, and perhaps even how. That enables you to look for those attack patterns within your security data – the promise of SIEM technology for years. The ultimate disconnect with SIEM was the hard truth that you needed to know what you were looking for. Far too many vendors forgot to mention that little requirement when selling you a bill of goods. Perhaps they expected attackers to post their plans on Facebook or something? But once you do the work to model the likely attacks on your key information, and then enumerate those attack patterns in your tool, you can get tremendous value. Just don't expect it to be fully automated.

The best case is that you receive an alert about a very likely attack because it's something you were looking for. But the quickest way to get killed is to plan for the best case. So we also need to ensure we are ready for the worst case: advanced attackers using attacks you haven't seen before, in ways you don't expect. That's when
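To make the indicator-matching idea concrete, here is a minimal sketch that sweeps a NetFlow-style export for connections to known-bad addresses from a hypothetical threat intelligence feed; the file name, column names, and indicator values are all placeholders rather than any particular product's format:

import csv

# Hypothetical indicator list from a threat intelligence feed: known-bad IPs.
BAD_IPS = {"203.0.113.45", "198.51.100.99"}

def scan_flows(path):
    """Flag flow records whose destination matches a known indicator.

    Assumes a CSV export with 'src_ip', 'dst_ip', and 'bytes' columns;
    these names are illustrative only.
    """
    hits = []
    with open(path, "r") as f:
        for row in csv.DictReader(f):
            if row["dst_ip"] in BAD_IPS:
                hits.append((row["src_ip"], row["dst_ip"], row["bytes"]))
    return hits

if __name__ == "__main__":
    for src, dst, nbytes in scan_flows("netflow_export.csv"):
        print("possible C&C or exfiltration: %s -> %s (%s bytes)" % (src, dst, nbytes))

In practice matches like these would be one input into the attack-pattern analysis described above, not an alert stream on their own.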


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.