RSAC 2010 Guide: Top Three Themes

As most of the industry gets ramped up for the festivities of the 2010 RSA Conference next week in San Francisco, your friends at Securosis have decided to make things a bit easier for you. We’re putting the final touches on our first Securosis Guide to the RSA Conference. As usual, we’ll preview the content on the blog and have the piece packaged in its entirety as a paper you can carry around at the conference. We’ll post the entire PDF tomorrow, and through the rest of this week we’ll be highlighting content from the guide. To kick things off, let’s tackle what we expect to be the key themes of the show this year.

Key Themes

How many times have you shown up at the RSA Conference to see the hype machine fully engaged about a topic or two? Remember when 1999 was going to be the Year of PKI? And 2000. And 2001. And 2002. So what’s going to be the news of the show in 2010? Here is a quick list of three topics that will likely be top of mind at RSA, and why you should care.

Cloud/Virtualization Security

Cloud computing and virtualization are two of the hottest trends in information technology today, and we fully expect this trend to extend into RSA sessions and the show floor. There are few topics as prone to marketing abuse and general confusion as cloud computing and virtualization, despite some significant technological and definitional advances over the past year. But don’t be confused – despite the hype, this is an important area. Virtualization and cloud computing are fundamentally altering how we design and manage our infrastructure and consume technology services – especially within data centers. This is definitely a case of “where there’s smoke, there’s fire”.

Although virtualization and cloud computing are separate topics, they have a tight symbiotic relationship. Virtualization is both a platform for, and a consumer of, cloud computing. Most cloud computing deployments are based on virtualization technology, but the cloud can also host virtual deployments. We don’t really have the space to fully cover virtualization and cloud computing in this guide, though we will dig a layer deeper later. We highly recommend you take a look at the architectural section of the Cloud Security Alliance’s Security Guidance for Critical Areas of Focus in Cloud Computing (PDF). We also draw your attention to the Editorial Note on Risk on pages 9-11, but we’re biased because Rich wrote it.

Cyber-crime & Advanced Persistent Threats

Since it’s common knowledge that not only government networks but also commercial entities are being attacked by well-funded, state-sponsored, and very patient adversaries, you’ll hear a lot about APT (advanced persistent threats) at RSA. First let’s define APT: an attacker focused on you (or your organization) with the express objective of stealing sensitive data. APT does not specify an attack vector, which may or may not be particularly advanced – the attacker will do only what is necessary to achieve their objective. Securosis has been in the lead of trying to deflate the increasing hype around APT, but vendors are predictable animals. Where customer fear emerges, the vendors circle like vultures, trying to figure out how their existing widgets can be used to address the new class of attacks. But to be clear, there is no silver bullet to stop or even detect an APT – though you will likely see a lot of marketing buffoonery discussing how this widget or that could have detected the APT.
Just remember the Tuesday morning quarterback always completes the pass, and we’ll see a lot of that at RSA. It’s not likely any widget would detect an APT, because an APT isn’t an attack – it’s a category of attacker. And yes, although China is usually associated with APT, it’s bigger than one nation-state. It’s a totally new threat model. This nuance is important, because it means the adversary will do whatever is necessary to compromise your network. In one instance it may be a client-side 0-day; in another it could be a SQL injection attack. If the attack can’t be profiled, then there is no way a vendor can “address the issue.” But there are general areas of interest for folks worried about APT and other targeted attacks, and those are detection and forensics. Since you don’t know how they will get in, you have to be able to detect and investigate the attack as quickly as possible – we call this “React Faster”. Thus the folks doing full packet capture and forensic data collection should be high on your list of companies to check out on the show floor. You’ll also want to check out some sessions, including Rich and Mike’s Groundhog Day panel, where APT will surely be covered.

Compliance

Compliance as a theme for RSA? Yes, you have heard this before. Unlike 2005, though, ‘compliance’ is not just a buzzword, but a major driver for the funding and adoption of most security technologies. Assuming you are aware of current compliance requirements, you will be hearing about new requirements and modifications to existing regulations (think the next version of PCI, or HIPAA/HITECH evolution). This is the core of IT’s love/hate relationship with compliance. Regulatory change means more work for you, but at the same time, if you need budget for a security project in today’s economy, you need to associate the project with both a compliance mandate and cost savings. Both vendors and customers should be talking a lot about compliance, because it helps both parties sell their products and projects, respectively. The good news at this point is that security vendors do provide value in documenting compliance. They have worked hard to incorporate policies and reports specific to common regulations into their products, and provide management and customization to address the needs of other constituencies. But there will still be plenty of hype around ease of use and time to value. So there will be plenty of red “Easy PCI” buttons to bring back for your kids, and promises of “Instant Sarbanes-Oxley” and “Comprehensive HIPAA support” in every


Introducing SecurosisTV: RSAC Preview

I know what you are thinking. “Oh god, they should stick to podcasting.” You’re probably right about that – it’s no secret that Rich and I have faces made for radio. But since we hang around with Adrian, we figured maybe he’d be enough of a distraction that you won’t focus on us. You didn’t think we kept Adrian around for his brains, did you? Joking aside, video is a key delivery mechanism for Securosis content moving forward. We’ve established our own SecurosisTV channel on blip.tv, and we’ll be posting short-form research on all our major projects this way throughout the year. You can get the video directly through iTunes or via RSS, and we’ll also be embedding the content on the blog. So on to the main event: our first video is an RSA Conference preview highlighting the 3 Key Themes we expect to see at the show. The video runs about 15 minutes, and we make sure not to take ourselves too seriously. Direct Link: http://blip.tv/file/3251515 Yes, we know embedding a video is not NoScript friendly, so for each video we will also include a direct link to the page on blip.tv. We just figure most of you are as lazy as we are, and will appreciate not having to leave our site. We’re also interested in comments on the video – please let us know what you think: whether it’s valuable, what we can do to improve the quality (besides getting new talent), or any other feedback you may have.


What is Your Plan B?

In what remains a down economy, you may be suspicious when I tell you to think about leaving your job. But ultimately, in order to survive, you always need to have a Plan B or Plan C in place, just in case. Blind loyalty to an employer (or to employees) died a horrendous death many years ago.

What got me thinking about the whole concept was Josh Karp’s post on the CISO Group blog talking about the value of vulnerability management. He points out the challenges of selling VM internally. Yet the issues with VM didn’t resonate with me. It was the behavior of the CTO, who basically squelched the discussion of vulnerabilities found on their network because he didn’t want to be responsible for fixing them. To be clear, this kind of stuff happens all the time. That’s not the issue. The issue is understanding what you would do if you worked there. I would have quit on the spot, but that’s just me. Do you have the stones to just get up, pack your personal effects, and leave? It takes a rare individual with that kind of confidence – heading off into the unknown. Assuming it would be unwise to act rashly (which I’ve been known to do from time to time), you need to revisit your personal Plan B. Or build it, if you aren’t the type of person with a bomb shelter in your basement.

I advise all sorts of folks to be very candid about their ability to be successful, given the expectations of their jobs and the resources they have to execute. If the corporate culture allows a C-level executive to sweep legitimate risks under the rug, then there is zero chance of security success. If you can’t get simple defenses in place, then you can’t be successful – it’s as simple as that. If you find yourself in this kind of situation (and it’s not as rare as it seems), it’s time to execute on Plan B and find something else to do.

Being a contingency planner at heart, I also recommend folks keep a list of “things you will not do” under any circumstances. There are lots of folks in Club Fed who were just following the instructions of their senior executives, even though they knew those instructions were wrong. My Dad told me when I first joined the working world that I would only get one chance to compromise my integrity, and to think very carefully about everything I did. It makes sense to run those scenarios through your mind ahead of time, so you’ll know where your personal line is, and when someone has crossed it.

I know it’s pretty brutal out there in the job market. I know it’s scary when you have responsibilities and people depend on you to provide. But if someone asks you to cross that line, or you know you have no chance to be successful, you owe it to yourself to move on quickly. You need to be ready to do so, and that preparation starts now. Here is your homework over the weekend: Polish your resume. Hopefully that doesn’t take long because it’s up to date, right? If not, get it up to date. Then start networking and make it a habit. Set up a lunch meeting with a local peer in another organization every week for two months. There is no agenda. You aren’t looking for anything except to reconnect with someone you lost touch with, or to learn how other folks are handling common issues. Two months becomes three months becomes a year, and then you know lots of folks in your community. Which is invaluable when the brown stuff hits the fan. You also need to get involved in your local community, assuming you want to stay there.
Go to your local ISSA, NAISG, or InfraGard meeting and network a bit. Even if you are happy in your job. As Harvey Mackay says, Dig Your Well Before You’re Thirsty.


Incite 2/17/2010 – Open Your Mind

I was in the car the other day with my oldest daughter. She’s 9 (going on 15, but that’s another story) and blurted out: “Dad, I don’t want to go to Georgia Tech.” Huh? Now she is the princess of non-sequiturs, but even this one surprised me. Not only does she have an educational plan (at 9), but she knows that GA Tech is not part of it. So I figured I’d play along.

First off, I studied to be an engineer, so I wasn’t sure if she was poking at me, or what the deal was. Second, her stance against a state school is problematic, because GA residents can go to a state school tuition-free, thanks to the magic of the Hope Scholarship, funded by people who don’t understand statistics – I mean the GA Lottery. Next I figured she was going to blurt out something about going to MIT or Harvard, and I saw my retirement fund dwindle to nothing. Looks like I’ll be eating Beef-a-Roni in my twilight years.

But it wasn’t that. She went on to explain that one of her friends made the point that GA Tech teaches engineering, and she didn’t want to be an engineer. Now things were coming more into focus for me. I then asked why she didn’t want to be an engineer. Right – it’s more about the friend’s opinions than about what she wants. Good, she is still 9. I then proceeded to go through all the reasons that being an engineer could be an interesting career choice, especially for someone who likes math, and why GA Tech would be a great choice, even if she didn’t end up being an engineer. It wasn’t about pushing her to one school or another – it was about making sure she kept an open mind. I take that part of parenting pretty seriously.

Peer and family pressure is a funny thing. I thought I wanted to be a doctor growing up. I’m not sure whether medicine actually interested me, or whether I just knew that culturally it was expected. I did know being a lawyer was out of the question. (Yes, that was a zinger directed at all my lawyer friends.) Ultimately I studied engineering and then got into computers way back when. I haven’t looked back since. Which is really the point. I’m not sure what any of my kids’ competencies and passions will be. Neither do they. But it’s my job (working with The Boss) to make sure they get exposed to all sorts of things, keep an open mind, and hopefully find their paths. – Mike

Photo credit: “Open Minds” originally uploaded by gellenburg

Incite 4 U

Things are a little slow on the blog this week. Rich, Adrian, and I are sequestered plotting world domination. Actually, we are finalizing our research agendas & upcoming reports, and starting work on a new video initiative. Thus I’m handling the Incite today, so Adrian and Rich can pay attention to our clients. Toward the end of the week, we’ll also start posting a “Securosis Guide to RSAC 2010” here, to give those of you attending the conference a good idea of what’s going to be hot, and what to look for.

I also want to throw a grenade at our fellow bloggers. Candidly, most of you security bloggy types have been on an extended vacation. Yes, you are the suxor. We talked about doing the Incite twice a week, but truth be told, there just isn’t enough interesting content to link to. Yes, we know many of you are enamored with Twitter and spend most of your days there. But it’s hard to dig into a discussion in 140 characters. And our collective ADD kicked in, so we got tired of blogging after a couple years. But keep in mind it’s the community interaction that makes all the difference.
So get off your respective asses and start blogging again. We need some link fodder.

Baiting the Risk Modeling Crowd – Given my general frustration with the state of security metrics and risk quantification, I couldn’t pass up the opportunity to link to a good old-fashioned beat down from Richard Bejtlich and Tim Mullen discussing risk quantification. Evidently some windbag puffed his chest out with all sorts of risk quantification buffoonery, and Tim (and then Richard) jumped on him. They are trying to organize a public debate in the near future, and I want a front row seat – if only to shovel some dirt on the risk quantification model. Gunnar weighed in on the topic as well. – MR

Meaningful or Accurate: Pick One – I like Matthew Rosenquist’s attempts to put security advice in a fortune cookie, and this month’s is “Metrics show the Relevance of Security.” Matthew then describes how immature metrics are at this point, and how companies face an awful choice between meaningful and accurate metrics – you only get to pick one. The root of the issue is that “The industry has not settled on provable and reliable methodologies which scale with any confidence.” I know a lot of folks are working on this, and the hope is for progress in the near term. – MR

Wither virtual network appliances? – Exhibit #1 of someone who now seems to think in 140 characters is Chris Hoff. But every so often he does blog (or record a funny song), and I want to give him some positive feedback, so maybe he blogs some more. In this post, Chris talks about the issues with network virtual appliances – clearly they are not ready for prime time, and a lot of work needs to be done to get them there, especially if the intent is to run them in the cloud. Truth be told, I still don’t ‘get’ the cloud, but that’s why I hang out with Rich. He gets it, and at some point he will school me. – MR

Getting to the CORE of Metasploit – Normally vendor announcements aren’t interesting (so $vendor, stop asking if we are going to cover your crappy 1.8 release on


Network Security Fundamentals: Looking for Not Normal

To state the obvious (as I tend to do), we all have too much to protect. No one gets through their list every day, which means perhaps the most critical skill for any security professional is the ability to prioritize. We’ve got to focus on the issues that present the most significant risk to the organization (whatever you mean by risk) and act accordingly. I haven’t explicitly said it, but the key to network security fundamentals is figuring out how to prioritize. And to be clear, though I’m specifically talking about network security in this series, these tactics can (and need to) be applied to all the other security domains.

To recap how the fundamentals enable this prioritization: first we talked about implementing default deny on your perimeter. Next we discussed monitoring everything to provide a foundation of data for analysis. In the last post, correlation was presented as a way to start analyzing that data. By the way, I agree with Adrian, who is annoyed with having to do correlation at all. But it is what it is, and maybe someday we’ll get all the context we need to make a decision based on log data alone, but we certainly can’t wait for that. So to the degree you do correlate, you need to do it effectively.

Pattern Matching

Going hand in hand with prioritization is the ability to match patterns. Most of the good security folks out there do this naturally: consuming a number of data points, understanding how they fit together, and then making a decision about what that means, how it changes things, and what action is required. The patterns help you understand what you need to focus on at any given time. The first fundamental step in matching patterns is knowing your current state. Let’s call that the baseline. The baseline gives you perspective on what is happening in your environment. The good news is that a “monitor everything” approach gives you sufficient data to establish the baseline. Let’s take a few examples of typical data types and what their baselines look like:

  • Firewall Logs: You’ll see attacks in the firewall logs, so your baseline consists of the normal number/frequency of attacks, time distribution, and origin. If all of a sudden you are attacked at a different time, from a different place, or much more often than normal, it’s time to investigate.
  • Network Flows: Network flows show traffic dynamics on key segments, so your baseline tells you which devices communicate with which other devices – both internal and external to your network. If you suddenly start seeing a lot of flow from an internal device (on a sensitive network) to an external FTP site, it could be trouble.
  • Device Configurations: If a security device is compromised, there will usually be some type of configuration and/or policy change. The baseline in this case is the last known good configuration. If something changes, and it’s not authorized or in the change log, that’s a problem.

Again, these examples are not meant to be exhaustive or comprehensive, just to give an idea of the types of data you are already collecting and what the baseline could look like. Next you set up your initial alerts to detect the attacks you deem important. Each management console for every device (or class of devices) gives you the ability to set alerts. There is leverage in aggregating all this data (see the correlation post), but it’s not necessary.
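To make “looking for not normal” concrete, here is a minimal sketch of the firewall log baseline in Python. Everything specific here is an assumption for illustration: the pre-parsed (timestamp, source IP) records, the hourly bucketing, and the 3-standard-deviation threshold are placeholders you would tune to your own environment.

```python
from collections import Counter
from datetime import datetime
from statistics import mean, stdev

# Hypothetical parsed firewall log: (timestamp, source_ip) tuples.
# In practice these records come from whatever aggregates your logs.
events = [
    (datetime(2010, 2, 15, 3, 12), "203.0.113.7"),
    (datetime(2010, 2, 15, 3, 47), "198.51.100.23"),
    (datetime(2010, 2, 15, 4, 2), "203.0.113.7"),
    # ... thousands more records ...
]

# Build the baseline: how many events do we normally see per hour?
hourly = Counter(ts.replace(minute=0, second=0, microsecond=0) for ts, _ in events)
counts = list(hourly.values())
baseline_mean, baseline_dev = mean(counts), stdev(counts)

# "Not normal" = an hour whose volume sits more than 3 standard
# deviations above the baseline. The threshold is a placeholder;
# expect to tune it over months, not days.
for hour, count in sorted(hourly.items()):
    if count > baseline_mean + 3 * baseline_dev:
        print(f"ALERT: {count} events in hour {hour} (baseline ~{baseline_mean:.0f})")
```

The same shape works for any of the baselines above – swap in a different event source and bucketing, and the “not normal” test stays the same.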
Now I’ll get back to something discussed in the correlation post: the importance of planning your use cases before implementing your alerts. You need to rely on those thresholds to tell you when something is wrong. Over time, you tune the thresholds to refine how and when you get alerted. Don’t expect this tuning process to go quickly or easily. Getting this right really is an art, and you’ll need to iterate for a while to get there – think months, not days. You can’t look for everything, so the use cases need to cover the main data sources you collect, with appropriate alerts for when something is outside normal parameters. I call this looking for not normal, and yes, it’s really anomaly detection. But most folks don’t think favorably of the term “anomaly detection”, so I use it sparingly.

Learning from Mistakes

You can learn something is wrong in a number of ways. Optimally, you get an alert from one of your management consoles. But that is not always the case. Perhaps your users tell you something is wrong. Or (worst case) a third party informs you of an issue. How you learn you’ve been pwned is less important than what you do once you know. Once you jump into action, you’re looking at the logs, jumping into management consoles, and isolating the issues. How quickly you identify the root cause has everything to do with the data you collect, and how effectively you analyze it. We’ll talk more about incident response later this year, but suffice it to say your only job at that point is to contain the damage and remediate the problem.

Once the crisis ends, it’s time to learn from the experience. The key, in terms of “looking for not normal”, is to make sure it doesn’t happen again. The attackers do their jobs well, and you will be compromised at some point. Make sure they don’t get you the same way twice. The old adage, “Fool me once, shame on you – fool me twice, shame on me,” is very true. So part of the post-mortem process is to define what happened, but also to look for that pattern again. Remember that attackers are fairly predictable. Like the direct marketers who fill your mailbox with crap every holiday season, if something works, they’ll keep doing it. Thus, when you see an attack, you should expect to see it again. Build another set of rules/policies to make sure that same attack is detected quickly and accurately. Yes, I know this is a blacklist mindset, and there are limitations to this approach since you


The Death of Product Reviews

As a security practitioner, it has always been difficult to select the ‘right’ product. You (kind of) know what problem needs to be solved, yet you often don’t have any idea how a particular product will work and scale in your production environment. Sometimes it is difficult even to identify the right vendors to bring in for an evaluation. Even when you do, no number of vendor meetings, SE demos, or proof of concept installations can tell you what you need to know. So it’s really about assembling a number of data points and doing your homework to the greatest degree possible.

Part of that research process has always been product reviews by ‘independent’ media companies. These folks line up a bunch of competitors, put them through their paces, and document their findings. Again, this doesn’t represent your environment, but it gives you some clues on where to focus during your hands-on tests, and can help winnow down the short list. Unfortunately, the patient (the product review) has died. The autopsy is still in process, but I suspect the product review died of starvation. There just hasn’t been enough food around to sustain this legend of media output. And what has been done recently subsists on a diet of suspicion, media bias, and shoddy work. The good news is that tech media has evolved with the times. Who doesn’t love to read tweets about regurgitated press releases? Thankfully Brian Krebs is still out there actually doing something useful.

Seeing Larry Suto’s recent update of his web application scanner test (PDF) and the ensuing backlash was the final nail in the coffin for me. But this patient has been sick for a long time. I first recognized the signs years ago when I was in the anti-spam business. NetworkWorld commissioned a bake-off of 40 mail security gateways and published the results. In a nutshell, the test was a fiasco for several reasons:

  • Did not reflect reality: The test design was flawed from the start. The reviewer basically resent his mail to each device. This totally screwed up the headers (by adding another route) and dramatically impacted effectiveness. This isn’t how the real world works.
  • Too many vendors: To really test these products, you have to put them all through their paces. That means at least a day of hands-on time to barely scratch the surface. So to really test 40 devices would take 40-60+ man-days of effort. Yeah, I’d be surprised if a third of that was actually spent on testing.
  • Uneven playing field: The reviewers let my company send an engineer to install the product and provide training. We did that with all enterprise sales, so it was standard practice for us, but it also gave us a definite advantage over competitors who didn’t have a resource there. If every review presents a choice – a) fly someone to the lab for a day, or b) suffer by comparison to the richer competitors – how fair and comprehensive can reviews really be?
  • Not everyone showed: There is always great risk in doing a product review. If you don’t win, and handily, it is a total train wreck internally. Our biggest competitor didn’t show up for that review, so we won, but it didn’t help in most of our head-to-head battles.

Now let’s get back to Suto’s piece to see how things haven’t changed, and why reviews are basically useless nowadays. By the way, this has nothing to do with Larry or his efforts. I applaud him for doing something, especially since he evidently didn’t get compensated for his efforts.
In the first wave, the losing vendors take out their machetes and start hacking away at Larry’s methodology and findings. HP wasted no time, nor did a dude who used to work for SPI. Any time you lose a review, you blame the reviewer. It certainly couldn’t be a problem with the product, right? And the ‘winner’ does its own interpretation of the results. So this was a lose-lose for Larry. Unless everyone wins, the methodology will come under fire.

Suto tested 7 different offerings, and that probably was too many. These are very complicated products that do different things in different ways. He also used the web applications put forth by the vendors in a “point and shoot” type of methodology for the bulk of the tests. Again, this doesn’t reflect real life, or how the product would stand up in a production environment. Unless you actually use the tool for a substantial amount of time against a real application, there is no way around this limitation. I used to love the reviews Network Computing did in their “Real-World Labs”. That was the closest we could get to reality. Too bad there is no money in product reviews these days – which means whoever owns Network Computing and Dark Reading can’t sponsor these kinds of tests anymore, or at least not objective ones. The wall between the editorial and business teams has been gone for years. At the end of the day, it gets back to economics.

I’m not sure what level of help Larry got from the vendors during the test, but unless it was nothing from nobody, you’re back to the uneven playing field. Even that doesn’t reflect reality, since in most cases (for an enterprise deployment, anyway) vendor personnel will be there to help, train, and refine the process. And in most cases, craftily poison the process for competitors, especially during a proof of concept trial. This also gets back to the complexity issue. Today’s enterprise environments are too complicated to expect a lab test to reflect how things work. Sad, but true.

Finally, WhiteHat Security didn’t participate in the test. Jeremiah explained why, and it was probably the right answer. He’s got something to tell his sales guys, and he doesn’t have results that he may have to spin. If we look at other tests, when was someone like Cisco last involved in a product review? Right, it’s been a long time, because they don’t have to be. They


Network Security Fundamentals: Correlation

In the last Network Security Fundamentals post, we talked about monitoring (almost) everything, and how that drives a data/log aggregation and collection strategy. It’s great to have all that cool data, but now what? That brings up the ‘C word’ of security: correlation. Most security professionals have tried and failed to get sufficient value from correlation relative to the cost, complexity, and effort involved in deploying the technology. Understandably, trepidation and skepticism surface any time you bring up the idea of real-time analysis of security data. As usual, it comes back to a problem with management of expectations.

First we need to define correlation: basically, using more than one data source to identify patterns, because the information contained in a single data source is not enough to understand what is happening, or not enough to make a decision on policy enforcement. In a security context, that means using log records (or other types of data) from more than one device to figure out whether you are under attack, what that attack means, and the severity of the attack. The value of correlation is obvious. Unfortunately, networks typically generate tens of thousands of data records an hour or more, which cannot be analyzed manually. So sifting through potentially millions of records and finding the 25 you have to worry about represents tremendous time savings. It also provides significant efficiencies when you understand threats in advance, since different decisions require different information sources. The technology category for such correlation is known as SIEM: Security Information and Event Management.

Of course, vendors had to come along and screw everything up by positioning correlation as the answer to every problem in security-land. Probably the cure for cancer too, but that’s beside the point. In fairness, end users enabled this behavior by hearing what they wanted. A vendor said (and still says, by the way) they could set alerts which would tell the user when they were under attack, and we believed. Shame on us. Ten years later, correlation is achievable. But it’s not cheap, or easy, or comprehensive. But if you implement correlation with awareness and realistic expectations, you can achieve real value.

Making Correlation Work 4 U

I liken correlation to how an IPS can and should be used. You have thousands of attack signatures available to your IPS. That doesn’t mean you should use all of them, or block traffic based on thousands of alerts firing. Once again, Pareto is your friend. Maybe 20% of your signatures should be implemented, focusing on the most important and common use cases that concern you and are unlikely to trigger many false positives. The same goes for correlation. Focus on the use cases and attack scenarios most likely to occur, and build the rules to detect those attacks. For the stuff you can’t anticipate, you’ve got the ability to do forensic analysis – after you’ve been pwned, of course.

There is another, more practical reason for being careful with the rules. Multi-factor correlation on a large dataset is compute intensive. Let’s just say a bunch of big iron was sold to drive correlation in the early days. And when you are racing the clock, performance is everything. If your correlation runs a couple days behind reality, or if it takes a week to do a forensic search, it’s interesting but not so useful. So streamlining your rule base is critical to making correlation work for you.
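To show what a streamlined rule looks like in practice, here is a minimal sketch of a single correlation rule in Python. This is not how any particular SIEM expresses its rules – the event tuples, thresholds, and one-hour window are all hypothetical placeholders – but it captures the core idea: neither data source alone warrants an alert, while the combination does.

```python
from datetime import datetime, timedelta

# Hypothetical pre-parsed events from two different data sources.
failed_logins = [  # (timestamp, host) from authentication logs
    (datetime(2010, 2, 15, 2, 10), "10.1.1.20"),
    (datetime(2010, 2, 15, 2, 11), "10.1.1.20"),
    (datetime(2010, 2, 15, 2, 12), "10.1.1.20"),
]
outbound_flows = [  # (timestamp, host, bytes_out) from network flow data
    (datetime(2010, 2, 15, 2, 40), "10.1.1.20", 250_000_000),
]

WINDOW = timedelta(hours=1)      # how far back to correlate
LOGIN_THRESHOLD = 3              # failed logins that make a host suspect
BYTES_THRESHOLD = 100_000_000    # outbound volume worth worrying about

# The rule: a burst of failed logins followed by a large outbound
# transfer from the same host inside the window is exactly the kind
# of pattern neither source would reveal on its own.
for flow_ts, host, bytes_out in outbound_flows:
    if bytes_out < BYTES_THRESHOLD:
        continue
    recent = [ts for ts, h in failed_logins
              if h == host and flow_ts - WINDOW <= ts <= flow_ts]
    if len(recent) >= LOGIN_THRESHOLD:
        print(f"ALERT: {host} had {len(recent)} failed logins, then "
              f"pushed {bytes_out:,} bytes out at {flow_ts}")
```

Note that the rule only inspects hosts that already tripped the flow threshold – exactly the kind of streamlining that keeps correlation from melting your hardware.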
Defining Use Cases

Every SIEM/correlation platform comes with a bunch of out-of-the-box rules. But before you ever start fiddling with a SIEM console, you need to sit down in front of a whiteboard and map out the attack vectors you need to watch. Go back through your last 4-5 incidents and lay those out. How did the attack start? How did it spread? What data sources would have detected the attack? What kinds of thresholds need to be set to give you time to address the issue? If you don’t have this kind of data for your incidents, then you aren’t doing proper post-mortems, but that’s another story. Suffice it to say 90% of the configuration work on your correlation rules should be done before you ever touch the keyboard. If you haven’t had any incidents, go buy a lottery ticket – maybe you’ll hit it big before your number comes up at work and you are compromised.

A danger of not properly defining use cases is the inability to quantify the value of the product once implemented. Given the amount of resources required to get a correlation initiative up and running, you need all the justification you can get. The use cases strictly define what problem you are trying to solve, establish success criteria (in finding that type of attack), and provide the mechanism to document the attack once detected. Then your CFO will pipe down when he/she wants to know what you did with all that money. Also be wary of vendor ‘consultants’ hawking lots of professional service hours to implement your SIEM. As part of the pre-sales proof of concept process, you should set up a bunch of these rules. And to be clear, until you have a decent dataset and can do some mining using your own traffic, paying someone $3,000 per day to set up rules isn’t the best use of their time or your money.

Gauging Effectiveness

Once you have an initial rule set, you need to start analyzing the data. Regardless of the tool, there will be tuning required, and that tuning takes time and effort. When the vendor says their tool doesn’t need tuning, or can be fully operational in a day or week, don’t believe them. First you need to establish your baselines. You’ll see patterns in the logs coming from your security devices, and this will allow you to tighten the thresholds in your rules to fire alerts only when needed. A few SIEM products analyze network flow traffic and vulnerability data as well, allowing you to use that data to make your rules smarter


Incite 2/10/2010: Comfortably Numb

You may not know it, but lots of folks you know are zombies. It seems that life has beaten them down, and miraculously, two weeks later they don’t say ‘hi’ – they just give you a blank stare and grin as the spittle drips out of the corners of their mouths. Yup, a sure sign they’ve been to see Dr. Feelgood, who heard for an hour how hard their lives are, and instead of helping them deal with the pain, brought in his friends Prozac, Lexapro, and Zoloft to numb it. These billion-dollar drugs build on the premise that life is hard, so it’s a good idea to take away the ability to feel, because it hurts too much. Basically we, as a society, are increasingly becoming comfortably numb.

I’m not one to be (too) judgmental about the personal decisions some folks make, but this one gets in my craw. My brother once said to me “Life is Pain,” and there is some truth to that statement. Clearly life is hard right now for lots of folks, and I feel for them. But our society values quick fixes over addressing the fundamental causes of issues. Just look at your job. If someone came forward with a widget that would get you compliant, you’d buy it. Maybe you already have. And then you realize: there are no shortcuts. You’ve got to do the work. Seems to me we don’t do the work anymore.

Now, to be clear, some folks are ill and need anti-depressants. I’ve got no issue with that – in fact I’m thankful these folks have options to lead normal lives and not hurt themselves and/or others. It’s the soccer mom (or dad) who is overwhelmed with having to get the kids’ homework done and get them to baseball practice. That doesn’t make sense to me. I know it’s easier to take a pill than to deal with the problem, but that doesn’t make the problem go away.

I guess that’s easy for me to say, because thankfully I don’t suffer from depression. Yet, to come clean, I spent most of my 20s medicating in my own way. I got hammered every weekend, and sometimes during the week. If I had invested in the market half of what I spent on booze, I wouldn’t be worrying about the mortgage. But I guess the fact that I worry about anything at all is a good sign. Looking back, I was trying to be someone different – the “party guy,” who can drink beer funnels until he pukes and then drink some more. I was good at that. Then I realized how unfulfilling that lifestyle was for me, especially when the doctor informed me I had the liver of a 50-year-old. Which is not real good when you are 30.

Ten years later, I actually enjoy the ups and downs. OK, I like the ups more than the downs, but I understand that without feeling bad, I can’t appreciate when things are good. I’m getting to the point where I’m choosing what to get pissed off about. And I do still get pissed. But it’s not about everything, and I get past my anger a lot faster. Basically, I’m learning how to let it go. If I can’t control it and I didn’t screw it up, there isn’t much I can do – so being pissed off about it isn’t helping anyone.

By the way, that doesn’t mean I’m a puritan. I still tip back a few per week and kick out the jams a few times a year. The funnel is still my friend. The difference is I’m not running away from anything. I’m not trying to be someone else. I’m getting into the moment and having fun. There is a big difference. – Mike

Photo credit: “Comfortably Numb” originally uploaded by Olivander

Incite 4 U

One of the advantages of working on a team is that we cover for each other, and we are building a strong bench.
This week contributor David Mortman put together a couple of pieces. Mort went and got a day job, so he’s been less visible on Securosis, but given his in-depth knowledge of all things (including bread making), we’ll take what we can get. I also want to highlight a post by our “intern” Dave Meier on Misconceptions of a DMZ, in which he dismantles a thought balloon put out there regarding virtualized web browser farms. Meier lives in the trenches every day, so he brings a real practitioner’s perspective to his work for Securosis.

It’s About the Boss, Not the Org Chart – My buddy Shack goes on a little rampage here, listing the reasons why security shouldn’t report to IT. I’m the first to think in terms of absolutes (the only gray in my life is my hair), but Dave is wrong here. I’m not willing to make a blanket statement about where security should report, because it’s more about being empowered than about the org chart. If the CIO gets it and can persuade the right folks to do the right thing and support the mission, then it’s all good. If that can happen in the office of the CFO or even the CEO, that’s fine too. Dave brings up some interesting points, but unless you have support from the boss, none of it means a damn thing. – MR

Rock Stars Are a Liability – It looks like Forrester Research now requires all analysts to shut down their personal blogs, and only blog on the Forrester platform. I started Securosis (the blog) back when I was still working at Gartner, and took advantage of the grey area until they adopted an official policy banning any coverage of IT in personal blogs. That wasn’t why I left the company, but I fully admit that the reception I received while blogging gave me the confidence to jump out there on my own. In a big analyst firm the corporate brand is more


FireStarter: Admin access, buh bye

It seems I’ve been preoccupied lately with telling all of you about the things you shouldn’t do anymore. Between blowing away firewall rules and killing security technologies, I guess I’ve become that guy. Now get off my lawn! But why stop now – I’m on a roll. This week, let’s take on another common practice that ends up being an extraordinarily bad idea: running user devices with administrator access. Let’s slay that sacred cow.

Once again, most of you security folks with any kind of kung fu are already here. You’d certainly not let an unsophisticated user run with admin rights, right? They might do something like install software, or even click on a bad link and get pwned. Yeah, it’s been known to happen. Thus the time has come for the rest of the great unwashed to get with the program. Your standard build (you have a standard build, right? If not, we’ll talk about that next week in the Endpoint Fundamentals Series) should dictate that user devices run as standard users. Take that and put it in your peace pipe.

Impact

Who cares? Why are we worried about whether a user runs as a standard or admin user? To state the obvious: admin rights on endpoint devices represent the keys to the kingdom. These rights allow users to install software, mess with the registry under Windows, change all sorts of config files, and generally do whatever they want on the device. Which wouldn’t be that bad if we only had to worry about legitimate users. Today’s malware doesn’t require user interaction to do its damage, especially on devices running with local admin access. All the user needs to do is click the wrong link and it’s game over – malware installed and defenses rendered useless, all without the user’s knowledge. Except when they are offered a great deal on “security software,” get a zillion pop-ups, or notice their machine grinding to a halt. To be clear, users can still get infected when running in standard mode, but the attack typically ends at logout. Without access to the registry (on Windows) and other key configuration files, malware generally expires when the user’s session ends.

Downside

As with most of the tips I’ve provided over the past few weeks, user grumpiness may result once they start running in standard mode. They won’t be able to install new software, and some of their existing applications may break. So use this technique with care. That means you should actually test whether every application runs in standard user mode before you pull the trigger. Business-critical applications probably need to be left alone, as offensive as that is. Most applications should run fine, making this decision a non-issue, but use your judgment on allowing certain users (especially folks who can vote you off the island) to keep admin rights to run a specific application. Yet I recommend you stamp your feet hard and fast if an application doesn’t play nicely in standard mode. Yell at your vendors or internal developers to jump back into the time machine and catch up with the present day. It’s ridiculous that in 2010 an end-user facing application would require admin rights. You also have the “no soup for you” card, which is basically the tough crap response when someone gripes about needing admin rights. Yes, you need a lot of internal mojo to pull that off, but that’s what we are all trying to build, right? Being able to do the right security thing and make it stick is the hallmark of an effective CISO.

Discuss

So there you have it. This is a FireStarter, so fire away, sports fans.
Why won’t this work in your environment? What creative, I mean ‘legitimate’, excuses are you going to come up with now to avoid doing proper security hygiene on the endpoints? This isn’t that hard, and it’s time. So get it done, or tell me why you can’t…
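If you want to know how far you have to go, start by measuring the problem. Here is a minimal sketch (Python on Windows, assuming the standard output format of the built-in net command; the allowlist of sanctioned accounts is hypothetical) that audits who actually holds local admin rights on a box:

```python
import subprocess

# Ask Windows' built-in 'net' command who is in the local
# Administrators group. Run it across the fleet (login script,
# SCCM, psexec) to find the machines that still need attention.
output = subprocess.run(
    ["net", "localgroup", "Administrators"],
    capture_output=True, text=True, check=True,
).stdout

# The accounts you consider sanctioned; this allowlist is hypothetical,
# so substitute your own admin accounts and groups.
EXPECTED = {"Administrator", "DOMAIN\\Domain Admins"}

# Member names appear between the dashed separator line and the
# trailing status line, so slice the output accordingly.
lines = output.splitlines()
start = next(i for i, l in enumerate(lines) if l.startswith("---")) + 1
members = [l.strip() for l in lines[start:]
           if l.strip() and not l.startswith("The command")]

unexpected = sorted(set(members) - EXPECTED)
if unexpected:
    print("Unexpected local admins:", ", ".join(unexpected))
```

Run something like this across the fleet and you have the hit list for your standard-user migration, plus ammunition for the inevitable arguments.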


Kill. IE6. Now.

I tend to be master of the obvious. Part of that is overcoming my own lack of cranial horsepower (especially when I hang out with serious security rock stars), but another part is the reality that we need someone to remind us of the things we should be doing. Work gets busy, shiny objects beckon, and the simple blocking and tackling falls by the wayside. And it’s the simple stuff that kills us, as evidenced once again by the latest data breach study from TrustWave. Over the past couple months, we’ve written a bunch of times about the need to move to the latest versions of the key software we run, like browsers and Adobe stuff. Just to refresh your memory:

  • Microsoft IE Issues Reported: Adrian covered the Heise 0-day attack on IE6 and IE7 back in November.
  • Low Hanging Fruit: Endpoint Security: I wrote about the need to keep software updated and patched.
  • Security Strategies for Long Term, Targeted Threats: Rich responded to the APT hysteria with some actionable advice about dealing with the threats, which included moving off IE6.

Of course, the news item that got me thinking about reiterating such an obvious concept was that Google is pulling support for IE6 for all their products this year. Google Docs and Google Sites lose IE6 support on March 1. Not that you should be making decisions based on what Google is or is not supporting, but if ever there was a good backwards-looking indicator, it’s this. So now is the time. To be clear, this isn’t news. The echo chamber will beat me like a dog, because who the hell is still using IE6? Especially after our Aurora Borealis escapade. Right – plenty of folks, which brings me back to being master of the obvious. We all know we need to kill IE6, but it’s probably still lurking somewhere in the dark bowels of your organization. You must find it, and you must kill it.

Not in the Excuses Business

Given the consistent feedback every time we say to do anything – that it’s too hard, or someone will be upset, or an important application will break – it’s time to man (or woman) up. No more excuses on this one. Running IE6 is just unacceptable at this point. If your application requires IE6, then fix it. If your machines are too frackin’ old to run IE7 or IE8 with decent performance, then get new machines. If you only get to see those remote users once a year to update their machines, bring them in for a refresh now. Seriously, it’s clear that IE6 is a lost cause. Google has given up on it, and they won’t be the last. You can be proactive in how you manage your environment, or you can wait until you get pwned. But if you need the catalyst of a breach to get something this simple done in your environment, then there’s something fundamentally wrong with your security program. But that’s another story for another day.

Comment from Rich: As someone who still develops web sites from time to time, it kills me every time I need to add IE6 compatibility code. It’s actually pretty hard to make anything modern work with IE6, and it significantly adds to development costs on more-complex sites. So my vote is also to kill it!
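As for actually finding IE6 in the dark bowels of the organization, your web and proxy logs will rat it out. Here is a minimal sketch that assumes an Apache-style combined log format (client IP in the first field) and a hypothetical log path – adjust both for your environment:

```python
import re
from collections import Counter

# Scan a web or proxy access log for IE6 user agents and report
# the client IPs still running it. The path below is hypothetical;
# point it at your own logs.
LOGFILE = "/var/log/apache2/access.log"

ie6_pattern = re.compile(r"MSIE 6\.0")
ie6_clients = Counter()

with open(LOGFILE) as log:
    for line in log:
        if ie6_pattern.search(line):
            ie6_clients[line.split()[0]] += 1  # first field: client IP

for ip, hits in ie6_clients.most_common():
    print(f"{ip}: {hits} IE6 requests - find this machine and upgrade it")
```

Anything that shows up on this list gets a visit, a browser upgrade, or both.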


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments – just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.