Securosis Research

React Faster and Better: Alerts & Triggers

In our last post, New Data for New Attacks, we delved into the types of data we want to systematically collect, through both log record aggregation and full packet capture. As we’ve said many times, data isn’t the issue – it’s the lack of actionable information for prioritizing our efforts. That means we must more effectively automate analysis of this data and draw the proper conclusions about what is at risk and what isn’t.

Automate = Tools

As much as we always like to start with process (since that’s where most security professionals fail), automation is really about tools. And there are plenty of tools to bring to bear on setting alerts to let you know when something is funky. You have firewalls, IDS/IPS devices, network monitors, server monitors, performance monitors, DLP, email and web filtering gateways… and that’s just the beginning. In fact there is a way to monitor everything in your environment. Twice. And many organizations pump all this data into some kind of SIEM to analyze it, but this continues to underscore that we have too much of the wrong kind of data, at least for incident response. So let’s table the tools discussion for a few minutes and figure out what we are really looking for…

Threat Modeling

Regardless of the tool being used to fire alerts, you need to 1) know what you are trying to protect; 2) know what an attack on it looks like; and 3) understand the relative priorities of those attacks. Alerts are easy. Relevant alerts are hard. That’s why we need to focus considerable effort early in the process on figuring out what is at risk and how it can be attacked. So we will take a page from Security 101 and spend some time building threat models. We’ve delved into this process in gory detail in our Network Security Operations Quant research, so we won’t repeat it all here, but these are the key steps:

• Define what’s important: First you need to figure out which critical information/applications will create the biggest issues if compromised.
• Model how it can be attacked: It’s always fun to think like a hacker, so put on your proverbial black hat and think about ways to exploit and compromise the first of the most important assets you just identified.
• Determine the data those attacks would generate: Those attacks will result in specific data patterns that you can look for using your analysis tools. This isn’t always an attack signature – it may be the effect of the attack, such as excessive data egress or bandwidth usage.
• Set alert thresholds: Once you establish the patterns, figure out when to actually trigger an alert. This is an art, and most organizations start with fairly broad thresholds, knowing they will result in more alerts initially.
• Optimize thresholds: Once your systems start hammering you with alerts, you’ll be able to tune the system by tightening the thresholds to focus on real alerts and increase the signal-to-noise ratio. (A rough sketch of this idea follows below.)
• Repeat for next critical system/data: Each critical information source/application will have its own set of attacks to deal with. Once you’ve modeled one, go back and repeat the process.

You can’t do everything at once, so don’t even try. Start with the most critical stuff, get a quick win, and then expand use of the system. Keep in mind that the larger your environment, the more intractable modeling everything becomes. You will never know where all the sensitive stuff is. Nor can you build a threat model for every known attack.
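To make the threshold idea a bit more concrete, here is a minimal sketch in Python. The event fields, zone names, and the 10 GB starting threshold are assumptions invented for illustration – this is not any SIEM’s actual rule language.

```python
# Minimal sketch of a tunable egress-volume alert. Field names, the 10 GB
# starting threshold, and the event format are illustrative assumptions.
from collections import defaultdict

EGRESS_THRESHOLD_BYTES = 10 * 1024**3  # start broad; tighten as you tune

def egress_alerts(flow_events, threshold=EGRESS_THRESHOLD_BYTES):
    """flow_events: iterable of dicts like
    {"src_zone": "cardholder_data", "dst_zone": "internet", "bytes": 123456}
    Returns (zone, total_bytes) pairs that exceeded the threshold."""
    totals = defaultdict(int)
    for event in flow_events:
        if event["dst_zone"] == "internet":          # only care about egress
            totals[event["src_zone"]] += event["bytes"]
    return [(zone, total) for zone, total in totals.items() if total > threshold]

# Example: tuning is simply adjusting the threshold once you see how noisy it is.
sample = [
    {"src_zone": "cardholder_data", "dst_zone": "internet", "bytes": 12 * 1024**3},
    {"src_zone": "corp_lan", "dst_zone": "internet", "bytes": 2 * 1024**3},
]
print(egress_alerts(sample))   # [('cardholder_data', 12884901888)]
```

The point isn’t the code – it’s that the threshold is an explicit, reviewable number you revisit as the alert volume tells you how well it matches reality.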
That’s why underlying all our research is the idea of determining what’s really important and working hard to protect those resources. Once we have threat models implemented in our monitoring tool(s) – which include element managers, analysis tools like SIEM, and even content monitoring tools like DLP – these products can (and should) be configured to alert based on the scenarios in the threat model.

More Distant Early Warning

We wish the threat models could be comprehensive, but inevitably you’ll miss something – accept this. And there are other places to glean useful intelligence, which can be factored into your analysis and potentially show attacks not factored into the threat models.

• Baselines: Depending on the depth of monitoring, you can and should be establishing baselines for your critical assets. That could mean network activity on protected segments (using NetFlow), or perhaps transaction types (SQL queries on a key database), but you need some way to define normal for your environment. Then you can start by alerting on activities you determine are not normal. (A rough sketch of this idea appears at the end of this excerpt.)
• Vendor feeds: These feeds come from your vendors – mostly IDS/IPS – because they have research teams tasked with staying on top of emerging attacks. Admittedly this is reactive, and needs to be built on known attacks, but the vendors spend significant resources making sure their tools remain current. Keep in mind you’ll want to tailor these signatures to your organization/industry – obviously you don’t need to look for SCADA attacks if you don’t have those control systems, but deciding what to include is a bit more involved.
• Intelligence sharing: Larger organizations see a wide variety of stuff, mostly because they are frequently targeted and have the staff to see attack patterns. Many of these folks do a little bit of co-opetition and participate in sharing groups (like FS-ISAC) to leverage each other’s experiences. This could be a formal deal or just informal conversations over beers every couple weeks. Either way, it’s good to know what other peer organizations are seeing.

The point is that there are many places to leverage data and generate alerts. No single information source can identify all emerging attacks. You’re best served by using many, then establishing a method to prioritize the alerts which warrant investigation.

Visualization

Just about every organization – particularly large enterprises – generates more alerts than it has the capability to investigate. If you don’t, there’s a good chance you aren’t alerting enough. So prioritization is a key
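As a rough illustration of the baseline idea above, here is one simple way to define “normal” and alert on deviations. The window size, byte counts, and three-sigma cutoff are assumptions for the sketch, not a recommendation; real NetFlow analysis is considerably messier.

```python
# Toy baseline: learn normal hourly egress volume for a segment, then flag
# hours that deviate by more than N standard deviations.
from statistics import mean, stdev

def build_baseline(hourly_bytes):
    """hourly_bytes: list of observed per-hour byte counts for a segment."""
    return mean(hourly_bytes), stdev(hourly_bytes)

def is_anomalous(observed_bytes, baseline, n_sigmas=3.0):
    mu, sigma = baseline
    return abs(observed_bytes - mu) > n_sigmas * sigma

# Example with made-up history: ~1 GB/hour normally, then a 9 GB spike.
history = [1.0e9, 1.1e9, 0.9e9, 1.2e9, 0.95e9, 1.05e9, 1.15e9]
baseline = build_baseline(history)
print(is_anomalous(9.0e9, baseline))  # True -> worth an alert and a look
```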


Web Application Firewalls Really Work

A couple months ago I decided to finally dig in and see whether WAFs (Web Application Firewalls) are really useful, or merely another crappy shiny object we spend a lot of money on to get the auditors off our backs. Sure, the WAF vendors keep telling me how well their products work and how many big clients they have, but that’s not the best way to figure out whether something really does the job. I also talk with a bunch of end users who provide darn good info, but even that isn’t always the best way to determine the security value of a tool. Not all users have good visibility and internal controls to measure the effectiveness of the tool, and many can’t deploy it in an optimal manner due to all sorts of political and technical issues.

In this case I started with users, then checked with a bunch of my penetration testing friends. While a pen tester doesn’t necessarily understand the overall value of a tool (since they don’t have to pay the same kind of attention to compliance/management issues), a good tester most definitely knows how much harder a security tool makes their life. The end result was that WAFs do have value when used properly, and may provide value beyond pure security, but aren’t a panacea. Since you could say that about the value of a gerbil for defending against APT too, here’s a little more detail…

• WAFs are best at protecting against known framework vulnerabilities (e.g., you run WordPress and haven’t patched), known automated (script kiddie) attacks, or when configured with (defensive) application-specific rules (whitelisting, although almost no one really deploys them this way; a rough sketch of the whitelist idea appears below).
• WAFs are moderately effective against general XSS/SQL injection. All the researchers said a WAF was a roadbump for custom attacks that added to the time it took them to generate a successful exploit… with varying effectiveness depending on many factors – particularly the target app behind the WAF. The better the configuration, based on deep application knowledge, the more difficult the attack. But they stated that the increased time to exploit raises the attacker’s costs, and thus might reduce the chances the attacker would devote time to the app and increase your probability of detecting them. Still, if someone really wants to get you and is knowledgeable, no WAF alone will stop them.
• The products often provide great analytics value because they are sometimes better than normal tracking/stats packages for understanding what’s going on with your site.
• They don’t do anything for logic flaws (unless you hand-code/configure them) or much beyond XSS/SQL injection.
• They aren’t as easy to use as is usually promised in the sales cycle. Gee, what a shock. Again, I could say this about gerbils.

In some ways, now that I’ve written this, I feel like I could have substituted “duh” for the entire post. Yet again we have a tool that promises a lot, is often misused, but (used properly) can provide a spectrum of value from “keeping the auditors off our backs” to “protects against some 1337 haxor in a leather bodysuit”. But don’t let anyone tell you they are a waste of money… just make sure you know what you’re getting and use it right.
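To make the whitelisting point concrete, here is a minimal sketch of a positive security model check, written in Python rather than any real WAF’s rule language. The endpoint, parameter names, and patterns are invented for illustration.

```python
# Toy positive-security-model (whitelist) check for one endpoint.
# Real WAF rules are richer (normalization, encodings, sessions, etc.);
# this only shows the core idea: define what IS allowed, reject the rest.
import re

ALLOWED_PARAMS = {
    # hypothetical /search endpoint: only these parameters, only these shapes
    "q":    re.compile(r"[\w\s\-]{1,64}"),
    "page": re.compile(r"\d{1,3}"),
}

def request_allowed(params: dict) -> bool:
    """params: parsed query-string parameters for the endpoint."""
    for name, value in params.items():
        pattern = ALLOWED_PARAMS.get(name)
        if pattern is None or not pattern.fullmatch(value):
            return False       # unknown parameter or unexpected shape -> block
    return True

print(request_allowed({"q": "blue widgets", "page": "2"}))   # True
print(request_allowed({"q": "' OR 1=1 --", "page": "2"}))    # False
```

The reason almost nobody deploys WAFs this way is visible in the sketch: every allowed parameter has to be enumerated and maintained as the application changes.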


Friday Summary: December 24, 2010

It’s the holiday season and I should be taking some time off and relaxing, watching some movies and seeing friends. Sounds good. If only I had that ‘relax’ gene sequence I would probably be off having a good time rather than worrying about security on Giftmas eve. But here I am, reading George Hulme’s Threatpost article, 2011: What’s Your IT Security Plan? I got to thinking about this. Should I wait to do security work for 2011? I mean, security at your employer is one thing – who cares about those systems when there is eggnog and pumpkin pie? I’m talkin’ about your stuff! One point I make in the talks I give on software security is: don’t prioritize security out in favor of features when building code. And in this case, if I put off security in favor of fun, security won’t get done in 2011. So I went through the process of evaluating home computer and network security over the last couple days. I did the following:

• Reassess Router Security: Logged into my router for the first time in like two years to verify security settings. Basically all of the security settings – most importantly encryption – were correct. I did find one small mistake: I forgot to require the management connection to be forced over HTTPS, but as I had not been logged in for years, I am pretty sure that was not a big deal. I did however confirm the firmware was written by Methuselah – and while he was a pretty solid coder, he hasn’t fixed any bugs in years. It was good to do a sanity check and take a fresh look.
• Migration to 1Password: I have no idea why I waited so long to do this. 1Password rocks! I now have every password I use secured in this and synchronized across all my computers and mobile devices. And the passwords are far better than even the longest passphrases I can remember. Love the new interface. Added bonus on the home machine: I can leave the UI open all the time, then autofill all web passwords to save time. If you have not migrated to this tool, do it.
• Deploy Network Monitoring: We see tons of stuff hit the company firewall. I used to think UTM and network monitoring was overkill. Not so much any more. Still in the evaluation and budgetary phase, but I think I know what I want and should deploy by year’s end. I want to see what hits, and what comes through. Yes, I am going to have to actually review the logs, but Rich wrote a nice desktop widget a couple years ago which I think I can repurpose to view log activity with my morning coffee. It will be just like working IT again!
• Clean Install: With the purchase of a new machine last week I did not use the Apple migration assistant. As awesome and convenient as that Mac feature is, I did a fresh install. Then I re-installed all my applications and merged the files I needed manually. Took me 8 hours. This was a just-in-case security measure, to ensure I don’t bring any hidden malware/trojans/loggers along for the ride. The added benefit was that all the software I do not have set to update itself got revved. And many applications were well past their prime.
• Rotate Critical Passwords: I don’t believe that key rotation for encryption makes you any safer if you do key management right, but passwords are a different story. There are a handful of passwords that I cannot afford to have cracked. It’s been a year, so I swapped them out.
• Mobile Public Internet: Mike mentioned this in one of his Friday Favorites, but this is one of the best posts I have seen all year for general utility: Shearing Firesheep with the Cloud. What does this mean?
Forget Firesheep for a minute. General man-in-the-middle attacks are still a huge problem when you leave the comfy confines of your home with that laptop. What this post describes is a simple way to protect yourself using public Internet connections. Use the cloud to construct an encrypted tunnel for you to use wherever you go. And it’s fast. So as long as you set it up and remember to use it, you can be pretty darn safe using public WiFi to get email and other services.

That’s six things I did over the course of the week. Of course you won’t read this anywhere else because it’s six things, and no other security information source will give you six things. Five, or seven, but never six. Some sort of mythical marketing feng-shui numbers that can’t be altered without making some deity angry. Or maybe it was that you get cramps? I forget. There is probably a Wiki page somewhere that describes why that happens.

This is the last Friday Summary of the year so I wanted to say, from Rich Mogull, Mike Rothman, Chris Pepper, David Mortman, Gunnar Peterson, Dave Lewis, Melissa (aka Geekgrrl), and myself: thanks for reading the blog! We enjoy the comments and the give-and-take as much as you do. It makes our job fun and, well, occasionally humiliating. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Rich was quoted so many times on the WikiLeaks DDoS that he DDoSed all media outlets with the sheer volume of his quotes. They had to shut him down. The rest of us were too far gone as slackerly curmudgeons (or was that curmudgeonly slackers?) to speak to anyone.

Favorite Securosis Posts

We all loved Dealtime 2010: Remembering the Departed as the best post of the week. Except for Mike, who was unhappy we would not let him graph the specific hype cycles.

Other Securosis Posts

• Incite 12/22/2010: Resolution.
• 2011 Research Agenda: Quantum Cloudiness, Supervillain Shields, and No-BS Risk.
• React Faster and Better: New Data for New Attacks, Part 1.
• NSA Assumes Security Is Compromised.
• 2011 Research Agenda: the


Dealtime 2010: Remembering the Departed

As we approach Christmas time, quite a few folks will have gold bullion under their trees, courtesy of the security industry M&A machines. Of course, the investment bankers and lawyers had a banner year, but let’s also hear it for some fortunate entrepreneurs, their VCs, and even some public company shareholders who were able to share in the wealth this year. You forget how long 12 months is, until you go back and start to revisit what happened in 2010. CRN helped me out a bit by doing one of their silly slideshows (page view hos) listing the Top 10 deals in security this year. Let’s take a quick run through each and think about the longer term impact (though we covered many of these during the year).

• Intel/McAfee: Obviously having the biggest pure-play security company taken out is a big deal. We did some analysis of the deal (and here), and our perspectives haven’t changed. Though with the EU scrutinizing the deal, there is still some risk of it not closing. If it does, we expect business as usual, though McAfee may be a bit more acquisitive (and spend bigger $$’s) leveraging Intel’s balance sheet.
• Symantec/PGP/GuardianEdge: Symantec had a huge hole relative to encryption, and they filled it. Twice. Why buy one, when you can buy two at twice the price? It’s the SYMC way! Though the initial integration ideas we’ve seen on the roadmap are promising, we are still talking about the Big Yellow here, so we remain cautious. Here is our deal analysis.
• Symantec/VeriSign: This high dollar deal was a surprise and clearly there is lots of risk. It does make sense and provide some leverage, especially relative to the enterprise authentication business. And this one requires less integration than most of SYMC’s deals. So this could end up being a net positive if the SYMC field teams can figure out how to sell it.
• SonicWall goes private: Thoma Bravo acquired SonicWall (our analysis here) and saved them from the quarterly scrutiny of being a public company. Big whoop. The real question is what are they going to fold into the operation (and no, Entrust is not a clean fit), because the company will need some additional heft and excitement to warrant another public offering or a higher value deal to a strategic acquirer.
• Sophos goes private equity: Despite how ineffective traditional AV is at pretty much everything (except maybe passing PCI), it’s still a multi-billion-dollar market. We were reminded of that when Apax Partners acquired Sophos for $830 million. Basically a 2nd tier player in AV is bigger than the entire DLP market, though probably not for long (WIKILEAKS WIKILEAKS WIKILEAKS). Like SonicWall, Sophos will need to keep buying stuff to be able to generate excitement for an IPO.
• HP/Fortify: HP got the application security bug and added Fortify to SPI Dynamics and folded it all into its application tools business. Which is exactly where it belongs, because without tight linkages to IDEs and dev tools, developers won’t do much. Not that they will even with tight integration, but at least there is a chance. This also showed HP’s need to buy the biggest dog in any space, because you cannot move a needle that weighs more than $120 billion, $10 million at a time.
• HP/ArcSight: HP also swallowed up the big dog of the SIEM space in 2010. We’ve been saying for a long time that SIEM and Log Management are going to be part of the big IT ops management stack, and this kind of move facilitates that. Of course integration won’t be easy, but in the meantime we’re pretty sure an army of EDS services folks will keep very busy making ArcSight work.
• McAfee/Trust Digital: McAfee did a few deals last year, and this one – acquiring Trust Digital to add some mobile security technology – may pay dividends when we see weaponized mobile attacks go mainstream. At some point it will happen, and folks will have to pay attention to what’s on those pesky smart phones and how to protect it all.
• IBM/BigFix: After screwing the pooch on the ISS deal, IBM went back to the well to acquire BigFix, which is as much a big IT ops play as a security play. It fits nicely with Tivoli and thus will be a lot cleaner to integrate and leverage than ISS. That doesn’t mean there won’t be a run for the exits by the BigFix brain trust, or that IBM won’t screw this one up too, but you can at least make a case that BigFix is a much better fit.
• Trend Micro/Mobile Armor: Oh, yeah, Trend had a big hole in mobile encryption as well. So they filled it, but only once. How silly. Though it’s not clear they could have filled it twice if they tried.

Bonus Round

The CRN folks left out a couple that bear mentioning.

• RSA/Archer: This deal was announced on Jan 5, so it hardly feels like a 2010 deal. Given EMC’s move to drive more of their own services and push to solidify CIO level relationships, buying Archer’s toolkit, I mean ‘platform’, makes a lot of sense. The question for next year is whether RSA will buy something to supplement enVision, which continues to fall behind technically in the SIEM/Log Management space.
• Juniper/Altor: This one is fresh in our minds because it went down recently, but buying Altor was probably as much about Juniper getting access to the VMsafe API as about buying a spot in a market that isn’t there yet. How else do you justify paying in the neighborhood of 30x bookings? You can check out our pithy Incite on the deal (it’s bullet #3).

I’m sure by this point you want to know what’s going to happen in 2011. So let’s bust out the Magic 8 Ball and figure it out: Will there be more deals in 2011 than 2010? My sources say no. Will there be a bunch of fire sales? Without a doubt.


Incite 12/22/2010: Resolution

Pretty much every year, I spend the winter holidays up north visiting the Boss’s family. I usually take that week and try to catch up on all the stuff I didn’t get done, working frantically on whatever will launch right when everyone returns from their December hangover. But as I have described here, I’m trying to evolve. I’m trying to take some time to smell the proverbial roses, and appreciate things a bit. I know, quite novel.

I have to say, this has been a great year on pretty much all fronts. There was a bit of uncertainty this time last year, as I had left my previous job and we were rushing headlong into announcing the new Securosis. There were a lot of moving pieces and it was pretty stressful, with legal documents relating to the new company floating around, web sites to update, and pipelines to build. A year later, I can say things are great. I’ve told them each already, but I have to thank Rich and Adrian for letting me join their band of merry men. Also a big thanks to our contributors (Mort, Gunnar, Dave, and Jamie) who keep us on our toes and teach me something every time we talk. I won’t forget our editor Chris either, who actually helps to make my ramblings somewhat Strunk & White ready. I also want to thank all of you, for reading my stuff and not throwing anything at me during speaking gigs. I do appreciate that.

Mentally, I’m in a good place. Sure I still have some demons, but who doesn’t? I keep looking to exorcise each one in its turn. Physically, I’m in pretty good shape. Probably the best shape I’ve been in since I graduated college. Yes, I had dark hair back then. The family is healthy and they seem to still like me. I have nothing to complain about on that front. Yes, I’m very lucky.

I’m also very excited for 2011. Rich alluded to our super sekret plans for world domination, and things are coming together on that front. No, it’s not fast enough, but when we get there it will be great. I’m looking forward to fleshing out my research agenda and continuing to work with our clients. Since this is the last Incite of 2010, I guess I’ll divulge my 2011 resolution: Don’t screw it up.

No, I’m not kidding. There will be ups and there will be downs. I expect that. But if I can look back 12 months from now and feel the way I do today, it will have been a fantastic year. I hope you have a safe and happy holiday season, and there will be plenty of Incite in 2011. Until then…

-Mike

Photo credits: “Resolution” originally uploaded by sneeu

Incite 4 U

Gawking at Security 101: Oh how the PR top spins. After spending last week washing egg off their faces due to the massive pwnage Gawker suffered, now they are out talking about all the cool stuff they’ll do to make sure it doesn’t happen again. Like requiring employees to log into Google Apps with SSL. And telling them not to discuss sensitive stuff in chat rooms. Yeah, that’s the answer. Just be thankful that sites like Gawker don’t collect much information. Though we should commend folks like LinkedIn and Yahoo, who used the list of suckers, I mean commenters, and reset their passwords automagically. I’ve had issues with LinkedIn’s security processes before, but in this case they were on the ball. – MR

Fear the PM: Do project managers need to “lighten up” and give away some control over development projects? Maybe. Are they being forced to provide transparency into their projects because SaaS management tools allow access to outsiders? Mike Vizard and LiquidPlanner CEO Charles Seybold seem to think so.
Personally I think it’s total BS. With Agile becoming a standard development methodology, the trend is exactly the opposite. Agile with Scrum, by design, shields development efforts from outside influencers, leaving product managers more in control of feature sets than ever before. They are the gatekeepers. And when you manage tasks by 3×5 card and prioritize with Post-It notes, you don’t exactly provide transparency. Collaboration and persuasion are interpersonal skills, not an app. I recommend that project managers leverage software for task tracking over and above task cards, but I don’t think some cloud-based nag-ware is going to subjugate a skilled PM. – AL

Not your daddy’s DDoS: I’ve spent a heck of a lot of time explaining denial of service attacks to the media over the past few weeks for some odd reason. While explaining straightforward flooding attacks is easy enough, I found it a bit tougher to talk about more complex DDoS. To be honest I don’t know why I tried, because for the general press it doesn’t really matter. But one area I never really covered too much is application level DDoS, where you dig in and attack resource-intensive tasks rather than the platform. Craig Labovitz of Arbor Networks does a great job of explaining it in this SearchSecurity article (near the bottom). Definitely worth a read. – RM

No slimming the AV pig: Ed over at Security Curve makes the point (again) that the issues around AV, especially performance, aren’t going to get better. Sure the vendors are working hard to streamline things, and for the most part they are making progress. Symantec went from a warthog to a guinea pig, but it’s still a pig. And they can’t change the math. No matter how much you put into the cloud, traditional AV engines cannot keep up. Reputation and threat intelligence help, but ultimately this model runs out of gas. Positivity, anyone? Yes, I’m looking for whitelisting to make slow and steady inroads in 2011. – MR

Live with it: This Incite isn’t a link, but a note on a call I had with a vendor recently (not a client) that highlighted two issues.


2011 Research Agenda: Quantum Cloudiness, Supervillain Shields, and No-BS Risk

In my last post I covered the more practical items on my research agenda for the coming year. Today I will focus more on pure research: these topics are a bit more out there and aren’t as focused on guiding immediate action. While this is a smaller percentage of where I spend my time, overall I think it’s more important in the big picture.

Quantum Datum

I try to keep 85-90% of my research focused on practical, real-world topics that help security pros in their day to day activities. But for the remaining 10-15%? That’s where I let my imagination run free. Quantum Datum is a series of concepts I’ve started talking about, around advanced information-centric security, drawing on metaphors from quantum mechanics to structure and frame the ideas. As someone pointed out, I’m using something ridiculously complex to describe something that’s also complex, but I think some interesting conclusions emerge from mashing these two theoretical frameworks together. Quantum Datum is focused on the next 7-10 years of information-centric security – much of which is influenced by cloud computing. For me this is an advanced research project, which spins off various real-world implications that land in my other research areas. I like having an out-there big picture to frame the rest of my work – it provides some interesting context and keeps me from falling so far into the weeds that all I’m doing is telling you things you already know.

Outcomes-Based Risk Management and Security

I’m sick and tired of theoretical risk frameworks that don’t correlate security outcomes with predictions or controls. I’m also tired of thinking we can input a bunch of numbers into a risk framework without having a broad set of statistics in order to actually evaluate the risks in our context. And if you want to make me puke, just show me a risk assessment that relies on unverified vendor FUD numbers from a marketing campaign. The idea behind outcomes-based risk management and security is that we, to the best of our ability, use data gathered from real-world incidents to feed our risk models and guide security control decisions. This is based on similar approaches in medicine which correlate patient outcomes to treatments – rather than changes in specific symptoms/signs. For example, the question wouldn’t be whether or not the patient has a heartbeat when the ambulance drops them off at the hospital, but whether or not they later leave the hospital breathing on their own. (With the right drugs you can give a rock a heartbeat… or Dick Cheney, as the record shows.)

For security, this means pushing the industry for more data sets like the Verizon and Trustwave investigation/breach reports, which don’t just count breaches, but identify why they happened. This needs to be supplemented by additional datasets whenever and wherever we can find and validate them. Clearly this is my toughest agenda item, because it relies so heavily on the work of others. Securosis isn’t an investigations firm, and lacks resources for the kinds of vigorous research needed to reach out to organizations and pull together the right numbers. But I still think there are a lot of opportunities to dig into these issues and work on building the needed models by mining public sources. And if we can figure out an economically viable model to engage in the primary research, so much the better. The one area where we are able to contribute is on the metrics model side, especially with Project Quant.
We’re looking to expand this in 2011 and continue to develop hard metrics models to help organizations improve operational performance and security.

Advanced Persistent Defense

Can you say “flame bait”? I probably won’t use the APD term, but I can’t wait to see the reactions when I toss it out there. There are plenty of people spouting off about APT, but I’m more interested in understanding how we can best manage the attackers working inside our networks. The concept behind advanced defense is that you can’t keep them out completely, but you have many tools to detect and contain the bad guys once they come in. Some of this ties to network architectures, monitoring, and incident response, while some looks at data security. Mike has monitoring well covered and we’re working on an incident response paper that fits this research theme. On top of that I’m looking at some newer technologies, such as File Activity Monitoring, that seem pretty darn interesting for helping to limit the depth of some of these breaches. No, you can’t entirely keep them out, but you can definitely reduce the damage.

I’m debating publishing an APT-specific paper. I’ve been doing a lot of research with people directly involved with APT response, but there is so much hype around the issue I’m worried that if I do write something it might spur the wrong kind of response. The idea would be to cut through all the hype and BS. I could really use some feedback on whether I should try this one. In terms of the defense concepts, there are specific areas I think we need to talk about, some of which tie into Mike and Adrian’s work:

• Network segregation and monitoring: When you’re playing defense only, you need a 100% success rate, but the bad guy only needs to be right once – and no one is ever 100% successful over the long term. But once the bad guy is in your environment, with the right architecture and monitoring you can turn the tables. Now he needs to be right all the time or you can detect his activities. I want to dig into these architectures to tighten the window between breach and detection.
• File Activity Monitoring: This is a pretty new technology that’s compelling to me from a data security standpoint. In non-financial attacks the goal is usually to grab large volumes of unstructured data. I think FAM tools can increase our chances of detecting this activity early. (A rough sketch of the idea follows this list.)
• Incident response: “React Faster
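To illustrate the File Activity Monitoring idea at toy scale, here is a sketch in Python. The event format, user names, and thresholds are assumptions invented for illustration, not how any FAM product actually works.

```python
# Toy FAM-style detector: flag users who access far more distinct files in a
# time window than they historically do. Event format and thresholds are invented.
from collections import defaultdict

def unusual_file_access(events, history_avg, multiplier=10, floor=500):
    """events: iterable of {"user": str, "file": str} access records for one window.
    history_avg: dict of user -> average distinct files touched per window."""
    touched = defaultdict(set)
    for e in events:
        touched[e["user"]].add(e["file"])
    flagged = []
    for user, files in touched.items():
        normal = history_avg.get(user, 50)            # assume 50 if unknown
        if len(files) > max(floor, multiplier * normal):
            flagged.append((user, len(files)))
    return flagged

# Example: 'svc_backup' normally reads ~40 files per hour, suddenly reads 5,000.
events = [{"user": "svc_backup", "file": f"/share/docs/{i}.docx"} for i in range(5000)]
print(unusual_file_access(events, {"svc_backup": 40}))   # [('svc_backup', 5000)]
```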


2011 Research Agenda: the Practical Bits

I always find it a bit of a challenge to fully plan out my research agenda for the coming year. Partly it’s due to being easily distracted, and partly my recognition that there are a lot of moving cogs I know will draw me in different directions over the coming year. This is best illustrated by the detritus of some blog series that never quite made it over the finish line. But you can’t research without a plan, and the following themes encompass the areas I’m focusing on now and plan to continue through the year. I know I won’t be able to cover everything in the depth I’d like, so I could use feedback on what you folks find interesting. This list is as much about the areas I find compelling from a pure research standpoint as what I might write about. This post is about the more pragmatic focus areas, and the next post will delve into more forward-looking research.

Information-Centric (Data) Security for the Cloud

I’m spending a lot more time on cloud computing security than I ever imagined. I’ve always been focused on information-centric (data) security, and the combination of cloud computing adoption, APT-style threats, the consumerization of IT, and compliance are finally driving real interest in and adoption of data security. Data security consistently rates as a top concern – security or otherwise – when adopting cloud computing. This is driven in large part by the natural fear of giving up physical control of information assets, even if the data ends up being more secure than it was internally. As you’ll see at the end of this post, I plan on splitting my coverage into two pieces: what you can do today, and what to watch for the future. For this agenda item I’ll focus on practical architectures and techniques for securing data in various cloud models using existing tools and technologies. I’m considering writing two papers in the first half of the year, and it looks like I will be co-branding them with the Cloud Security Alliance:

• Assessing Data Risk for the Cloud: A cloud and data specific risk management framework and worksheet.
• Data Security for Cloud Computing: A dive into specific architectures and technologies.

I will also continue my work with the CSA, and am thinking about writing something up on cloud computing security for SMBs, because we see pretty high adoption there.

Pragmatic Data Security

I’ve been writing about data security, and specifically pragmatic data security, since I started Securosis. This year I plan to compile everything I’ve learned into a paper and framework, plus issue a bunch of additional research delving into the nuts and bolts of what you need to do. For example, it’s time to finally write up my DLP implementation and management recommendations, to go with Understanding and Selecting. The odds are high I will write up File Activity Monitoring because I believe it’s at an early stage and could bring some impressive benefits – especially for larger organizations. (FAM is coming out both stand-alone and with DLP.) It’s also time to cover Enterprise DRM, although I may handle that more through articles (I have one coming up with Information Security Magazine) and posts. I also plan to run year two of the Data Security Survey so we can start comparing year-over-year results. Finally, I’d like to complete a couple more Quick Wins papers, again sticking with the simple and practical side of what you can do with all the shiny toys that never quite work out like you hoped.
Small Is Sexy

Despite all the time we spend talking about enterprise security needs, the reality is that the vast majority of people responsible for implementing infosec in the world work in small and mid-sized organizations. Odds are it’s a part-time responsibility – or at most 1 to 2 people who spend a ton of time dealing with compliance. More often than not this is what I see even in organizations of 4,000-5,000 employees. A security person (who may not even be a full-time security professional) operating in these environments needs far different information than large enterprise folks. As an analyst it’s very difficult to provide definitive answers in written form to the big company folks when I know I can never account for their operational complexities in a generic, mass-market report. Aside from the Super Sekret Squirrel project for S


NSA Assumes Security Is Compromised

I saw an interesting news item: the NSA has changed their mindset and approach to data security. Their new(?) posture is that Security Has Always Been Compromised. Debora Plunkett of the NSA’s “Information Assurance Directorate” stated:

There’s no such thing as ‘secure’ any more. The most sophisticated adversaries are going to go unnoticed on our networks. We have to build our systems on the assumption that adversaries will get in. We have to, again, assume that all the components of our system are not safe, and make sure we’re adjusting accordingly.

I started thinking about how I would handle this problem and it became mind-boggling. I assume compartmentalization and recovery are the strategy, but the details are of course the issue. Just the thought of going through the planning and reorganization of a data processing facility the size of what the NSA (must) have in place sent chills down my spine. What a horrifically involved process that must be! Just the network and security technology deployment would be huge; the disaster recovery planning and compartmentalization – especially what to do in the face of incomplete forensic evidence – would be even more complex.

How would you handle it? Better forensics? How would you scope the damage? How do you handle source code control systems if they are compromised? Are you confident you could identify altered code? How much does network segmentation buy you if you are not sure of the extent of a breach? To my mind this is what Mike has been covering with his ‘Vaults’ concept of segmentation, part of the Incident Response Fundamentals. But the sheer scope and complexity casts those recommendations in a whole new light. I applaud the NSA for the effort: it’s the right approach. The implementation, given the scale and importance of the organization, must be downright scary.


React Faster and Better: New Data for New Attacks, Part 1

As we discussed in our last post on Critical Incident Response Gaps, we tend to gather too much of the wrong kinds of information, too early in the process. To clarify that a little bit, we are still fans of collecting as much data as you can, because once you miss the opportunity to collect something you’ll never get another chance. Our point is that there is a tendency to try to boil the ocean with analysis of all sorts of data. That causes failure, and has plagued technologies like SIEM, because customers try to do too much too soon. Remember, the objective from an operational standpoint is to react faster, which means discovering as quickly as possible that you have an issue, and then engaging your incident response process. But merely responding quickly isn’t useful if your response is inefficient or ineffective, which is why the next objective is to react better.

Collecting the Right Data at the Right Time

Balancing all the data collection sources available today is like walking a high wire, in a stiff breeze, after knocking a few back at the local bar. We definitely don’t lack for potential information sources, but many organizations find themselves either overloaded with data or missing key information when it’s time for investigation. The trick is to realize that you need three kinds of data:

• Data to support continuous monitoring and incident alerts/triggers. This is the stuff you look at on a daily basis to figure out when to trigger an incident.
• Data to support your initial response process. Once an incident triggers, these are the first data sources you consult to figure out what’s going on. This is a subset of all your data sources. Keep in mind that not all incidents will tie directly to one of these sources, so sometimes you’ll still need to dive into the ocean of lower-priority data.
• Data to support post-incident investigation and root cause analysis. This is a much larger volume of data, some of it archived, used for the full in-depth investigation.

One of the Network Security Fundamentals I wrote about early in the year was called Monitor Everything, because I fundamentally believe in data collection and driving issue identification from the data. Adrian pushed back pretty hard, pointing out that monitoring everything may not be practical, and focus should be on monitoring the right stuff. Yes, there is a point in the middle. How about collect (almost) everything and analyze the right stuff? That seems to make the most sense. Collection is fairly simple. You can generate a tremendous amount of data, but with the log management tools available today scale is generally not an issue. Analysis of that data, on the other hand, is still very problematic; when we mention too much of the wrong kinds of information, that’s what we are talking about. To address this issue, we advocate segmenting your network into vaults and analyzing traffic and events within the critical vaults at a deep level. So basically it’s about collecting all you can within the limits of reason and practicality, then analyzing the right information sources for early indications of problems, so you can then engage the incident response process (a rough sketch of this idea follows below). You start with a set of sources to support your continuous monitoring and analysis, followed by a set of prioritized data to support initial incident management, and close with a massive archive of different data sources, again based on priorities.
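Here is a minimal sketch of the “collect (almost) everything, analyze the right stuff” idea in Python. The zone names, event format, and the single analysis rule are assumptions made up for illustration, not a reference implementation.

```python
# Collect everything into cheap storage, but only run deep analysis rules on
# events from the zones (vaults) you have designated as critical.
CRITICAL_ZONES = {"cardholder_data", "domain_controllers"}   # illustrative

archive = []          # stands in for your log management / archival tier
alerts = []           # stands in for the alert queue your responders watch

def analysis_rules(event):
    """Placeholder for the deeper analysis reserved for critical zones."""
    if event.get("action") == "admin_login" and event.get("after_hours"):
        return f"After-hours admin login on {event['host']}"
    return None

def ingest(event):
    archive.append(event)                     # collect (almost) everything
    if event.get("zone") in CRITICAL_ZONES:   # analyze only the right stuff
        finding = analysis_rules(event)
        if finding:
            alerts.append(finding)

ingest({"zone": "corp_lan", "host": "wks-042", "action": "web_browse"})
ingest({"zone": "domain_controllers", "host": "dc-01",
        "action": "admin_login", "after_hours": True})
print(len(archive), alerts)   # 2 ['After-hours admin login on dc-01']
```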
Continuous Monitoring

We have done a lot of research into SIEM and Log Management, as well as advanced monitoring (Monitoring up the Stack). That’s the kind of information to use in your ongoing operational analysis. For those vaults (trust zones) you deem critical, you want to monitor and analyze:

• Perimeter networks and devices: Yes, the bad guys tend to be out there, so they need to cross the perimeter to get to the good stuff. So we want to look for issues on those devices.
• Identity: Who is as important as what, so analyze access to specific resources – especially within a privileged user context.
• Servers: We are big fans of anomaly detection and whitelisting on critical servers such as domain controllers and app servers, so you can be alerted to funky stuff happening at the server level – which usually indicates something that warrants investigation.
• Database: Likewise, correlating database anomalies against other types of traffic (such as reconnaissance and network exfiltration) can indicate a breach in progress. Better to know that early, before your credit card brand notifies you.
• File integrity: Most attacks involve some change to key system files, so by monitoring their integrity you can pinpoint when an attacker is trying to make changes. You can even block these attacks using technology like HIPS, but that’s another story for another day. (A rough sketch of the integrity-checking idea appears at the end of this excerpt.)
• Application: Finally, you should be able to profile normal transactions and user interactions for your key applications (those accessing protected data) and watch for non-standard activities. Again, they don’t always indicate a problem, but they do allow you to prioritize investigation.

We recommend focusing on your most important zones, but keep in mind that you need some baseline monitoring of everything. The two most common sources we see for baselines are network monitoring and endpoint & server logs (or whatever security tools you have on those systems).

Full Packet Capture Sandwich

One emerging advanced monitoring capability – the most interesting to us – is full packet capture. Rich wrote about this earlier this year. Basically these devices capture all the traffic on a given network segment. Why? In a nutshell, it’s the only way you can really piece together exactly what happened, because this way you have the actual traffic. In a forensic investigation this is absolutely crucial, and will provide detail you cannot get from log records. Going back to our Data Breach Triangle, you need some kind of exfiltration for a real breach. So we advocate heavy perimeter egress filtering and monitoring, to (hopefully) prevent valuable data from escaping
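To make the file integrity idea concrete, here is a minimal sketch using Python’s standard library. The monitored paths are placeholders, and real file integrity monitoring products add far more (kernel hooks, change attribution, tamper resistance); this only shows the hash-and-compare core.

```python
# Toy file integrity check: record SHA-256 hashes of key files, then compare
# on the next run and report anything that changed. Paths are illustrative.
import hashlib, json, os

WATCHED = ["/etc/passwd", "/etc/ssh/sshd_config"]   # placeholder file list
STATE_FILE = "fim_state.json"

def hash_file(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def check():
    old = {}
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            old = json.load(f)
    new = {p: hash_file(p) for p in WATCHED if os.path.exists(p)}
    changed = [p for p in new if p in old and old[p] != new[p]]
    with open(STATE_FILE, "w") as f:
        json.dump(new, f)
    return changed   # feed these into your alerting/triage process

if __name__ == "__main__":
    print("Changed since last run:", check())
```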


Quantum Unicorns

Apparently we are supposed to fear the supercomputer of the future. According to Computerworld, the clock is ticking on encryption. Yes, you guessed it, the mythical “quantum computer” technology is back in the news again, casting its shadow over encryption. It will make breaking encryption much, much easier. “There has been tremendous progress in quantum computer technology during the last few years,” says Michele Mosca, deputy director of the Institute for Quantum Computing at the University of Waterloo in Waterloo, Ontario, Canada. “It’s a game changer.”

And when they perfect it, the Playstation 37 will rock! Unfortunately it’s powered by leprechauns’ gold and unicorn scat, so research efforts have been slowed by scarcity of resources. Seriously, I have been hearing this argument since I got into security 15 years ago. Back then we were hearing about how 3-DES was doomed when quantum technology appeared. It was, but that has more to do with Moore’s Law and infant encryption technologies than anything else. I think everybody gets that if we have computers that are a million times faster than what we have today (and Flash will run reasonably fast), we’ll be able to break existing encryption technology. But how much of the data you encrypt today will still have value in 20 years? Or more likely 40 years? I am still willing to bet we’ll see 100” foldable carbon nanotube televisions, or pools of algae performing simple arithmetic, before practical quantum computers. And by that time, maybe all government laptops will have full disk encryption.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.