Securosis Research

Incite 10/27/2010: Traffic Ahead

I saw an old friend last week, and we were talking about the business of Securosis a bit. One of the questions he asked was whether it’s a lifestyle business. The answer is that of course it is. Rich, Adrian, and I have done lots of things over the years and we all have independently come to the conclusion that we don’t want to work for big machines any more. We all have different reasons for that, and I was reminded of one of mine on Monday. Traffic. The mere mention of the word makes me cringe. Not like the Low Spark of High Heeled Boys (YouTube) cringe, but the cringe of wasted time. I’ve been lucky in that even when I did have an ‘office’, my commute was normally less than 15 minutes. But for most of the past 10 years, I’ve worked from a home office, which really means from random coffee shops and lunch joints. But on Monday I had to take a morning flight, and I wanted to help out the Boss and get the kids ready for school. I figured it wouldn’t be a big deal to leave 30 minutes later to head down to Hartsfield (Atlanta’s airport). I was wrong. Instead of the 35 minutes it normally takes, I was in my car for almost 80. Yeah, almost an hour and a half. I couldn’t help but feel that was wasted time. Even more, I feel for the folks who do that every day. I mean there are people who drive 70 or 80 miles each way to their offices. Now I’m not trying to judge anyone here, because folks live where they do for lots of reasons. And they work where they work for lots of reasons. Some folks don’t feel they can change jobs or can’t find something that’ll work closer to home. But you have to wonder about the opportunity cost of all that commuting time. Not to mention the environmental impact. Now to be clear, I’m a novice commuter. I didn’t have any podcasts loaded up to listen to or audio books or phone calls to make first thing on Monday morning. Yeah, who the hell wants to hear from me first thing in the morning? So there are more productive ways to pass the time. 
But that’s not for me. I want my biggest decision in the morning to be which coffee shop to hit, and when to go so I have no exposure to traffic. And it works much better for me that way. – Mike

Photo credits: “Rush Hour” originally uploaded by MSVG

Incite 4 U

Hot wool for you… – The big news this week was the release of a new Firefox plug-in called Firesheep, which basically implements dead simple sidejacking over a wireless network for key social network sites like Facebook and Twitter. I saw sidejacking of a Gmail account by Rob Graham at BlackHat about 3 years ago, so this isn’t a new attack. But the ease of doing it is. Rich uses this as another reminder that Public WiFi is no good, and you can’t dispute that. Sure we could get all pissy that this guy released the tool, but that’s the wrong conclusion. I suggest you think of this as a great opportunity to teach users something. You can Firesheep their stuff in the office or in a conference room and use that to show how vulnerable their sites are. I suspect it will have the same educational effect as an internal phishing attack, meaning it’ll shock the hell out of the users and they may even remember it for more than an hour. This piece on GigaOm goes through some of the preventative measures, such as connecting via SSL when that is an option, and using a VPN to encrypt your traffic. Both are good ideas. – MR

Bass ackwards (more on Firesheep) – Joe Wilcox argues that the new Firesheep Firefox Plugin is akin to “Giving Guns to Kids”. He claims that, because it’s so easy for anyone to see the cleartext passwords and cookies that are being blasted around the planet at the speed of light, nearly anyone can compromise an account. I can’t quite comprehend what Mr. Wilcox is thinking by calling the plugin ‘abominable’, as it is simply shining a powerful spotlight on stupidity that has been going on for a long time.
Every semi-skilled criminal is doing this today – or more precisely has been doing this for almost a decade. Can the plugin turn kids into hackers? No, but it gives them a handy tool if they did not already have one. But it will help make a lot more people aware of the stupidity going on with web providers, and of logging in over untrusted wireless connections. Better to learn that lesson on Toys ‘R Us than Wells Fargo. – AL

Reconcile that, Gunnar – I’ll admit it: I’m a big fan of Gunnar and my man crush has grown since he’s joined our team as a Contributor. Watching the man at work is a learning experience for me, and that’s a good thing. But in his Reconcile This post he’s missing part of the story. He unloaded on security folks for solving yesterday’s problems by making firewalls the highest priority spend. If he’s talking about traditional port-based firewalls, then I’m with him. But I suspect a great many of those folks are looking at upgrading their perimeter defenses by adding application awareness to the firewall. We described this in depth in our Understanding and Selecting an Enterprise Firewall paper. These devices address social network apps (by enforcing policy on egress), as well as helping to enforce mobile policies (via a VPN connection to leverage the egress policies). I realize GP is talking about the need to focus on the root cause, which is application and higher-level security. But security folks don’t generally control those functions. They do control the network, which is why they usually look to solve whatever security problem they have with an inline device. When all you have is a hammer, everything looks like a nail. – MR
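The underlying weakness Firesheep exploits is simple: session cookies set without the Secure flag travel in cleartext over open WiFi, where anyone can grab and replay them. As a rough illustration (a sketch only – the header values below are made up, and this is not how the plugin itself works), here is one way to check which cookies a site sets without that flag:

```python
def insecure_cookies(set_cookie_headers):
    """Return names of cookies set without the Secure attribute.
    Such cookies are sent over plain HTTP as well as HTTPS, which is
    exactly what sidejacking tools capture on open wireless networks."""
    vulnerable = []
    for header in set_cookie_headers:
        attributes = [part.strip().lower() for part in header.split(";")]
        if "secure" not in attributes:
            vulnerable.append(header.split("=", 1)[0].strip())
    return vulnerable

# Hypothetical Set-Cookie headers from a login response:
headers = [
    "session_id=abc123; Path=/; HttpOnly",          # no Secure flag: sidejackable
    "auth_token=xyz789; Path=/; Secure; HttpOnly",  # HTTPS-only: protected
]
print(insecure_cookies(headers))  # ['session_id']
```

Forcing SSL for the entire session (not just the login page) and setting the Secure flag on session cookies closes the hole on the server side; a VPN protects you on the client side when the site won’t.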


Incident Response Fundamentals: Incident Command Principles

I know what you’re thinking to yourself right now: “They promised me a cool series of posts on the cutting edge of incident response, and now we’re talking management principles and boxes on an org chart? What a rip.” But believe it or not, the most important aspect of incident response is the right organization, followed by the right process. How do I know this? Because I’ve been through a ton of incident response training with local and federal agencies, and have directly responded to everything from single-rescuer ski accidents to Hurricane Katrina. (And a few IT things in the middle, but those don’t sound nearly as exciting.) While working as an emergency responder I fall under something known as the National Incident Management System, which uses a formalized process and structure called the Incident Command System (ICS). ICS consists of a standard management hierarchy and processes for managing temporary incidents of any size and nature. ICS was originally developed for managing large wildfires in the 1970s, and has since expanded into a national standard that’s also used (and adapted) by a variety of other countries and groups. While our React Faster and Better series won’t teach you all of ICS, everything we will talk about in terms of process and organization is adapted directly from it. There’s no reason to reinvent the wheel when you have something with over 30 years of battle-hardened testing available. Additionally, those of you in larger companies or verticals like healthcare or public utilities may be required to learn and use ICS in your own incidents.

Incident Command System Principles

ICS solves a lot of the problems we encounter in incidents. Its focus is on clear communications and accountability, with a structure that expands and contracts as needed, allowing disparate groups to combine even if they’ve never worked together before.
ICS includes 5 key concepts:

- Unity of command: Each person involved in an incident responds to only one supervisor.
- Common terminology: It’s hard to communicate when everyone uses their own lingo. Common terminology applies to both the organizational structure (with defined roles, like “Incident Commander”, that everyone understands) and use of plain English (or the language of your choice) in incident communications. You can still talk RPC flaws all you want, but when communicating with management and non-techies you’ll use phrases like “The server is down because we were hacked.”
- Management by objectives: Responders have specific objectives to achieve, in priority order, as defined in a response plan. No running around fighting fires without central coordination.
- Flexible and modular organization: Your org structure should expand and contract as needed based on the nature and size of the incident. The organizational structure can be as small as a single individual, and as large as the entire company.
- Span of control: No one should manage fewer than 3 or more than 7 other individuals, with 5 being the sweet spot. This one comes from many years of management science, which has repeatedly confirmed that attempting to directly manage more is ineffective, while managing fewer is an inefficient use of resources.

If you want to learn more about ICS you can run through the same self-training course used by incident responders at FEMA’s online training site. Start with ICS 100, which covers the basics. While the process we’ll outline in this series is based on ICS principles, it’s specific to information security incident response. We won’t be using terms like “branch” and “section” because they would distract from our focus, but you can clearly plug them in if you want to standardize on ICS. But if you need the Air Ops branch for a cyberattack, something is very, very wrong.
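The span-of-control principle also tells you how deep a response organization needs to be: with a span of 5, each additional management layer multiplies capacity by five. A quick sketch of that arithmetic (the function names are ours, not ICS terminology):

```python
import math

def layers_needed(responders, span=5):
    """How many supervisory layers are required so that no one
    manages more than `span` people (ICS recommends 3-7, ideally 5)."""
    layers = 0
    group = responders
    while group > 1:
        # Each pass groups people under supervisors at the next layer up.
        group = math.ceil(group / span)
        layers += 1
    return layers

def total_supervisors(responders, span=5):
    """Total supervisory positions in a tree with `span` reports each."""
    count = 0
    level = responders
    while level > 1:
        level = math.ceil(level / span)
        count += level
    return count

# 25 responders need 2 layers: 5 team leads plus 1 incident commander.
print(layers_needed(25), total_supervisors(25))  # 2 6
```

Note how quickly the structure scales: three layers already cover 125 people, which is why an ICS organization can expand from a single responder to an entire company without changing shape.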
In the next post we will focus on three of the key concepts related to organizational structure – unity of command, flexible and modular organization, and span of control – as we describe the key response roles and structure.


NSO Quant: The Report and Metrics Model

It has been a long slog, but the final report on the Network Security Operations (NSO) Quant research project has been published. We are also releasing the raw data we collected in the survey. The main report includes:

- Background material, assumptions, and research process overview
- Complete process framework for Monitoring (firewalls, IDS/IPS, & servers)
- Complete process framework for Managing (firewalls & IDS/IPS)
- Complete process framework for maintaining Device Health
- The detailed metrics that correlate with each process framework
- Identification of key metrics
- How to use the model

Additionally, you can download and play around with the spreadsheet version of the metrics model. In the spreadsheet, you can enter your specific roles and headcount costs, and estimate the time required for each task, to figure out your own costs. In terms of the survey, as of October 22, 2010 we had 80 responses. The demographics were pretty broad (from under 5 employees to over 400,000), but we believe the data validates some of the conclusions we reached through our primary research. Click here for the full, raw survey results. The file includes a summary report and the full raw survey data (anonymized where needed) in .xls format. With the exception of the raw survey results, we have linked to the landing pages for all the documents, because that’s where we will be putting updates and supplemental material (hopefully you aren’t annoyed by having to click an extra time to see the report). The material is being released under a Creative Commons license. Thanks again to SecureWorks for sponsoring this research.
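Under the hood the metrics model is straightforward arithmetic: hours spent on each task multiplied by the loaded hourly cost of the role performing it, summed across the process. A minimal sketch of that calculation (the tasks and rates below are invented placeholders, not numbers from the actual model):

```python
# Hypothetical task list: (task, hours per month, role performing it)
tasks = [
    ("Review firewall policy", 4.0, "analyst"),
    ("Process change request", 6.0, "admin"),
    ("Monitor IDS/IPS alerts", 40.0, "analyst"),
]

# Hypothetical fully loaded hourly rates per role
hourly_rate = {"analyst": 60.0, "admin": 45.0}

def monthly_cost(tasks, rates):
    """Estimate monthly operations cost: sum of hours x rate per task."""
    return sum(hours * rates[role] for _, hours, role in tasks)

print(monthly_cost(tasks, hourly_rate))  # 4*60 + 6*45 + 40*60 = 2910.0
```

Swap in your own tasks, headcounts, and rates and you have the same result the spreadsheet produces: a defensible cost figure for each process step.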


Friday Summary: October 22, 2010

Facebook is for old people. Facebook will ultimately make us more secure. I have learned these two important lessons over the last few weeks. Saying Facebook is for old people is not like saying it’s dead – far from it. But every time I talk computers with people 10-15 years older than me, all they do is talk about Facebook. They love it! They can’t believe they found high school acquaintances they have not seen for 30+ years. They love the convenience of keeping tabs on family and friends from their Facebook page. They are amazed to find relatives who have been out of touch for decades. It’s their favorite web site by far. And they are shocked that I don’t use it. Obviously I will want to once I understand it, so they all insist on telling me about all the great things I could do with Facebook and the wonderful things I am missing. They even give me that look, like I am a complete computer neophyte. One said “I thought you were into computers?” Any conversation about security and privacy went in one ear and out the other because, as I have been told, Facebook is awesome. As it always does, this thread eventually leads to the “My computer is really slow!” and “I think I have a virus, what should I do?” conversations. Back when I had the patience to help people out, a quick check of the machine would not uncover a virus. I never got past the dozen quasi-malicious browser plug-ins, PR-ware tracking scripts sucking up 40% of system resources, or nasty pieces of malware that refused to be uninstalled. Nowadays I tell them to stop visiting every risky site, stop installing all this “free” crap, and for effing sake, stop clicking on email links that supposedly come from your bank or Facebook friends! I think I got some of them to stop clicking email links from their banks. They are, after all, concerned about security. Facebook is a different story – they would rather throw the machine out than change their Facebook habits because, sheesh, why else use the computer?
I am starting to notice an increase in computer security awareness from the general public. Actually, the extent of their awareness is that a lot of them have been hacked. The local people I talk to on a regular basis tell me that they, and all their children, have had Facebook and Twitter accounts hacked. It slowed them down for a bit, but they were thankful to get their accounts back. And being newly interested in security, they changed their passwords to ‘12345’ to ensure they will be safe in the future. Listening to the radio last week, two of the DJs had their Twitter accounts stolen. One DJ had a password that was his favorite team name concatenated with the number of his favorite player. He was begging over the air for the ‘hacker’ to return his access so he could tweet about the ongoing National League series. Social media are a big part of their personal and professional lives and, dammit, someone was messing with them! One of my biggest surprises in Average Joe computer security was seeing Hammacher Schlemmer offer an “online purchase security system”. Yep, it’s a little credit card mag stripe reader with a USB cable. Supposedly it encrypts data before it reaches your computer. I certainly wonder exactly whose public key it might be encrypting with! Actually, I wonder if the device does what it says it does – or anything at all! I am certain Hammacher Schlemmer sells more Harry Potter wands, knock-off Faberge eggs, and doggie step-up ladders than they do credit card security systems, but clearly they believe there is a market for this type of device. I wonder how many people will see these in their in-flight Sky Mall magazines over the holidays and order a couple for the family. Even for Aunt Margie in Minnesota, so she can safely send electronic gift cards to all the relatives she found on Facebook. Now that she regained access to her account and set a new password. And that’s how Facebook will improve security for everyone.
On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

- Adrian’s Tech Target article on Database Auditing.
- Adrian’s technical tips on setting up database auditing.
- Rich at RSA 2010 China.

Favorite Securosis Posts

- Mike Rothman: Monitoring up the Stack: Climbing the Stack. The end of the MUTS series provides actionable information on where to start extending your monitoring environment.
- Adrian Lane: Vaults within Vaults.

Other Securosis Posts

- React Faster and Better: Data Collection/Monitoring Infrastructure.
- White Paper Goodness: Understanding and Selecting an Enterprise Firewall.
- Incite 10/20/2010: The Wrongness of Being Right.
- React Faster and Better: Introduction.
- New Blog Series: React Faster and Better.
- Monitoring up the Stack: Platform Considerations.

Favorite Outside Posts

- Mike Rothman: Reconcile This. Gunnar calls out the hypocrisy of what security folks focus on – it’s great. The bad guys are one thing, but our greatest adversary is probably inertia.
- Gunnar Peterson: Tidal Wave of Java Exploitation.
- Adrian Lane: Geek Day at the White House.
- Chris Pepper: WTF? Apple deprecates Java. Actually they’re dropping the Apple JVM as of 10.7, but do you expect Oracle to build and maintain a high-quality JVM for Mac OS X? A lot of Mac-toting Java developers are looking at each other quizzically today.

Project Quant Posts

- NSO Quant: Index of Posts.
- NSO Quant: Health Metrics – Device Health.
- NSO Quant: Manage Metrics – Monitor Issues/Tune IDS/IPS.
- NSO Quant: Manage Metrics – Deploy and Audit/Validate.
- NSO Quant: Manage Metrics – Process Change Request and Test/Approve.

Research Reports and Presentations

- Understanding and Selecting a DLP Solution.
- White Paper: Understanding and Selecting an Enterprise Firewall.
- Understanding and Selecting a Tokenization Solution.
- Security + Agile = FAIL Presentation.
- Data Encryption 101: A Pragmatic Approach to PCI.
- White Paper: Understanding and Selecting SIEM/Log Management.
- White Paper: Endpoint Security Fundamentals.

Top News and Posts

- A boatload of Oracle fixes.
- Judge Clears CAPTCHA-Breaking Case for Criminal Trial.
- Data theft overtakes physical loss.
- Malware pushers abuse Firefox warning page.
- Predator


Can we ever break IT?

I was reading one of RSnake’s posts on how our security devolves to the lowest common denominator because we can’t break IT – which means we can’t make changes to systems, applications, and endpoints in order to protect them. He was talking specifically about the browser, but it got me thinking a bit bigger: when/if it’s OK to break IT. To clarify, by breaking IT, I mean changing the user experience adversely in some way to more effectively protect critical data/information. I’ll get back to a concept I’ve been harping on the last few weeks: the need to understand what applications & data are most important to your organization. If the data is that important to your business, then you need to be able to break IT in order to protect it. Right? Take the next step: this means there probably should be a class of users who have devices that need to be locked down. Those users have sensitive information on those devices, and if they want to have that data, then they need to understand they won’t be able to do whatever they want on their devices. They can always choose not to have that data (so they can visit pr0n sites and all), but is it unreasonable to want to lock down those devices? And actually be able to do it? There are other users who don’t have access to much, so locking down their devices wouldn’t yield much value. Sure, the devices could be compromised and turned into bots, but you have other defenses to address that, right? But back to RSnake’s point: we have always been forced to accept the lowest common denominator from a security standpoint. That’s mostly because security is not perceived as adding value to the business, and so gets done as quickly and cheaply as possible. Your organization has very little incentive to be more secure, so they aren’t. Your compliance mandate du jour also forces us toward the lowest common denominator box. Love it or hate it, PCI represents that low bar now. 
Actually, if you ask most folks who don’t do security for a living (and probably a shocking number who do), they’ll tell you that being PCI compliant represents a good level of security. Of course we know better, but they don’t. So we are forced to make a serious case to go beyond what is perceived to be adequate security. Most won’t and don’t, and there it ends. So RSnake and the rest of us can gripe about the fact that we aren’t allowed to break much of anything to protect it, but that’s as much our problem as anything else. We don’t make the case effectively enough that the added protection we’ll get from breaking the user experience is worth it. Until we can substantiate this we’ll remain in the same boat. Leaky as it may be.


Everything You Ever Wanted to Know about DLP

Way back when I converted Securosis from a blog into a company, my very first paper was (no surprise) Understanding and Selecting a DLP Solution. Three or so years later I worried it was getting a little long in the tooth, even though the content was all still pretty accurate. So, as you may have noticed from recent posts, I decided to update and expand the content for a new version of the paper. Version 1.0 is still downloaded on pretty much a daily basis (actually, sometimes a few hundred times a month). The biggest areas of expansion were a revamped selection process (with workflow, criteria, and a selection worksheet) and more details on “DLP features” and “DLP Light” tools that don’t fit the full-solution description. This really encapsulates everything you should need to know up through acquiring a DLP solution, but since it’s already 50+ pages I decided to hold off on implementation until the next paper (besides, that gives me a chance to scrum up some extra cash to feed the new kid). I did, however, also break out just the selection worksheet for those of you who don’t need the entire paper. Not that it will make any sense without the paper. The landing page is here: Understanding and Selecting a DLP Solution. Direct download is at: Whitepaper (PDF) Very special thanks to Websense for licensing the paper and worksheet. They were the very first sponsor of my first paper, which helped me show my wife we wouldn’t lose the house because I quit my job to blog.


Incident Response Fundamentals: Data Collection/Monitoring Infrastructure

In Incident Response Fundamentals: Introduction we talked about the philosophical underpinnings of our approach and how you need to look at stuff before, during, and after an attack. Regardless of where in the attack lifecycle you end up, there is a common requirement: data. As we mentioned, you only get one opportunity to capture the data, and then it’s gone. So in order to react faster and better in your environment, you will need lots of data. So how and where do you collect it? In theory, we say get everything you can and worry about how useful it is later. Obviously that’s not exactly practical in most environments. So you need to prioritize the data requirements based upon the most likely attack vectors. Yes, we’re talking risk management here, but it’s okay. A little prioritization based on risk won’t kill you.

Collect What?

The good news is there is no lack of data to capture – let’s list some of the biggest buckets you have to parse:

- Events/Logs: It’s obvious but we still have to mention it. Event logs tell you what happened and when; they provide the context for many other data types, as well as validation for attacks.
- Database activity: Database audit logs provide one aspect of application activity, but understanding the queries and how they relate to specific attacks is very helpful for profiling normal behavior, and thus understanding what isn’t normal.
- Application data: Likewise, application data beyond the logs – including transactions, geo-location, etc. – provides better visibility into the context of application usage and helps pinpoint potentially fraudulent activity. We discussed both database activity monitoring and application monitoring in detail in the Monitoring up the Stack series.
- Network flow: Understanding which devices are communicating with which others enables pattern analysis to pinpoint strange network activity – which might represent an attack exfiltrating data or reconnaissance activity.
- Email: One of the most significant data leakage vectors is email (even if it is not always malicious), so it needs to be monitored and collected.
- Web traffic: Like email, web traffic data drives alerts and provides useful information for forensics.
- Configuration data: Most malware makes some kind of change to the devices it infects, so collecting and monitoring device configurations lets those changes be isolated and checked against policy to quickly detect an outbreak.
- Identity: Finally, an IP address is usable, but being able to map it back to a specific user, and then track that user’s activity within your environment, is much more powerful.

We have dug pretty deeply into all these topics in our Understanding and Selecting a SIEM/Log Management research report, as well as the Monitoring Up the Stack series. Check out those resources for a deeper dive.

Collect How?

Now that you know what you want to collect, how are you going to collect it? That depends a lot on the data. Most organizations have some mix of the following classes of devices:

- SIEM/Log Management
- Database Activity Monitoring
- Fraud detection
- CMDB (configuration management database)
- Network Behavioral Analysis
- Full Network Packet Capture

So there are plenty of tools you can use, depending on what you want to collect. Again, we generally tend to want to capture as much data as possible, which is why we like the idea of full network packet capture for as many segments as possible. Mike wrote a thought piece this week about Vaults within Vaults, and part of that approach is a heavy dose of network segmentation, along with capturing network traffic on the most sensitive segments. But capturing the data is only the first step in a journey of many miles. Then you have to aggregate, normalize, analyze, and alert on that data across data types to get real value.
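That normalize step is what makes cross-source analysis possible: every event, whatever the collector, gets mapped into a common schema before analysis. A minimal sketch (the field names and records below are illustrative, not taken from any particular SIEM product):

```python
from datetime import datetime, timezone

def normalize_firewall(raw):
    """Map a (hypothetical) firewall log record to a common event schema."""
    return {
        "timestamp": datetime.fromtimestamp(raw["epoch"], tz=timezone.utc).isoformat(),
        "source": "firewall",
        "src_ip": raw["src"],
        "dst_ip": raw["dst"],
        "action": raw["disposition"],   # e.g. "allow" / "deny"
    }

def normalize_web_proxy(raw):
    """Map a (hypothetical) web proxy record to the same schema."""
    return {
        "timestamp": raw["time"],       # already ISO 8601
        "source": "web_proxy",
        "src_ip": raw["client"],
        "dst_ip": raw["server"],
        "action": "allow" if int(raw["status"]) < 400 else "deny",
    }

# Once normalized, events from different collectors can be analyzed together:
events = [
    normalize_firewall({"epoch": 1288000000, "src": "10.0.0.5",
                        "dst": "203.0.113.9", "disposition": "deny"}),
    normalize_web_proxy({"time": "2010-10-25T09:30:00+00:00", "client": "10.0.0.5",
                         "server": "198.51.100.2", "status": "200"}),
]
denied = [e for e in events if e["action"] == "deny"]
```

With every source in one schema, a single query ("show me all denied connections from 10.0.0.5") spans firewall, proxy, and anything else you collect, which is the whole point of aggregation.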
As mentioned above, we have already published a lot of information about SIEM/Log Management and Database Activity Monitoring, so check out those resources for more detail. As we dig into what has to happen at each period within the attack lifecycle, we’ll delve into how best to use the data you are collecting at that point in the process. But we don’t want to get ahead of ourselves – the first aspect of Incident Response is to make sure you are organized appropriately to respond to an incident. Our next post will focus on that.


Incite 10/20/2010: The Wrongness of Being Right

One of my favorite sayings is “Don’t ask the question if you don’t want the answer.” Of course, when I say answer, what I really mean is opinion. It makes no difference what we are talking about, I probably have an opinion. In fact, a big part of my job is to have opinions and share them with whoever will listen (and even some who won’t). But to have opinions means you need to judge. I like to think I have a finely tuned bullshit detector. I’ve been having vendors lie to me since I got into this business 18 years ago. A lot of end users can be delusional about their true situations as well. So that means I’m judging what’s happening around me at all times, and I tell them what I think. Even if they don’t want to hear my version of the truth. Sometimes I make snap judgements; other times I only take a position after considerable thought and research. I’m trying to determine if something is right or wrong, based on all the information I can gather at that point in time. But I have come to understand that right and wrong is nothing more than another opinion. What is right for you may be wrong for me. Or vice-versa. It took me a long long time to figure that out. Most folks still don’t get this. I can recall when I was first exposed to the Myers-Briggs test, when I stumbled on a book that included it. Taking that test was very enlightening for me. Turns out I’m an INTJ, which means I build systems, can be blunt and socially awkward (go figure), and tend to judge everything. Folks like me make up about 1% of the population (though probably a bit higher in tech and in the executive suite). I knew I was different ever since my kindergarten teacher tried to hold me back in kindergarten (true story), but I never really understood why. Even if you buy into the idea there are just 16 personality types, clearly there is a spectrum across each of the 4 characteristics. In my black/white world, there seems to be a lot of color. Who knew?
This train of thought was triggered by a tweet by my pal Shack, basically calling BS on one guy’s piece on the value of not trying to be successful. That’s not how Dave rolls. And that’s fine. Dave is one guy. The dude writing the post is another. What works for that guy clearly wouldn’t work for Dave. What works for me doesn’t work for you. But what we can’t do is judge it as right or wrong. It’s not my place to tell some guy he needs to strive for success. Nor is it my place to tell the Boss not to get upset about something someone said about something. I would like her not to get upset because when she’s upset it affects me, and it’s all about me. But if she decides to get upset, that’s what she thinks is the right thing to do. To make this all high concept, a large part of our social problems boil down to one individual’s need to apply their own concept of right to another. Whether it’s religion or politics or parenting or values or anything, everyone thinks they are right. So they ridicule and persecute those who disagree. I’m all for intelligent discord, but at some point you realize you aren’t going to get to common ground. Not without firearms. Trying to shove your right into my backside won’t work very well. The next time someone does something you think is just wrong, take a step back. Try to put yourself in their shoes and see if there is some way you can rationalize the fact they think it’s right. Maybe you can see it, maybe you can’t. But unless that decision puts you in danger, you just need to let it go. Right? Glad you decided to see it my way (the right way). – Mike

Photo credits: “wrong way/right way” originally uploaded by undergroundbastard

Recent Securosis Posts

- Vaults within Vaults
- React Faster and Better: Introduction
- Monitoring Up the Stack series: Platform Considerations; Climbing the Stack
- Dead or Alive: Pen Testing

Incite 4 U

Verify your plumbing (or end up in brown water) – Daniel Cox of BreakingPoint busts devices for a living.
So it’s interesting to read some of his perspectives on what you need to know about your networking gear. Remember, no network, no applications. So if your network is brittle, then your business will be brittle. Spoken by a true plumber, no? There is good stuff there, like understanding what happens during a power cycle and the logging characteristics of the device. The one I like best is #5: Do Not Believe Your Vendor. That’s great advice for any type of purchase. The vendor’s job is to sell you. Your job is to solve a problem. Rarely the twain shall meet, so verify all claims. But only if you want to keep your job, because folks covered in the brown stuff tend to get jettisoned quickly. – MR It’s new, and it’s old – Adam Shostack’s post Re-architecting the Internet poses a valid question: if we were to build a new Internet from scratch, would it be more secure? I think I have argued both sides of the “need a new Internet” debate at one time or another. Now I am kind of non-plussed on the whole discussion because I believe there won’t be a new Internet, and there won’t be a single Internet. We need to change what we do, but we don’t need a new Internet to do it. There is no reason we cannot continue to use the physical Internet we have and just virtualize the presentation. Much as a virtual server will leverage whatever hardware it has to run different virtual machines, there is no reason we can’t have different virtual Internets running over the same physical infrastructure. We have learned from information centric security that we can encapsulate information


White Paper Goodness: Understanding and Selecting an Enterprise Firewall

What? A research report on enterprise firewalls. Really? Most folks figure firewalls have evolved about as much over the last 5 years as ant traps. They're wrong, of course. People think of firewalls as old, static, and generally uninteresting, but that perception is unfounded. Firewalls continue to evolve, and their new capabilities can and should impact your perimeter architecture and firewall selection process. That doesn't mean we will be advocating yet another rip-and-replace job at the perimeter (sorry, vendors), but there are definitely new capabilities that warrant consideration – especially as the maintenance renewals on your existing gear come due. We have written a fairly comprehensive paper that delves into how the enterprise firewall is evolving, the technology itself, how to deploy it, and ultimately how to select it. We assembled this paper from the Understanding and Selecting an Enterprise Firewall blog series from August and September 2010. Special thanks to Palo Alto Networks for sponsoring the research. You can check out the page in the research library, or download directly: Understanding and Selecting an Enterprise Firewall


Vaults within Vaults

My session for the Atlanta BSides conference was about what I expected in 2011. I might as well have thrown a dart at the wall. But the exercise got me thinking about the newest attacks (like Stuxnet) and the realization that state-sponsored attackers have penetrated our networks with impunity. Clearly we have to shake up the status quo in order to keep up. This is a point I hit on in last week's Incite, when discussing Greg Shipley's post on being outgunned. Obviously what we are doing now isn't working, and if anything the likelihood of traditional controls such as perimeter defense and anti-malware agents protecting much of anything decreases with every application moved up to the cloud and each trading partner allowed into your corporate network.

The long-term answer is to protect the fundamental element: the data. Rich and Adrian (with an assist from Gunnar) are all over that. As what we used to call applications continue to decompose into data, logic, processing, and presentation, we have neither control over nor visibility into the data at most points in the cycle. So we are screwed unless we can figure out some way to protect the data regardless of how, where, or by whom it's going to be used. But that is going to be a long, long, long, long slog. We don't even know how to think about tackling the problem, so solutions are probably a decade away, and that's being optimistic. Unfortunately that's the wrong answer, because we have the problem now and need to start thinking about what to do. Did I mention we need answers now?

Since I'm the plumber, I took a look into my tool bag and started thinking about what we could do within the constraints of our existing infrastructure, political capital, and knowledge to give us a better chance. This was compounded by the recent disagreement Adrian and I had about how much monitoring is necessary (and feasible), driven by Kindervag's ideas on Zero Trust.
I always seem to come back to the idea not of a disappearing perimeter, but of multiple perimeters. Sorry, Jerichonians, but the answer is more effectively segmenting networks, with increasingly stringent controls based on the type and sensitivity of the data within each domain. Right, this is not a new idea. It's the idea of trust zones based on the type of data. The military has been doing this for years. OK, maybe it isn't such a great idea… Yes, I'm kidding.

Many folks will say this doesn't work. It's just the typical defense-in-depth rhetoric, which says you need everything you already have, plus this new shiny object to stop the new attack. But the problem isn't with the architecture, it's with the implementation. We don't compartmentalize – not even when PCI says to. We run into far too many organizations with flat networks. From a network ops standpoint, flat networks are certainly a lot easier to deal with than networks segmented based on what data can be accessed. But flat networks don't provide the hierarchy necessary to protect what's important, and we have to understand that we don't have the money (or resources) to protect everything. And not everything needs to be protected with the same level of control.

OK Smart Guy, How?

Metaphorically, think about each level of segmented network as a vault. As you climb the stack of data importance, you tighten the controls and make it harder to get to the data (and theoretically harder to compromise it), basically implementing another vault within the first. So an attacker going after the crown jewels needs to do more than compromise a vulnerable Windows 2000 server that someone forgot about to see the targeted assets. Here's how we do it:

  • Figure out what's important: Yes, I've been talking about this for years (it's the first step of the Pragmatic CSO).
  • Find the important data: This is the discovery step from all the Pragmatic Data Security research we've done.
  • Assess the data's importance: This gets back to prioritization and value. How much you can and should spend on protecting the data needs to correlate with how valuable it is, right? You should probably look at 3-4 different levels of data importance/value.
  • Re-architect your network: This means working with the network guys to figure out how to segment your networks to cordon off each level of increasingly sensitive data.
  • Add controls: Your existing perimeter defenses are probably fine for the first layer. Then you need to figure out what kind of controls are appropriate for each layer.

More on Controls

Again, the idea of layered controls is old and probably a bit tired. You don't want a single point of failure. No kidding? But here I'm talking about figuring out what controls are necessary to protect the data, depending on its sensitivity. For example, maybe you have a call center, and those folks have access to private data. Obviously you want that behind more than just the first perimeter, but the reality is that most of the risk is from self-inflicted injury. You know, a rep sending data out inadvertently. Sounds like a situation where DLP would be appropriate. Next you have some kind of transactional system that drives your business. For that layer, you monitor database and application activity. Finally, you have intellectual property that is the underpinning of your business. This is the most sensitive stuff you have, so it makes sense to lock it down as tightly as possible. Any devices on this network segment are locked down using application whitelisting. You also probably want to implement full network packet capture, so you know exactly what is happening and can watch for naughty behavior. I'm making this up, but hopefully the idea of implementing different (and more stringent) controls in each network segment makes sense. None of this is new.
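The nesting described above can be sketched as a simple policy table. This is a minimal illustration of the vault-within-a-vault concept, not a prescribed implementation; the zone names, data classes, and control choices are hypothetical examples.

```python
# A minimal sketch of "vaults within vaults": each network zone nests inside
# the previous one, so controls are cumulative -- reaching a deeper vault means
# passing every outer vault's controls too. All zone names, data classes, and
# control assignments below are illustrative assumptions, not recommendations.

ZONES = [
    # (zone name, data it holds, controls added at this layer)
    ("perimeter",    "public data",           ["firewall", "ids"]),
    ("call_center",  "private customer data", ["dlp"]),
    ("transaction",  "transactional systems", ["database_activity_monitoring",
                                               "application_monitoring"]),
    ("crown_jewels", "intellectual property", ["application_whitelisting",
                                               "full_packet_capture"]),
]

def controls_to_reach(zone_name):
    """Return the cumulative list of controls protecting the named zone --
    everything an attacker must get past, outermost layer first."""
    controls = []
    for name, _data, added in ZONES:
        controls.extend(added)
        if name == zone_name:
            return controls
    raise ValueError(f"unknown zone: {zone_name}")

# Example: the deepest vault sits behind every layer's controls.
print(controls_to_reach("crown_jewels"))
```

The point of the cumulative lookup is the architecture itself: the crown jewels are never protected by whitelisting and packet capture alone, but by those controls stacked on top of everything guarding the outer layers.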
As I’m starting to think about my 2011 research agenda, I like this idea of vaults (driven by network segmentation) as a metaphor for infrastructure security. But this isn’t just my show. I’m interested in whether you all think there is


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.