Can we ever break IT?

I was reading one of RSnake's posts on how our security devolves to the lowest common denominator because we can't break IT – which means we can't make changes to systems, applications, and endpoints in order to protect them. He was talking specifically about the browser, but it got me thinking a bit bigger: when/if it's OK to break IT. To clarify, by breaking IT, I mean changing the user experience adversely in some way to more effectively protect critical data/information. I'll get back to a concept I've been harping on the last few weeks: the need to understand what applications & data are most important to your organization. If the data is that important to your business, then you need to be able to break IT in order to protect it. Right?

Take the next step: this means there probably should be a class of users who have devices that need to be locked down. Those users have sensitive information on those devices, and if they want to have that data, then they need to understand they won't be able to do whatever they want on their devices. They can always choose not to have that data (so they can visit pr0n sites and all), but is it unreasonable to want to lock down those devices? And actually be able to do it? There are other users who don't have access to much, so locking down their devices wouldn't yield much value. Sure, the devices could be compromised and turned into bots, but you have other defenses to address that, right?

But back to RSnake's point: we have always been forced to accept the lowest common denominator from a security standpoint. That's mostly because security is not perceived as adding value to the business, and so gets done as quickly and cheaply as possible. Your organization has very little incentive to be more secure, so it isn't. Your compliance mandate du jour also forces us into the lowest common denominator box. Love it or hate it, PCI represents that low bar now. Actually, if you ask most folks who don't do security for a living (and probably a shocking number who do), they'll tell you that being PCI compliant represents a good level of security. Of course we know better, but they don't. So we are forced to make a serious case to go beyond what is perceived to be adequate security. Most won't and don't, and there it ends.

So RSnake and the rest of us can gripe about the fact that we aren't allowed to break much of anything to protect it, but that's as much our problem as anything else. We don't make the case effectively enough that the added protection we'll get from breaking the user experience is worth it. Until we can substantiate this we'll remain in the same boat. Leaky as it may be.


Incident Response Fundamentals: Data Collection/Monitoring Infrastructure

In Incident Response Fundamentals: Introduction we talked about the philosophical underpinnings of our approach and how you need to look at stuff before, during, and after an attack. Regardless of where in the attack lifecycle you end up, there is a common requirement: data. As we mentioned, you only get one opportunity to capture the data, and then it's gone. So in order to react faster and better in your environment, you will need lots of data. So how and where do you collect it? In theory, we say get everything you can and worry about how useful it is later. Obviously that's not exactly practical in most environments. So you need to prioritize the data requirements based upon the most likely attack vectors. Yes, we're talking risk management here, but it's okay. A little prioritization based on risk won't kill you.

Collect What?

The good news is there is no lack of data to capture – let's list some of the biggest buckets you have to parse:

  • Events/Logs: It's obvious but we still have to mention it. Event logs tell you what happened and when; they provide the context for many other data types, as well as validation for attacks.
  • Database activity: Database audit logs provide one aspect of application activity, but understanding the queries and how they relate to specific attacks is very helpful for profiling normal behavior, and thus understanding what isn't normal.
  • Application data: Likewise, application data beyond the logs – including transactions, geo-location, etc. – provides better visibility into the context of application usage and helps pinpoint potential fraudulent activity. We discussed both database activity monitoring and application monitoring in detail in the Monitoring up the Stack series.
  • Network flow: Understanding which devices are communicating with which others enables pattern analysis to pinpoint strange network activity – which might represent an attack exfiltrating data or reconnaissance activity.
  • Email: One of the most significant data leakage vectors is email (even if it is not always malicious), so it needs to be monitored and collected.
  • Web traffic: Like email, web traffic data drives alerts and provides useful information for forensics.
  • Configuration data: Most malware makes some kind of change to the devices it compromises, so by collecting and monitoring device configurations those changes can be isolated and checked against policy to quickly detect an outbreak.
  • Identity: Finally, an IP address is usable, but being able to map it back to a specific user, and then track that user's activity within your environment, is much more powerful.

We have dug pretty deeply into all these topics in our Understanding and Selecting a SIEM/Log Management research report, as well as the Monitoring up the Stack series. Check out those resources for a deeper dive.

Collect How?

Now that you know what you want to collect, how are you going to collect it? That depends a lot on the data. Most organizations have some mix of the following classes of devices:

  • SIEM/Log Management
  • Database Activity Monitoring
  • Fraud detection
  • CMDB (configuration management database)
  • Network Behavioral Analysis
  • Full Network Packet Capture

So there are plenty of tools you can use, depending on what you want to collect. Again, we generally tend to want to capture as much data as possible, which is why we like the idea of full network packet capture for as many segments as possible.
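To make that risk-based prioritization concrete, here is a minimal sketch (Python, purely illustrative – the attack vectors, coverage scores, and weights are assumptions you would replace with your own judgment) of ranking the data buckets above by how much visibility they give into the attacks you consider most likely:

```python
# Illustrative sketch only: rank data collection buckets by how well they
# cover the attack vectors you consider most likely. All names and weights
# are hypothetical placeholders -- adjust to your environment.

# How likely you judge each attack vector to be (higher = more likely).
vector_likelihood = {
    "web_app_attack": 0.4,
    "phishing_data_theft": 0.3,
    "insider_misuse": 0.2,
    "malware_outbreak": 0.1,
}

# How much visibility each data bucket gives into each vector (0..1, rough guesses).
bucket_coverage = {
    "events_logs":       {"web_app_attack": 0.5, "phishing_data_theft": 0.3, "insider_misuse": 0.4, "malware_outbreak": 0.6},
    "database_activity": {"web_app_attack": 0.8, "insider_misuse": 0.7},
    "network_flow":      {"web_app_attack": 0.4, "phishing_data_theft": 0.5, "malware_outbreak": 0.7},
    "email":             {"phishing_data_theft": 0.9},
    "config_data":       {"malware_outbreak": 0.8},
    "identity":          {"insider_misuse": 0.9, "phishing_data_theft": 0.4},
}

def prioritize(buckets, vectors):
    """Score each bucket as the likelihood-weighted sum of its coverage."""
    scores = {
        name: sum(vectors[v] * cov for v, cov in coverage.items())
        for name, coverage in buckets.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    for bucket, score in prioritize(bucket_coverage, vector_likelihood):
        print(f"{bucket:20s} {score:.2f}")
```

The arithmetic isn't the point – forcing an explicit statement of which attacks you expect, and which data would actually show them, is.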
Mike wrote a thought piece this week about Vaults within Vaults, and part of that approach is a heavy dose of network segmentation, along with capturing network traffic on the most sensitive segments. But capturing the data is only the first step in a journey of many miles. Then you have to aggregate, normalize, analyze, and alert on that data – across data types – to get real value. As mentioned above, we have already published a lot of information about SIEM/Log Management and Database Activity Monitoring, so check out those resources for more detail.

As we dig into what has to happen at each stage of the attack lifecycle, we'll delve into how best to use the data you are collecting at that point in the process. But we don't want to get ahead of ourselves – the first aspect of incident response is to make sure you are organized appropriately to respond to an incident. Our next post will focus on that.
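As a rough illustration of that aggregate/normalize step, here is a minimal Python sketch that folds records from two hypothetical sources into a single common event schema so analysis and alerting can work across data types. The input formats and field names are invented for the example – real collectors, SIEM connectors, and log formats will differ:

```python
# Minimal illustration of normalizing heterogeneous records into one event
# schema. The input formats and field names are made up for the example.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Event:
    timestamp: datetime
    source: str          # e.g. "firewall", "database"
    user: Optional[str]  # mapped identity, if we have one
    src_ip: Optional[str]
    action: str
    detail: str

def from_firewall(line: str) -> Event:
    # Hypothetical format: "2010-10-20T12:01:22Z DENY 10.1.1.5 -> 192.168.3.9:443"
    ts, verdict, src, _, dst = line.split()
    return Event(datetime.fromisoformat(ts.replace("Z", "+00:00")),
                 "firewall", None, src, verdict.lower(), f"dst={dst}")

def from_db_audit(row: dict) -> Event:
    # Hypothetical audit row from a database activity monitor.
    return Event(row["time"], "database", row.get("db_user"),
                 row.get("client_ip"), "query", row["statement"])

# Aggregate everything into one stream, sorted by time, so analysis and
# alerting can look across data types instead of per silo.
events = sorted(
    [from_firewall("2010-10-20T12:01:22Z DENY 10.1.1.5 -> 192.168.3.9:443"),
     from_db_audit({"time": datetime(2010, 10, 20, 12, 2, tzinfo=timezone.utc),
                    "db_user": "svc_app", "client_ip": "10.1.1.5",
                    "statement": "SELECT * FROM cardholders"})],
    key=lambda e: e.timestamp)

for e in events:
    print(e.timestamp.isoformat(), e.source, e.action, e.detail)
```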


Incite 10/20/2010: The Wrongness of Being Right

One of my favorite sayings is "Don't ask the question if you don't want the answer." Of course, when I say answer, what I really mean is opinion. It makes no difference what we are talking about, I probably have an opinion. In fact, a big part of my job is to have opinions and share them with whoever will listen (and even some who won't). But to have opinions means you need to judge. I like to think I have a finely tuned bullshit detector. I've been having vendors lie to me since I got into this business 18 years ago. A lot of end users can be delusional about their true situations as well. So that means I'm judging what's happening around me at all times, and I tell them what I think. Even if they don't want to hear my version of the truth.

Sometimes I make snap judgments; other times I only take a position after considerable thought and research. I'm trying to determine if something is right or wrong, based on all the information I can gather at that point in time. But I have come to understand that right and wrong is nothing more than another opinion. What is right for you may be wrong for me. Or vice-versa. It took me a long, long time to figure that out. Most folks still don't get this.

I can recall when I was first exposed to the Myers-Briggs test, when I stumbled on a book that included it. Taking that test was very enlightening for me. Turns out I'm an INTJ, which means I build systems, can be blunt and socially awkward (go figure), and tend to judge everything. Folks like me make up about 1% of the population (though probably a bit higher in tech and in the executive suite). I knew I was different ever since my kindergarten teacher tried to hold me back (true story), but I never really understood why. Even if you buy into the idea there are just 16 personality types, clearly there is a spectrum across each of the 4 characteristics. In my black/white world, there seems to be a lot of color. Who knew?

This train of thought was triggered by a tweet by my pal Shack, basically calling BS on one guy's piece on the value of not trying to be successful. That's not how Dave rolls. And that's fine. Dave is one guy. The dude writing the post is another. What works for that guy clearly wouldn't work for Dave. What works for me doesn't work for you. But what we can't do is judge it as right or wrong. It's not my place to tell some guy he needs to strive for success. Nor is it my place to tell the Boss not to get upset about something someone said. I would like her not to get upset, because when she's upset it affects me, and it's all about me. But if she decides to get upset, that's what she thinks is the right thing to do.

To make this all high concept, a large part of our social problems boil down to one individual's need to apply their own concept of right to another. Whether it's religion or politics or parenting or values or anything, everyone thinks they are right. So they ridicule and persecute those who disagree. I'm all for intelligent discord, but at some point you realize you aren't going to get to common ground. Not without firearms. Trying to shove your right into my backside won't work very well.

The next time someone does something you think is just wrong, take a step back. Try to put yourself in their shoes and see if there is some way you can rationalize the fact they think it's right. Maybe you can see it, maybe you can't. But unless that decision puts you in danger, you just need to let it go. Right?
Glad you decided to see it my way (the right way). – Mike

Photo credits: "wrong way/right way" originally uploaded by undergroundbastard

Recent Securosis Posts

  • Vaults within Vaults
  • React Faster and Better: Introduction
  • Monitoring Up the Stack series: Platform Considerations; Climbing the Stack
  • Dead or Alive: Pen Testing

Incite 4 U

  • Verify your plumbing (or end up in brown water) – Daniel Cox of BreakingPoint busts devices for a living, so it's interesting to read some of his perspectives on what you need to know about your networking gear. Remember, no network, no applications. So if your network is brittle, then your business will be brittle. Spoken by a true plumber, no? There is good stuff there, like understanding what happens during a power cycle and the logging characteristics of the device. The one I like best is #5: Do Not Believe Your Vendor. That's great advice for any type of purchase. The vendor's job is to sell you. Your job is to solve a problem. Rarely the twain shall meet, so verify all claims. But only if you want to keep your job, because folks covered in the brown stuff tend to get jettisoned quickly. – MR
  • It's new, and it's old – Adam Shostack's post Re-architecting the Internet poses a valid question: if we were to build a new Internet from scratch, would it be more secure? I think I have argued both sides of the "need a new Internet" debate at one time or another. Now I am kind of nonplussed by the whole discussion, because I believe there won't be a new Internet, and there won't be a single Internet. We need to change what we do, but we don't need a new Internet to do it. There is no reason we cannot continue to use the physical Internet we have and just virtualize the presentation. Much as a virtual server will leverage whatever hardware it has to run different virtual machines, there is no reason we can't have different virtual Internets running over the same physical infrastructure. We have learned from information centric security that we can encapsulate information


White Paper Goodness: Understanding and Selecting an Enterprise Firewall

What? A research report on enterprise firewalls. Really? Most folks figure firewalls have evolved about as much over the last 5 years as ant traps. They're wrong, of course, but people think of firewalls as old, static, and generally uninteresting. But this is unfounded. Firewalls continue to evolve, and their new capabilities can and should impact your perimeter architecture and firewall selection process. That doesn't mean we will be advocating yet another rip and replace job at the perimeter (sorry, vendors), but there are definitely new capabilities that warrant consideration – especially as the maintenance renewals on your existing gear come due.

We have written a fairly comprehensive paper that delves into how the enterprise firewall is evolving, the technology itself, how to deploy it, and ultimately how to select it. We assembled this paper from the Understanding and Selecting an Enterprise Firewall blog series from August and September 2010. Special thanks to Palo Alto Networks for sponsoring the research. You can check out the page in the research library, or download directly: Understanding and Selecting an Enterprise Firewall


Vaults within Vaults

My session for the Atlanta BSides conference was about what I expected in 2011. I might as well have thrown a dart at the wall. But the exercise got me thinking about the newest attacks (like Stuxnet) and the realization of how state-sponsored attackers have penetrated our networks with impunity. Clearly we have to shake up the status quo in order to keep up. This is a point I hit on in last week's Incite, when discussing Greg Shipley's post on being outgunned. Obviously what we are doing now isn't working, and if anything the likelihood of traditional controls such as perimeter defense and anti-malware agents protecting much of anything decreases with every application moved up to the cloud and each trading partner allowed into your corporate network.

The long-term answer is to protect the fundamental element: data. Rich and Adrian (with an assist from Gunnar) are all over that. As what we used to call applications continue to decompose into data, logic, processing, and presentation, we have neither control over nor visibility into the data at most points in the cycle. So we are screwed unless we can figure out some way to protect the data regardless of how, where, or by whom it's going to be used. But that is going to be a long, long, long, long slog. We don't even know how to think about tackling the problem, so solutions are probably a decade away, and that's being optimistic. Unfortunately that's the wrong answer, because we have the problem now and need to start thinking about what to do. Did I mention we need answers now?

Since I'm the plumber, I took a look into my tool bag and started thinking about what we could do within the constraints of our existing infrastructure, political capital, and knowledge to give us a better chance. This was compounded by the recent disagreement Adrian and I had about how much monitoring is necessary (and feasible), driven by Kindervag's ideas on Zero Trust. I always seem to come back to the idea not of a disappearing perimeter, but of multiple perimeters. Sorry Jerichonians, but the answer is more effectively segmenting networks, with increasingly stringent controls based on the type and sensitivity of data within each domain. Right, this is not a new idea. It's the idea of trust zones based on type of data. The military has been doing this for years. OK, maybe it isn't such a great idea… Yes, I'm kidding.

Many folks will say this doesn't work. It's just the typical defense in depth rhetoric, which says you need everything you already have, plus this new shiny object, to stop the new attack. The problem isn't with the architecture, it's with the implementation. We don't compartmentalize – not even if PCI says to. We run into far too many organizations with flat networks. From a network ops standpoint, flat networks are certainly a lot easier to deal with than trying to segment networks based on what data can be accessed. But flat networks don't provide the hierarchy necessary to protect what's important, and we have to understand that we don't have the money (or resources) to protect everything. And realize that not everything needs to be protected with the same level of control.

OK Smart Guy, How?

Metaphorically, think about each level of segmented network as a vault. As you climb the stack of data importance, you tighten the controls and make it harder to get to the data (and theoretically harder to compromise), basically implementing another vault within the first.
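To make the metaphor a bit more tangible, here is a minimal sketch (Python, purely illustrative – the tier names and segment labels are hypothetical, and the control examples simply echo the ones discussed below) of mapping data sensitivity to network segments, where each vault sits inside the previous one and adds something stricter:

```python
# Illustrative only: map data sensitivity tiers to network segments and the
# controls applied there. Tier, segment, and control names are examples.

VAULTS = [
    # (tier, example data, segment, controls added at this tier)
    ("general",       "low-value internal data",    "corp_lan",
     ["perimeter firewall", "anti-malware", "patching"]),
    ("private",       "customer PII (call center)", "pii_segment",
     ["DLP on egress", "stronger authentication"]),
    ("transactional", "core business systems",      "txn_segment",
     ["database activity monitoring", "application monitoring"]),
    ("crown_jewels",  "intellectual property",      "ip_segment",
     ["application whitelisting", "full network packet capture"]),
]

def controls_for(tier: str) -> list:
    """Each vault sits inside the less sensitive ones, so a tier gets every
    control from the outer tiers plus its own."""
    controls = []
    for name, _data, _segment, added in VAULTS:
        controls.extend(added)
        if name == tier:
            return controls
    raise ValueError(f"unknown tier: {tier}")

if __name__ == "__main__":
    for name, data, segment, _ in VAULTS:
        print(f"{name:13s} ({data}) on {segment}:")
        for control in controls_for(name):
            print(f"   - {control}")
```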
So an attacker going after the crown jewels needs to do more than compromise a vulnerable Windows 2000 Server that someone forgot about to see the targeted assets. Here's how we do it:

  • Figure out what's important: Yes, I've been talking about this for years (this is the first step of the Pragmatic CSO).
  • Find the important data: This is the discover step from all the Pragmatic Data Security research we've done.
  • Assess the data's importance: This gets back to prioritization and value. How much you can and should spend on protecting the data needs to correlate to how valuable it is, right? You should probably look at 3-4 different levels of data importance/value.
  • Re-architect your network: This means working with the network guys to figure out how to segment your networks to cordon off each level of increasingly sensitive data.
  • Add controls: Your existing perimeter defenses are probably fine for the first layer. Then you need to figure out what kind of controls are appropriate for each layer.

More on Controls

Again, the idea of layered controls is old and probably a bit tired. You don't want a single point of failure. No kidding? But here I'm talking about figuring out what controls are necessary to protect the data, depending on its sensitivity. For example, maybe you have a call center and those folks have access to private data. Obviously you want that behind more than just the first perimeter, but the reality is that most of the risk is from self-inflicted injury. You know, a rep sending data out inadvertently. Sounds like a situation where DLP would be appropriate. Next you have some kind of transactional system that drives your business. For that layer, you monitor database and application activity. Finally you have intellectual property that is the underpinning of your business. This is the most sensitive stuff you have. So it makes sense to lock it down as tightly as possible. Any devices on this network segment are locked down using application whitelisting. You also probably want to implement full network packet capture, so you know exactly what is happening and can watch for naughty behavior. I'm making this up, but hopefully the idea of implementing different (and more stringent) controls in each network segment makes sense. None of this is new.

As I'm starting to think about my 2011 research agenda, I like this idea of vaults (driven by network segmentation) as a metaphor for infrastructure security. But this isn't just my show. I'm interested in whether you all think there is


Incident Response Fundamentals: Introduction

Over the past year, as an industry we have come to realize that we are dealing with different adversaries, using different attack techniques, with different goals. Yes, the folks looking for financial gain by compromising devices are still out there. But add a well-funded, potentially state-sponsored, persistent and patient adversary to the mix, and we need to draw a new conclusion. Basically, we now must assume our networks and systems are compromised. That is a tough realization, but any other conclusion doesn't really jibe with reality, or at least the reality of everyone we talk to.

For a number of years, we've been calling bunk on the concept of "getting ahead of the threat" – most of the things viewed as proactive. Anyone trying to take such action has been disappointed by their ability to stop attacks, regardless of how much money or political capital they expended to drive change. Basing our entire security strategy on the belief that we can stop attacks if we just spend enough, tune enough, or comply enough is no longer credible – if it ever was. We need to change our definition of success from stopping an attack (which would be nice, but isn't always practical) to reacting faster and better to attacks, and containing the damage. We're not saying you should give up on trying to prevent attacks – but place as much (or more) emphasis on detecting, responding to, and mitigating them. This has been a common theme in Securosis research since the beginning, and now we will document exactly what that means and how to get there.

React Faster

We don't get a lot of push-back anymore on our position that organizations can't stop all attacks. From a certain perspective that is progress, and we also believe many security professionals have spent a lot of time managing expectations internally, so there is an understanding that perfect security cannot be achieved (or that management is unwilling to fund it and compromise everything else in favor of security improvements). But following that concept to the next step means we need to get much better at detecting attacks sooner. We have already documented a number of approaches at the network layer in terms of monitoring everything and looking for "not normal". They also apply to the application (part 1 & part 2) and database (part 1 & part 2), which we have been talking about in our Monitoring up the Stack series. So in the first part of this new series, we will talk about the data collection infrastructure you should be thinking about, what kind of organizational model allows you to react faster, and what to do before the attack is detected. If you know you are being attacked, you are already ahead of the vast majority of companies out there. But what then?

And Better

Once you understand you are under attack, your incident response process needs to kick in. Most organizations do this poorly because they have neither the process nor the skills to figure out what's happening and do something useful about it. Many organizations have a documented incident response program, but that doesn't mean it's effective, or that the organization has embraced what it really means to respond to an incident. And this is about much more than just tools and flowcharts. Unless the process is well established and somewhat second nature, it will fail under duress – which is the definition of an incident. It is also important to remember that this process touches much more than just IT.
It must involve other organizations (legal, HR, operational risk, etc.) in order to actually manage or mitigate the organizational risk of any attack. One of the things Rich's emergency response experience has shown is that chain of command is critical, and everyone must be in alignment on process, responsibilities, and accountabilities before the incident happens. Again, a lot of this stuff seems like common sense (and it is!), but we have seen few organizations that do this well, so we'll walk through what we mean by reacting better throughout the series.

Before, During, and After

The concept we will come back to throughout this series is before, during, and after the attack. This will provide context for the different things that must happen based on where you are within the attack lifecycle.

  • Before: Figuring out what data to monitor, how much of it is useful, how to make use of it, and how long to retain it is key to building the infrastructure for persistent monitoring. This must happen before the attack, because you only get one chance to collect that data – while things are happening. You don't get to go back and record it after the fact (unless you completely fail to learn from the first attack, and they hit you again – not a good way to get a second chance!).
  • During: How can you contain the damage as quickly as possible? By identifying the root cause accurately and remediating effectively. We'll dig into how to identify the attack, who to work with to provide the data you need, and how to do this in the heat of battle.
  • After: Once the attack has been contained, focus shifts to making sure it doesn't happen again. In these posts we'll discuss the forensics process and the necessary tools and skills – as well as how to maintain chain of custody, and the post mortem required to learn something from a difficult situation.

We'll also discuss the current state of threat management tools, including SIEM, IDS/IPS, and network packet capture, to define their place in our approach. Finally we consider how network security is evolving and what kind of architectural constructs you should be thinking about as you revisit your data collection and defensive strategies. At the end of this series you will have a good overview of how to deal with all sorts of threats, and a high-level process for identifying the issues, containing the damage, and using the feedback loop to ensure you don't make the same mistakes again. That's the plan, anyway.
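As a small illustration of the before/during/after framing – a minimal sketch only, with field names invented for the example rather than drawn from any particular incident response tool – here is one way a single incident might be tracked across the lifecycle:

```python
# Purely illustrative: one way to track a single incident across the
# before/during/after lifecycle. Field names are invented for the example.
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum
from typing import List, Optional

class Phase(Enum):
    BEFORE = "before"   # collection and monitoring in place, nothing detected yet
    DURING = "during"   # attack detected, containment under way
    AFTER = "after"     # contained; forensics and post mortem

@dataclass
class Incident:
    name: str
    phase: Phase = Phase.BEFORE
    detected_at: Optional[datetime] = None
    containment_actions: List[str] = field(default_factory=list)
    custody_log: List[str] = field(default_factory=list)  # who handled which evidence, when
    root_cause: Optional[str] = None
    post_mortem: Optional[str] = None

    def detect(self, when: datetime) -> None:
        self.phase = Phase.DURING
        self.detected_at = when

    def contain(self, action: str) -> None:
        self.containment_actions.append(action)

    def close(self, root_cause: str, post_mortem: str) -> None:
        self.phase = Phase.AFTER
        self.root_cause = root_cause
        self.post_mortem = post_mortem

# Example walk-through of the lifecycle (all details are made up).
inc = Incident("suspicious database exfiltration")
inc.detect(datetime(2010, 10, 25, 14, 30))
inc.contain("isolated the affected network segment")
inc.custody_log.append("2010-10-25 15:10 disk image handed to forensics")
inc.close("compromised service account", "rotate credentials; tighten segment ACLs")
print(inc.phase.value, "-", inc.root_cause)
```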


New Blog Series: Incident Response Fundamentals

Our "beat our readers into a content coma" plan is working perfectly. Just when you thought you had enough of NSO Quant, Enterprise Firewall, Monitoring up the Stack, and DLP (just in the last month) – we will be starting another series Monday. Rich and I will begin the "Incident Response Fundamentals: Understanding Threats Before, During, and After the Attack" series. React Faster is something I've been talking about for years (literally), and Rich improved it by integrating the importance of incident response into the mix. Now we are going to bring all those aspects together into a very focused view of how you can keep pace with the rapidly evolving attack space. The general thesis of the series is:

Organizations need to embrace a pervasive monitoring approach to track attacks before, during, and after the threat. Far too many organizations do not capture the proper data at the network layer to detect attacks, find the root cause and remediate, or perform a detailed forensic analysis after the fact. This impairs their ability to protect their environments and ensure they don't suffer similar breaches over and over again.

We will not only talk about monitoring (as much as Adrian loves that), but also about an incident response plan and what to do before the attack, once you think something is going down, and (from a forensics standpoint) after the fact. We'll also do a little bit of visioning and take a cut at what network security will look like in 5 years. Overall it will be a great research project, and we think the output will be very valuable to practitioners. Which is why we do this stuff.


Incite 10/13/2010: The Rise of the Cons

No, we aren't going to talk about jailbreaks or other penal system trials and tribulations. This one is about how the conference circuit is evolving in a really positive way. Most folks attend the big security shows – you know, RSA and BlackHat and maybe some others. Most folks also hate these shows. I hear a lot of complaints about weak content and vendor whoring putting a damper on the experience. Of course, since my ilk and I tend to speak at most of these shows, we can only point the finger at ourselves. Personally, unless I'm speaking I tend to skip all but the biggest shows, which I attend for networking purposes. But that's just me.

But nature hates a vacuum, and the vacuum of user-oriented conferences is being filled by the BSides movement and a number of regional hacker cons. If the conference you are attending doesn't do it for you, get some smart folks together (who are there anyway) and put on an unconference of your own. That's the general concept for BSides. I attended BSides ATL last week, and it was a really great experience. First shout-outs need to be sent to the driving forces bringing BSides to ATL: Eric Smith (@infosecmafia), Nick Owen (@wikidsystems), Marisa Fagan (@dewzi), and MC Petermann (@petermannmc). I know there were tons of other folks who put a lot of blood, sweat, and tears into making BSides ATL happen, so no offense to anyone I didn't mention. I can't thank them all enough.

Why is this working? Because it's about community. I've been in Atlanta for over 6 years now, and there isn't really a cohesive security community. The ISSA meetings are a joke, unless you like vendors to hump your leg for 2 hours every month. We tried to get a CitySec group meeting going (and all three of us who attended enjoyed the beer that I bought), but that fizzled. A new Cloud Security Alliance chapter is forming in the ATL, and we are seeing a lot of activity for the NAISG in town as well. Yes, there are other organizations, but it's generally a small group of folks getting together in an ad hoc fashion. But what's been missing has been a more technically oriented conference, where smart folks from the Southeast can get together and share what we are seeing. That happened in spades at BSides ATL. Whether it was talking Google and Bing hacking with Rob Ragan, exfiltration with Dave Shackleford and Rick Hayes, pen testing with Eric Smith and Dave Kennedy, or having Chris Nickerson show how to bring entire companies down (think attacking robots!) – it was just a flood of information. Good information. And those were just the sessions I attended. There were a bunch of others I had to miss. The conference organizers even let me play and talk about what I think will happen in 2011. The short answer is I have no idea. But you already knew that. Yet I did get to use a picture of a guinea pig BBQ, which has to set some low bar for depravity.

I'm probably going to get in trouble by talking up BSides, because we Securosis folks do a lot of work with the RSA Conference. Next year we'll be leading the E10 (CISO-focused) event on Monday at RSA, and Rich is in London and will be in China this month speaking at RSA's global events. But the writing is on the wall. Content is king, and right now there is a lot of great content being driven through the regional BSides conferences and the other hacker cons. While I'm talking conferences, I should also mention what seemed to be a rousing success for Hoff and friends at the inaugural HacKid conference in Boston last weekend.
It's such a great concept, to teach kids about security, self-defense, and other important topics. I can't wait to get this going in ATL. And with that, just remember – if you don't take care of your customers, someone else will. Mr. Market told me. – Mike

Photo credits: "Pug Shot" originally uploaded by Jerry Reynolds

Recent Securosis Posts

  • IT Debt: Real or FUD?
  • FireStarter: Consumer Internet Penalty Box
  • Friday Summary: October 8, 2010
  • Monitoring up the Stack: User Activity Monitoring; Identity Monitoring

I should also highlight an article on Application Monitoring in Dark Reading that highlights the Monitoring up the Stack research Adrian and Gunnar are working on right now. I know lots of folks have a hard enough time monitoring their network and security devices, but the application is where the action is, so ignore it at your own peril.

Incite 4 U

  • Time for the heavy artillery. What heavy artillery? – Greg Shipley makes the point we've all come to grips with: we are outgunned. The bad guys have better tools and more motivation, and all we can do is watch it happen and clean up the mess afterwards. This statement kind of says it all: "Recent events suggest that we are at a tipping point, and the need to reassess and adapt has never been greater. That starts with facing some hard truths and a willingness to change the status quo." Right. So all is not lost, but we need to start thinking differently. But what does that mean? According to Shipley, it's focusing on the database and maybe things like application whitelisting. Best of all is the idea to "stop rewarding ineffectiveness and start rewarding innovation." Bravo. But how do you do that when the checkbox says you need AV? So basically we are in a quandary, but you already knew that. What to do? Basically what we've been saying for years: React Faster (and Better), focus on the fundamentals, and if you are targeted, just understand you can't stop them. And manage expectations accordingly. He closes the article with "If we remain bound to our relentless commitment to mediocrity, we will be worse off moving ahead. We can and must do better. It's time to change our way of thinking." Right. – MR
  • Instructive memory – Ever had


FireStarter: Consumer Internet Penalty Box

A few weeks back, the fine folks at Microsoft used a healthcare analogy to describe a possible solution to the Internet's bot infestation. Scott Charney suggested that every PC should have a health certificate which would provide access to the Internet. No health certificate, no access. Kind of like a penalty box for consumer Internet users. It's an interesting idea, and clearly we need some kind of solution to the reality that Aunt Bessie has no idea her machine has been pwned and is blasting spam and launching DDoS attacks. Unfortunately it won't work, unless mandated by some kind of regulation. It's really an economic thing.

Comcast will proactively send devices connected to their network exhibiting bad behavior a message telling the customer they are likely compromised. They call it their Bot Alert program. Then they point to a nice web page where the consumer can get answers. The consumer is then expected to address the issue. If they can't (or don't), Comcast will continue to notify the customer until they do. Here's the rub: if the consumer knew what they were doing in the first place, they wouldn't have gotten pwned. You can't blame Comcast (or any other ISP) for drawing a line in the sand. They charge maybe $40 a month for Internet service. The minute a customer picks up the phone and calls for help, they lose money for that month. There is no financial incentive for them to try to fix the compromised device. Sure, a bot does bad things. But bad enough to spend staff time trying to fix every one of them? The constant notifications will definitely push a customer to call and force Comcast to help them address the issue. I guess that worked OK in their pilot test, but we'll see how well it scales as they roll it out nationwide. And Comcast seems to be out in front on this issue. I'm not familiar with any similar initiatives from the other major ISPs. So let's tip our hat to Comcast for at least trying to do something.

But is it the right approach? Do we just accept the fact that a percentage of consumer devices will be pwned and will exhibit bad behavior? Is it a cost of doing business for the ISPs? Is there some other kind of technical, procedural, or cultural answer? I wish I knew. What do you folks think? Can this health certificate thing work? Am I just stuck in a cycle of cynicism that prevents me from seeing any solution to this problem? Or do we just make sure our families aren't the path of least resistance and forget the rest?


Incite 10/6/2010: The Answer is 42

One of my favorite passages in literature is when Douglas Adams proclaims the Ultimate Answer to the Ultimate Question of Life, The Universe, and Everything to be 42 in Hitchhiker's Guide to the Galaxy. Of course, we don't know the Ultimate Question. Details. This week I plan to discover he was right as I finish my 42nd year on the planet. That seems old. It's a big number. But I don't feel old. In fact, I feel like a big kid. Sometimes I look at my own kids and my house and snicker a bit. Can you believe they've entrusted any responsibility to me? These kids think I actually know something? Ha, that's a laugher…

Since I'm trying not to look forward and plan, I figure I should look backward and try to appreciate the journey. As I look back, I can kind of break things up into a couple different phases. My childhood was marked by anger. Yeah, I know you are shocked. But I took everything bad that happened personally, and as a result, I was a pretty angry kid. College was a blur. I know I drank a lot of beer. I think I studied a bit. When I graduated I entered the unbreakable phase. Right, like the Oracle database. I could do little wrong. I had a pretty quick progression through the corporate ranks. In hindsight it was too quick. I didn't screw anything up, so I felt invincible. I also didn't learn a hell of a lot, but thought I did. Sound familiar? Then I started a software company in 1998 to chase the Internet bubble IPO money. I learned pretty quickly that I wasn't invincible, as I heard the sound of $30 million of someone else's money being flushed down the toilet. Crash. Big time.

Then I entered the striving stage throughout my 30s. Striving for more and never being satisfied. From there I proceeded to jump from job to job every 15 months, chasing some shiny object and trying to catch the brass ring. Again, that didn't work out too well and I found myself getting angry again. Then I started Incite and was a lot happier. I managed to remember what I liked to do and then started to address some of my deeply buried issues. No, I'm not going to bare my soul like Bill Brenner, but we all have demons to face, and at that point I started facing my own. I took a detour back into the vendor world for 15 months, and then sold Rich and Adrian a bill of goods to let me hang my shingle at Securosis. 10 months in, I'm having the time of my life.

I'm thinking this is the contented phase. I've been working hard, at everything. Physically, I'm in the best shape I've been in since my early 20s. Mentally I'm making progress, working to accept what's happening and stop looking forward at the expense of being present. I'm happy with what I do and what I have. My family loves me and I love them. What else does a guy need? I'm still fighting demons, and I probably always will. The hope is that my epic battles will be fewer and farther between over time. I'm still screwing things up, and I'll probably always do that too. That's an entrepreneur's curse. I'm also learning new things almost every day, and when that stops it's time to move on to the Great Unknown.

As I look back, I figured out what my Ultimate Question is: "When do you realize it's a game and you should enjoy the ride, both the ups and the downs?" Right. For me, the answer is 42. – Mike.
Photo credits: "42" originally uploaded by cszar

Recent Securosis Posts

  • Friday Summary: September 30, 2010
  • Monitoring up the Stack: DAM, Part 2; App Monitoring, Part 1; App Monitoring, Part 2
  • Understanding and Selecting a DLP Solution: A Wee Bit on DLP SaaS; "DLP Light" and DLP Features

NSO Quant Posts

  • The End is Near! Comprehensive Index of Posts

Incite 4 U

  • Get on the (security incident) cycle – Good summary here by Lenny Zeltser, covering a presentation from our hero Richard Bejtlich about how he's built the Incident Response team at GE to deal with things like well-funded, patient attackers (note I didn't use the a(blank)t acronym). Of course there will always be failures, but the question is about organizational commitment to detecting adversaries and putting the right capabilities in place to protect your organization. And to look at security as a process and – dare I say it – a lifecycle. That means you need to focus on all aspects – before, during, and after the attack. Amazingly enough, Rich and I are starting another blog series on exactly this topic in about a week. – MR
  • Save the children… with robots – The state of technology education in this country is simply embarrassing. Everyone talks about how kids use a mouse before they can read, but how many of them understand how a computer works? You'd think today's teenagers would know a hard drive from RAM, but not if they rely on their (standard) school to teach them. However, they are pretty good at putting cats in PowerPoints. Our friend Chris Hoff is trying to change this with a hacking conference dedicated to kids… called, appropriately enough, HacKid. It's an amazing idea, with everything from Lego robots to online safety covered, and if you have kids of the right age, or just want to support it, I highly recommend attending or getting involved. – RM
  • No trust for you! – Despite being a big fan of monitoring technologies, I thought the Trust No One, Monitor Everything position was a bit over the top. The "monitor everything" approach fails for exactly the same reasons "encrypt everything" fails: a single technology cannot solve every problem. Monitoring is just another security tool, and before you try to saw wood with a hammer, remember attacks that bypass WAF, IDS, App Monitoring, and DAM are well documented. Don't


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.