Securosis

Research

Reminder: We all live in glass houses

Forrester’s Rick Holland makes a great point in the epic Target Breach: Vendors, You’re Not Wrestlers, And This Isn’t The WWE post. Epic mostly because he figured out how to work the WWE and a picture of The Rock into a security blog post. Rick’s irritation with competitors trying to get a leg up on FireEye based on their presence in Target’s network is right on the money. Vendors who live in glass houses shouldn’t throw stones. As Rick puts it: “It didn’t take long; I’ve already started hearing FireEye competitors speaking out against their competitor’s role in the Target breach. As I mentioned above, this wasn’t a technology failure: FireEye detected the malware. This was a people/process/oversight failure.”

We all live in glass houses and karma is a bitch. But more to the point, if you think I take as fact anything written about a security attack in the mainstream business press, you’re nuts. If Krebs writes something I believe it, because he knows what he’s doing. Not that other reporters lack the technical credibility to get it right – some have it. But without the full and complete picture of an attack, trying to assign blame is silly.

Clearly in Target’s case there were many opportunities to detect the malware and perhaps stop the breach. They didn’t, and they are suffering now. Their glass house is shattered. But this could happen to any organization at any time, and to think otherwise is idiotic. So think twice before thinking that would never happen to you. Never is a long time.

Photo credit: “Going into the Glass House” originally uploaded by Melody Joy Kramer


Defending Against Network Distributed Denial of Service Attacks [New Series]

Back in 2013, volumetric denial of service (DoS) attacks targeting networks were all the rage. Alleged hacktivists effectively used the tactic first against Fortune-class banks, largely knocking down major banking brands for days at a time. But these big companies adapted quickly and got proficient at defending themselves, so attackers bifurcated their attacks. On one hand they went after softer targets like public entities (the UN, et al) and smaller financial institutions. They also used new tactics to take on content delivery networks like CloudFlare with multi-hundred-gigabit-per-second attacks, just because they could.

In our Defending Against Denial of Service Attacks research we described network-based DoS attacks:

Network-based attacks overwhelm the network equipment and/or totally consume network capacity by throwing everything including the kitchen sink at a site – this interferes with legitimate traffic reaching the site. This volumetric type of attack is what most folks consider Denial of Service, and it realistically requires blasting away from many devices, so current attacks are called Distributed Denial of Service (DDoS). If your adversary has enough firepower it is very hard to defend against these attacks, and you will quickly be reminded that though bandwidth may be plentiful, it certainly isn’t free. Application-based attacks are different – they target weaknesses in web application components to consume all the resources of a web, application, or database server to effectively disable it. These attacks can target either vulnerabilities or ‘features’ of an application stack to overwhelm servers and prevent legitimate traffic from accessing web pages or completing transactions.

The motivation for these attacks hasn’t changed much. Attackers tend to be either organized crime factions stealing money via ransom attacks, or hacktivists trying to make a point.
We do see a bit of competitor malfeasance and DDoS used to hide exfiltration activities, but those don’t seem to be primary use cases any more. Regardless of motivation, attackers now have faster networks, bigger botnets, and increasingly effective tactics to magnify the impact of DDoS attacks, forcing most organizations to devote attention to implementing plans to mitigate these attacks. After digging deeper into the application side of denial of service in Defending Against Application Denial of Service Attacks, we now turn our attention to the network side of the house. We are pleased to start this new series, entitled Defending Against Network Distributed Denial of Service Attacks. As with all our public research, we will build the series using our Totally Transparent Research model. Before we get going we would like to thank A10 Networks, as they have agreed to potentially license this research at the end of the project.

It’s Getting Easier

If anything, it is getting easier to launch large-scale network-based DDoS attacks. There are a few main reasons:

Bot availability: It’s not like fewer devices are being compromised. Fairly sophisticated malware kits are available to make it even easier to compromise devices. As a result there seem to be millions of (predominantly consumer) devices compromised daily, adding to the armies which can be brought to bear in DoS attacks.

Faster consumer Internet: With a bandwidth renaissance happening around the world, network speeds into homes and small offices continue to climb. This enables consumer bots to blast targets with growing bandwidth, and this trend will continue as networks get faster.

Cloud servers: It is uncommon to see 50 Mbps sustained coming from a consumer device, but that is quite possible at the server level. Combine this with the fact that cloud servers (and management consoles) are Internet-facing, and attackers can now use compromised cloud servers to blast DDoS targets as well. This kind of activity is harder to detect because servers are expected to pump out far more traffic than consumer devices.

Magnification: Finally, attackers are getting better at magnifying the impact of their attacks, manipulating protocols like DNS and ICMP which can provide order-of-magnitude amplification of traffic hitting the target site. This makes far better use of attacker resources, allowing them to use each bot sporadically and more lightly (in terms of bandwidth) to better hide from detection.

Limitations of Current Defenses

Before we dive into specifics of how these attacks work, we need to remind everyone why existing network and security devices aren’t particularly well suited to DDoS attacks. It’s not due to core throughput – we see service provider network firewalls processing upwards of 500 Gbps of traffic, and they are getting faster rapidly. But the devices aren’t architected to deal with floods of legitimate-looking traffic from thousands of devices. Even with NGFW capabilities providing visibility into web and other application traffic, dealing with millions of active connection requests can exhaust link, session, and application handling capacity on security devices, regardless of their maximum possible throughput. IPS devices are in the same boat, except that their job is harder because they are actively looking for attacks and profiling activity to find malicious patterns. So they are far more compute-intensive, and have an even harder time keeping pace with DDoS bandwidth. In fact many attackers target firewalls and IPS devices with DDoS attacks, knowing the devices typically fail closed, rendering the target network inoperable.

You should certainly look to service providers to help deal with attacks, first by over-provisioning your networks. This is a common tactic for networking folks: throw more bandwidth at the problem. Unfortunately you probably can’t compete with a botmaster leveraging the aggregate bandwidth of all their compromised hosts.
And it gets expensive to provision enough unused bandwidth to deal with a DDoS spike in traffic. You can also look at CDNs (Content Delivery Networks) and/or DoS scrubbing services. Unfortunately CDN offerings may not offer full coverage of your entire network, and CDNs are increasingly DDoS targets themselves. Scrubbing centers can be expensive, and still involve downtime as you shift traffic routes to the scrubbing center. Finally, any scrubbing approach is inherently reactive – you are likely to already be down by the time you learn you have a problem. Further complicating things is the fundamental challenge of simply detecting the onset of a DDoS attack. How can you tell the difference between a temporary spike in traffic and a full-on blitzkrieg on your
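The onset-detection question – distinguishing a temporary spike from a sustained flood – is often approached by maintaining a traffic baseline and flagging large departures from it. Here is a minimal illustrative sketch using an exponentially weighted moving average (EWMA); the smoothing factor and the 3x threshold are arbitrary example values, not production settings:

```python
# Illustrative EWMA-based spike detector for inbound traffic (bytes/sec).
# Alpha and the threshold multiplier are arbitrary example values.
def make_detector(alpha=0.1, threshold=3.0):
    state = {"ewma": None}

    def observe(bps):
        if state["ewma"] is None:
            state["ewma"] = bps      # first sample seeds the baseline
            return False
        spike = bps > threshold * state["ewma"]
        # Only fold non-spike samples into the baseline, so an ongoing
        # flood does not drag the baseline up and mask itself.
        if not spike:
            state["ewma"] = alpha * bps + (1 - alpha) * state["ewma"]
        return spike

    return observe

detect = make_detector()
samples = [100, 110, 95, 105, 2000]   # sudden 20x jump on the last sample
flags = [detect(s) for s in samples]  # [False, False, False, False, True]
```

Real DDoS detection is far messier (diurnal cycles, flash crowds, multi-dimensional baselines), but the baseline-plus-threshold idea underlies most of it.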


New Paper: Reducing Attack Surface with Application Control

Attacks keep happening. Breaches keep happening. Senior management keeps wondering what the security team is doing. The lack of demonstrable progress [in stopping malware] comes down to two intertwined causes. First, devices are built using software that has defects attackers can exploit. Nothing is perfect, especially not software, so every line of code presents an attack surface. Second, employees can be fooled into taking action (such as installing software or clicking a link) that enables attacks to succeed. Application Control technology can have a significant impact on the security posture of protected devices, but has long been much maligned. There was no doubt of its value in stopping attacks, especially those using sophisticated malware. Being able to block the execution of unauthorized executables takes many common attacks out of play. But there is a user experience cost for that protection. In Reducing Attack Surface with Application Control, we look at the double-edged sword of application control, detail a number of use cases where it fits well, and define selection criteria to consider for the technology. Keep in mind that no one control or tactic fits every scenario. Not for every company, nor for every device within a company. If you are looking for a panacea you are in the wrong business. If you are looking for a technology that can lock down devices in appropriate circumstances, check out this paper. Conclusion: Application control can be useful – particularly for stopping advanced attackers and securing unsupported operating systems. There are trade-offs as with any security control, but with proper planning and selection of which use cases to address, application control resists device compromise and protects enterprise data. We would like to thank AppSense for licensing the paper and supporting our research.
We make this point frequently, but without security companies understanding and getting behind our Totally Transparent Research model you wouldn’t be able to enjoy our research. Get the paper via our permanent landing page or download the paper directly (PDF).
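To make the core mechanism concrete: at its heart, application control compares something that identifies an executable (typically a cryptographic hash) against an approved list before allowing it to run. This toy sketch shows only the allowlist decision – real products enforce it in the kernel at process creation time, and the byte strings here are placeholders, not real binaries:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Hash the executable's contents; the hash is the identity."""
    return hashlib.sha256(data).hexdigest()

# The allowlist: hashes of binaries approved to execute.
APPROVED = {sha256_hex(b"known-good-binary-contents")}

def is_execution_allowed(binary_bytes: bytes) -> bool:
    """Default-deny: anything not on the approved list is blocked."""
    return sha256_hex(binary_bytes) in APPROVED

allowed = is_execution_allowed(b"known-good-binary-contents")   # True
blocked = is_execution_allowed(b"dropper-from-phishing-email")  # False
```

The default-deny posture is why the technology stops even novel malware – the dropper does not need a signature to be blocked, it merely needs to be absent from the allowlist. It is also why the user-experience cost exists: legitimate-but-unapproved software is blocked just as firmly.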


Incite 3/12/2014: Digging Out

The ritual is largely the same. I do my morning stuff (usually consisting of some meditation and some exercise), I grab a quick bite, and then I consult my list of things that need to get done. It is long, and seems to be getting longer. The more I work, the more I have to do. It’s a good problem to have, but it’s still a problem. And going to RSA two weeks ago exacerbated it. I had a lot of great conversations with lots of folks who want to license our research, have us speak at their events, and have us advise them on all sorts of things. It’s awesome, but it’s still a problem.   Of course you probably think we should expand and add a bunch of folks to keep up with demand. We have thought about that. And decided against it. It takes a unique skill set to do what we do, the way we do it. The folks who understand research tend to be locked up by big research non-competes. The folks who understand how to develop business tend not to understand research. And the very few who can do both generally aren’t a cultural fit for us. Such is life… But that’s not even the biggest obstacle. It’s that after 4+ years of working together (Rich and Adrian a bit more), we enjoy a drama-free environment. The very few times we had some measure of disagreement or conflict, it was resolved with a quick email or phone call, in a few minutes. Adding people adds drama. And I’m sure none of us wants more drama. So we put our heads down and go to work. We build the pipeline, push the work over the finish line, and try to keep pace. We accept that sometimes we need to decide not to take a project or see how flexible the client is on delivery or scheduling. As with everything, you make choices and live with them. And while it may sound like I’m whining about how great our business is, I’m not. I am grateful to have to make trade-offs. That I have a choice of which projects I work on, for which clients. Not that I can’t find work or deal with slow demand. 
The three of us all realize how fortunate we are to be in this position: lots of demand and very low overhead. That is not a problem. We want to keep it that way. Which is basically my way of saying, where is that shovel again? Time to get back to digging. –Mike

Photo credit: “Digging out auto” originally uploaded by Boston Public Library

Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and, well, hang out. We talk a bit about security as well. We try to keep these to less than 15 minutes and usually fail.

March 11 – RSA Postmortem
Feb 21 – Happy Hour – RSA 2014
Feb 17 – Payment Madness
Feb 10 – Mass Media Abuse
Feb 03 – Inevitable Doom
Jan 27 – Government Influence
Jan 20 – Target and Antivirus
Jan 13 – Crisis Communications

2014 RSA Conference Guide

In case any of you missed it, we published our fifth RSA Conference Guide. Yes, we do mention the conference a bit, but it’s really our ideas about how security will shake out in 2014. You can get the full guide with all the memes you can eat.

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.

Advanced Endpoint and Server Protection
Quick Wins
Detection/Investigation
Prevention
Assessment
Introduction

Newly Published Papers
Leveraging Threat Intelligence in Security Monitoring
The Future of Security
Security Management 2.5: Replacing Your SIEM Yet?
Defending Data on iOS 7
Eliminating Surprises with Security Assurance and Testing
What CISOs Need to Know about Cloud Computing

Incite 4 U

Incentives drive organizational behavior: I am not sure why Gunnar tweeted a link to something he posted back in October, but it gave me an opportunity to revisit a totally awesome post.
In Security Engineering and Incentives he goes through the key aspects of security engineering, and incentives are one of the four cornerstones (along with security policy, security mechanism, and assurance). Probably the most important of the cornerstones, because without proper incentives no one does anything. If you have ever been in sales you know the compensation plan drives behavior. It is that way in every functional part of the business. In the not-so-real world, folks do what they are supposed to simply because they are supposed to. In the real world, those behaviors are driven by incentives, not risk (as GP points out). So when you wonder why the Ops team ignores the security policy and developers couldn’t give less of a crap about your security rules, look at what they are incented to do. Odds are “be secure” isn’t really on that list. – MR

Persona non grata: The Mozilla Wiki does not really capture the essence of what’s going on with Mozilla’s Persona project, but the gist is that their effort to offer third party identity federation has failed. There is some debate about whether technical or financial issues derailed the project and prevented it from reaching “critical mass”, but I think the statement “We looked at Facebook Connect as our main competitor, but we can’t offer the same incentives (access to user data)” pretty much nails it. If you wonder why Yahoo is ditching Facebook and Google federation services in favor of their own offering, understand that identity is the next generation’s “owning the user”, and a key means for data providers (advertising networks) to differentiate their value to advertisers. The goal of federated identity was to offer easier and better identity management across web applications, doing away with user names and passwords. But identity providers have seen the greatest benefit, through enrichment of the data


Advanced Endpoint and Server Protection: Quick Wins

We have covered the main aspects of the threat management cycle, in terms of the endpoint and server contexts, in our last few posts. Now let’s apply these concepts to a scenario to see how it plays out. In this scenario you work for a high-tech company which provides classified technology to a number of governments, and has a lot of valuable intellectual property. You know you are targeted by state-sponsored adversaries for the classified information and intellectual property on your networks. So you have plenty of senior management support and significant resources to invest in dealing with advanced threats. You bought into reimagined threat management, and have deployed a combination of controls on your endpoints and servers. These include advanced heuristics on valuable endpoints, application control on servers with access to key intellectual property stores, and broad deployment of device activity monitoring technology – all because you know it is a matter of when rather than if you will be compromised. You supplement endpoint and server protections with network-based malware detection and full packet capture. So resources are not an issue and you have controls in place to deal with advanced adversaries. Of course that and $4 will get you a coffee, so you need to build these controls into a strong process to ensure you can react faster and better to the attacks you know are coming. But not every organization can make such extensive investments, so you may not have the full complement of controls at your disposal.

The Attack: Take 1

This attack starts as many do, with an adversary sending a phishing email with a malicious MS Office attachment to an employee in the finance department. The employee’s device has an agent that uses advanced heuristics, which identifies the malicious behavior when the file attempts to turn off the traditional AV product and install what looks like a dropper on the device.
The agent runs at the kernel level, so it manages to block the attack, administrators are alerted, and no harm is done… this time. These are the kinds of quick wins you are looking for. But even with proper security awareness training, employees are still very likely to be duped by advanced attackers, so additional layers of defense, beyond the traditional endpoint protection suite, are critical.

The Attack: Take 2

The advanced adversary is not going to give up after their blocked initial foray. This time they target the administrative assistant of the CEO. They pull out a big gun, and use a true 0-day to exploit an unknown flaw in the operating system to compromise the device. They deliver the exploit via another phishing email and get the admin to click on a link to a dedicated server never used for anything else. A drive-by download exploits the OS using the 0-day, and from there they escalate privileges on the admin’s device, steal credentials (including the CEO’s logins), and begin reconnaissance within the organization to find the data they were tasked to steal. As the adversary moves laterally through the organization they compromise additional devices and get closer to their goal: a CAD system with schematics and reports on classified technology. As mentioned above, your organization deployed network-based malware detection to look for callbacks, and since a number of devices have used similar patterns of DNS searches (which seem to be driven by a domain-generating algorithm), alarms go off regarding a possible compromise. While you are undertaking the initial validation and triage of this potential attack, the adversaries have found the CAD system and are attempting to penetrate the server and steal the data. But the server has application controls, and will not run any unauthorized executables. So the attack is blocked and the security team is alerted to a bunch of unauthorized activity on that server.
This is another quick win – attackers found their target but can’t get the data they want directly. Between the endpoint compromise calling back to the botnet, and the attempts on the server, you have definitive proof of an adversary in your midst. At this point the incident response process kicks in.

Respond and Contain

As we described in our incident response fundamentals series, you start the response process after confirming the attack, by escalating the incident based on what’s at risk and the likelihood of data loss. Then you size up the incident by determining the scope of the attack, the attacker’s tactics, and who the attacker is, to get a feel for intent. With that information you can decide what kind of response you need to undertake, and its urgency. Your next step is to contain the attack and make sure you have the potential damage under control. This can take a variety of forms, but normally it involves quarantining the affected device (endpoint or server) and starting the forensics investigation. But in this scenario – working with senior management, general counsel, and external forensic investigators – the decision has been made to leave the compromised devices on the network. You might do this for a couple of reasons:

You don’t want to tip off the adversary that you know they are there. If they know they have been detected they may burrow in deeper, hiding in nooks and crannies and making it much harder to really get rid of them.

Given that an advanced attacker is targeting your environment, you can gather a bunch of intelligence about their tactics and techniques by watching them in action. Obviously you start by making sure the affected devices can’t get to sensitive information, but this gives you an opportunity to get to know the adversary.

A key part of this watching and waiting approach is continuing to collect detailed telemetry from the devices, and starting to capture full network traffic to and from affected devices.
This provides a full picture of exactly what the adversary is doing (if anything) on the devices.

Investigate

The good news is that the investigation team has access to extensive telemetry from device activity monitoring and network packet capture. Analyzing the first compromised


New Paper: Leveraging Threat Intelligence in Security Monitoring

As we continue our research into the practical uses of threat intelligence (TI), we have documented how TI should change existing security monitoring (SM) processes. In our Leveraging Threat Intelligence in Security Monitoring paper, we go into depth on how to update your security monitoring process to integrate malware analysis and threat intelligence. Updating our process maps demonstrates that we don’t consider TI a flash in the pan – it is a key aspect of detecting advanced adversaries as we move forward. Here is our updated process map for TI+SM:

As much as you probably dislike thinking about other organizations being compromised, this provides a learning opportunity. An excerpt from the paper explains in more detail:

There are many different types of threat intelligence feeds and many ways to apply the technology – both to increase the effectiveness of alerting and to implement preemptive workarounds based on likely attacks observed on other networks. That’s why we say threat intelligence enables you to benefit from the misfortune of others. By understanding attack patterns and other nuggets of information gleaned from attacks on other organizations, you can be better prepared when they come for you. And they will be coming for you – let’s be clear about that.

So check out the paper and figure out how your processes need to evolve, both to keep pace with your adversaries, and to take advantage of all the resources now available to keep your defenses current. We would like to thank Norse Corporation for licensing this paper. Without support from our clients, you wouldn’t be able to use our research without paying for it. You can check out the permanent landing page for the paper, or download it directly (PDF).
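Stripped to its essentials, the TI + security monitoring integration amounts to matching your own telemetry against externally sourced indicators. A minimal sketch, using a hypothetical indicator feed of known-bad IPs (RFC 5737 documentation addresses) and a toy firewall log – real feeds and log schemas vary widely:

```python
# Indicators learned from attacks on other organizations
# ("benefiting from the misfortune of others").
indicator_feed = {"203.0.113.7", "198.51.100.23"}

# Simplified outbound-connection log entries; field names are made up.
firewall_log = [
    {"src": "10.1.1.5", "dst": "93.184.216.34"},
    {"src": "10.1.1.9", "dst": "203.0.113.7"},   # matches the feed
]

# Any outbound connection to a known-bad destination is an alert candidate.
hits = [entry for entry in firewall_log if entry["dst"] in indicator_feed]
```

In practice the matching runs continuously inside a SIEM or monitoring platform, covers domains, URLs, and file hashes as well as IPs, and the real work is curating the feed so stale or low-quality indicators don’t bury you in false positives.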


Advanced Endpoint and Server Protection: Detection/Investigation

Our last AESP post covered a number of approaches to preventing attacks on endpoints and servers. Of course prevention remains the shiny object most practitioners hope to achieve. If they can stop the attack before the device is compromised, there need be no clean-up. We continue to remind everyone that hope is not a strategy, and counting on blocking every attack before it reaches your devices always ends badly. As we detailed in the introduction, you need to plan for compromise because it will happen. Adversaries have gotten much better, attack surface has increased dramatically, and you aren’t going to prevent every attack. So pwnage will happen, and what you do next is critical, both to protecting the critical information in your environment and to your success as a security professional. So let’s reiterate one of our core security philosophies: Once the device is compromised, you need to shorten the window between compromise and when you know the device has been owned. Simple to say but very hard to do. The way to get there is to change your focus from prevention to a more inclusive process, including detection and investigation…

Detection

Our introduction described detection:

You cannot prevent every attack, so you need a way to detect attacks after they get through your defenses. There are a number of different options for detection – most based on watching for patterns that indicate a compromised device. The key is to shorten the time between when the device is compromised and when you discover it has been compromised.

To be fair, there is a gray area between detection and prevention, at least from an endpoint and server standpoint. With the exception of application control, the prevention techniques described in the last post depend on actually detecting the bad activity first. If you are looking at controls using advanced heuristics, you detect the malicious behavior first – then you block it.
In an isolation context you run executables in the walled garden, but you don’t really do anything until you detect bad activity – then you kill the virtual machine or process under attack. But there is more to detection than just figuring out what to block. Detection in the broader sense needs to include finding attacks you missed during execution because:

You didn’t know it was malware at the time, which happens frequently – especially given how quickly attackers innovate. Advanced attackers have stockpiles of unknown exploits (0-days) they use as needed. So your prevention technology could be working as designed, but still not recognize the attack. There is no shame in that.

Alternatively, the prevention technology may have missed the attack. This is common as well, because advanced adversaries specialize in evading known preventative controls.

So how can you detect after compromise? Monitor other data sources for indicators that a device has been compromised. This series is focused on protecting endpoints and servers, but looking only at devices is insufficient. You also need to monitor the network for a full perspective on what’s really happening, using a couple of techniques:

Network-based malware detection: One of the most reliable ways to identify compromised devices is to watch for communications with known botnets. You can look for specific traffic patterns, or for communications to known botnet IP addresses. We covered these concepts in both the NBMD 2.0 and TI+SM series.

Egress/Content Filtering: You can also look for content that should not be leaving the confines of your network. This may involve a broad DLP deployment – or possibly looking for sensitive content on your web filters, email security gateways, and next generation firewalls.

Keep in mind that every endpoint and server device has a network stack of some sort, so a subset of this monitoring can be performed within the device, by looking at traffic that enters and leaves the stack.
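One example of the “specific traffic patterns” mentioned above is the stream of lookups for algorithmically generated domains, since many botnets locate their command-and-control servers via a domain-generating algorithm (DGA). A crude illustrative heuristic – emphatically not a production detector – flags domain labels with unusually high character entropy; the 3.5-bit threshold is an arbitrary example value:

```python
import math
from collections import Counter

def label_entropy(domain: str) -> float:
    """Shannon entropy (bits/char) of the leftmost DNS label."""
    label = domain.split(".")[0]
    counts = Counter(label)
    n = len(label)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_generated(domain: str, threshold: float = 3.5) -> bool:
    """Random-looking labels have higher entropy than dictionary words."""
    return label_entropy(domain) > threshold

looks_generated("x3k9q7zt2mvbw1r8ydfh.com")  # True  (entropy ~4.32 bits)
looks_generated("securosis.com")             # False (entropy ~2.64 bits)
```

Real detectors combine many signals (NXDOMAIN rates, lookup volume, n-gram models, known DGA families) because entropy alone misfires on CDN hostnames and other legitimately random-looking labels.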
As mentioned above, threat intelligence (TI) is making detection much more effective, facilitated by information sharing between vendors and organizations. With TI you can become aware of new attacks, emerging botnets, websites serving malware, and a variety of other things you haven’t seen yet and therefore don’t know are bad. Basically you leverage TI to look for attacks even after they enter your network and possibly compromise your devices. We call this retrospective searching. This works by either a) using file trajectory – tracking all file activity on all devices, looking for malware files/droppers as they appear and move through your network; or b) looking for attack indicators on devices with detailed activity searching on endpoints – assuming you collect sufficient endpoint data. Even though it may seem like it, you aren’t really getting ahead of the threat. Instead you are looking for likely attacks – attackers reuse tactics and malware against different organizations, so you have a good chance of seeing, before long, malware which has already hit others. Once you identify a suspicious device you need to verify whether the device is really compromised. This verification involves scrutinizing what the endpoint has done recently for indicators of compromise or other activity that would confirm a successful attack. We’ll describe how to capture that information later in this post.

Investigation

Once you validate that the endpoint has been compromised, you go into incident response/containment mode. We described the investigation process in the introduction as:

Once you detect an attack you need to verify the compromise and understand what it actually did. This typically involves a formal investigation, including a structured process to gather forensic data from devices, triage to determine the root cause of the attack, and searching to determine how broadly the attack has spread within your environment.
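The retrospective searching described above reduces to a simple operation that also feeds verification: when new intelligence marks a file hash as malicious, re-check the files you have already observed across your devices. Paths and file contents below are placeholders for illustration:

```python
import hashlib

# Files previously observed on endpoints (path -> captured contents).
# In a real deployment you would store hashes at observation time,
# not raw file bytes.
observed_files = {
    "/home/alice/invoice.docm": b"macro-dropper-bytes",
    "/home/bob/report.pdf": b"benign-bytes",
}

# A hash newly flagged as malicious by a TI feed.
new_bad_hashes = {hashlib.sha256(b"macro-dropper-bytes").hexdigest()}

def retrospective_hits(files, bad_hashes):
    """Return paths of previously observed files now known to be bad."""
    return [path for path, data in files.items()
            if hashlib.sha256(data).hexdigest() in bad_hashes]

retrospective_hits(observed_files, new_bad_hashes)
# -> ["/home/alice/invoice.docm"]
```

The point of recording file activity continuously is exactly this: you can answer “who has ever seen this file?” the moment an indicator arrives, rather than only for traffic you inspect from then on.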
As we described in React Faster and Better, there are a number of steps in a formal investigation. We won’t rehash them here, but to investigate a compromised endpoint and/or server you need to capture a bunch of forensic information from the device, including:

Memory contents
Process lists
Disk images (to capture the state of the file system)
Registry values
Executables (to support malware analysis and reverse engineering)
Network activity logs

As part of the investigation you also need to understand the attack timeline. This enables you


Incite 3/5/2014: Reentry

After I got off the plane Friday night, picked my bag up off the carousel, took the train up to the northern Atlanta suburbs, got picked up by the Boss, said hello to the kids, and then finally took a breath – my first thought was that RSA isn’t real. But it is quite real, just not sustainable. That makes reentry into my day to day existence a challenge for a few days.

It’s not that I was upset to be home. It’s not that I didn’t want to see my family and learn about what they have been up to. My 5 minute calls two or three times a day, while running between meetings, didn’t give me much information. So I wanted to hear all about things. But first I needed some quiet. I needed to decompress – if I rose to the surface too quickly I would have gotten the bends.

For me the RSA Conference is a nonstop whirlwind of activity. From breakfast to the wee hours closing down the bar at the W or the Thirsty Bear, I am going at all times. I’m socializing. I’m doing business. I’m connecting with old friends and making new ones. What I’m not doing is thinking. Or recharging. Or anything besides looking at my calendar to figure out the next place I need to be. For an introvert, it’s hard. The RSA Conference is not the place to be introverted – not if you work for yourself and need to keep it that way. I mean where else is it normal that dinner is a protein bar and a shot of 5-hour Energy, topped off with countless pints of Guinness? Last week that was not the exception, it was the norm. I was thankful we were able to afford a much better spread at the Security Blogger’s Meetup (due to the generosity of our sponsors), so I had a decent meal at least one night.

As I mentioned last week, I am not about to complain about the craziness, and I’m thankful the Boss understands my need to wind down on reentry. I make it a point to not travel the week after RSA, to recharge, get my quiet time, and reconnect with the family. The conference was great.
Security is booming and I am not about to take that for granted. There are many new companies, a ton of investment coming into the sector, really cool innovative stuff hitting the market, and a general awareness that the status quo is no good. Folks are confused, and that’s good for our business. The leading edge of practitioners is rethinking security, and they have been very receptive to the research we have been doing to flesh out what that means in a clear, pragmatic fashion. This is a great time to be in security. I don’t know how long it will last, but the macro trends seem to be moving in our direction.

So I’ll file another RSA Conference into the memory banks and be grateful for the close friends I got to see, the fantastic clients who want to keep working with us, and the new companies I look forward to working with over the next year (even if you don’t know you’ll be working with us yet). Even better, next year’s RSA Conference has been moved back to April 2015. So that gives me another two months for my liver to recover and my brain cells to regenerate.

–Mike

PS: This year we once again owe huge thanks to MSLGROUP and Kulesa Faul, who made our annual Disaster Recovery Breakfast possible. We had over 300 people there and it was really great. Until we got the bill, that is…

Photo credit: “Reentry” originally uploaded by Evan Leeson

Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and, well, hang out. We talk a bit about security as well. We try to keep these less than 15 minutes, and usually fail.

  • Feb 21 – Happy Hour – RSA 2014
  • Feb 17 – Payment Madness
  • Feb 10 – Mass Media Abuse
  • Feb 03 – Inevitable Doom
  • Jan 27 – Government Influence
  • Jan 20 – Target and Antivirus
  • Jan 13 – Crisis Communications

2014 RSA Conference Guide

In case any of you missed it, we published our fifth RSA Conference Guide. Yes, we do mention the conference a bit, but it’s really our ideas about how security will shake out in 2014. You can get the full guide with all the memes you can eat.

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.

  • Leveraging Threat Intelligence In Security Monitoring
    • Quick Wins with TISM
    • The Threat Intelligence + Security Monitoring Process
    • Revisiting Security Monitoring
    • Benefiting from the Misfortune of Others
  • Advanced Endpoint and Server Protection
    • Prevention
    • Assessment
    • Introduction

Newly Published Papers

  • The Future of Security
  • Security Management 2.5: Replacing Your SIEM Yet?
  • Defending Data on iOS 7
  • Eliminating Surprises with Security Assurance and Testing
  • What CISOs Need to Know about Cloud Computing
  • Defending Against Application Denial of Service
  • Security Awareness Training Evolution
  • Firewall Management Essentials

Incite 4 U

TI is whatever you want it to mean: Interesting experiment from FireEye/Mandiant’s David Bianco, who went around the RSA show floor and asked what threat intelligence (TI) meant to vendors who used the term prominently in their booths. Most folks just use the buzzword, and mean some of the less sophisticated data sources. I definitely understand David’s perspective, but he is applying the wrong filter. It’s kind of like having a Ph.D. candidate go into a third grade classroom and wonder why the students don’t understand differential equations. Security is a big problem, and the kinds of things David is comfortable with at the top of his Pyramid of Pain would be lost on 98% of the world. If even 40% of the broad market would use


Research Revisited: POPE analysis on the new Securosis

Since we’re getting all nostalgic and stuff, I figured I’d dust off the rationale I posted the day we announced that I was joining Securosis. That was over 4 years ago, and it has been a great ride. Rich and Adrian haven’t seen fit to fire me for cause yet, and I think we’ve done some great work. Of course the plans you make typically aren’t worth the paper they’re written on. We have struggled to launch that mid-market research offering. And we certainly couldn’t have expected the growth we have seen with our published research (using our unique Totally Transparent Research process) or our retainer business. So on balance it’s still all good.

By the way, I still don’t care about an exit, as I mentioned in the piece below. I am having way too much fun doing what I love. How many folks really get to say that? So we will continue to take it one day at a time, and one research project at a time. We will try new stuff because we’re tinkerers. We will get some things wrong, and then we’ll adapt. And we’ll drink some beer. Mostly because we can…

The Pope Visits Security Incite + Securosis

(Originally published on the Security Incite blog – Jan 4, 2010.)

When I joined eIQ, I did a “POPE” analysis on the opportunity, to provide a detailed perspective on why I made the move. The structure of that analysis was pretty well received, so as I make another huge move, I may as well dust off the POPE and use that metaphor to explain why I’m merging Security Incite with Securosis.

People

Analyzing every “job” starts with the people. I liked the freedom of working solo, but ultimately I knew that model was inherently limiting. So thinking about the kind of folks I’d want to work with, a couple of attributes bubbled to the top. First, they need to be smart. Smart enough to know when I’m full of crap. They also need to be credible. Meaning I respect their positions and their ability to defend them, so when they tell me I’m full of crap – I’m likely to believe them. 
Any productive research environment must be built on mutual respect. Most importantly, they need to stay on an even keel. Being a pretty excitable type (really!), when I’m around other excitable types the worst part of my personality surfaces. Yet when I’m around guys who go with the flow, I’m able to regulate my emotions more effectively. As I’ve been working long and hard on personal development, I didn’t want to set myself back by working with the wrong folks.

For those of you who know Rich and Adrian, you know they are smart and credible. They build things and they break them. They’ve both forgotten more about security than most folks have ever known. Both have been around the block, screwed up a bunch of stuff, and lived to tell the story. And best of all, they are great guys. Guys you can sit around and drink beer with. Guys you look forward to rolling up your sleeves with and getting some stuff done. Exactly the kind of guys I wanted to work with.

Opportunity

Securosis will be rolling out a set of information products targeted at accelerating the success of mid-market security and IT professionals. Let’s just say the security guy/gal in a mid-market company may have the worst job in IT. They have many of the same problems as larger enterprises, but no resources or budget. Yeah, this presents a huge opportunity.

We also plan to give a lot back to the community. Securosis publishes all its primary research for free on the blog. We’ll continue to do that. So we have an opportunity to make a difference in the industry as well. To be clear, the objective isn’t to displace Gartner or Forrester. We aren’t going to build a huge sales force. We will focus on adding value and helping make our clients better at their jobs. If we can do that, everything else works itself out.

Product

To date, no one has really successfully introduced a syndicated research product targeted to the mid-market, certainly not in security. 
That fact would scare some folks, but for me it’s a huge challenge. I know hundreds of thousands of companies struggle on a daily basis and need our help. So I’m excited to start figuring out how to get the products to them. In terms of research capabilities, all you have to do is check out the Securosis Research Library to see the unbelievable productivity of Rich and Adrian. The library holds a tremendous amount of content, and it’s top notch. As with every business trying something new, we’ll run into our share of difficulties – but generating useful content won’t be one of them.

Exit

Honestly, I don’t care about an exit. I’ve proven I can provide a very nice lifestyle for my family as an independent. That’s liberating, especially in this kind of economic environment. That doesn’t mean I question the size of the opportunity. Clearly we have a great opportunity to knock the cover off the ball and build a substantial company. But I’m not worried about that. I want to have fun, work with great guys, and help our clients do their jobs better. If we do this correctly, there is no lack of research and media companies that will come knocking.

Final thoughts

On the first working day of a new decade, I’m putting the experiences (and road rash) gained over the last 10 years to use. Whether starting a business, screwing up all sorts of things, embracing my skills as an analyst, or understanding the importance of balance in my life, this is the next logical step for me. Looking back, the past 10 years have been very humbling. It started with me losing a fortune during the Internet bubble, selling the company I founded for the cash on our balance sheet because


Research Revisited: RSA/NetWitness Deal Analysis

As we continue our journey down memory lane, I want to take a look at what I said about the RSA/NetWitness deal back in April 2011, when it was announced. In hindsight the NetWitness technology has become the underlying foundation of RSA’s security management and security analytics offerings, so I underplayed that a bit. EnVision is pretty much dead. And we haven’t really seen a compelling alternative on the full packet capture and analytics front, although a bunch of bigger SIEM players started introducing that technology this year. As with most everything, some prognostications were good and some not so good. And if I had a crystal ball that worked, I would have invested in WhatsApp rather than trying to figure out the future of security. Fool us once…

EMC/RSA Buys NetWitness

(Published on the Securosis blog April 4, 2011)

To no one’s surprise (after NetworkWorld spilled the beans two weeks ago), RSA/EMC formalized its acquisition of NetWitness. I guess they don’t want to get fooled again the next time an APT comes to visit. Kidding aside, we have long been big fans of full packet capture, and believe it’s a critical technology moving forward. On that basis alone, this deal looks good for RSA/EMC.

Deal Rationale

APT, of course. Isn’t that the rationale for everything nowadays? Yes, that’s a bit tongue in cheek (okay, a lot), but for a long time we have been saying that you can’t stop a determined attacker, so you need to focus on reacting faster and better. The reality remains that the faster you figure out what happened and remediate (as much as you can), the more effectively you contain the damage. NetWitness gear helps organizations do that. We should also tip our collective hats to Amit Yoran and the rest of the NetWitness team for a big economic win, though we don’t know for sure how big a win. 
NetWitness was early into this market and did pretty much all the heavy lifting to establish the need, stand up an enterprise-class solution, and show the value within a real attack context. They also showed that having a llama at a conference party can work for lead generation. We can’t minimize the effect that will have on trade shows moving forward.

So how does this help EMC/RSA? First of all, full packet capture solves a serious problem for obvious targets of determined attackers. Regardless of whether the attack was a targeted phish/Adobe 0-day or a Stuxnet type, you need to be able to figure out what happened, and having the actual network traffic helps the forensics guys put the pieces together. Large enterprises and governments have figured this out, and we expect them to buy more of this gear this year than last. Probably a lot more. So EMC/RSA is buying into a rapidly growing market early.

But that’s not all. There is a decent amount of synergy with the rest of RSA’s security management offerings. Though you may hear some SIEM vendors pounding their chests as a result of this deal, NetWitness is not SIEM. Full packet capture may do some of the same things (including alerting on possible attacks), but its analysis is based on what’s in the network traffic – not logs and events. More to the point, the technologies are complementary – most customers pump NetWitness alerts into a SIEM for deeper correlation with other data sources. Additionally, some of NetWitness’ new visualization and malware analysis capabilities supplement the analysis you can do with SIEM. Not coincidentally, this is how RSA positioned the deal in the release, with NetWitness and EnVision data being sent over to Archer for GRC (whatever that means).

Speaking of EnVision, this deal may take some of the pressure off that debacle. Customers now have a new shiny object to look at, while maybe focusing a little less on moving off the RSA log aggregation platform. 
It’s no secret that RSA is working on the next generation of the technology, and being able to offer NetWitness to unhappy EnVision customers may stop the bleeding until the next version ships. A side benefit is that the sheer amount of network traffic to store will drive some back-end storage sales as well. For now NetWitness is a stand-alone platform, but it wouldn’t be too much of a stretch to see some storage/archival integration with EMC products. EMC wouldn’t buy technology like NetWitness just to drive more storage demand, but it won’t hurt.

Too Little, Too Late (to Stop the Breach)

Lots of folks drew the wrong conclusion: that RSA bought NetWitness because of their recent breach. But these deals don’t happen overnight, so this acquisition has been in the works for quite a while. And what could better justify buying a technology than helping to detect a major breach? I’m sure EMC is pretty happy to control that technology. The trolls and haters focus on the fact that the breach still happened, so the technology couldn’t work that well, right? Actually, the biggest issue is that EMC didn’t have enough NetWitness throughout their environment. They might have caught the breach earlier if they had the technology more widely deployed. Then again, maybe not – you never know how effective any control will be at any given time against any particular attack – but EMC/RSA can definitely make the case that they could have reacted faster if they had NetWitness everywhere. And now they likely will.

Competitive Impact

The full packet capture market is still very young. There are only a handful of direct competitors to NetWitness, all of whom should see their valuations skyrocket as a result of this deal. Folks like Solera Networks are likely grinning from ear to ear today. We also expect a number of folks in adjacent businesses (such as SIEM) to start dipping their toes into this water. 
Speaking of SIEM, NetWitness did have partnerships with the major SIEM providers to send them data, and this deal is unlikely to change much in the short term. But we


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.