Securosis Research

Advanced Persistent Threat (APT) Defeated by Marketure

Washington, D.C. – Officials today revealed that the “Advanced Persistent Threat” (APT) has been completely defeated by vendor marketure, analyst/pundit tweets, and PowerPoint presentations. “APT is dead. Totally gone. The term APT is meaningless now,” revealed a senior official under the condition of anonymity, as he was not authorized to discuss the issue with the press – as if anyone believes that anymore. “Advanced Persistent Threat” was a term coined by members of the military, intelligence, and defense industries to define a series of ongoing attacks originating from state and non-state actors primarily located in China, first against military targets, and later against manufacturing and other industries of interest. It referred to specific threat actors, rather than a general type of advanced attack. Revealed through major breaches at Google and reports from Lockheed-Martin, APT quickly entered the Official Industry Spin Machine and was misused to irrelevance. Bill Martin, President, CEO, and CMO of Big Security, stated, “Our security products have always protected against advanced threats, and all threats are persistent, which is why we continue to push LOVELETTER virus definitions to our clients’ desktops. By including APT in our marketing materials and webcasts we are now able to educate our clients on why they should give us more money for the same products we’ve been selling them for years. In 2011 we will continue to enhance our customers’ experiences by adding an APT Gauge to all our product dashboards for a minimal price increase.” Self-proclaimed independent security pundit Rob Robson stated, “The APT isn’t dead until I say it is. I will continue to use APT in my presentations and press quotes until I stop getting invited to RSA parties.”
When asked in an unrelated press conference whether this means China is no longer hacking foreign governments and enterprises, Cybergeneral Johnson replied, “We have seen no decrease in activity.” Johnson continued, “If anything, we’ve seen even more successful breaches due to agencies and companies believing the latest security product they purchased will stop the APT. We are still in the middle of a long-term international conflict with a complex political dynamic that could materially affect our military and economic capabilities in the future. I don’t think a new firewall will help”. For more on this topic, please see The Security Industry Anti-Disambiguation Movement.


The 2011 Securosis Disaster Recovery Breakfast

The RSA Conference is just around the corner, and you know what that means. Pain. Pain in your head, and likely a sick feeling in your stomach. All induced by an inability to restrain your consumption when surrounded by oodles of fellow security geeks and free drinks. You know what? We’re here for you. Once again, with the help of our friends at ThreatPost and Schwartz Communications, we will be holding our Disaster Recovery Breakfast to cure what ales ya (or ails you, but I think my version is more accurate). This year the breakfast will be Thursday morning from 8-11 at Jillian’s in the Metreon. It’s an open door – come and leave as you want. We’ll have food, beverages, and assorted recovery items to ease your day (non-prescription only). No marketing, no spin, just a quiet place to relax and have muddled conversations. It sure beats trying to scream at the person next to you at some corporate party with pounding music and, for the most part, a bunch of other dudes. Invite is below. To help us estimate numbers please RSVP to rsvp@securosis.com.


What Do You Want to See in the First Cloud Security Alliance Training Course?

It leaked a bit over Twitter, but we are pretty excited that we hooked up with the Cloud Security Alliance to develop their first training courses. Better yet, we’re allowed to talk about it and solicit your input. We are currently building two courses for the CSA to support their Cloud Computing Security Knowledge (CCSK) certification (both of which will be licensed out to training organizations). The first is a one-day CCSK Enhanced class which we will be delivering the Sunday before RSA. This includes the basics of cloud computing security, aligned with the CSA Guidance and ENISA Risk documents, plus some hands-on practice and material beyond the basics. The second class is the CCSK Review, a 3-hour course optimized for online delivery and to prep you for the CCSK exam. We don’t want to merely teach to the book, so we are structuring the course to cover all the material in a way that makes more sense for training. Here is our current module outline, with the person responsible and their Twitter handle in case you want to send them ideas:

Introduction and Cloud Architectures (Domain 1; Mike Rothman; @securityincite)
Creating and securing a public cloud instance (Domains 7 & 8; David Mortman; @mortman)
Securing public cloud data (Domains 5 & 11; Adrian Lane; @adrianlane)
Securing cloud users and applications (Domains 10 & 12; Gunnar Peterson; @oneraindrop)
Managing cloud computing security and risk (Domains 6 & 9 and parts of 2, 3, & 4; James Arlen; @myrcurial)
Creating and securing a private cloud (Domain 13; Dave Lewis; @gattaca)

The entire class is being built around a fictional case study to provide context and structure, especially for the hands-on portions. We are looking at:

Set up instances on AWS and/or RackSpace with a basic CMS stack (probably on EC2 free, with Joomla).
Set basic instance security.
Encrypt cloud data (possibly the free demo of the Trend EBS encryption service).
Something with federation/OAuth.
Risk/threat modeling exercise.
Set up a private cloud (vCloud or Eucalyptus).

Keep in mind this is a one-day class, so these will be very scripted and quick – there’s only so much we can cover. I will start pushing out some of the module outlines in our Complete feed (our Highlights RSS feed still has everything due to a platform bug – you only need to know that if you visit the site). We can’t put everything out there since this is a commercial class, but here’s your chance to influence the training. Also remember that we are deep into the project already, with a very tight deadline to deliver the pilot class at RSA. Thanks!
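For the “set basic instance security” step, here is a minimal sketch of the kind of check a student might walk through: flagging firewall (security group) rules that expose administrative ports to the entire internet. The rule format and the port list are invented for illustration – a real exercise would inspect the provider’s actual security group configuration.

```python
# Hypothetical exercise: given a list of firewall (security group) rules,
# flag anything that opens an admin port to the whole internet.

ADMIN_PORTS = {22, 3389, 3306}  # SSH, RDP, MySQL

def risky_rules(rules):
    """Return the rules that expose an admin port to 0.0.0.0/0."""
    flagged = []
    for rule in rules:
        if rule["cidr"] == "0.0.0.0/0" and rule["port"] in ADMIN_PORTS:
            flagged.append(rule)
    return flagged

rules = [
    {"port": 80,   "cidr": "0.0.0.0/0"},   # public web traffic: fine
    {"port": 22,   "cidr": "0.0.0.0/0"},   # SSH open to the world: flag it
    {"port": 3306, "cidr": "10.0.0.0/8"},  # database restricted to internal: fine
]

print(risky_rules(rules))  # → [{'port': 22, 'cidr': '0.0.0.0/0'}]
```

The point of the exercise is the habit, not the script: default-deny, then open only what the application needs, to only the networks that need it.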


Friday Summary: January 7, 2011

Compliance and security have hit the big time, and I have the proof. Okay: all of us who live, eat, and breathe security already know that compliance is a big deal and a pain in the ass – but it isn’t as if “normal” people ever pay attention, right? Other than CEOs and folks who have to pay for our audits, right? And according to the meme that’s been circulating since I started in the business, no one actually cares about security until they’ve been hit, right? Well, today I was sitting at my favorite local coffee shop when the owner came over to make fun of me for having my Mac and iPad out at the same time. We got to talking about their wireless setup (secure, but he doesn’t like the service) and he mentioned he was thinking of dropping the service and running it off his own router. I gave him some security tips, and he informed me that in no way, shape, or form would he connect his open WiFi to the same connection his payment system is on. Because he has to stay PCI compliant. Heck, he even knew what PCI PA-DSS was and talked about buying a secure, compliant point of sale system! He’s not some closet security geek – just a dude running a successful small business (now in two locations). He’s a friggin’ Level 4 merchant, and still knows about PCI and compliant apps. I feel like kissing the sales guy who must have explained it all to him. And security? He never uses anything except his up-to-date Windows 7 computer to access his bank account. Now can we all shut up about not making a difference? Do you really think I could have had that conversation even a few years ago? One last note: RSA is fast approaching. We (well, @geekgrrl) are working hard on the Securosis Guide to RSA 2011, the Recovery Breakfast announcement will go out soon, we’re cramming to finish the CSA training class, and we’ve locked in an awesome lineup for the RSA e10+ program we are running this year. And then there’s our sekret squirrel project. 
In other words, please forgive us if we are slow responding to email, phone calls, or beatings over the head. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
Mort quoted in Incident response plans badly lacking, experts say.
Kevin Riggins gives us a shout-out and review.

Favorite Securosis Posts
Mike Rothman: Mr. Cranky Faces Reality. Any time Adrian is cranky, you need to highlight that. I guess he is human after all.
Adrian Lane: The Evolving Role of Vulnerability Assessment and Penetration Testing in Web Application Security.
David Mortman: Web Application Firewalls Really Work.
Rich: BSIMM meets Joe the Programmer.

Other Securosis Posts
React Faster and Better: Initial Incident Data.
Mobile Device Security: Saying no without saying no.
Incite 1/5/2011: It’s a Smaller World, after All.
HP(en!s) Envy: Dell Buys SecureWorks.
Motivational Skills for Security Wonks: 2011 Edition.
Mobile Device Security: I can haz your mobile.
Coming Soon….
React Faster and Better: Chugging along.
React Faster and Better: Alerts & Triggers.

Favorite Outside Posts
Mike Rothman: Quora Essentials for Information Security Professionals. Lenny Z talks about how to use the new new social networking thingy: Quora. I’m a luddite, so maybe I’ll be there in a year or two, but it sounds cool.
Adrian Lane: thicknet: starting wars and funny hats. A couple weeks old, but a practical discussion of MitM attacks on Oracle. And Net8 is difficult to decipher.
Rich: Slashdot post on how China acquires IP. I suggest the full article linked by Slashdot, but it’s a translation and even the short bits in the post are very revealing.

Project Quant Posts
NSO Quant: Index of Posts.

Research Reports and Presentations
The Securosis 2010 Data Security Survey.
Monitoring up the Stack: Adding Value to SIEM.
Network Security Operations Quant Metrics Model.
Network Security Operations Quant Report.
Understanding and Selecting a DLP Solution.
White Paper: Understanding and Selecting an Enterprise Firewall.
Understanding and Selecting a Tokenization Solution.

Top News and Posts
Researcher breaks Adobe Flash sandbox security feature. He did not actually break anything, but figured out how to bypass the restriction.
Windows 0day in the wild.
SourceFire buys Immunet.
More perspective on Gawker Hack.
Chinese hackers dig into new IE bug, says Google researcher.
Breaking GSM With a $15 Phone … Plus Smarts.
The Dubai Job: Awesome article in GQ on the assassination.
Security risks of PDF.

Blog Comment of the Week
Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to mokum von Amsterdam, in response to NSA Assumes Security Is Compromised.

One can not keep information secret that is accessible by >10 people over years, period. Mind you, ‘systems’ and ‘networks’ are not limited to the typical IT stuff one might think of but includes the people and processes. Trying to secure it is doomed to fail, so what one needs is to adjust the mindset to reality. Sorry, no spend-more-dollars solution from me…


The Evolving Role of Vulnerability Assessment and Penetration Testing in Web Application Security

Yesterday I got involved in an interesting Twitter discussion with Jeremiah Grossman, Chris Eng, Chris Wysopal, and Shrdlu that was inspired by Shrdlu’s post on application security over at Layer8. I sort of suck at 140-character responses, so I figured a blog post was in order. The essence of our discussion was that in organizations with a mature SDLC (security development lifecycle), you shouldn’t need to prove that a vulnerability is exploitable. Once detected, it should be slotted for repair and prioritized based on available information. While I think very few organizations are this mature, I can’t argue with that position (taken by Wysopal). In a mature program you will know what parts of your application the code affects, what potential data is exposed, and even the possible exploitability. You know the data flow, ingress/egress paths, code dependencies, and all the other little things that add up to exploitability. These flaws are more likely to be discovered during code assessment than a vulnerability scan. And biggest of all, you don’t need to prove every vulnerability to management and developers. But I don’t think this, in any way, obviates the value of penetration testing to determine exploitability. First we need to recognize that – especially with web applications – the line between a vulnerability assessment and a penetration test is an artificial construct created to assuage the fears of the market in the early days of VA. Assessment and penetration testing are on a continuum, and the boundary is a squishy matter of depth, rather than a hard line with clear demarcation. Effectively, every vulnerability scan is the early stage of a (potential) penetration test. And while this difference may be more distinct for a platform, where you check something like patch level, it’s even more vague for a web application, where the mere act of scanning custom code often involves some level of exploitation techniques.
I’m no pen tester, but this is one area where I’ve spent reasonable time getting my hands dirty – using various free and commercial tools against both test and (my own) production systems. I’ve even screwed up the Securosis site by misconfiguring my tool and accidentally changing site functionality during what should have been a “safe” scan. I see what we call a vulnerability scan as merely the first, incomplete step of a longer and more involved process. In some cases the scan provides enough information to make an appropriate risk decision, while in others we need to go deeper to determine the full impact of the issue. But here’s the clincher – the more information you have on your environment, the less depth you need to make this decision. The greater your ability to analyze the available variables to determine risk exposure, the less you need to actually test exploitability. This all presumes some sort of ideal state, which is why I don’t ever see the value of penetration testing declining significantly. I think even in a mature organization we will only ever have sufficient information to make exploitation testing unnecessary for a small number of our applications. It isn’t merely a matter of cost or tools, but an effect of normal human behavior and attention spans. Additionally, we cannot analyze all the third party code in our environment to the same degree as our own code. As we described a bit in our Building a Web Application Security Program paper, these are all interlocking pieces of the puzzle. I don’t see any of these as in competition in the long term – once we have the maturity and resources to acquire and use these techniques and tools together. Code analysis and penetration testing are complementary techniques that provide different data to secure our applications. Sometimes we need one or the other, and often we need both.


Web Application Firewalls Really Work

A couple months ago I decided to finally dig in and see whether WAFs (Web Application Firewalls) are really useful, or merely another crappy shiny object we spend a lot of money on to get the auditors off our backs. Sure, the WAF vendors keep telling me how well their products work and how many big clients they have, but that’s not the best way to figure out whether something really does the job. I also talk with a bunch of end users who provide darn good info, but even that isn’t always the best way to determine the security value of a tool. Not all users have good visibility and internal controls to measure the effectiveness of the tool, and many can’t deploy it in an optimal manner due to all sorts of political and technical issues. In this case I started with users, then checked with a bunch of my penetration testing friends. While a pen tester doesn’t necessarily understand the overall value of a tool (since they don’t have to pay the same kind of attention to compliance/management issues), a good tester most definitely knows how much harder a security tool makes their life. The end result was that WAFs do have value when used properly, and may provide value beyond pure security, but aren’t a panacea. Since you could say that about the value of a gerbil for defending against APT too, here’s a little more detail… WAFs are best at protecting against known framework vulnerabilities (e.g., you run WordPress and haven’t patched), known automated (script kiddie) attacks, or when configured with (defensive) application-specific rules (whitelisting, although almost no one really deploys them this way). WAFs are moderately effective against general XSS/SQL injection. All the researchers said it was a roadbump for custom attacks that added to the time it took them to generate a successful exploit… with varying effectiveness depending on many factors – particularly the target app behind the WAF. 
The better the configuration, based on deep application knowledge, the more difficult the attack. But they stated that the increasing time to exploit increases the attacker’s costs, and thus might reduce the chances the attacker would devote time to the app and increase your probability of detecting them. Still, if someone really wants to get you and is knowledgeable, no WAF alone will stop them. The products often provide great analytics value because they are sometimes better than normal tracking/stats packages for understanding what’s going on with your site. They don’t do anything for logic flaws (unless you hand-code/configure them) or much beyond XSS/SQL injection. They aren’t as easy to use as is usually promised in the sales cycle. Gee, what a shock. Again, I could say this about gerbils. In some ways, now that I’ve written this, I feel like I could have substituted “duh” for the entire post. Yet again we have a tool that promises a lot, is often misused, but (used properly) can provide a spectrum of value from “keeping the auditors off our backs” to “protects against some 1337 haxor in a leather bodysuit”. But don’t let anyone tell you they are a waste of money… just make sure you know what you’re getting and use it right.
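To make the whitelisting point concrete, here is a toy positive-security check in Python – the spirit of the application-specific WAF rules described above, not any vendor’s implementation. The parameter names and patterns are invented; a real rule set would be generated from deep knowledge of the application behind the WAF.

```python
import re

# Toy positive-security ("whitelist") model: every request parameter must
# match an explicitly allowed pattern; anything else is denied by default.
PARAM_WHITELIST = {
    "user_id":  re.compile(r"^\d{1,10}$"),            # numeric IDs only
    "username": re.compile(r"^[a-zA-Z0-9_]{3,32}$"),  # simple account names
}

def allow_request(params):
    """Reject any request carrying an unknown parameter, or a value that
    fails its whitelist pattern. Default deny."""
    for name, value in params.items():
        pattern = PARAM_WHITELIST.get(name)
        if pattern is None or not pattern.match(value):
            return False
    return True

print(allow_request({"user_id": "42", "username": "alice"}))        # True
print(allow_request({"user_id": "1 OR 1=1", "username": "alice"}))  # False – classic SQLi shape
```

Note what this illustrates about the findings above: the whitelist trivially stops script-kiddie SQL injection shapes, but it knows nothing about logic flaws, and writing (and maintaining) patterns like these for every parameter of a real application is exactly why almost no one deploys WAFs this way.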


2011 Research Agenda: Quantum Cloudiness, Supervillain Shields, and No-BS Risk

In my last post I covered the more practical items on my research agenda for the coming year. Today I will focus more on pure research: these topics are a bit more out there and aren’t as focused on guiding immediate action. While this is a smaller percentage of where I spend my time, overall I think it’s more important in the big picture.

Quantum Datum

I try to keep 85-90% of my research focused on practical, real-world topics that help security pros in their day to day activities. But for the remaining 10-15%? That’s where I let my imagination run free. Quantum Datum is a series of concepts I’ve started talking about, around advanced information-centric security, drawing on metaphors from quantum mechanics to structure and frame the ideas. As someone pointed out, I’m using something ridiculously complex to describe something that’s also complex, but I think some interesting conclusions emerge from mashing these two theoretical frameworks together. Quantum Datum is focused on the next 7-10 years of information-centric security – much of which is influenced by cloud computing. For me this is an advanced research project, which spins off various real-world implications that land in my other research areas. I like having an out-there big picture to frame the rest of my work – it provides some interesting context and keeps me from falling so far into the weeds that all I’m doing is telling you things you already know.

Outcomes-Based Risk Management and Security

I’m sick and tired of theoretical risk frameworks that don’t correlate security outcomes with predictions or controls. I’m also tired of thinking we can input a bunch of numbers into a risk framework without having a broad set of statistics with which to actually evaluate the risks in our context. And if you want to make me puke, just show me a risk assessment that relies on unverified vendor FUD numbers from a marketing campaign.
The idea behind outcomes-based risk management and security is that we, to the best of our ability, use data gathered from real-world incidents to feed our risk models and guide security control decisions. This is based on similar approaches in medicine which correlate patient outcomes to treatments – rather than changes in specific symptoms/signs. For example, the question wouldn’t be whether or not the patient has a heartbeat when the ambulance drops them off at the hospital, but whether or not they later leave the hospital breathing on their own. (With the right drugs you can give a rock a heartbeat… or Dick Cheney, as the record shows). For security, this means pushing the industry for more data sets like the Verizon and Trustwave investigation/breach reports, which don’t just count breaches, but identify why they happened. This needs to be supplemented by additional datasets whenever and wherever we can find and validate them. Clearly this is my toughest agenda item, because it relies so heavily on the work of others. Securosis isn’t an investigations firm, and lacks resources for the kinds of rigorous research needed to reach out to organizations and pull together the right numbers. But I still think there are a lot of opportunities to dig into these issues and work on building the needed models by mining public sources. And if we can figure out an economically viable model to engage in the primary research, so much the better. The one area where we are able to contribute is on the metrics model side, especially with Project Quant. We’re looking to expand this in 2011 and continue to develop hard metrics models to help organizations improve operational performance and security.

Advanced Persistent Defense

Can you say “flame bait”? I probably won’t use the APD term, but I can’t wait to see the reactions when I toss it out there.
There are plenty of people spouting off about APT, but I’m more interested in understanding how we can best manage the attackers working inside our networks. The concept behind advanced defense is that you can’t keep them out completely, but you have many tools to detect and contain the bad guys once they come in. Some of this ties to network architectures, monitoring, and incident response; while some looks at data security. Mike has monitoring well covered and we’re working on an incident response paper that fits this research theme. On top of that I’m looking at some newer technologies such as File Activity Monitoring that seem pretty darn interesting for helping to limit the depth of some of these breaches. No, you can’t entirely keep them out, but you can definitely reduce the damage. I’m debating publishing an APT-specific paper. I’ve been doing a lot of research with people directly involved with APT response, but there is so much hype around the issue I’m worried that if I do write something it might spur the wrong kind of response. The idea would be to cut through all the hype and BS. I could really use some feedback on whether I should try this one. In terms of the defense concepts, there are specific areas I think we need to talk about, some of which tie into Mike and Adrian’s work:

Network segregation and monitoring. When you’re playing defense only, you need a 100% success rate, but the bad guy only needs to be right once – and no one is ever 100% successful over the long term. But once the bad guy is in your environment, with the right architecture and monitoring you can turn the tables. Now he needs to be right all the time or you can detect his activities. I want to dig into these architectures to tighten the window between breach and detection.

File Activity Monitoring. This is a pretty new technology that’s compelling to me from a data security standpoint. In non-financial attacks the goal is usually to grab large volumes of unstructured data. I think FAM tools can increase our chances of detecting this activity early.

Incident response. “React Faster


2011 Research Agenda: the Practical Bits

I always find it a bit of a challenge to fully plan out my research agenda for the coming year. Partly it’s due to being easily distracted, and partly my recognition that there are a lot of moving cogs I know will draw me in different directions over the coming year. This is best illustrated by the detritus of some blog series that never quite made it over the finish line. But you can’t research without a plan, and the following themes encompass the areas I’m focusing on now and plan to continue through the year. I know I won’t be able to cover everything in the depth I’d like, so I could use feedback on what you folks find interesting. This list is as much about the areas I find compelling from a pure research standpoint as what I might write about. This post is about the more pragmatic focus areas, and the next post will delve into more forward-looking research.

Information-Centric (Data) Security for the Cloud

I’m spending a lot more time on cloud computing security than I ever imagined. I’ve always been focused on information-centric (data) security, and the combination of cloud computing adoption, APT-style threats, the consumerization of IT, and compliance are finally driving real interest and adoption of data security. Data security consistently rates as a top concern – security or otherwise – when adopting cloud computing. This is driven in large part by the natural fear of giving up physical control of information assets, even if the data ends up being more secure than it was internally. As you’ll see at the end of this post, I plan on splitting my coverage into two pieces: what you can do today, and what to watch for the future. For this agenda item I’ll focus on practical architectures and techniques for securing data in various cloud models using existing tools and technologies.
I’m considering writing two papers in the first half of the year, and it looks like I will be co-branding them with the Cloud Security Alliance:

Assessing Data Risk for the Cloud: A cloud and data specific risk management framework and worksheet.
Data Security for Cloud Computing: A dive into specific architectures and technologies.

I will also continue my work with the CSA, and am thinking about writing something up on cloud computing security for SMB because we see pretty high adoption there.

Pragmatic Data Security

I’ve been writing about data security, and specifically pragmatic data security, since I started Securosis. This year I plan to compile everything I’ve learned into a paper and framework, plus issue a bunch of additional research delving into the nuts and bolts of what you need to do. For example, it’s time to finally write up my DLP implementation and management recommendations, to go with Understanding and Selecting. The odds are high I will write up File Activity Monitoring because I believe it’s at an early stage and could bring some impressive benefits – especially for larger organizations. (FAM is coming out both stand-alone and with DLP.) It’s also time to cover Enterprise DRM, although I may handle that more through articles (I have one coming up with Information Security Magazine) and posts. I also plan to run year two of the Data Security Survey so we can start comparing year-over-year results. Finally, I’d like to complete a couple more Quick Wins papers, again sticking with the simple and practical side of what you can do with all the shiny toys that never quite work out like you hoped.

Small Is Sexy

Despite all the time we spend talking about enterprise security needs, the reality is that the vast majority of people responsible for implementing infosec in the world work in small and mid-sized organizations. Odds are it’s a part-time responsibility – or at most 1 to 2 people who spend a ton of time dealing with compliance.
More often than not this is what I see even in organizations of 4,000-5,000 employees. A security person (who may not even be a full-time security professional) operating in these environments needs far different information than large enterprise folks. As an analyst it’s very difficult to provide definitive answers in written form to the big company folks when I know I can never account for their operational complexities in a generic, mass-market report. Aside from the Super Sekret Squirrel project for S


React Faster and Better: Incident Response Gaps

In our introduction to this series we mentioned that the current practice of incident response isn’t up to dealing with the compromises and penetrations we see today. It isn’t that the incident response process itself is broken – the problem is how companies implement response. Today’s incident responders are challenged on multiple fronts. First, the depth and complexity of attacks are significantly more advanced than commonly discussed. We can’t even say this is a recent trend – advanced attacks have existed for many years – but we do see them affecting a wider range of organizations, with a higher degree of specificity and targeting than ever before. It’s no longer merely the defense industry and large financial institutions that need to worry about determined persistent attackers. In the midst of this onslaught, the businesses we protect are using a wider range of technology – including consumer tools – in far more distributed environments. Finally, responders face the double-edged sword of a plethora of tools: some are highly effective, while others contribute to information overload. Before we dig into the gaps we need to provide a bit of context. First, keep in mind that we are focusing on larger organizations with dedicated incident response resources. Practically speaking, this probably means at least a few thousand employees and a dedicated IT security staff. Smaller organizations should still glean insight from this series, but probably don’t have resources to implement the recommendations. Second, these issues and recommendations are based on discussions with real incident response teams. Not everyone has the same issues – especially across large organizations – nor the same strengths. So don’t get upset when we start pointing out problems or making recommendations that don’t apply to you – as with any research, we generalize to address a broad audience.
Across the organizations we talk with, some common incident response gaps emerge: Too much reliance on prevention at the expense of monitoring and response. We still find even large organizations that rely too heavily on their defensive security tools rather than balancing prevention with monitoring and detection. This imbalance of resources leads to gaps in the monitoring and alerting infrastructure, with inadequate resources for response. All organizations are eventually breached, and targeted organizations always have some kind of attacker presence. Always. Too much of the wrong kinds of information too early in the process. While you do need extensive auditing, logging, and monitoring data, you can’t use every feed and alert to kick off your process or in the initial investigation. And to expect that you can correlate all of these disparate data sources as an ongoing practice is ludicrous. Effective prioritization and filtering is key. Too little of the right kinds of information too early (or late) in the process. You shouldn’t have to jump right from an alert into manually crawling log files. By the same token, after you’ve handled the initial incident you shouldn’t need to rely exclusively on SIEM for your forensics investigation and root cause analysis. This again goes back to filtering and prioritization, along with sufficient collection. This also requires two levels of collection for your key device types – the first being what you can do continuously. The second is the much more detailed information you need to pinpoint root cause or perform post-mortem analysis. Poor alert filtering and prioritization. We constantly talk about false positives because those are the most visible, but the problem is less that an alert triggered, and more determining its importance in context. 
    This ties directly to the previous two gaps, and requires finding the right balance between alerting, continuing collection of information for initial response, and gathering more granular information for after-action investigation.
  • Poorly structured escalation options. One of the most important concepts in incident response is the capability to smoothly escalate incidents to the right resources. Your incident response process and organization must take this into account. You can’t effectively escalate with a flat response structure; tiering based on multiple factors, such as geography and expertise, is key. And this process must be defined well in advance of any incident. Escalation failure during response is a serious problem.
  • Response whack-a-mole. Responding without the necessary insight and intelligence leads to an ongoing battle where the organization is always one step behind the attacker. While you can’t wait for full forensic investigations before clamping down on an incident to contain the damage, you need enough information to make informed and coordinated decisions that really stop the attack – not merely a symptom. Balancing a hair-trigger response against analysis paralysis is critical to minimize damage and potential data loss.

**Your goal in incident response is to detect and contain attacks as quickly as possible – limiting the damage by constraining the window within which the attacker operates.** To pull this off you need an effective process with graceful escalation to the right resources, to collect the right amount of the right kinds of information to streamline your process, to do ongoing analysis to identify problems earlier, and to coordinate your response to kill the threat instead of just a symptom.
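The filtering and prioritization idea above can be made concrete with a toy sketch. This is a minimal illustration, not any product’s logic; the field names, weights, and threshold are all assumptions for the example:

```python
from dataclasses import dataclass

# Hypothetical alert record; the fields are illustrative, not from any tool.
@dataclass
class Alert:
    source: str             # e.g. "ids", "siem", "av"
    severity: int           # 1 (low) .. 5 (critical)
    asset_criticality: int  # 1 (lab box) .. 5 (crown jewels)
    confidence: float       # detector's confidence, 0.0 .. 1.0

def priority(alert: Alert) -> float:
    """Blend severity, asset value, and detector confidence into one score."""
    return alert.severity * alert.asset_criticality * alert.confidence

def triage(alerts, threshold=8.0):
    """Return only the alerts worth kicking off the response process,
    highest priority first. Everything else stays in the collected data
    for later forensic analysis instead of triggering the process."""
    actionable = [a for a in alerts if priority(a) >= threshold]
    return sorted(actionable, key=priority, reverse=True)

alerts = [
    Alert("av", severity=2, asset_criticality=1, confidence=0.9),   # noise
    Alert("ids", severity=4, asset_criticality=5, confidence=0.6),  # act on it
]
for a in triage(alerts):
    print(a.source, round(priority(a), 1))  # prints: ids 12.0
```

The point is not the particular scoring formula – it is that context (what asset, how confident) decides what kicks off the process, while the lower-scoring data is still collected for later investigation.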
But all too often we see flat response structures, too much of the wrong information early in the process, too little of the right information late in the process, and a lack of coordination and focus that allows the bad guys to operate with near impunity once they establish their first beachhead. And let’s be clear: they have a beachhead. Whether you know about it is another matter. In our next couple of posts Mike will start talking about what information to collect and how to define and manage your triggers for alerts. Then I’ll close out by talking about escalation, investigations, and intelligently kicking the bad guys out.
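The tiered escalation discussed above can also be sketched in a few lines. The tier names and routing factors below are hypothetical examples, not a standard; the point is that routing decisions are written down in advance rather than improvised mid-incident:

```python
# Hypothetical escalation tiers for illustration only.
TIERS = {
    1: "on-call analyst (initial triage)",
    2: "regional response team (containment)",
    3: "central forensics team (root cause)",
}

def escalation_tier(scope: str, needs_forensics: bool) -> int:
    """Pick a tier from incident attributes such as scope and required
    expertise -- the kind of factors (geography, skills) a real plan
    would encode long before any incident occurs."""
    if needs_forensics:
        return 3            # deep expertise required
    if scope == "multi-site":
        return 2            # crosses geographic boundaries
    return 1                # default: initial triage

# A single-host incident with no forensic needs stays at tier 1.
print(TIERS[escalation_tier("single-host", needs_forensics=False)])
```

A flat response structure is the degenerate case where this table has one row, which is exactly the failure mode described above.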


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.