Securosis

Research

What to do when your Twitter account is hacked

PCWorld/TechHive has a very clear article on how to deal with a Twitter hack. Print it out and keep it handy, especially if you manage a corporate account. If you are very big, get a phone number for Twitter security, make contact, and add it to your IR plans.

Share:
Read Post

Friday Summary: September 6, 2013

When my wife and I were a young couple looking for a place in the hills of Berkeley, we came across an ad for an apartment with “Views of the Golden Gate Bridge”. The price was a bit over our budget and the neighborhood was less than thrilling, but we decided to check it out. We had both previously lived in places with bay views and we felt that the extra expense would be worth it. But after we got to the property the apartment was beyond shabby, and no place we wanted to live. What’s more, we could not find a view! We stayed for a while searching for the advertised view, and when neither of us could find it we asked the agent. She said the view was from the side of the house. As it turns out, if you either stood on the fence in the alley, or on the toilet seat of the second bathroom, and looked out the small window, you could see a sliver of the Golden Gate. The agent had not lied to us – technically there was a bridge view. But in a practical sense it did not matter. I would hardly invite company over for a glass of wine and have them stand on tiptoes atop the toilet lid for an obstructed view of the bridge. I think about this often when I read security product marketing collateral. There are differing degrees of usefulness of security products, and while some offer the full advertised value, others are more fantasy than reality. Or require you to have a substance abuse problem – either works. This is of course one of the things we do as analysts – figure out not only whether a product addresses a security problem, but how usefully it does so, and which of the – often many – use cases it deals with. And that is where we need to dig into the technology to validate what’s real vs. a whitewash. One such instance occurred recently, as I sat through a vendor presentation on how malware is the scourge of the earth, nothing solves the problem, and this vendor’s product stops it from damaging your organization. 
If you think my paraphrasing of the presentation sounds awkward, you are not alone. It was. But every vendor is eager to jump on the anti-malware bandwagon because it is one of the biggest problems in IT security, driving a lot of spending. But the applicability of their solution to the general use case was tenuous. When we talk to IT practitioners about malware they express many concerns. They worry about email servers being infected and corporate email and contacts being exposed. They worry that their servers will be infected and used to send spam across the Internet. They worry that their servers will become bots and participate in DoS attacks. They worry that their databases will be pilfered and their information will stream out to foreign countries. They worry that their users will be phished, resulting in malware being dropped on their machines, and the malware will assume their user credentials. They even worry about malware outside their networks, infecting customer machines and generating bogus traffic. And within each of these use cases are multiple attack avenues, multiple ways to pwn a machine, and multiple ways to exfiltrate data. So we see many legitimate techniques applied to address malware, with each approach a bit better or worse suited, depending on the specifics of the infection. And this is where understanding technology comes in, as you start to see specific types of detection and prevention mechanisms which work across multiple use cases. Applicability is not black and white, but graduated. The solutions that only apply to one instance of one use case make me cringe. The vendor referenced above, for example, addressed a use case customers seem least interested in. And they provide their solution in a way that really only works in one or two instances of that use case. Technically the vendor was correct: their product does indeed address a specific type of malware in a particular scenario. 
But in practice it is only relevant in a remote niche of the market. That is when past and present merged: I was transported back in time to that dingy apartment. But instead of the real estate agent it was the security vendor, asking me to teeter on the toilet seat lid with them, engaging in the fantasy of a beautiful Golden Gate Bridge view. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Rich talking about how hackers find weaknesses in car computer systems.
Mogull and Hoff wrote a book? Amazon says so! Only four copies left, so hurry!
Mike’s DDoS research quoted in the Economist… Really. Security issues are clearly becoming mass market news.
Mike quoted in Dark Reading about Websense’s free CSO advisory offering.

Favorite Securosis Posts

Adrian Lane: Friday Summary: Decisions, Decisions. Rich is being modest here – he created a couple really cool tools while learning Chef and Puppet. Even some of the professional coders in the audience at BH were impressed. Drop him a line and give him some feedback if you can. There will be loads of work in this area over the coming months – this is how we will manage cloud security.
David Mortman: Dealing with Database Denial of Service.

Other Securosis Posts

[New Paper] Identity and Access Management for Cloud Services.
Incite 9/4/2013: Annual Reset.
[New Paper] Dealing with Database Denial of Service.
Friday Summary: Decisions, Decisions.
Firewall Management Essentials: Introduction [New Series].
Tracking the Syrian Electronic Army.
The future of security is embedded.
Third Time is the Charm.
Security is Reactive. Learn to Love It.
Deming and the Strategic Nature of Security.
Incite 8/27/2013: You Can’t Teach Them Everything.
Reactionary Idiot Test.
PCI 3.0 is coming. Hide the kids.
Ecosystem Threat Intelligence: Use Cases and Selection Criteria.
Random Thought: Meet Your New Database.
VMWare Doubles Down on SDN.
Ecosystem Threat Intelligence: Assessing Partner Risk.
China Suffers Large DNS DDoS Attack.

Favorite Outside Posts

David Mortman: Busting the Biometric Myth.

Share:
Read Post

[New Paper] Identity and Access Management for Cloud Services

We are happy to announce the release of our Identity and Access Management for Cloud Services research paper. Identity, access management, and authorization are each reasonably complicated subjects, but they all reside at the center of most on-premise security projects. Cloud computing and cloud security are both very complex subjects. Mix them all together, in essence federating your on-premise identity systems into the cloud, and you have complexity soup! Gunnar and I agreed that in light of the importance of identity management to cloud computing, and the complexity of the subject matter, users need a guide to help understand what the heck is going on. Far too often people talk about the technologies (e.g., SAML, OAuth, and OpenID) as the solution, while totally missing the bigger picture: the transformation of identity as we knew it into Cloud IAM. We are witnessing a major shift in how we both provide and consume identity, which is not obvious to a tools-centric view. This paper presents the nuts and bolts of how Cloud IAM works, but more importantly it frames them in the bigger picture of how Cloud IAM services work, and how this industry trend is changing identity systems. Moving the trust model outside the enterprise, with multiple internal and external services cooperating to support IAM, is a radical departure from traditional on-premise directory services. We liken the transition from in-house directory services to Cloud IAM to moving from an Earth-centric view of the universe to a heliocentric view: a complete change in perspective. This is not your father’s LDAP server! If you want to understand what Cloud Identity is all about, we encourage you to download the paper and give it a read. And we greatly appreciate Symplified for licensing this content! 
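To make that shift in trust model concrete, here is a toy sketch of federated identity: an identity provider issues a signed assertion, and a cloud service verifies it without any shared user database, only a key exchanged ahead of time. This is illustrative only and not from the paper; HMAC stands in for real SAML/XML-DSIG or JWT signing.

```python
import hmac, hashlib, json, base64

# Stand-in for real PKI/metadata trust established between IdP and service.
SHARED_KEY = b"demo-key-exchanged-out-of-band"

def issue_assertion(user, roles):
    """The 'identity provider' side: sign a statement about the user."""
    body = json.dumps({"sub": user, "roles": roles}).encode()
    sig = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return base64.b64encode(body).decode() + "." + sig

def verify_assertion(token):
    """The 'cloud service' side: trust the assertion, not a local user store."""
    b64, sig = token.rsplit(".", 1)
    body = base64.b64decode(b64)
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    return json.loads(body)

token = issue_assertion("alice", ["analyst"])
print(verify_assertion(token)["sub"])   # alice
```

The point is the change in perspective the paper describes: the service consumes identity from elsewhere rather than owning it.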
While most vendors we speak with only want to talk about their Single Sign-On capability, federation module, SAML connector, mobile app, or management dashboard – or whichever piece of the whole they think holds their competitive advantage – Symplified shares our vision that you need to understand the cloud IAM ecosystem first, and how everything fits together, before diving into the supporting technologies. You can get a copy of the paper from Symplified or our Research Library.

Share:
Read Post

Incite 9/4/2013: Annual Reset

This week marks the end of one year and the beginning of the next. For a long time I took this opportunity around the holidays to revisit my goals and ensure I was still on track. I diligently wrote down my life goals and broke them into 10-, 5-, and 1-year increments, just to make sure I was making progress toward where I wanted to be. Then a funny thing happened. I realized that constantly trying to get somewhere else made me very unhappy. So I stopped doing that. That’s right. I don’t have specific goals any more. Besides the stuff on Maslow’s hierarchy, anyway. If I can put a roof over our heads, feed my family, provide enough to do cool stuff, and feel like I’m helping people on a daily basis, I’m good. Really. But there are times when human nature rears its (ugly) head. These are the times when I wonder whether my approach still makes sense. I mean, what kind of high-achieving individual doesn’t need goals to strive toward? How will I know when I get somewhere, if I don’t know where I’m going? Shouldn’t I be competing with something? Doesn’t a little competition bring out the best in everyone? Is this entire line of thinking just a cop-out because I failed a few times? Yup, I’m human, and my monkey brain is always placing these mental land mines in my path. Sustainable change is very hard, especially with my own mind trying to get me to sink back into my old habits. These thoughts perpetually attempt to convince me I’m not on the right path. That I need to get back to constantly striving for what I don’t have, rather than appreciating what I do have. Years ago my annual reset was focused on making sure I was moving toward my goals. Nowadays I use it to maintain my resolve to get where I want to be – even if I’m not sure where that is or when I will get there. For the first year or two that was a real challenge – I am used to very specific goals. And without those goals I felt a bit lost. But not any more, because I look at results. 
If you are keeping score, I lead a pretty balanced life. I have the flexibility to work on what I want to work on, with people I enjoy working with. I can work when I want to work, where I want to work. Today that’s my home office. Friday it will be in a coffee shop somewhere. Surprisingly enough, all this flexibility has not impacted my ability to earn at all. If anything, I am doing better than when I worked for the man. Yes, I’m a lucky guy. That doesn’t mean I don’t get stressed out during crunch time. That I don’t get frustrated with things I can’t control. Or that everything is always idyllic. I am human, which means my monkey brain wins every so often and I feel dissatisfied. But I used to feel dissatisfied most of the time, so that’s progress. I also understand that the way I live is not right for everyone. Working on a small team where everyone has to carry their own weight won’t work if you can’t sell or deliver what you sold. Likewise, without strong self-motivation to get things done, not setting goals probably won’t work out very well. But it works for me, and at least once a year I take a few hours to remind myself of that. Happy New Year (Shanah Tova) to those of you celebrating this week. May the coming year bring you health and happiness. –Mike

Photo credit: “Reset” originally uploaded by Steve Snodgrass

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.

Firewall Management Essentials: Introduction
Ecosystem Threat Intelligence: Use Cases and Selection Criteria; Assessing Ecosystem Risk; The Risk of the Extended Enterprise
Continuous Security Monitoring: Migrating to CSM; The Compliance Use Case; The Change Control Use Case; The Attack Use Case; Classification; Defining CSM; Why. Continuous. Security. Monitoring?
Database Denial of Service: Countermeasures; Attacks; Introduction
API Gateways: Implementation; Key Management; Developer Tools

Newly Published Papers

The 2014 Endpoint Security Buyer’s Guide
The CISO’s Guide to Advanced Attackers
Defending Cloud Data with Infrastructure Encryption
Network-based Malware Detection 2.0: Assessing Scale, Accuracy, and Deployment
Quick Wins with Website Protection Services

Incite 4 U

Wherefore art thou, cyber-liability insurance?: Interesting circumstances around Liberty Mutual suing their customer to define what they will and won’t cover with cyber insurance. As Dan Glass says, Liberty Mutual treats cyber just like physical assets. That means they will pay for the cost of the breach (like they pay for the destruction of physical assets), but they don’t want to cover other losses (such as regulatory fines or customer privacy lawsuits, etc.). If they are successful in defining these boundaries around their liability, Dan correctly points out: In other words, cyber insurance will be a minor part of any technology risk management program. Don’t let your BOD, CFO, or CIO get lulled into thinking cyber insurance will do much for the organization. – MR

Big R, Little r, what begins with R? My views on risk management frameworks have seriously changed over the past decade or so. I once wrote up my own qualitative framework (my motivation now eludes me, but youthful exuberance was likely involved), but I have mostly been disillusioned with the application of risk management methodologies to security – particularly quantitative models that never use feedback to match predictions against reality. Russell Thomas has a great post showing the disconnect between how many of us in security look at risk, compared to more mature financial models. To paraphrase, we often take a reductionist approach and try to map vulnerabilities and threats to costs –

Share:
Read Post

[New Paper] Dealing with Database Denial of Service

We are pleased to put the finishing touches on our Database Denial of Service (DB-DoS) research and distribute it to the security community. Unless you have had your head in the sand for the past year, you know DoS attacks are back with a vengeance. Less visible but no less damaging is the fact that attackers are “moving up the stack” to the application and database layers. Rather than “flooding the pipes” with millions of bogus packets, we now see cases where a single request topples a database – halting the web services it supported. Database DoS requires less effort for the attacker, and provides a stealthier approach to achieving their goals. Companies that have been victimized by DB-DoS are not eager to share details, but here at Securosis we think it’s time you know what we are hearing about so you can arm yourself with knowledge of how to defend against this sort of attack. Here is an excerpt from the paper: Attackers exploit defects by targeting a specific weakness or bug in the database. In the last decade we have seen a steady parade of bugs that enable attackers – often remotely and without credentials – to knock over databases. A single ‘query of death’ is often all it takes. Buffer overflows have long been the principal vulnerability exploited for DB-DoS. We have seen a few dozen buffer overflow attacks on Oracle that can take down a database – sometimes even without user credentials, by leveraging the PUBLIC privilege. SQL Server has its own history of similar issues, including the named pipes vulnerability. Attackers have taken down DB2 by sending UDP packets. We hear rumors at present of a MySQL attack that slows databases to a crawl. We would like to thank DBNetworks for licensing the content in this paper. Obviously we wouldn’t be able to do the research we do, or offer it to you for this most excellent price, without clients licensing our content. If you have comments or questions about this research, please feel free to email us! 
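One countermeasure pattern in this space is capping the work any single statement may consume, so a runaway “query of death” is cut short instead of starving everything else. Here is an illustrative sketch (not from the paper) using SQLite’s progress handler; production databases expose analogous controls such as statement timeouts.

```python
import sqlite3

# Build a small in-memory database to run against.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(500)])

ops = 0
def watchdog():
    """Abort any statement that exceeds a fixed work budget."""
    global ops
    ops += 1
    return 1 if ops > 100 else 0   # returning non-zero aborts the statement

# Invoke the watchdog every 1000 SQLite VM instructions.
conn.set_progress_handler(watchdog, 1000)

try:
    # A deliberately expensive cross join: a stand-in for a single
    # crippling request against the database.
    conn.execute("SELECT count(*) FROM t a, t b, t c").fetchone()
    aborted = False
except sqlite3.OperationalError:
    aborted = True

print("query aborted:", aborted)   # prints: query aborted: True
```

The expensive statement dies long before it finishes, which is the general shape of a resource-governor defense against DB-DoS.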
Download the paper, free of charge: Dealing with Database Denial of Service.

Share:
Read Post

Friday Summary: Decisions, Decisions

I am in a bit of a pickle, and could use some advice. Over the time I have been an analyst, I have learned that it is important to have the right distribution of research. My rule of thumb is 80-90% of it should be practical research to help people get their jobs done on a daily basis. Then you can spend 10-20% on future research that I promise not to call thought leadership. Many analysts (and other pundits) fall into an esoteric trap, where they are so desperate to be seen as leaders that their research becomes more about branding and marketing, and less about helping people get their jobs done. It is totally fine to tilt at the occasional windmill, but everything in moderation. The corollary is that once you focus on the future too much you disconnect from the present and lose your understanding of current technologies and trends, and your subsequent predictions are based on reading science fiction and bad tech media articles. Those aren’t worth the bits they are printed on. And yeah, there is a lot of that going around. Always has been, especially in conference keynotes. This isn’t merely for ego gratification. On the business side you can’t survive long by selling research that doesn’t help someone get their job done. Many of my former Gartner colleagues lose track of this because they think people like their new “connected enterworld” junk, as opposed to paying for Magic Quadrants so they don’t lose their jobs when they buy something in the upper-right quadrant that doesn’t work. For a small firm like us, screw up the mix and it’s back to truck driving school. My dilemma is that a lot of the research I’m working on appears to be ahead of the general market, but still very practical and usable. I am thinking specifically of my work on Software Defined Security and DevOps. It’s the most fulfilling research I have done in a long time, especially because it gets me back to coding – even at a super-basic level. 
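For a flavor of what that Software Defined Security work looks like, here is a toy, API-driven check: flag any security group rule open to the whole Internet on a sensitive port. The group data is hypothetical and hard-coded; a real version would pull it from a cloud provider’s management API via their SDK.

```python
# Ports we never want exposed to 0.0.0.0/0 (illustrative policy choice).
SENSITIVE_PORTS = {22, 3389}

# Hypothetical security groups, standing in for a cloud API response.
security_groups = [
    {"name": "web",   "rules": [{"port": 443,  "cidr": "0.0.0.0/0"}]},
    {"name": "admin", "rules": [{"port": 22,   "cidr": "0.0.0.0/0"}]},
    {"name": "db",    "rules": [{"port": 5432, "cidr": "10.0.0.0/8"}]},
]

def audit(groups):
    """Return (group, port) pairs that violate the exposure policy."""
    findings = []
    for g in groups:
        for r in g["rules"]:
            if r["cidr"] == "0.0.0.0/0" and r["port"] in SENSITIVE_PORTS:
                findings.append((g["name"], r["port"]))
    return findings

print(audit(security_groups))   # [('admin', 22)]
```

Trivial on its own, but run continuously against the management plane it becomes exactly the kind of automated security control the research is about.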
But I am borderline tilting at windmills myself – relatively few organizations are operationally ready for it. It isn’t a load of hand-waving bullpoop – it is all real and usable today – but many organizations lack the time or resources to start integrating these ideas. Not everyone has free time to play with new things. Especially with all the friggin’ auditors hanging over your head. Anyway, I have been bouncing this off people since Black Hat and am interested in what you folks think. I would love to make a go of it and have at least half my research agenda filled with using APIs, securing cloud management planes, integrating security into DevOps, and the like, but only if there is real interest out there – I gotta pay the bills. Drop me a line at rmogull at securosis dot com if you have an opinion, or leave a comment on this post. Thanks, and on to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Mike’s DDoS research quoted in the Economist… Really. Security issues are clearly becoming mass market news.
Mike quoted in Dark Reading about Websense’s free CSO advisory offering.
Don’t Be The Tortoise. Rich digs into his old book of parables at Dark Reading to point out that: “Agility may not always win the race, but you sure shouldn’t bet against it.”
Incentives and Organizational Alignment (Or Lack Thereof). Mike’s latest Dark Reading column on Vulnerabilities and Threats.
Rich on Threatpost – How I Got Here. I got to do my third favorite thing, talk about myself.
Dave Mortman on Big Data Security Challenges.
Rich’s piece on Apple’s security design quoted in a Techpinions article.
Dave Lewis at CSO Online: Innovation And The Law Of Unintended Consequences.
And more of Mr. Lewis: My (ISC)2 Report Card.

Favorite Securosis Posts

Mike Rothman: The future of security is embedded. Gunnar weighs in on our little blog ‘discussion’ about how to prove value in a security operation. And no, I don’t really think Rich and I were arguing.
Rich: Random Thought: Meet Your New Database. Some trends are real. Both Adrian and I, former DBAs and developers, would likely go non-relational with our next projects.
Mort: PCI 3.0 is coming. Hide the kids.

Other Securosis Posts

Tracking the Syrian Electronic Army.
Third Time is the Charm.
Security is Reactive. Learn to Love It.
Deming and the Strategic Nature of Security.
Incite 8/27/2013: You Can’t Teach Them Everything.
Reactionary Idiot Test.
VMWare Doubles Down on SDN.
China Suffers Large DNS DDoS Attack.
Friday Summary: August 23, 2013.
“Like” Facebook’s response to Disclosure Fail.
Research Scratchpad: Stateless Security.
New Paper: The 2014 Endpoint Security Buyer’s Guide.
Incite 8/21/2013: Hygienically Challenged.
Two Apple Security Tidbits.
Ecosystem Threat Intelligence: Use Cases and Selection Criteria.
Ecosystem Threat Intelligence: Assessing Partner Risk.

Favorite Outside Posts

Mike Rothman: Innovation and the Law of Unintended Consequences. Dave has been killing it in his CSO blog. This latest one deals with the fact that until we can do security fundamentals well, dealing with all of these shiny innovative security objects is like moving deck chairs on the Titanic.
David Mortman: ITIL vs. DevOps: Slugfest or Lovefest?
Rich: Dark Patterns: inside the interfaces designed to trick you. Really great design stuff.

Research Reports and Presentations

The 2014 Endpoint Security Buyer’s Guide.
The CISO’s Guide to Advanced Attackers.
Defending Cloud Data with Infrastructure Encryption.
Network-based Malware Detection 2.0: Assessing Scale, Accuracy and Deployment.
Quick Wins with Website Protection Services.
Email-based Threat Intelligence: To Catch a Phish.
Network-based Threat Intelligence: Searching for the Smoking Gun.
Understanding and Selecting a Key Management Solution.
Building an Early Warning System.
Implementing and Managing Patch and Configuration Management.

Top News and Posts

New York Times DNS Hacked.
Android malware WAY worse than iOS.
Russian spyboss brands Tor a crook’s paradise, demands a total ban.
Obama administration asks court to force NYT reporter to reveal source.
Amazon ‘wish list’ is gateway to epic social engineering hack.
Former White House ‘copyright czar’

Share:
Read Post

Tracking the Syrian Electronic Army

Brian Krebs is digging into the SEA and trying to out individuals: A hacking group calling itself the Syrian Electronic Army (SEA) has been getting an unusual amount of press lately, most recently after hijacking the Web sites of The New York Times and The Washington Post, among others. But surprisingly little light has been shed on the individuals behind these headline-grabbing attacks. Beginning today, I’ll be taking a closer look at this organization, starting with one of the group’s core architects. He’s just getting started, and his techniques wouldn’t stand forensic or legal scrutiny, but they are still very interesting. Very similar to the stuff Mandiant dug up on Chinese hackers. Everyone leaves tracks. Everyone.

Share:
Read Post

Firewall Management Essentials: Introduction [New Series]

It starts right there in PCI-DSS Requirement 1. Install and maintain a firewall configuration to protect cardholder data. Since it’s the first requirement, firewalls must be important, right? Not that PCI is the be-all, end-all of security goodness, but it does represent the low bar of controls you should have in place to defend against attackers. As the first line of defense on a network, it’s the firewall’s job to enforce a set of access policies that dictate what traffic should be allowed to pass. It’s basically the traffic cop on your network, as well as acting as a segmentation point between separate networks. Given the compliance mandates and the fact that firewalls have been around for over 20 years, they are a mature technology which every company has installed. It may be called an access router or UTM, but it provides firewall functionality. Firewalls run on a set of rules that define what ports, protocols, networks, users, and (increasingly) applications can do on your network. And just like a closet in your house, if you don’t spend time sorting through old stuff it can become a disorganized mess, with a bunch of things you haven’t used in years and don’t need any more. That metaphor fits your firewall rule base: when we talk to security administrators, they often admit (in a whisper) to having thousands of firewall rules, many of which haven’t been looked at in years. The problem is that, like your closet, the situation just gets worse if you put off addressing it. And it’s not like rule bases are static. You have new requests coming in to open this port or allow that group of users to do something new or different pretty much every day. The situation can get out of control quickly, especially as you increase the number of devices in use. 
That creates significant operational and security problems, including:

Attack Surface Impact: When a change request comes in, how many administrators actually do some work to figure out whether the change would create any additional attack surface or contradict existing rules? Probably not enough, so firewall management – first and foremost – must maintain the integrity of the protection the firewall provides.
Performance Impact: Every extra rule in the rule base means the firewall may need to do another check on every packet that comes through, so more rules impact device performance. Keep in mind that the order of your rule set also matters: the sooner you can block a packet, the fewer rules you have to check, so rules should be structured to eliminate connections as quickly as possible.
Verification: If a change was made, was it made correctly? Even if the change is legitimate and your operational team is good, there will still be human errors. So another problem with firewall management at scale is verifying each change.
Weak Workflow and Nonexistent Authorization: What happens when you receive a rule change? Do you have a way to ensure each change request is legit? Or do you do everything via 10-year-old forms and/or email? Do you have an audit trail to track who asked for the change and why? Can you generate documentation to show why each change was made? If you can’t, it is probably an issue, because your auditor is going to need to see substantiation.
Scale: The complexity of managing any operational device increases exponentially with every new device you add. Firewalls are no exception. If you have a dozen or more devices, odds are you have an unwieldy situation, with inconsistent rules creating security exposure.
Heterogeneity: Many enterprises use multiple firewall vendors, which makes it even more difficult to enforce consistent rules across a variety of devices.
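The performance point above, that rule order determines both the verdict and the per-packet cost, is easy to see in a first-match sketch. This is purely illustrative and not any vendor’s actual engine.

```python
# Minimal first-match firewall rule evaluation.
# Rules are (action, protocol, port); "*" and None act as wildcards.
rules = [
    ("deny",  "tcp", 23),    # block telnet early, before any allows
    ("allow", "tcp", 443),
    ("allow", "tcp", 80),
    ("allow", "udp", 53),
]

def evaluate(rules, proto, port):
    """Walk the rule base in order; first match wins."""
    checks = 0
    for action, r_proto, r_port in rules:
        checks += 1
        if r_proto in ("*", proto) and r_port in (None, port):
            return action, checks
    return "deny", checks    # implicit default-deny if nothing matches

print(evaluate(rules, "tcp", 23))   # ('deny', 1) -- matched on the first check
print(evaluate(rules, "udp", 53))   # ('allow', 4) -- walked the whole list
```

Multiply that per-packet walk by thousands of unreviewed rules and the cost of a messy rule base becomes obvious, as does why a shadowed or misordered rule silently changes what gets through.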
As with almost everything else in technology, innovation adds a ton of new capabilities but increases operational challenges. The new shiny object in the firewall space is the Next-Generation Firewall (NGFW). At a very high level, NGFWs add the capability to define and enforce policies at the application layer. That means you can finally build a rule more granular than ALLOW port 80 traffic – instead defining which specific web-based applications are permitted. Depending on the application you can also restrict specific behaviors within an application. For example you might allow use of Facebook walls but block Facebook chat. You can enforce policies for users and groups, as well as certain content rules (we call this DLP Lite). The NGFW is definitely not your grand-pappy’s firewall, which means it dramatically complicates firewall policy management. But network security is going through a period of consolidation. Traditionally separate functions such as IPS and web filtering are making their way onto a consolidated platform that we call the Perimeter Security Gateway (PSG). Yup, add more functions to the device and you increase policy complexity – making it all the more important to maintain solid operational discipline when managing these devices. In any sizable organization the PSG rule base will be difficult to manage manually. Automation is critical to improving the speed, accuracy, and effectiveness of these devices. We are happy to get back to our network security roots and document our research on the essentials of managing firewalls. This research is relevant both to classical firewalls and PSGs. In Firewall Management Essentials we will cover the major areas of managing your installed base of firewalls. 
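The Facebook example above, allow walls but block chat, boils down to keying decisions on application and sub-function instead of port and protocol. A hypothetical sketch follows; the application names and policy structure are illustrative, not any NGFW’s actual configuration.

```python
# NGFW-style application policy: (application, feature) -> action.
# "*" acts as a wildcard feature for a whole application.
policy = {
    ("facebook", "wall"): "allow",
    ("facebook", "chat"): "deny",
    ("web-browsing", "*"): "allow",
}

def decide(app, feature):
    """Most specific match first, then app-level wildcard, else default-deny."""
    for key in ((app, feature), (app, "*"), ("*", "*")):
        if key in policy:
            return policy[key]
    return "deny"   # anything unclassified is denied

print(decide("facebook", "wall"))  # allow
print(decide("facebook", "chat"))  # deny
print(decide("dropbox", "sync"))   # deny (unclassified)
```

Even this toy version shows why NGFW rule bases are harder to manage: every application gains its own sub-function vocabulary that the policy has to cover.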
Here is our preliminary plan for posts in the series:

Automating Change: Firewall management comes down to effectively managing exceptions in a way that provides proper authorization for each change, evaluating each one to ensure security is maintained (including ensuring new attack paths are not introduced), auditing all changes, and rolling back in the event of a problem.
Optimizing Performance: Once the change management process is in place, the next step is to keep the rule set secure and optimized. We will discuss defining the network topology and identifying ingress and egress points to help prioritize rule sets, and point out potential weaknesses in security posture.
Managing Access: With a strong change control process and optimized

Share:
Read Post

Third Time is the Charm

Nothing makes my day like getting to argue with my colleagues here at Securosis. Sadly today isn’t that day. The only thing that I love almost as much is when Mike and Rich think they are arguing with each other, but I get to point out that they are actually saying the same things, but from different angles, and therefore with different words. The fact is that both of them highlight a very important point: for security groups to be effective, they need to be much more engaged with the business. Security is in fact always reactive, in the sense that security folks cannot do anything more than influence until the business makes decisions about how things will be done. But there is ‘reactive’ in the sense that the business makes a choice and security deals with it, and then there’s ‘reactive’ in the sense of security teams which are completely disengaged from the business – they only know about stuff when the new app doesn’t work because the firewall rules are wrong, or they get a request for a Qualys scan a couple hours before a new server must be live. But back to being engaged with the business. That doesn’t mean sitting in the C suite (though that can be nice) – it means finding out which people and projects in your organization will impact your duties as a security practitioner, getting to know them, and convincing those folks to keep you in the loop. Demonstrate that you are adding value by being involved earlier – perhaps by identifying potential roadblocks and workarounds early so they can be funded, designed around, etc. Or perhaps by staying abreast of forthcoming changes to legislation/regulations and working with legal/audit to make sure your organization is ready before the changes go into effect. These are just a couple examples of ways to show that security can absolutely be proactive rather than merely reactive, and it also proves that I lied above. Today is totally that day: O frabjous day! Callooh! Callay! I get to argue with both Rich and Mike. WIN! 

Security is Reactive. Learn to Love It.

Few things make me happier than getting to publicly disagree with one of my coworkers. Earlier today Mike suggested that security is too reactive and tactical to succeed. Then we hear the usual platitudes about treating security as a risk management function, better metrics, blah blah blah. Not that there is anything wrong with all that, but it needs to be discussed in the context of the fundamental nature of security – which is an ongoing state of disruptive innovation. Security is reactive by nature; the moment it isn’t is the moment you really lose. The question is how to give yourself the most time to plan your reactions, and what kind of infrastructure you can put in place to reduce the cost and complexity of any course corrections.

Business innovation tends to result from three primary drivers:

  • Competitive response. A competitor does something; you need to respond to stay in the game.
  • Competitive advantage. You do something to gain an edge, and force others to respond.
  • Efficiency/effectiveness. You streamline and improve your processes to reduce overhead.

But security only shares one of those drivers. Security innovation is dominated by externalities:

  • Business innovation. The business does something new, so you need to respond.
  • Attacker innovation. Attackers develop new techniques, so you need to respond.
  • Internal efficiency/effectiveness. “Doing security better”.

Two of the three forces on a business are internal, with only competitive response driven by an outside actor. Security flips that. We can never fully predict what the business or attackers will do down the road, so we need to scramble to react. That’s why we can never seem to skate ahead of the puck. You can’t skate ahead of a quantum field state that will eventually collapse into a single wave function – there are too many options to choose one. The trick, as Chris Hoff and I have been talking about at RSA for about six years now, is to take a strategic approach to prediction.
This is why even a risk-based security approach is, in reality, just another tactical piece. The strategic piece is building a methodology to inform your working assumptions about what the future holds, and building your program to respond quickly once a direction is set. It isn’t magic, and some of you do this intuitively. You stay up to date on the latest research, both in and out of security. You track both new attack techniques and general technology trends. You engage heavily with the business to understand their strategic direction before they make the tactical technology choices you later have to secure. A lot of this looks almost identical to Mike’s recommendations, but the reason one organization after another fails with its risk-based, metrics-driven, incident response programs is that they try to run them in a bubble, and assume situations remain static on at least an annual basis. If you build your program assuming everything will change underneath you, you will be in much better shape. And I absolutely believe this is a realistic and pragmatic goal that others have achieved.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
  • Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.