Firestarter: On “Architectural Limbo”

Yesterday Lori MacVittie posted another thoughtful article, Cloud Computing: Architectural Limbo, where she highlights perceived problems with the NIST description. I usually agree with her cloud posts, but this is a rare case where I think she is wrong. From her article:

Consider, for a moment, the stark reality of a realm with no real network boundaries offered by AWS in "Building three-tier architectures with security groups": "Unlike with traditional on-premise physical deployments, AWS's virtualization of compute, storage, and network elements requires that you think differently about how to build network segregation into your projects. There are no distinct physical networks, no VLANs, and no DMZs." The post goes on to describe the means by which a secure, traditional three-tiered application architecture can be deployed using AWS security groups. This architecture is a fine approximation of the traditional, data center deployed architecture based on the available abstractions offered by AWS.

Note the use of the term "approximation". That's important, because it's indicative of one of the core issues with cloud today: the inability to replicate architecture. You might be thinking that's okay as long as you can replicate it using available services.

Actually, it is okay, because it does work. AWS does provide logical and physical barriers, and while they are presented in a way that only mimics traditional networks, they do so to ease understanding through familiar concepts. Being different does not make it less secure. And I've lost count of the number of organizations that have successfully deployed this (admittedly basic) architecture and are running it in production environments. It works so well that we even teach it in the CSA Certificate of Cloud Security Knowledge (CCSK) classes we run.

One of the great joys of running in an IaaS environment is its bare simplicity. You don't have the crutches of a vast array of technology to rely on. Instead you have to think about your real needs, rather than adding huge amounts of complexity because that's how we do it in-house today. It's a prime opportunity to start over and avoid repeating the sins of the past.

The problem is that in order to fully deploy in the cloud you have to deploy an architecture that will be different from the one you currently maintain in the data center. What that ultimately entails is a separate and environment-specific set of processes as well, which could quickly become operationally expensive. This is especially true when compliance enters the picture, and even more so when the regulations in question focus on process (think SOX) and not just technological implementation.

While it is true that different network architectures and security requirements often cause differences in cloud architectures, that doesn't necessarily mean the applications residing on those architectures will be fundamentally different. In my experience most operations teams have little or no knowledge of how the underlying network architectures are laid out. It's simply irrelevant, so long as the necessary ports are open. And this is the model offered by cloud providers like AWS. As for separate and environment-specific sets of processes, this is just a red herring. Network and especially security teams already have to do this, especially in larger organizations. You could just as well make this argument about every application deployment, regardless of locale.
This is just part of life, and any good IT shop should be familiar with it.
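For readers who want to see what the security group pattern actually looks like, here is a rough sketch of the three-tier design – a hypothetical illustration using the current boto3 SDK, where the ports, group names, and CIDR ranges are my assumptions rather than anything from the AWS post:

```python
# Sketch of three-tier segregation with security groups instead of
# VLANs/DMZs: each tier accepts traffic only from the tier in front of it.
import boto3

ec2 = boto3.client("ec2")

web = ec2.create_security_group(GroupName="web-tier", Description="public web tier")
app = ec2.create_security_group(GroupName="app-tier", Description="internal app tier")
db = ec2.create_security_group(GroupName="db-tier", Description="internal db tier")

# Web tier: HTTP/HTTPS from anywhere.
ec2.authorize_security_group_ingress(
    GroupId=web["GroupId"],
    IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": p, "ToPort": p,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}
        for p in (80, 443)
    ],
)

# App tier: accepts connections only from members of the web tier group.
ec2.authorize_security_group_ingress(
    GroupId=app["GroupId"],
    IpPermissions=[{"IpProtocol": "tcp", "FromPort": 8080, "ToPort": 8080,
                    "UserIdGroupPairs": [{"GroupId": web["GroupId"]}]}],
)

# Database tier: accepts connections only from the app tier group.
ec2.authorize_security_group_ingress(
    GroupId=db["GroupId"],
    IpPermissions=[{"IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
                    "UserIdGroupPairs": [{"GroupId": app["GroupId"]}]}],
)
```

The group-to-group references are the point: the database tier never exposes a port to the Internet, or even to the web tier – the same segregation a DMZ design provides, expressed as policy instead of wiring.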


New Series: Tokenization Guidance

I have wanted to write this post since the middle of August. Every time I started writing, another phone call came in from a merchant, payment processor, technology vendor, or someone loosely associated with a Payment Card Industry (PCI) task force or steering committee (SIG). And every conversation yielded some new sliver of information that changed what I wanted to say, or implied some research work had already been conducted that was far more interesting and useful than anything being provided to the public. This in turn prompted more calls, new conversations, and more digging – and, like a good mystery novel, kept me iteratively peeling back another layer of the onion. I've finally reached a point where I believe I have enough of the story to understand what was published and why it's not what they should have published. But enough of the preamble: let's back up and dive into the subject at hand.

On August 12, 2011, the PCI task force driving the study of tokenization published an "Information Supplement" called the PCI DSS Tokenization Guidelines. More commonly known as the 'Tokenization Guidance' document, it discusses the dos and don'ts of using token surrogates for credit card data. The only problem is that this document is sorely lacking in actual guidance. Even the section on "Maximizing PCI DSS Scope Reduction" is a collection of broad generalizations on security, rather than practical advice. After spending the better part of the last two weeks with this wishy-washy paper, I think a better title would be "Quasi-acceptance of Tokenization Without Guidance". And all my conversations indicate that this opinion is universally held outside the PCI Council. "We read the guidance but we don't know what falls out of scope!" is the universal merchant response to the tokenization information supplement. "Where are the audit guidelines?" is the second most common. The tokenization guidelines provide an overview of the elements of a tokenization system, along with the promise of reduced compliance requirements, but they don't provide a roadmap for getting there.

Let's make one thing very clear right from the start: there is very wide interest in tokenization because it promises better security, lower risk, and – potentially – significant cost reductions for compliance efforts. Merchants want to reduce the work they must do to comply with the PCI requirements – which is exactly why they are interested in tokenization technologies. Security and lower risk are secondary benefits. But without a concrete idea of the actual cost reduction – or worse, an understanding of how they will be audited once tokenization is deployed – they are dragging their feet on adoption. There is no good reason to omit a basic cookbook for scope reduction when using tokenization.

I am going to take the guesswork out of it: provide real guidance for evaluating tokenization, and clarify how to benefit from it. This will be in the form of concrete, actionable steps for merchants deploying tokenization, with checklists for auditors reviewing tokenization systems. I'll fill in the gaps from the PCI supplement, poke at the topics they decided were politically unpalatable to discuss, and specify what you can reasonably omit from the scope of your assessment. Given an overview of what you can reasonably consider out of scope, I'll advise you on how to approach compliance, and follow up with some checklists to make it easier.
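Since everything that follows turns on what a token actually is, here is a minimal sketch of the core mechanism – my own illustration, not any vendor's design or anything from the PCI supplement. A random surrogate stands in for the PAN, and the real value lives only in a protected vault:

```python
import secrets

class TokenVault:
    """Toy token vault: random surrogates map back to PANs. A real system
    encrypts the data store, enforces access controls, and gets audited --
    this only shows why a token is useless to a thief without the vault."""

    def __init__(self):
        self._vault = {}

    def tokenize(self, pan: str) -> str:
        # Random core with the last four digits preserved -- a common
        # format choice so receipts and customer lookups still work.
        token = secrets.token_hex(6) + pan[-4:]
        self._vault[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        # Only systems with vault access ever see the real PAN.
        return self._vault[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")  # standard test PAN
assert vault.detokenize(token) == "4111111111111111"
```

Systems that store only the token, with no access to the vault, are the ones merchants hope to pull out of PCI scope – and that question is exactly what the supplement fails to answer crisply.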
This is more than I can cover in a simple post, so I will cover these topics over the next two weeks, ultimately wrapping them into my own tokenization guidance white paper. The series will have four parts:

  • Key points from the supplement: Outline what the PCI information supplement on tokenization means, and discuss the important aspects of the technology for users to focus on. We'll discuss what is missing from the guidance and what does – and does not – help reduce PCI assessment effort.
  • Guidance for merchants: How tokenization changes PCI compliance. We'll discuss critical areas of concern when deciding to adopt a tokenization solution, with guidance on reducing audit scope. This will encompass areas including implementation tradeoffs, integration, rollout, and vendor lock-in.
  • The audit process: How tokenization impacts the auditing process, how to work with your assessor to establish testing criteria, and where to look to reduce the scope of your audit. We'll provide guidance for working with QSAs and on self-assessment.
  • Checklists: The guidance describes major components of the technology but lacks operational guidelines for assessors or merchants. As with the original PCI-DSS documents, I'll include an audit checklist to supplement the PCI standard, covering what should be considered out of scope and where you can shave time from your audit process.

I will present information I feel should have been included in the tokenization supplement. And I will advise against the use of some technologies and deployment models that frankly should not have been lumped into the supplement, because they don't simplify operations or reduce risk the way any merchant should be looking for. I am willing to bet that some of my recommendations will make many interested stakeholders quite angry. I accept this as unavoidable – my guidance is geared toward making life easier for merchants who buy tokenization solutions, not toward avoiding conflict with vendor products. No technology vendor or payment provider ever endorses guidance that negatively impacts their sales, so I expect blowback. As always, if you think some of my recommendations are BS, I encourage you to comment. We are open to criticism and alternate viewpoints, and we always factor relevant comments into our final research. I do ask vendors to identify themselves.

I will also assume some prior knowledge of tokenization and PCI-DSS. There is a ton of research on the Securosis blog and in the Research Library on these subjects. If you are not fully up to speed on tokenization systems, or are interested in learning more about tokenization in general, I suggest you review that material.


Paper Released: Fact-Based Network Security: Metrics and the Pursuit of Prioritization

What should you do right now? That's one of the toughest questions for any security professional to answer. The list is endless, the priorities clear as mud, the risk of compromise ever present. But doing nothing is never the answer. We have been working with practitioners to answer that question for years, and we finally got around to documenting some of our approaches and concepts. That's what "Fact-Based Network Security: Metrics and the Pursuit of Prioritization" is all about. We spend some time defining 'risk', trying to understand the metrics that drive decisions, working to make the process a systematic way to both collect data and make those decisions, and understanding the compliance aspects of the process. Finally, we go through a simple scenario that shows the approach in practice.

Check out the landing page for the report if you want a better feel for the content, or download the report directly: Fact-Based Network Security: Metrics and the Pursuit of Prioritization (PDF). We would like to thank RedSeal Networks for sponsoring this research.

Finally, if you are looking to check out the blog posts (with comments), here is an index:

  • Introduction
  • Defining Risk
  • Outcomes and Operational Data
  • Operationalizing the Facts
  • Compliance Benefits
  • In Action


Friday Summary: Goodbye to the Crazy One

Yesterday afternoon I decided to head out for my first run since my August health scare (which turned out to be pretty much nothing). I grabbed my iPhone, and as I was putting it into my armband case a news alert popped up:

Steve Jobs is dead

I stopped. The world paused for a moment. Standing in front of my desk, I turned and opened up a web browser to read the press release from the Apple board. It was true, and it wasn't a surprise.

Like nearly all of you reading this, I never met Steve Jobs. Unlike most of you, I was fortunate enough to attend his last Macworld keynote and experience the reality distortion field myself. I walked in carrying a BlackBerry. I went home with an iPhone. Call the RDF what you will, but I never regretted that decision. I have spoken with other Apple executives, but never the man himself.

My love of technology started with Apple and, to a lesser degree, Commodore. That's when I started hacking; and by hacking I mean exploring. But I never owned an Apple. I didn't buy my first Mac until 2005; a victim of the halo effect from the beauty of my first iPod (a third generation model). Today there are 6 or so Macs in my house, a couple iPads, a few iPhones, and various other products. Including, still, that third generation iPod I can't seem to let go of.

It doesn't matter whether you love or hate Apple – everything we do in technology today is influenced by the work of the teams Steve led. Every computer, every modern phone, and every music player is influenced more by Apple designs than by any other single source. Even the CG animated cartoons my daughter loves so much.

I used to criticize Apple. Too expensive. Too constraining. But over the course of several years I have found my own beliefs aligning with the "rules" Jobs defined. People won't know what they want until you show them. Don't let customers derail your vision, but be ready to move when they're right. Design and usability are every bit as important as features – if either fails, the product fails. Remove as much as possible.

Imagine if we had a security leader as visionary as Jobs. We have many who might think they are, but no one comes close. Can you imagine Steve in a UI design meeting for nearly any security product on the market? His death hit me harder than I expected. Because not only do we not have a Steve Jobs in security, we no longer have one at all. The entire technology world just lost the one person climbing the hills in front of us, breaking the trail, and turning back to wave and shout "follow me". Now we're on our own.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian & Mel Shakir on SIEM Replacement.
  • Rich is giving a webcast on cloud security next week. This is with Dome9, but all the content coming from me is objective and influence-free.

Favorite Securosis Posts

  • Adrian Lane: The iPad-Enterprise-Data Security Spectrum. Face it, the iPad is so compelling that it is forcing its way into the enterprise – Rich offers good tips for facing the inevitable.
  • David Mortman: Force Attacker Perfection.
  • Mike Rothman: Force Attacker Perfection – Rich is right. We can't stop them, but we should make them work for it.
  • Rich: Need a CISO cert? Got $200? Get one while they're hot….

Other Securosis Posts

  • When to Use Amazon S3 Server Side Encryption.
  • Incite 10/5/2011: Time waits for no one.
  • Nitro & Q1: SIEM/Log Management vendors dropping right and left.
  • Introducing the Securosis Nexus.
  • Incite 9/28/2011: Renewal.
  • Comment on the Next Version of the Cloud Security Alliance Guidance.
Favorite Outside Posts

  • Mike Rothman: Text of Steve Jobs' Commencement Address (2005). He has passed on, but Steve Jobs' teachings will stick with me forever. I look at this speech every couple of months. It puts everything (life, job, happiness, purpose, etc.) into context for me. Everything.
  • David Mortman: Application-Layer DDoS Attacks Are Growing: Three to Watch Out For.
  • Adrian Lane: The Web won't be safe, let alone secure, unless we break it. Topics Jeremiah has covered before, but a very nice overview of the situation. Browsers, like many other platforms, have idiotic 'features' that make security impossible, and it's time to throw some of the garbage out.
  • Rich: The Vendor Beating. I've been in similar meetings as an analyst. Nothing beats the blame game.
  • Dave Lewis: Some SCADA Problems Too Big to Call Bugs. Yeah… that will fix it.

Top News and Posts

  • Amex XSS vuln. But it's the Twitter dialog that's worth reading. This is just so typical of a McBank response to any inquiry – they can only follow the script. Awesome.
  • Tool to crack SSL.
  • Hacker nabbed after topping up three EasyCards.
  • Using ICMP Reverse Shell to Remotely Control a Host.
  • Privacy and security implications of Amazon's new "Silk" browser.
  • Microsoft Pushes Emergency Update After Security Products Call Chrome "Banking Trojan".
  • Cisco patches the other iOS.

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week's best comment goes to Bill, in response to Nitro & Q1: SIEM/Log Management vendors dropping right and left.

Excellent analysis. Until recently, SIEM vendors were a kind of "Switzerland" with respect to third party event sources, i.e., treating them all the same for the most part. I think customers will become concerned if the big three manufacturers start favoring their own complementary security products. What do you think?


The iPad-Enterprise-Data Security Spectrum

As I mentioned in the Incite yesterday, Symantec announced DLP support for the iPad. I have been meaning to talk about this for a while, as various products have been popping onto the market, and now seems like the time. Note: I'm focusing on the iPad because that's what most people are interested in, but much of what I'm going to talk about also applies to the iPhone.

The iPad is an extremely secure device; odds are it is much more secure than any laptop or desktop you let your users on. The main reason is that it is locked down so tightly, with a combination of hardware and software controls. This is also a challenge for security, because you can't run any background tasks. For the record, I really like this approach – it eliminates the need for things like antivirus in the first place. For data security, though, it means we are limited in what we can do. No DLP running in the background, for example.

To fill this gap, a spectrum of approaches and tools has hit the market. I like to arrange them from least control to most. Most control doesn't mean best – which of these to use depends heavily on the needs of both your organization and your users. As a baseline I assume you allow access to corporate assets in some way using the device; I'm skipping the "do nothing" and "don't let them in at all" options. Here we go:

  • ActiveSync and device profiles. You allow users access to corporate email, but enforce a basic device profile to require a passcode/password and enable remote wiping if the device is lost. This enables basic encryption of the entire device (easier to crack), with data protection for email attachments.
  • Server-side DLP. You create DLP policies that restrict the email/files going to an otherwise approved device. Websense offers this – I'm not sure who else. (A toy sketch of this idea appears at the end of this post.)
  • Walled-garden applications. These are apps like Good for Enterprise, the new Zenprise SharePoint client for iPad, Watchdox, and GroupLogic mobilEcho. All access to documents is purely through the approved app, and the app can restrict opening or usage of that document elsewhere on the device. Remember, if you don't totally wall the content off, any standard document format can be opened in another app – thus losing any security controls. These usually offer viewing but not editing, because that would require building in a complete editor. There is very broad variation among these apps.
  • Fully-managed device with always-on VPN. You use mobile device management (MDM) to enforce an always-on VPN connection and block unmanaged network traffic. Then you use DLP on your network to manage traffic and content. This is how Symantec works. They use an app on the device to enforce the VPN, and made changes on the DLP gateway to improve the user experience with the device. For example, the iPad doesn't handle failed email connections well (it tends to stall), so they had to play games to block protected content from going to Gmail without ruining the device experience.

Each of these models has its own advantages, and there are different levels of control within each tier. But these should give you a good idea of the options. Someday I might write a paper with more detail, but hopefully this is enough for now.
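To make the server-side DLP tier a bit more concrete, here is a toy sketch of the kind of gateway decision involved – my own illustration, not Websense's or Symantec's actual logic, and the pattern matching is deliberately crude:

```python
import re

# Very loose card-number pattern: 13-16 digits, optionally separated by
# spaces or dashes. Real DLP engines add Luhn checks, context, and more.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){12,15}\d\b")

def release_to_device(content: str, device_is_managed: bool) -> bool:
    """Toy policy: managed (walled-garden/MDM) devices get everything;
    content that looks like card data is held back from other devices."""
    if device_is_managed:
        return True
    return CARD_PATTERN.search(content) is None

assert release_to_device("Q3 roadmap attached", device_is_managed=False)
assert not release_to_device("PAN: 4111 1111 1111 1111", device_is_managed=False)
```

The real engineering problem – as the Symantec example above shows – is enforcing this without stalling the iPad's mail client, not the matching itself.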


When to Use Amazon S3 Server Side Encryption

This week Amazon announced that S3 now supports server-side encryption. You can encrypt S3 items through either the API or the web management console, or you can require encryption for S3 buckets. A few details:

  • They manage the keys. This is fully transparent AES-256 encryption, and you only manage the access controls.
  • Encryption is at the object level, not the bucket level.
  • You can set a policy to require any uploads into a bucket to be encrypted.
  • You can manage it via the API or the AWS Management Console.

It's interesting, but from a security perspective it only protects you from one thing: hard drives lost or stolen from Amazon. Going back to my Three Laws of Data Encryption, you would use this if you are worried about lost/stolen drives, or if someone says you have to encrypt. It doesn't protect against hacking attacks or anything like that. Client-side encryption is more important for improving security. This isn't really much of a security play, but it's a big assurance/compliance play.

Since I like bullet lists and clear advice, you should use S3 server-side encryption:

  • If you are required to encrypt data at rest, and said requirement does not also require you to segregate keys from Amazon.
  • If you want to market that you are encrypting the data, but still don't have a requirement to lock Amazon out.

That's about it. If you are worried about drive loss/theft it's probably due to a compliance or disclosure requirement, so I recommend client-side encryption instead, for its greater security benefit. This is a checkbox. Sometimes you need checkboxes, but if security is that important you have other options, which should be higher priority.
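For the curious, here is roughly what turning it on looks like – a sketch using the current boto3 SDK, with placeholder bucket and key names:

```python
import json
import boto3

s3 = boto3.client("s3")

# Encrypt a single object at upload time; Amazon manages the AES-256 keys.
s3.put_object(
    Bucket="example-bucket",
    Key="reports/q3.csv",
    Body=b"nothing sensitive here",
    ServerSideEncryption="AES256",
)

# The encryption status comes back in the object metadata.
head = s3.head_object(Bucket="example-bucket", Key="reports/q3.csv")
assert head["ServerSideEncryption"] == "AES256"

# Require encryption bucket-wide by denying any PUT without the SSE header.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnencryptedUploads",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::example-bucket/*",
        "Condition": {
            "StringNotEquals": {"s3:x-amz-server-side-encryption": "AES256"}
        },
    }],
}
s3.put_bucket_policy(Bucket="example-bucket", Policy=json.dumps(policy))
```

Note that nothing here touches a key – which is exactly the point, for better and worse.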


Incite 10/5/2011: Time waits for no one

Time is a funny thing. You don't really think about it until it's running out. Deadlines. Mortality. It's all the same. Time just sneaks up on you, and then it's gone. Yeah, I'm a little nostalgic this week because my birthday is Friday. And yes, there is some fodder for you social engineers out there. The kids get more excited about my birthday than I do. They want to know about cakes, parties, and the like. Personally, I'd take a day to sleep in, but who has time for that? There are things to do and places to be.

We at Securosis hit a milestone this week, unveiling the Securosis Nexus on Monday night. Honestly, I'm both exhilarated and terrified. We (especially Rich) have spent many hours conceiving, building, and populating our new online research 'product'. I joke that building the Nexus took twice as long and cost 3 times as much as we expected. I'm probably understating it. But all of us have built software before, so we knew what to expect. What's a little different this time is that we funded the project out of cash flow. So every check we wrote to our developers and designers could have been used to pay my mortgage. That makes the investment very real.

Rich, Adrian, and I aren't really gamblers. We all go to Vegas a few times a year for conferences, and you'll find us hanging out at a bar – not the tables. We live conservative lifestyles (even if Adrian drives a Corvette). On the other hand, we're making a huge bet that folks who don't have the word Security in their titles will pay for impactful, actionable security research. And that even some folks who do have Security in their titles will find enough value to make a modest investment. But what if we are wrong? It's not like anyone has ever successfully delivered a research product to this market segment. Are we nuts?

Compound that with the fact that we have built a pretty good business. We're very busy writing blog series, pontificating, and doing strategy work, all of which I love. So why take the risk? Why make the investment? Why not just sit on our hands, keep pontificating, and enjoy the lifestyle? I'll tell you why. Because time waits for no one. Rich and I decided back in 2006 that this market opportunity was real, and we believe it. Just because no one has tried it before doesn't mean we are wrong. We want to build leverage into our business and be bigger than just Rich, Mike, and Adrian showing up and waving our hands. Ultimately we want to make a difference, and we believe the Nexus provides a great opportunity to help folks who can't afford Big IT research.

But we aren't kidding ourselves – it's scary. Fear is no excuse, and it won't hold us back. The train has left the station, and now we will see where it takes us. The only thing we can't get is more time, so we plan to make the most of it. Check out the Nexus. Sign up for the beta. Help us make it great.

–Mike

Photo credits: "Time" originally uploaded by Jari Schroderus


Nitro & Q1: SIEM/Log Management vendors dropping right and left

It must be SIEM acquisition Tuesday. McAfee hit first, announcing their expected deal for Nitro Security. But then IBM surprised pretty much everyone by acquiring Q1 Labs. Don't blink, or you may miss another 2-3 SIEM/Log Management vendor acquisitions. Obviously we have been talking about consolidation in the SIEM/Log Management space for quite a while – there are about 20 vendors left now – but it's strange that deals involving the two most significant independent vendors happened on the same day. Coincidence? Our pal and contributor James Arlen doesn't believe in it, and neither do we…

Hot Tamales

First let's discuss why these SIEM/LM players are such hot commodities. As many of us have been whining, compliance drives security nowadays, and log management is a must-have technology for compliance. So almost everyone has some kind of log aggregation capability to cover the basic requirements. Most customers are thinking about enterprise-class options, which is driving business in the SIEM/Log Management space, as they want to do stuff with the vast amounts of data they collect. At the same time, the products are maturing. They aren't easy to use, but they are getting better, and vendors' ability to support enterprise-class requirements has improved, especially at Q1 and Nitro. That's it.

Also consider that security management was always destined to become part of the IT management and operations stack. That's what drove the EMC/RSA/Network Intelligence and HP/ArcSight deals of yore, and it's driving today's deals. In simplest terms, SIEM/LM was never destined to remain an independent technology over the long term, so these deals are just the logical conclusion of a 3-4 year consolidation.

Why Buy?

Let's look at the buyer profiles – why did both McAfee and IBM buy the leading independent players in this market? In McAfee's case the answer is simple. They had NOTHING to address this client requirement. They needed something – not having either LM or SIEM was forcing their customers to buy other solutions, such as ArcSight and RSA – which is unacceptable if your goal is to own the entire security stack. McAfee had to buy something, and frankly they should have done this a long time ago.

IBM, on the other hand, had a number of SIEM-type platforms, most buried within the Tivoli group. But none were competitive, and I can't tell you the last time I heard of an end-user organization taking an IBM SIEM offering seriously. They do a bit of security management as a managed service (using the former ISS platform), but that wasn't an answer. The real kicker, and what forced IBM's hand, was clearly HP. HP's ownership of ArcSight as the cornerstone of its enterprise security strategy put IBM at a clear disadvantage. Eventually not having a competing offering would have hurt them. I'm sure they did the math and decided it was easier to buy Q1 now (even for a pretty big number) than to wait until Q1 went public. Clearly IBM was going to pay to get into this market, so they decided to pay now.

Why Sell?

You always have to wonder why companies with clear momentum in a growing market sell. But don't worry about it too much – I suspect it just came down to economics. Every company has a price, and clearly – given how long it took McAfee to consummate the Nitro deal – they finally reached it.
This is actually a great outcome for Nitro, given that they were a couple of years behind Q1 on pretty much every enterprise front (revenue/bookings, channel, enterprise deployment), so getting taken out was a better option. McAfee was the likely candidate in light of their successful coordination as part of the SIA (Security Innovation Alliance), as well as Nitro's more reasonable price tag. McAfee has never really broken the bank for technology acquisitions since DeWalt came to power. Based on technology, sales model, and price, Nitro was a better fit for McAfee.

Likewise, Q1 is the best fit for IBM. IBM is a huge company, and when they buy, they need to move the needle. Or at least have a chance to move the needle. Q1 was clearly on a path to go public, with speculation that the IPO would happen in early 2012. But every company goes into a deal with stars in their eyes, and Q1 is no different. IBM is giving Q1 CEO Brendan Hannigan the keys to a new combined security group, and the hope is that IBM will end up with a big security group like HP's – which would dramatically increase Q1's impact on the market. Speaking of HP, we really cannot overstate the impact of the HP/ARST deal on this week's events. From everything we've heard, after a little integration heartburn, HP is now driving ARST into deals that none of the other players are seeing. IBM gets a similar benefit with Q1. Clearly Q1 needs IBM's reach to accelerate their growth path and impact. Will it happen? Who knows? But IBM gives the Q1 team their best chance.

What about the customers?

As with every deal, customers will suffer. The question is how much and for how long. All things considered, HP actually did a decent job with the ARST integration, so if IBM leaves Q1 alone, they have a chance. But there will be disruption – there always is. Q1 is now selling to IBM's field sales force, and less directly to end users. It will take some time for IBM to figure out what they have, and the Q1 team needs to focus on teaching them – which means something will fall through the cracks. If you are a Q1 customer and your implementation is working well, you should see little impact. If your implementation isn't working well, start pushing for additional services to fix it. That will push Q1 to train IBM's services teams, which is a good thing. McAfee historically has bought technology and just plugged it into their channel. SIEM is not AV, nor is it vulnerability management,


Introducing the Securosis Nexus

Rich, Adrian, and I have been hinting about our sekret plans to launch a new research 'product' for a while. Today we are finally ready to let you guys in on the scoop. We are very excited about this next step in the evolution of Securosis. We call it the Securosis Nexus, and it's an online environment built to help security professionals get their jobs done better and faster. We leverage our blog and white paper content (since that's kind of what we do), but there are a bunch of community features that make this more than just a file cabinet of our stuff.

What problem are we trying to solve? There is no lack of security content out there, but figuring out what's important is the challenge. Most security folks spend far too much time wading through irrelevant content, as opposed to doing stuff. We have built the Nexus user experience to accelerate the process of figuring out what you need to know to achieve project success.

Who is our target? First, the folks who probably don't know what they don't know about security. Unfortunately there are a lot of these folks – struggling every day because they don't eat, sleep, and breathe this stuff like we do. Our working theory is that the vast majority of people working in security today don't have security in their title, or even a security department or CISO in their company. We want to make sure those folks have enough information to be educated buyers and implementers of whatever product/project they are tackling, without having to spend 10 years taking classes and falling asleep in conferences. The Nexus is also for people who are working their behinds off every day, but aren't experts in every little area. None of us knows everything (just ask Rich about "IAM" if you want to see a blank stare), and we all need a little help from day to day. I have been describing it as a continuum: most folks know perhaps 20% of what they need to know to do something. We believe the Nexus can get folks to 60-70% of what they need to know, with a much better chance to accomplish their tasks and do their jobs.

There are two main aspects of the Nexus:

  • Pragmatic Research: We tend to write 20-30 page papers, each providing a deep dive into a specific security topic. Those aren't for the Nexus – our intended users don't have time to read 30 pages about anything. They don't get any awards for knowing everything about a topic, so the focus is on actionable information, not fluff or overly detailed description. The content is very modular and easy to navigate. Short descriptions, video, audio, checklists, and templates will be the bulk of the material on any specific topic – more about what needs to be done than why. There are a bunch of ways to view the content, and topics of interest can be stored in a library. All the content can be rated as well, so over time we'll know what works and what doesn't, and we will make it better.
  • Ask an Analyst: We also know not every situation fits into a clean bucket of checklists and templates, so we have included a way to ask direct questions of an analyst and get direct answers. Privately and confidentially. The interface is built to make it easy to find both answers to your specific questions, and other public answers that may be helpful in solving your problem.

We believe the Nexus will provide excellent value for expert practitioners and departments of larger enterprises as well, though likely more through the Nexus community features. And best of all, we built the Nexus with economics in mind.
Other research firms charge tens of thousands of dollars to ask them questions. For the Nexus, think hundreds rather than tens of thousands. Check out the Nexus site to see more features and view a video demo Rich put together. It'll give you a good feel for the user experience. It looks great, if I do say so myself.

We will launch the Nexus later this year with a full set of content around PCI and associated technologies. Over time we will build modules, templates, checklists, videos, and audio content for our entire coverage universe. We are just about ready to open the beta to a limited set of folks, and we'll invite more over the next couple weeks as we continue building out the content. You can sign up for the beta on the Nexus site. We'll talk more about the Nexus in the coming weeks as we add more content, flesh out the functionality, and launch to the public. In the meantime we're interested in your feedback on what you can see in the video, so please check it out and let us know.


Force Attacker Perfection

I will fully admit that I sometimes find myself parroting standard industry tropes. For example, I can't recall how many times I've said in presentations and interviews: "The defender needs to be perfect all the time. The attacker only needs to succeed once." And yes, it's totally true. But we spend so much time harping on it that we forget how we can turn that same dynamic to our advantage.

If all the attacker cares about is getting in once, that's true. If we only focus on stopping that first attack, it's still true. But what if we shift our goal to detection and containment? Then we open up some opportunities. As defenders, the more barriers and monitors we put in place, the more we demand perfection from attackers. Look at all those great heist movies like Ocean's 11 – the thieves have to pass all sorts of hurdles on the way in, while inside, and on the way out to get away with the loot. We can do the same thing with compartmentalization and extensive alert-based monitoring. More monitored internal barriers are more things an attacker needs to slip past to win. Technically it's defense in depth, but we all know that term has turned into an excuse to buy more useless crap, mostly on the perimeter, instead of increasing internal barriers.

I am not saying it's easy. Especially since you need alert-based monitors so you aren't looking at everything by hand. And let's be honest – although a SIEM is supposed to fill this role (at least the alerting part), almost no one can get SIEM to work that way without spending more than they wasted on their 7-year ERP project. But I'm an analyst, so I get to spout general philosophical stuff from time to time in hopes of inspiring new ideas. (Or annoying you with my mendacity.)

Stop wishing for new black boxes. Just drop more barriers, with more monitoring, creating more places for attackers to trip up.
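To put rough numbers on that intuition, here is the back-of-envelope arithmetic. The detection rates are invented for illustration; the shape of the curve is the argument:

```python
# If each monitored internal barrier independently catches an intruder
# with probability p, the chance of slipping past all n barriers is
# (1 - p) ** n -- it collapses fast as barriers are added.
def evasion_probability(p_detect: float, barriers: int) -> float:
    return (1 - p_detect) ** barriers

for n in (1, 3, 5, 10):
    print(f"{n:2d} barriers -> {evasion_probability(0.30, n):.3f} chance of a clean run")
# One barrier at a modest 30% detection rate is easy to slip (0.700),
# but ten of them leave the attacker a 0.028 chance: near-perfection required.
```

Even mediocre monitors compound, which is the asymmetry finally working for us.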


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context or contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.