The Securosis Nexus (and Beta Test) FAQ

We’ve been getting some questions about the beta test, so I decided to put an FAQ together, which we will also post within the system. If you have any other questions, please feel free to ask:

General

What is the Securosis Nexus?
The Securosis Nexus is an online environment to help you get your job done better and faster. It provides pragmatic research on security topics that tells you exactly what you need to know, backed with industry-leading expert advice to answer your questions. The Nexus was designed to be fast and easy to use, and to get you the information you need as quickly as possible.

Who is it for?
The Nexus is for anyone with security responsibilities at their organization. We know that most of the people who work on security don’t have ‘Security’ in their titles, but need to protect their businesses every bit as much as the Chief Information Security Officer of a Fortune 500 company. The Nexus provides pragmatic research and advice for everyone, even if security is only one of the many hats you wear.

What’s “pragmatic research”?
Pragmatic research is information you can use. The Nexus doesn’t waste your time on theory and background information (although it is available for the curious). The content tells you exactly what you need to get your job done, and makes it very easy to find.

Does that mean it tells me how to configure my products?
No. The Nexus provides everything except product-specific information.

What’s “expert advice”?
Through our Ask an Analyst feature you can submit questions directly to our analysts, each with decades of security experience. We know sometimes reading research isn’t enough, and you need direct advice from an experienced professional to get a clear answer on your specific issue.

What makes this any different from a wiki, Gartner, or something like LinkedIn?
The research is specific to security, and the Nexus presents data in several different ways, making it as quick and easy as possible to locate information. Unlike a wiki, all the content is written by professional research analysts and edited by folks who know how to write, and topics are covered completely – not a hodge-podge of whatever people want to contribute. It’s far more structured and pragmatic than the output of big analyst firms like Gartner. And unlike LinkedIn and social media sites, you are guaranteed answers to your questions. The Nexus is not just academic reference data – it is written by people who have built and deployed security products for a living.

What else is included?
Research and specific answers to your questions are the core, but the Nexus includes much more. It offers videos, checklists, podcasts, templates, and other tools to help you get your job done. All the research can be rated and commented on, which helps ensure the content is useful and up to date, and helps us improve the content over time based on your feedback. The system tracks your history so you never forget what you read, and enables you to build a custom library of your favorite content. Good questions are anonymized and tied back to the content, to help others with the same problems. And we are just getting going – there will be even more capabilities in the coming months.

What are the platform requirements?
The Nexus should work with all current browsers, as well as Internet Explorer version 7 and later. Although we don’t have an iOS app yet, we have optimized the site to work well on the iPad.

Beta Testing

When does the beta start?
If you are reading this, it has started.

Why can’t I log in yet?
We are running the beta in phases and will be adding people on an ongoing basis. We are very conservative, and really want to ensure the system is ready before we let too many people in. We will email you when your name comes up, and we plan to eventually include everyone who signs up for the beta.

What can I expect in the beta?
This is a real beta test – while the entire system is functional, there will be some bugs. We have set up a forum for feedback and will directly answer system questions (but not research questions) there. During the beta, we will be adding research on a daily basis. The beta is opening with the first layer of PCI information, but we have a ton more to add before we open the system to the public (and ask people to pay for it). We will post announcements on the portal page as we add material throughout the beta. Right now, the weakest area is multimedia and tools/templates – such as checklists and PowerPoint samples. We will be adding these along with the rest of the content throughout the beta period. Ask an Analyst is completely open for business, so please do your best to stump us.

Is the beta free?
Yes. In exchange for your help testing, we provide access to all the content as we build it, plus the Ask an Analyst tool for questions.

Will I get a free membership after the beta?
No. The Nexus will be competitively priced (think hundreds, not thousands), but beta testers will need to subscribe after we open it up to the public. Until then you get all the free research and advice you can eat.

Where should I leave feedback?
Please use the beta forum linked on the portal page. That provides direct access to our developers and doesn’t clutter up the comments or the rest of the live system.

After the beta, will you delete my account?
No – you won’t have access, but your account will stay there if you want to come back. You should also review our privacy policy.

Privacy Policy

The Securosis Nexus does not sell your information to anyone, ever. We do retain the right to sell or distribute bulk statistics (e.g., what content is most viewed, what topics create the


Incite 10/12/2011: Impact and Legacy

As has been amply reported over the past week, Steve Jobs is gone. As Rich so adroitly pointed out, “His death hit me harder than I expected. Because not only do we not have a Steve Jobs in security, we no longer have one at all.” You know, someone who seems to be the master of the universe. Perfection personified. Of course, the reality is never perfection. But what’s perfect is imperfection. Jobs failed. Jobs started over. He took chances and ultimately triumphed. Jobs had the perspective you wish you could have.

This is clearly demonstrated by what I believe to be the best speech written in my lifetime (at least so far): Steve Jobs’ Stanford commencement speech. Why? Because if you pay attention, really pay attention, to the words, it’s about the human struggle. Do what you love. Follow your own path. Don’t settle for mediocrity. Live each day to the fullest. Realize we are here for a short time, and act accordingly. It’s not trite. You can and should strive for this.

You see, impact and legacy work themselves out, depending on the actions you take every day. Probably none of us will have an impact like Steve Jobs. Nor should we. You don’t need to be Steve Jobs. Just be you. You don’t have to change the world. Just make it a little better. Be a giver, not a taker. Believe in some kind of karma. Pay it forward. Do the right thing. Lead by example, and hopefully people around you will do the right thing too. If that happens, we all win, collectively.

I’m not going to say don’t change the world. Or don’t try. We need folks who want to change things on a massive scale, and will do the work to make it happen. My point is that it doesn’t have to be you. As Steve Jobs said, “Your time is limited, so don’t waste it living someone else’s life. Don’t be trapped by dogma – which is living with the results of other people’s thinking. Don’t let the noise of others’ opinions drown out your own inner voice. And most important, have the courage to follow your heart and intuition. They somehow already know what you truly want to become. Everything else is secondary.”

Change happens in many forms. We all want to leave the world better than when we got here. That’s what I’m working for. It’s not my place to strive for a legacy or to worry about my impact. All I can do is get up every day and do something positive. Some days will go better than others. And eventually (hopefully many years from now), I’ll be gone. Then it will be up to others to figure out my impact and legacy. Since I don’t know when my time will be up, I had better get back to work. –Mike

Photo credits: “Legacy Parkway shield” originally uploaded by CountyLemonade

Incite 4 U

Take my cards, give me back my wallet: It’s always interesting to see the market value of anything – not just what you think something is worth, but what someone is actually willing to pay. So thanks to Imperva for mining some bad sites and posting the Current Value of Credit Cards on the Black Market. If you take a look at what’s in my wallet, you’ll see about 15 bucks’ worth of cards (2 AmEx, a MasterCard, and a bank card). My wallet itself is worth at least $30, since it’s nice Corinthian leather (said in my best Ricardo Montalban voice). So take my cards, but I’ll fight you for my wallet. – MR

Free malware scans: Google announced a free Safe Browsing Alert for Network Administrators this week, alerting IT when malware is discovered by Google on their machines. The service leverages the malware detection capability announced last year, which discovers malware through a combination of user-generated Safe Browsing data and Google’s site-indexing crawlers. IT admins can register for alerts when Google discovers malware on the public servers within their control. This free tool will be disruptive to all the security vendors positioning malware detection as a ‘must-have’ feature – so long as it works. It’s hard to see how folks can continue charging a premium for this ‘differentiating’ service. – AL

How about a tour of Alaska? We all know that no matter what you do, bad stuff still happens. As we always say around here, you will be breached at some point. The true test of your security mettle isn’t whether you keep the bad guys out, but how you respond when they get in. A lot of that is at the heart of our paper on advanced incident response. One of the main things we talk about in that paper is knowing when, and how, to escalate your incident response process and bring in the next level of experts. While we didn’t explicitly mention it, having your command and control center for air combat drones infected with a virus would be pretty high on the list. It seems the folks on the ground failed to escalate and let the cybersecurity experts get involved. The cybersecurity command learned about it by reading Wired. If a four-star general is learning that your control center for those buzzing things sometimes armed with missiles might be a staging depot for the latest warez, it might be time to break out your cold weather gear. – RM

Maybe actually do something: OK, time for some snark. I just had to see what pearls of wisdom were in the article 8 ways to become a cloud security expert. Basically it’s a list of conferences and a few blogs. So let me get this straight: go to RSA or the CSA Congress and you are all of a sudden an expert? C’mon, man! I have a different idea. Why don’t you actually do something in the cloud and protect it? Yeah, maybe build an instance, harden it, configure some security


Tokenization Guidance: PCI Supplement Highlights

The PCI DSS Tokenization Guidelines Information Supplement – which I will refer to as “the supplement” for the remainder of this series – is intended to address how tokenization may impact Payment Card Industry (PCI) Data Security Standard (DSS) scope. The supplement is divided into three sections: a discussion of the essential elements of a tokenization system, PCI DSS scoping considerations, and new risk factors to consider when using tokens as surrogates for credit card numbers. It’s aimed at merchants who process credit card payment data and fall under PCI security requirements. If you have not downloaded a copy, I recommend you do so now – it will provide a handy reference for the rest of this post.

The bulk of the document covers tokenization systems as a whole: technology, workflow, security, and operations management. The tokenization overview does a good job of introducing what tokenization is, what tokens look like, and the security impact of different token types. The diagrams do an excellent job of illustrating how token substitution fits within the normal payment processing flow, providing a clear picture of how an on-site tokenization system – or a tokenization service – works. The supplement stresses the need for authorization and network segmentation – the two critical security tools needed to secure a token server and reduce compliance scope.

The last section of the supplement helps readers understand the risks inherent in using tokens – which are new and distinct from the issues addressed by traditional security controls. Using tokens directly for financial exchange, instead of as simple references to the real financial data in a private token database, carries its own risk: a hacker could use the tokens to conduct transactions, without needing to crack the token database. Even if no credit card number is present, if a token can be used as a financial instrument, hackers who penetrate the IT systems will misuse it. If the token can initiate a transaction, force a repayment, or be used as money, there is risk. This section covers a couple of critical risk factors merchants need to consider – although this has little to do with the token service itself; it is simply an effect of how tokens are used.

Those were the highlights of the supplement – now the lowlights. The section on PCI Scoping Considerations is convoluted and ultimately unsatisfying. I wanted bacon but only got half a piece of Sizzlean. Seriously, it was one of those “Where’s the beef?” moments. Okay, I am mixing my meats – if not my metaphors – but I must say that initially I thought the supplement was going to be an excellent document. They did a fantastic job answering the presales questions of tokenization buyers in section 1.3: simplification of merchant validation, verification of deployment, and unique risks of token solutions. But after my second review, I realized the document does offer “scoping considerations”, but provides neither advice nor a definitive standard for auditing or scope reduction. That’s when I started making phone calls to others who had read the supplement – and they were as perplexed as I was. Who will evaluate the system, and what are the testing procedures? How does a merchant evaluate a solution? What if I don’t have an in-house tokenization server – can I still reduce scope? Where is the self-assessment questionnaire? The supplement does not improve user understanding of the critical questions posed in its introduction.

As I waded through page after page, I was numbed by the words. It slowly lulled me to sleep with stuff that sounded like information – but wasn’t. Here’s an example: “The security and robustness of a particular tokenization system is reliant on many factors including the configuration of the different components, the overall implementation, and the availability and functionality of the security features for each solution.” No sh&$! Does that statement – which sums up their tokenization overview – help you in any way? Wouldn’t it be true of every software or hardware system? I think so. Uselessly vague statements like this litter the supplement. Sadly, the first paragraph of the ‘guidance’ – a disclaimer repeated at the foot of each page, quoted from Bob Russo in the PCI press release – reflects the supplement’s true nature: “The intent of this document is to provide supplemental information. Information provided here does not replace or supersede requirements in the PCI Data Security Standard”.

Tokenization should replace some security controls, and it should reduce PCI DSS scope. It’s not about layering – tokenization swaps one security model for another. Technically there is no need to adjust the PCI DSS specification to account for a tokenization strategy – the two can happily coexist, with one set of controls handling non-sensitive systems and the other handling systems that store payment data. But failing to provide a clear definition of which is which, and of what merchants will be held accountable for, demonstrates the problem. It seems clear to me that, based on this supplement, PCI DSS scope will never be reduced. For example, section 2.2 rather emphatically states: “If the PAN is retrievable by the merchant, the merchant’s environment will be in scope for PCI DSS.” Section 3.1, “PCI DSS Scope for Tokenization”, starts from the premise that everything is in scope, including the tokenization server – as it should. But what falls out of scope, and how, is not made clear in section 3.1.2, “Out-of-scope Considerations”, where one would expect to find such information. Rather than define what is out of scope, it outlines many objectives to be met, seemingly without regard for where the credit card vault resides or the types of tokens used. Section 3.2, “Maximizing PCI DSS Scope Reduction”, states that “If tokens are used to replace PAN in the merchant environment, both the tokens, and the systems they reside on will need to be evaluated to determine whether they require protection and should be in scope of PCI DSS”. From this statement, how can anything then be out of
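
For readers who want the vault model made concrete – tokens as simple references to the real financial data, held in a private token database – here is a minimal sketch in Python. Every name and detail is illustrative (nothing here comes from the supplement), and a real token server would add encrypted storage, strict authorization, collision handling, and audit logging:

    import secrets

    class TokenVault:
        """Toy token vault: random surrogate values map back to the PAN only
        inside the vault -- there is no mathematical relationship between
        token and card number."""

        def __init__(self):
            self._vault = {}  # token -> PAN; in practice an encrypted, access-controlled store

        def tokenize(self, pan: str) -> str:
            # Keep the last four digits for receipts; randomize the rest.
            token = "".join(secrets.choice("0123456789") for _ in range(12)) + pan[-4:]
            self._vault[token] = pan
            return token

        def detokenize(self, token: str) -> str:
            # Only systems that remain in PCI scope should be able to call this.
            return self._vault[token]

    vault = TokenVault()
    token = vault.tokenize("4111111111111111")
    print(token)  # e.g. '5037248816021111' -- safe for merchant systems to store

Note that the risk the supplement’s last section flags is visible even here: if downstream systems will accept the surrogate value itself to initiate a transaction or force a repayment, stealing tokens is as good as stealing cards – no vault compromise required.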


Isolated Computing

IBM, with researchers at North Carolina State University, has announced an effective way to protect information and processes in multi-tenant environments – such as cloud and virtual deployments – with what they are calling the Strongly Isolated Computing Environment, installed below the hypervisor. The teaser is that the code is a mere 300 lines – a very small footprint, which means simplicity, which in turn implies both performance and security.

A new technique called Strongly Isolated Computing Environment (SICE) aims to isolate sensitive information and workload from the rest of the functions performed by a hypervisor, which serves as gateway to a virtual, cross-platform workspace shared by users in a cloud system.

This is positioned as VMM security for x86 architectures, residing in the BIOS. The code leverages the System Management Mode (SMM) of the Intel processor – think of it as something between a mini embedded OS and a hardware debugger. SMM is a general utility used for things such as power management, cryptographic subprocesses, and the occasional attack vector. The flexibility of this feature makes the approach interesting. But make no mistake: this is not ‘cloud’ security. This is quasi-hardware security for the benefit of virtual machine managers. Hijacking the overused ‘cloud’ term is purely PR.

While the research is not fully public at this time, it’s clear their goal is to provide secure containers for data and processes in multi-tenant environments. I find this interesting because, despite wide use of virtualization, questions about how best to secure the hypervisor – and the partitions that run on top of it – are still open for debate. And plenty of companies are offering different ideas for how to make this work.

Technically the NC State team’s proposal is not a new approach. Isolating critical functions at the OS/BIOS/hardware layer has been done before – sometimes all three at once, with each layer validating the others. Nor is reducing attack surface a novel concept. And that’s why I am skeptical: every few years we are presented with a ‘new’ approach to security, which is as a rule nothing more than cycling through the different layers of the computing infrastructure – network centric security, host or OS security, the application layer, or perhaps user and information centric security. For example, if you are using information centric security, you work at the data (DRM) or application (DLP) layer. The problem is that we have been cycling around for 20 years, and we never settle on a final answer. Chris Hoff has written a ton about this perpetual cycle, and suggested why we should expect virtualization and security functions to evolve directly into the CPU. I think this is the first of many such efforts we will see. Placing these functions in the BIOS/SMM could be the right solution – or just the next step before it’s fully embedded in the hardware. And then we’ll find that’s not flexible enough, and place protections in the OS….


Good versus bad FAIL

On reflection, I talk about failure a lot. As I look back at my own career, FAIL has commonly appeared at inopportune times – though it’s hard to pinpoint a good time to fail. Failure is part of both the business and human experience, so to me it can be positive and productive, and position you for future success. But not always, and a lot depends on the form it takes.

When I think of the wrong kind of failure, I point to Andreas’ post on Network World, Fail a security audit already – it’s good for you. I do understand where he’s coming from. As I mentioned, failure can serve as a catalyst for action, as a good way to assess progress (ask the ATL Falcons about that), or as a way to figure out when it’s time to pack up your tent and move on. My issue is with treating an audit as a good venue for failure. Why? An audit is an awfully low bar for anything. Yes, I understand that’s a crass generalization. Many auditors are very talented, can find unseen issues, and add value. But many aren’t. Many adhere blindly to their checklists and ensure your security controls fit into a clean little box, even if there isn’t much clean about security in today’s environment. Have you ever heard the story about the scorpion and the frog? It comes to mind because many auditors stick to their playbooks regardless of actual circumstances – like the scorpion, it’s just their nature. To be clear, the auditor will find something. They always do – otherwise they understand they won’t be invited back. That doesn’t mean the stuff they find really matters.

So what’s a better approach? How can you leverage an audit failure to your best advantage? Script it out, and use the auditor as a piece of your evil plans. It’s okay – that’s how things get done in the real world. If you are a clued-in security professional, you know where the issues are. At least some of them. You may also face organizational resistance to fixing them. So you might direct the audit to miraculously find the issues you want/need fixed. Don’t make it too easy, but make sure they find what you need them to find. Amazingly enough, if something shows up in an audit’s findings of fact, it forces a decision. The decision may be to do nothing, but that will at least be a conscious decision not to address the risk. Then you can move on to the next thing and stop tilting at windmills. Or you get the action you need. Either way it’s a win.

So I’m all for failing. But fail correctly. Fail with a purpose. Use failure to your advantage. In some cases, actually stage your failure to make a point. My real point is that any failure you face shouldn’t be a total surprise, though that will happen from time to time. Surprise failure is the kind you need to avoid. But that’s another story for another day.

Photo credit: “Fail Whale Pale Ale” originally uploaded by jamesplankton


Firestarter: On “Architectural Limbo”

Yesterday Lori MacVittie posted another thoughtful article, Cloud Computing: Architectural Limbo, in which she highlights perceived problems with the NIST description of cloud computing. I usually agree with her cloud posts, but this is a rare case where I think she is wrong.

Consider, for a moment, the stark reality of a realm with no real network boundaries offered by AWS in “Building three-tier architectures with security groups”: “Unlike with traditional on-premise physical deployments, AWS’s virtualization of compute, storage, and network elements requires that you think differently about how to build network segregation into your projects. There are no distinct physical networks, no VLANs, and no DMZs.” The post goes on to describe the means by which a secure, traditional three-tiered application architecture can be deployed using AWS security groups. This architecture is a fine approximation of the traditional, data center deployed architecture based on the available abstractions offered by AWS.

Note the use of the term “approximation”. That’s important, because it’s indicative of one of the core issues with cloud today: the inability to replicate architecture. You might be thinking that’s okay as long as you can replicate it using available services.

Actually, it is okay, because it does work. AWS provides logical and physical barriers, and while they are presented in a way that only mimics traditional networks, they do so to ease understanding through familiar concepts. Being different does not make it less secure. I’ve lost count of the number of organizations that have successfully deployed this (admittedly basic) architecture and are running it in production environments. It works so well that we even teach it in the CSA Certificate of Cloud Security Knowledge (CCSK) classes we run. One of the great joys of running in an IaaS environment is its bare simplicity. You don’t have the crutches of a vast array of technology to rely on. Instead you have to think about your real needs, rather than adding huge amounts of complexity because that’s how we do it in-house today. It’s a prime opportunity to start over and avoid repeating the sins of the past.

The problem is that in order to fully deploy in the cloud you have to deploy an architecture that will be different from the one you currently maintain in the data center. What that ultimately entails is a separate and environment-specific set of processes, as well, that could quickly become operationally expensive. This is especially true when compliance enters the picture, and even more so when the regulations in question are those that focus on process (think SOX) and not just technological implementation.

While it is true that different network architectures and security requirements often produce different cloud architectures, that doesn’t necessarily mean the applications residing on those architectures will be fundamentally different. In my experience, most operations teams have little or no knowledge of how the underlying network architectures are laid out. It’s simply irrelevant, so long as the necessary ports are open. And this is the model offered by cloud providers like AWS. As for separate and environment-specific sets of processes, this is just a red herring. Network and especially security teams already have to deal with this, especially in larger organizations. You could just as well make the same argument about every application deployment, regardless of locale. This is just part of life, and any good IT shop should be familiar with it.
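
For reference, here is roughly what the security-group chaining described in the AWS post looks like in practice. This is a minimal sketch using today’s boto3 SDK purely for illustration; the group names, ports, and the assumption of a default VPC are mine, not from either article:

    import boto3

    ec2 = boto3.client("ec2")

    def make_sg(name, desc):
        # Without VpcId this lands in the default VPC; pass VpcId for anything else.
        return ec2.create_security_group(GroupName=name, Description=desc)["GroupId"]

    web_sg = make_sg("web-tier", "Internet-facing web servers")
    app_sg = make_sg("app-tier", "Application servers")
    db_sg = make_sg("db-tier", "Database servers")

    # Web tier: HTTPS from anywhere -- the only tier exposed to the Internet.
    ec2.authorize_security_group_ingress(
        GroupId=web_sg,
        IpPermissions=[{"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
                        "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}])

    # App tier: accepts connections only from members of the web tier's group.
    ec2.authorize_security_group_ingress(
        GroupId=app_sg,
        IpPermissions=[{"IpProtocol": "tcp", "FromPort": 8080, "ToPort": 8080,
                        "UserIdGroupPairs": [{"GroupId": web_sg}]}])

    # Database tier: accepts connections only from members of the app tier's group.
    ec2.authorize_security_group_ingress(
        GroupId=db_sg,
        IpPermissions=[{"IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
                        "UserIdGroupPairs": [{"GroupId": app_sg}]}])

There is no VLAN or DMZ anywhere in that – group membership itself is the boundary – which is exactly the “approximation” that works.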


New Series: Tokenization Guidance

Tokenization guidance. I have wanted to write this post since the middle of August. Every time I started writing, another phone call came in from a merchant, payment processor, technology vendor, or someone loosely associated with a Payment Card Industry (PCI) task force or special interest group (SIG). And every conversation yielded some new sliver of information that changed what I wanted to say, or implied some research had already been conducted that was far more interesting and useful than anything being provided to the public. This in turn prompted more calls, new conversations, and more digging, and – like a good mystery novel – led me to iteratively peel back another layer of the onion. I’ve finally reached a point where I believe I have enough of the story to understand what was published, and why it’s not what they should have published. But enough of the preamble; let’s back up and dive into the subject at hand.

On August 12, 2011, the PCI task force driving the study of tokenization published an “Information Supplement” called the PCI DSS Tokenization Guidelines. More commonly known as the ‘Tokenization Guidance’ document, it discusses the dos and don’ts of using token surrogates for credit card data. The only problem is that this document is sorely lacking in actual guidance. Even the section on “Maximizing PCI DSS Scope Reduction” is a collection of broad generalizations on security, rather than practical advice. After spending the better part of the last two weeks with this wishy-washy paper, I think a better title would be “Quasi-acceptance of Tokenization Without Guidance”. And all my conversations indicate that this opinion is universally held outside the PCI Council. “We read the guidance but we don’t know what falls out of scope!” is the universal merchant response to the tokenization information supplement. “Where are the audit guidelines?” is the second most common. The tokenization guidelines provide an overview of the elements of a tokenization system, along with the promise of reduced compliance requirements, but they don’t provide a roadmap for getting there.

Let’s make one thing very clear right from the start: there is very wide interest in tokenization because it promises better security, lower risk, and – potentially – significant cost reductions for compliance efforts. Merchants want to reduce the work they must do to comply with the PCI requirements – which is exactly why they are interested in tokenization technologies. Security and lower risk are secondary benefits. But without a concrete idea of the actual cost reduction – or worse, an understanding of how they will be audited once tokenization is deployed – they are dragging their feet on adoption. There is no good reason to omit a basic cookbook for scope reduction when using tokenization. So I am going to take the guesswork out and provide real guidance for evaluating tokenization, and clarify how to benefit from it. This will take the form of concrete, actionable steps for merchants deploying tokenization, with checklists for auditors reviewing tokenization systems. I’ll fill in the gaps from the PCI supplement, poke at the topics they decided were politically unpalatable to discuss, and specify what you can reasonably omit from the scope of your assessment. Given an overview of what you can reasonably consider out of scope, I’ll advise you on how to approach compliance, and follow up with some checklists to make it easier.

This is more than I can cover in a simple post, so I will cover these topics over the next two weeks, ultimately wrapping this up into my own tokenization guidance white paper. The series will have four parts:

  • Key points from the supplement: Outline what the PCI information supplement on tokenization means, and discuss the important aspects of the technology for users to focus on. We’ll discuss what is missing from the guidance, and what does – and does not – help reduce PCI assessment effort.
  • Guidance for merchants: How tokenization changes PCI compliance. We’ll discuss critical areas of concern when deciding to adopt a tokenization solution, with guidance on reducing audit scope. This will encompass areas including implementation tradeoffs, integration, rollout, and vendor lock-in.
  • The audit process: How tokenization impacts the auditing process, how to work with your assessor to establish testing criteria, and where to look to reduce the scope of your audit. We’ll provide guidance for working with QSAs and on self-assessment.
  • Checklists: The guidance describes the major components of the technology but lacks operational guidelines for assessors or merchants. As with the original PCI-DSS documents, I’ll include an audit checklist to supplement the PCI standard, covering what should be considered out of scope and where you can shave time from your auditing process.

I will present information I feel should have been included in the tokenization supplement. And I will advise against some technologies and deployment models that frankly should not have been lumped into the supplement, as they don’t simplify operations or reduce risk in the way any merchant should be looking for. I am willing to bet that some of my recommendations will make many interested stakeholders quite angry. I accept this as unavoidable – my guidance is geared toward making life easier for merchants who buy tokenization solutions, not toward avoiding conflict with vendor products. No technology vendor or payment provider ever endorses guidance that negatively impacts their sales, so I expect blowback. As always, if you think some of my recommendations are BS, I encourage you to comment. We are open to criticism and alternate viewpoints, and we always factor relevant comments into our final research. I do ask vendors to identify themselves. I will also assume some prior knowledge of tokenization and PCI-DSS. There is a ton of research on the Securosis blog and in the Research Library on these subjects. If you are not fully up to speed on tokenization systems, or are interested in learning more about tokenization in general, I suggest you review


Paper Released: Fact-Based Network Security: Metrics and the Pursuit of Prioritization

What should you do right now? That’s one of the toughest questions for any security professional to answer. The list is endless, the priorities clear as mud, the risk of compromise ever present. But doing nothing is never the answer. We have been working with practitioners to answer that question for years, and we finally got around to documenting some of our approaches and concepts. That’s what “Fact-Based Network Security: Metrics and the Pursuit of Prioritization” is all about. We spend some time defining ‘risk’, understanding the metrics that drive decisions, turning data collection and decision making into a systematic process, and examining the compliance aspects of the approach. Finally we go through a simple scenario that shows the approach in practice.

Check out the landing page for the report if you want a better feel for the content, or download the report directly: Fact-Based Network Security: Metrics and the Pursuit of Prioritization (PDF)

We would like to thank RedSeal Networks for sponsoring this research.

Finally, if you are looking to check out the blog posts (with comments), here is an index of the series:

  • Introduction
  • Defining Risk
  • Outcomes and Operational Data
  • Operationalizing the Facts
  • Compliance Benefits
  • In Action


Friday Summary: Goodbye to the Crazy One

Yesterday afternoon I decided to head out for my first run since my August health scare (which turned out to be pretty much nothing). I grabbed my iPhone, and as I was putting it into my armband case a news alert popped up:

Steve Jobs is dead

I stopped. The world paused for a moment. Standing in front of my desk, I turned and opened a web browser to read the press release from the Apple board. It was true, and it wasn’t a surprise.

Like nearly all of you reading this, I never met Steve Jobs. Unlike most of you, I was fortunate enough to attend his last Macworld keynote and experience the reality distortion field myself. I walked in carrying a BlackBerry. I went home with an iPhone. Call the RDF what you will, but I never regretted that decision. I have spoken with other Apple executives, but never the man himself.

My love of technology started with Apple and, to a lesser degree, Commodore. That’s when I started hacking; and by hacking I mean exploring. But I never owned an Apple. I didn’t buy my first Mac until 2005, a victim of the halo effect from the beauty of my first iPod (a third generation model). Today there are 6 or so Macs in my house, a couple iPads, a few iPhones, and various other products. Including, still, that third generation iPod I can’t seem to let go of.

It doesn’t matter whether you love or hate Apple – everything we do in technology today is influenced by the work of the teams Steve led. Every computer, every modern phone, and every music player is influenced more by Apple designs than by any other single source. Even the CG animated cartoons my daughter loves so much.

I used to criticize Apple. Too expensive. Too constraining. But over the course of several years I have found my own beliefs aligning with the “rules” Jobs defined. People won’t know what they want until you show them. Don’t let customers derail your vision, but be ready to move when they’re right. Design and usability are every bit as important as features – if either fails, the product fails. Remove as much as possible.

Imagine if we had a security leader as visionary as Jobs. We have many who might think they are, but no one comes close. Can you imagine Steve in a UI design meeting for nearly any security product on the market?

His death hit me harder than I expected. Because not only do we not have a Steve Jobs in security, we no longer have one at all. The entire technology world just lost the one person climbing the hills in front of us, breaking the trail, and turning back to wave and shout “follow me”. Now we’re on our own.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian & Mel Shakir on SIEM Replacement.
  • Rich is giving a webcast on cloud security next week. This is with Dome9, but all the content coming from him is objective and influence-free.

Favorite Securosis Posts

  • Adrian Lane: The iPad-Enterprise-Data Security Spectrum. Face it, the iPad is so compelling that it is forcing its way into the enterprise – Rich offers good tips for facing the inevitable.
  • David Mortman: Force Attacker Perfection.
  • Mike Rothman: Force Attacker Perfection – Rich is right. We can’t stop them, but we should make them work for it.
  • Rich: Need a CISO cert? Got $200? Get one while they’re hot….

Other Securosis Posts

  • When to Use Amazon S3 Server Side Encryption.
  • Incite 10/5/2011: Time waits for no one.
  • Nitro & Q1: SIEM/Log Management vendors dropping right and left.
  • Introducing the Securosis Nexus.
  • Incite 9/28/2011: Renewal.
  • Comment on the Next Version of the Cloud Security Alliance Guidance.

Favorite Outside Posts

  • Mike Rothman: Text of Steve Jobs’ Commencement Address (2005). He has passed on, but Steve Jobs’ teachings will stick with me forever. I look at this speech every couple of months. It puts everything (life, job, happiness, purpose, etc.) into context for me. Everything.
  • David Mortman: Application-Layer DDoS Attacks Are Growing: Three to Watch Out For.
  • Adrian Lane: The Web won’t be safe, let alone secure, unless we break it. Topics Jeremiah has covered before, but a very nice overview of the situation. Browsers, like many other platforms, have idiotic ‘features’ that make security impossible, and it’s time to throw some of the garbage out.
  • Rich: The Vendor Beating. I’ve been in similar meetings as an analyst. Nothing beats the blame game.
  • Dave Lewis: Some SCADA Problems Too Big to Call Bugs. Yeah… that will fix it.

Top News and Posts

  • Amex XSS Vuln. It’s the Twitter dialog that’s worth reading. This is just so typical of a McBank response to any inquiry – they can only follow the script. Awesome.
  • Tool to crack SSL.
  • Hacker nabbed after topping up three EasyCards.
  • Using ICMP Reverse Shell to Remotely Control a Host.
  • Privacy and security implications of Amazon’s new “Silk” browser.
  • Microsoft Pushes Emergency Update After Security Products Call Chrome “Banking Trojan”.
  • Cisco patches the other iOS.

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Bill, in response to Nitro & Q1: SIEM/Log Management vendors dropping right and left.

Excellent analysis. Until recently, SIEM vendors were a kind of “Switzerland” with respect to third party event sources, i.e., treating them all the same for the most part. I think customers will become concerned if the big three manufacturers start favoring their own complementary security products. What do you think?


The iPad-Enterprise-Data Security Spectrum

As I mentioned in the Incite yesterday, Symantec announced DLP support for the iPad. I have been meaning to talk about this for a while, as various products have been popping onto the market, and now seems like the time. Note: I’m focusing on the iPad because that’s what most people are interested in, but much of this also applies to the iPhone.

The iPad is an extremely secure device; odds are it is much more secure than any laptop or desktop you let your users on. The main reason is that it is locked down so tightly with a combination of hardware and software controls. This is also a challenge for security, because you can’t run any background tasks. For the record, I really like this approach – it eliminates the need for things like antivirus in the first place. For data security, it means we are limited in what we can do: no DLP running in the background, for example. To fill this gap, a range of approaches and tools have hit the market. I like to list them as a spectrum from least control to most. More control doesn’t mean better – which of these to use depends heavily on the needs of both your organization and your users. As a baseline I assume you allow access to corporate assets in some way using the device – I’m skipping the “do nothing” and “don’t let them in at all” options. Here we go:

  • ActiveSync and device profiles. You allow users access to corporate email, but enforce a basic device profile to require a passcode/password and enable remote wiping if the device is lost (a sketch of such a profile follows at the end of this post). This enables basic encryption of the entire device (easier to crack), with data protection for email attachments.
  • Server-side DLP. You create DLP policies that restrict the email/files going to an otherwise approved device. Websense offers this – not sure who else.
  • Walled-garden applications. These are apps like Good for Enterprise, the new Zenprise SharePoint client for iPad, WatchDox, and GroupLogic mobilEcho. All access to documents is purely through the approved app, and the app can restrict opening or usage of that document elsewhere on the device. Remember, if you don’t totally wall the content off, any standard document format can be opened in another app – thus losing any security controls. These usually offer viewing but not editing, because editing would require building in a complete editor. There is a very broad range of variation among these apps.
  • Fully-managed device with always-on VPN. You use mobile device management (MDM) to enforce an always-on VPN connection and block unmanaged network traffic. Then you use DLP on your network to manage traffic and content. This is how the Symantec offering works: they use an app on the device to enforce the VPN, and made changes on the DLP gateway to improve the user experience with the device. For example, the iPad doesn’t handle failed email connections well (it tends to stall), so they had to play games to block protected content from going to Gmail without ruining the device experience.

Each of these models has its own advantages, and there are different levels of control within each tier, but these should give you a good idea of the options. Someday I might write a paper with more detail, but hopefully this is enough for now.
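
To make the first tier concrete, here is a minimal sketch of the kind of passcode-policy payload a device profile carries, generated with Python’s plistlib. The payload keys follow Apple’s Configuration Profile format; the identifiers and threshold values are hypothetical examples, not recommendations from any vendor:

    import plistlib
    import uuid

    # Passcode policy payload -- this is what ActiveSync/MDM pushes to require
    # a passcode (which in turn enables device encryption) and auto-wipe behavior.
    passcode_payload = {
        "PayloadType": "com.apple.mobiledevice.passwordpolicy",
        "PayloadVersion": 1,
        "PayloadIdentifier": "com.example.profile.passcode",  # hypothetical
        "PayloadUUID": str(uuid.uuid4()).upper(),
        "PayloadDisplayName": "Passcode Policy",
        "forcePIN": True,         # require a passcode
        "minLength": 6,
        "maxFailedAttempts": 10,  # wipe the device after repeated failures
        "maxInactivity": 5,       # minutes before auto-lock
    }

    # Wrapper profile that gets installed on the device.
    profile = {
        "PayloadType": "Configuration",
        "PayloadVersion": 1,
        "PayloadIdentifier": "com.example.profile",  # hypothetical
        "PayloadUUID": str(uuid.uuid4()).upper(),
        "PayloadDisplayName": "Example Corporate Profile",
        "PayloadContent": [passcode_payload],
    }

    with open("corporate.mobileconfig", "wb") as f:
        plistlib.dump(profile, f)

A profile like this is the entire enforcement mechanism in tier one: no agent, no background process – just policy the device itself honors.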


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments – just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context or contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.