Securosis Research

Incite 7/17/2013: 80 años

If you want a feel for how long 80 years is, here are a few facts. In 1933, the President was Herbert Hoover until March, when FDR became President. The Great Depression was well underway in the US and spreading around the world. Hitler first rose to power in Germany. And Prohibition was repealed in the US. I’ll certainly drink to that. Some famous folks were born in 1933 as well. Luminaries such as Joan Collins, Larry King, and Yoko Ono. Have you seen Larry or Yoko lately? Yeah, 80 seems pretty old.

Unless it’s not. My father-in-law turned 80 this year. In fact his birthday was yesterday, and he looks a hell of a lot better than most 80-year-olds. He made a joke at his birthday party over the weekend that 80 is the new 60. For him it probably is. He has been both lucky and very healthy. We all think his longevity can be attributed to his outlook on life. He has what we jokingly call the Happy Gene. In the 20 years I have been with the Boss I have seen him mad twice. Twice. It’s actually kind of annoying – I probably got mad twice already today. But the man is unflappable. He’s a stockbroker, and has been for 35 years, after 20 years in retail. Stocks go up, he’s cool. Stocks go down, he’s cool. Clients yell at him, he’s cool. He just doesn’t get bent out of shape about anything. He does get fired up about politics, especially when I intentionally bait him, because we see things from opposite sides. He gets excited about baseball and has been known to scream at the TV during Redskins games. But after the game is done or the discussion is over, he’s done. He doesn’t hold onto anger or perceived slights or much of anything. He just smiles and moves on. It is actually something I aspire to.

The Boss said a few words at his party and summed it up perfectly. She had this entire speech mapped out, but when I heard her first sentence I told her to stop. It’s very hard to sum up a lifestyle and philosophy in a sentence, but she did it. And anything else would have obscured the beauty of her observation. Worry less, enjoy life more. That’s it. That’s exactly what he does, and it has worked great for 80 years. It seems so simple, yet it’s so hard to do. So. Hard. To. Do. But for those, like my father-in-law, who can master worrying less… a wonderful life awaits. Even when it’s not so wonderful. Happy Birthday, Sandy. I can only hope to celebrate many more.

–Mike

Photo credit: “Dad’s 80th Birthday Surprise” originally uploaded by Ron and Sandy with Kids

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.

  • The Endpoint Security Buyer’s Guide: Introduction
  • Continuous Security Monitoring: Classification; Defining CSM; Why. Continuous. Security. Monitoring?
  • Database Denial of Service: Attacks; Introduction
  • API Gateways: Implementation; Key Management; Developer Tools
  • Security Analytics with Big Data: Deployment Issues; Integration; New Events and New Approaches; Use Cases; Introduction

Newly Published Papers

  • Quick Wins with Website Protection Services
  • Email-based Threat Intelligence: To Catch a Phish
  • Network-based Threat Intelligence: Searching for the Smoking Gun
  • Understanding and Selecting a Key Management Solution
  • Building an Early Warning System

Incite 4 U

Social responsibility: Before I get too far I need to acknowledge that this is definitely a bit of he said/she said.
Now that that has been put out there, what we know is that Microsoft released a patch for a bug discovered and released on the full disclosure list by security researcher Tavis Ormandy (who works for Google, but I think that’s incidental here). Microsoft stated last week that the bug is being actively used in targeted attacks after it was disclosed. Tavis was clear that he didn’t notify Microsoft before posting the details publicly. Here’s what I think: we all have a social responsibility. While MS may have treated Tavis poorly in the past – justified or not – his actions put the rest of us at risk. It’s about users, not Microsoft or the researcher. If Tavis knew the bug was being used in the wild, I support full disclosure. If the vendor doesn’t respond or tries to cover it up and users are at risk, disclose publicly and quickly. But at least give them a chance, which requires thinking about the impact on everyone else first. To be balanced, vendors have a responsibility to respond in a timely fashion, even if it isn’t convenient. But to release a bug with no evidence that anyone else is using it? That doesn’t seem responsible. – RM

Identity theft fullz up: Interesting research hit this week from Dell SecureWorks, per a Dark Reading article, about complete packets of stolen information (fullz), including healthcare information, appearing in marketplaces for $1,200 or so. With a full kit an attacker has everything they need for identity theft, including counterfeit documents such as driver’s licenses and health insurance cards. Also interesting: credit cards with CVV can be had for $1-2 each, although a prestige card (such as AmEx Black) can cost hundreds. This is Mr. Market at work. Prices for commodities go down, but valuable information still demands a premium. It appears online game accounts are a fraud growth market because turning virtual items into real money can be easier, thanks to less stringent fraud detection. – MR

Easier than making coffee: Gunnar Peterson’s keynote at CIS 2013 was full of valuable witticisms (the entire presentation is on his blog), but he made a particularly profound point regarding code security: it needs to be so easy that a seventeen-year-old can do it time and time again without fail. Gunnar drew a parallel with Five Guys’ recipe for burger success against behemoth competitors:


The Temptation of the Developer

Threat modeling involves figuring out ways the system can be gamed and your [fill in the blank] can be compromised. Great modelers can take anything and come up with new ways to question the integrity of the system. When it comes to 0-day attacks, many tend to focus on increasingly sophisticated fuzzers and other techniques to find holes in code, like the tactics described in the Confessions of a Cyber Warrior interview. But Jeremiah takes a step back to find yet another logic flaw in our assumptions about our adversaries in this succinct and eye-opening tweet. As 0-days go for 6 to 7 figures, imagine the temptation for rogue developers to surreptitiously implant bugs in the software supply chain. I can see it now. The application security marketing engine will ramp up around the rogue developer threat, which is just another form of the insider attack bogeyman, starting in 3, 2, 1… But the threat is real. The real question is whether awareness of this kind of adversary would change how you do application security. I’m sure my partners (and many of you) have opinions about what should be done differently.


Intel Software Guard Extensions (SGX) Is Mighty Interesting

I am in a bit over my head here, but take a look at the first two presentations at the Workshop on Hardware and Architectural Support for Security and Privacy. Intel is preparing to introduce a new capability in their processors to support secure encrypted memory spaces on commodity CPUs. Their objective is to provide applications with a secure ‘enclave’ (their term) with protected memory and execution space. It’s called Intel Software Guard Extensions (SGX). This could be significant – especially for battling malware and for cloud computing. Think secure key management in the cloud, with hardware-enforced sandboxes on endpoints. Developers will need to code their software to use the feature, so this isn’t an overnight fix. However…

It seems like a powerful tool to battle malware on endpoints, especially if operating system manufacturers leverage the capability in Windows and OS X to further improve their sandboxes. And imagine a version of Java or Flash that is fully isolated.

This could offer material improvements to hypervisor security – for example by eliminating memory parsing attacks. And encrypted memory should mean volatile memory (RAM) is protected even from cloud administrators trying to peek at encryption keys.

HSM vendors should also keep an eye on this, because it might offer comparable security to hardware-based key managers (though probably not for key generation and a few other important pieces, for those who need them). Think of virtual HSMs and key managers that run within the cloud, without the worry of keys being compromised in memory.

It looks extremely interesting, and I freely admit some of it is over my head, but if I am reading it right, the long-term potential to improve security is impressive.


FireStarter: KNOX vs. AZA mobile throwdown

A group of us were talking about key takeaways from the 2013 Cloud Identity Summit last week in Napa. CIS 2012 focused on getting rid of passwords, and the conversation centered on infrastructure and identity standards such as OAuth, OpenID Connect, and SAML, which provide tools to authenticate users to cloud services. 2013 was still about minimizing use of passwords, but focused on the client side, where the rubber meets the road with mobile client apps. Our discussion highlighted different opinions regarding the two principal models presented at the conference for solving single sign-on (SSO) issues for mobile devices. One model, the Authorization Agent (AZA), is an app that handles authentication and authorization services for other apps. KNOX is a Samsung-specific container that provides SSO to apps in the container.

It’s heartening to hear developers stress that unless they get the end user experience right, the solution will not be adopted. No disagreement there, but buyers have other issues of equal importance, and I think we are going to see mobile clients embrace these approaches over the next couple years, so it is worth discussing the issues in an open public forum. So I am throwing out the first pitch in this debate.

Statement

I believe the KNOX “walled garden” mobile app authentication model offers a serious challenge to Authorization Agents (AZA) – not because KNOX is technically superior, but because it provides a marginally better user experience while offering IT better management, stronger security, and a familiar approach to mobile apps and data security. I expect enterprises to be much more comfortable with the KNOX approach, given the way they prefer to manage mobile devices. I am not endorsing a product or a company here – just saying I believe the subtle difference in approach is very important to buyers.

Problem

User authentication on mobile devices must address a variety of goals: a good user experience, not passing user IDs and passwords around, single sign-on, support for flexible security tokens, Two-Factor Authentication (2FA) or equivalent, and data security controls – just to name a few. But the priority is to provide single sign-on for corporate applications on mobile devices. Unfortunately the security model in most mobile operating systems is primarily intended to protect apps from other apps, so SSO (which must manage authentication for multiple other apps) is a difficult problem. Today you need to supply credentials for every app you use, and some apps require re-authentication whenever you switch between apps. It gets even worse if you use lengthy passwords and a password manager – the process looks something like this: you start the app you need to run, bounce over to the password manager, log into the password manager, grab credentials, bounce back to the original application, and finally supply credentials (hopefully pasting them in so you don’t forget or make an invisible typo). In the best case it’s a pain in the ass.

Contrasting Approaches

Two approaches were discussed during CIS 2013. I will simplify their descriptions, probably at the expense of precision, so please comment if you believe I mischaracterized either solution. First, let’s look at the AZA workflow for user authentication. The AZA ‘agent’ based solution is essentially an app that acts as a gateway to all other (corporate) apps. It works a bit like a directory listing, available once the user authenticates to the AZA agent. The workflow is roughly:

  a. The app validates the user name and password (1.).
  b. The app presents a list of apps which have been integrated with it.
  c. The user selects the desired app, which requests authentication tokens from an authorization server (2.).
  d. The tokens enable the mobile application to communicate with the cloud service (Box, Evernote, Twitter, etc.). If the service requires two-factor authentication the user may be provided with a browser-based token (3.) to supplement their username and password.
  e. The user can now use the app (4.).

For this to work, each app needs to be modified slightly to cooperate with the AZA. (A minimal sketch of this token exchange appears at the end of this post.)

KNOX is also an agent, but not a peer to other apps – instead it is a container that manages apps. The KNOX (master) app collects credentials similarly to AZA, and once the container app is opened it also displays all the apps KNOX knows about. The user-visible difference is that you cannot go directly to a corporate app without first validating access to the container. But the more important difference for data security is that the container provides additional protection for its apps and stored data. The container can verify stack integrity, where direct application logins do not. KNOX also requires apps to be slightly modified to work within the container, but it does not require a different authentication workflow. User authentication for KNOX follows a similar flow – but not on iOS.

Rationale

Both approaches improve on standalone password managers, and each offers SSO, but AZA is slightly awkward because most users instinctively go directly to the desired app – not the AZA service. This is a minor annoyance from a usability standpoint but a major management issue – IT wants to control app usage and data. Users will forget and log directly into productivity apps rather than through the AZA if they can. To keep this from happening, AZA providers need app vendors to alter their apps to a) check for the presence of an AZA, b) force users through the AZA if present, and c) pass user credentials to the AZA. The more important issue is data security and compliance as drivers for mobile technologies. The vast majority of enterprises use Virtual Desktop Infrastructure (VDI) to manage mobile data and security policy, and the KNOX model mirrors the VDI model. It’s a secure controlled container, rather than a loosely-coupled federation of apps linked to an authorization agent. A container provides a clear control model which security organizations are comfortable with today. A loose confederation of applications cannot guarantee data security or policy enforcement the way containers can. One final point on buying centers: buyers do not look for the ‘best’
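To make the token exchange in the AZA workflow above a little more concrete, here is a minimal sketch of steps 2 through 4: an agent trades user credentials for an access token at an authorization server, then a corporate app uses that token to call a cloud service. Everything here is hypothetical – the endpoints, client ID, and scope are placeholders, and a real AZA or KNOX deployment would use its own vendor-specific flow rather than the simple password grant shown for brevity.

```python
import requests

# Hypothetical endpoints -- stand-ins for an enterprise authorization server
# and a cloud service API. A real deployment would use vendor-specific URLs.
AUTHZ_SERVER = "https://authz.example.com/oauth2/token"
CLOUD_API = "https://files.example.com/api/v1/documents"


def get_access_token(username, password):
    """Step 2: the agent asks the authorization server for a token on behalf
    of the selected app (a simple password grant, shown only for brevity)."""
    resp = requests.post(AUTHZ_SERVER, data={
        "grant_type": "password",
        "username": username,
        "password": password,
        "client_id": "corp-files-app",   # hypothetical registered client
        "scope": "documents.read",
    })
    resp.raise_for_status()
    return resp.json()["access_token"]


def list_documents(token):
    """Step 4: the app calls the cloud service with the token, so it never
    needs to prompt for (or store) the user's password itself."""
    resp = requests.get(CLOUD_API,
                        headers={"Authorization": "Bearer " + token})
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    token = get_access_token("alice@example.com", "long-passphrase-here")
    print(list_documents(token))
```

The point is simply that once the agent holds the token, individual apps never handle the user’s password – which is the property both the AZA and KNOX models are chasing.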


Counterpoint: KNOX vs. AZA throwdown

Adrian makes a number of excellent points. Enterprises need better usability and management for mobile devices, but co-mingling these goals complicates solutions. Adrian contrasted two approaches, AZA and KNOX, which I also want to discuss. Let me start by saying I think we are in the first or second inning for mobile. I do not expect today’s architectural choices to stick for 10+ years. I think we will see substantial evolution, root and branch, for a while.

Here is a good example of a mobile project: The Wall St. Journal just published their 1,000th edition on iPad. It is a great example of a mobile app – it works in both offline and online modes, is easy to navigate, and is packed with information (okay – just ignore the editorial page) – a great success. The way they started the project is instructive:

Three and a half years ago, The Wall Street Journal locked six people in a windowless room and threw down a secret challenge: Build us an iPad app. You have six weeks. And so we did. We started with a blank slate – no one had ever seen a tablet news app before.

This is not uncommon for mobile projects. A few takeaways: We are learning our lessons as we go. There is an architectural vision, but it evolves quickly and adapts – and did I mention we are learning as we go? Evolution today is less about enterprise-level grand architecture (we already have those, called iOS and Android, themselves evolving while we scramble to keep up) – it is incremental improvement.

Looking at AZA vs. KNOX from ground level, I see attractive projects for enterprises, with AZA more focused on the here and now. KNOX seems to be shooting for DoD today, and the enterprise down the road. This all reminds me of how Intel does R&D. They roll out platforms with a tick/tock pattern. Ticks are whole new platforms and tocks are incremental improvements. To me AZA looks like a classic tock: it cleans up some things for developers, improves capabilities of existing systems, and connects some dots. KNOX is a tick: it is a new ballgame, new management, and a new way to write apps. That doesn’t mean KNOX cannot succeed, but would the WSJ start a new project by learning a new soup-to-nuts architecture just to handle security requirements (remember: you need to launch in six weeks)? I know we as security people wish they would, but how likely is that in the near term, really? The positive way to look at this choice is that, for a change, we have two interesting options.

I may be overly pessimistic. It is certainly possible that soup-to-nuts security models – encompassing hardware, MAC, apps, and platforms – will rule from here on out. There is no doubt plenty of room for improvement. But the phrase I keep hearing on mobile software projects is MVP: Minimum Viable Product. KNOX doesn’t fit that approach – at least not for most projects today. I can see why Samsung wants to build a platform – they do not want to be just another commoditized Android hardware manufacturer, undifferentiated from HTC or Googorola. But there is more to it than tech platforms – what do customers want? There is at least one very good potential customer for KNOX, with DoD-type requirements. But will it scale to banks? Will KNOX scale to healthcare, manufacturing, and ecommerce? That is an open question, and app developers in those sectors will determine the winner(s).


Summary: Here’s to the Defenders

I was reading Roger Grimes’ interview with an offensive cybersecurity operator, and one key quote really stood out:

I wish we spent as much time defensively as we do offensively. We have these thousands and thousands of people in coordinate teams trying to exploit stuff. But we don’t have any large teams that I know of for defending ourselves. In the real world, armies spend as much time defending as they do preparing for attacks. We are pretty one-sided in the battle right now.

As much as I enjoy playing offensive security guy once a year at Defcon, I find defense to be a much more interesting challenge. Unfortunately many in our community don’t consider it as ‘sexy’ as penetration testing or vulnerability research. We need to change that. Most of us started our exploration of technology as hackers. I am fully willing to admit I was fascinated by cracking systems, and engaged in activities as a kid that could land me in jail now. Nothing major – I always assumed it was much easier to catch hackers and phreaks than it really was. I mean seriously, it wouldn’t have been all that hard back then. It turns out no one was looking – who knew? That’s what I get for assessing national computer law enforcement capabilities based on repeated viewings of War Games.

But breaking things is, in many ways, far less challenging than protecting them. I am sick and tired of seeing researchers and pen testers on various mailing lists brag about how easy it is to get into their clients’ systems. I suspect the ones who understand the complexity of defending complex environments with limited resources keep their mouths shut. Breakers, with very few exceptions, aren’t accountable. Outside of movies, there are no consequences if they fail. Not yet, at least. No guns to the head as you sit in front of 32 widescreen monitors with 8 keyboards spread out in front of you and a coked-up megalomaniac watching you waste part of your 60-second window on a visualization so your code looks good for the cameras. Nope. Builders? Defenders? Our lives are nothing but accountability. We are the firefighters, doctors, cops, and engineers all wrapped into one. Without us who would keep the porn flowing?

It is a far more complex challenge, with nowhere near enough disciples. Many of our smartest focus on offensive security for obvious economic reasons. If you are good there is more money, less accountability, and more freedom. Smart defenders, even if they come up with a groundbreaking idea, need time and resources to build it – which often means productizing it and dealing with idiotic investors and bureaucracies. There are far fewer opportunities for smart defenders to perform research leading to practical tools and techniques. The only thing that can change this is money. Sure, I’d love to lead a cultural revolution, but that is more my desire to send people to re-education camps than any inherent belief that we will all suddenly focus on defense due to some higher calling. (I’m serious about the camps – I have some awesome ideas.) We need some serious investment – and not in academic institutions, which often fail to remember sh*t needs to work outside a lab. Breaking and offensive research are important. Doing them well is hard. But defending? That is a challenge. I suspect I will be talking about this at Defcon. But with more beer.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian’s DR post: Why Database Assessment.
  • Rich has another article at Macworld on security for switchers.
  • Rich’s DR post: Security Needs More Designers.
  • Mike’s article at Information Week.
  • Dave Lewis on Disaster Recovery at CSO Online.

Favorite Securosis Posts

  • Mike Rothman: Using Amazon IAM Roles to Distribute Security Credentials (for Chef). Holy crap. A blog post from an analyst with code and screen shots! OMG… See, some analysts have some kung fu after all.
  • David Mortman: Rich’s first post on security automation.
  • Rich: Continuous Security Monitoring: Classification. This is a good series.

Other Securosis Posts

  • The Endpoint Security Buyer’s Guide [New Series].
  • Living to fight another day….
  • Another Disclosure Debacle, with a Twist.
  • Using cloud-init and s3cmd to Automatically Download Chef Credentials.
  • Incite 7/10/2013: Selfies.
  • Kudos: Microsoft’s App Store Security Policy.
  • How Not to Handle a Malware Outbreak.
  • RSA Acquires Aveksa.
  • Multitenancy is the Least Interesting Security Property of Cloud Computing.
  • Continuous Security Monitoring: Defining CSM.
  • Calendar Bites Google Security in the Ass.
  • Proactive WebAppSec.
  • Why. Continuous. Security. Monitoring? [New Series].
  • New Paper: Quick Wins with Website Protection Services.
  • Database Denial of Service: Attacks.
  • OpenStack Security Guide Released.

Favorite Outside Posts

  • Mike Rothman: Proving the skeptics wrong. You can only achieve true success when you do things for the right reasons. Seth Godin reminds me that proving someone wrong isn’t one of them. At some point you run out of people to rail against…
  • Adrian Lane: Data Leakage In A Google World. People forget that Google is a powerful tool and often finds data companies did not want exposed. It’s a tool to hack with, and yes, a tool to phish with.
  • Chris Pepper: Solaris patching is broken because Oracle is dumb and irresponsible.
  • David Mortman: Dear Speaker, I Loathe You. Sincerely, Your Event Planner. Funny.
  • Rich: No, Hacker Really Does Mean Hacker. Yep. Get over it.

Research Reports and Presentations

  • Quick Wins with Website Protection Services.
  • Email-based Threat Intelligence: To Catch a Phish.
  • Network-based Threat Intelligence: Searching for the Smoking Gun.
  • Understanding and Selecting a Key Management Solution.
  • Building an Early Warning System.
  • Implementing and Managing Patch and Configuration Management.
  • Defending Against Denial of Service (DoS) Attacks.
  • Securing Big Data: Security Recommendations for Hadoop and NoSQL Environments.
  • Tokenization vs. Encryption: Options for Compliance.
  • Pragmatic Key Management for Data Encryption.

Top News and Posts

  • Google releases fix to OEMs for Bluebox Security Android security hole. This is seriously ugly.
  • How the US (probably) spied on European allies’ encrypted faxes.
  • Researcher finds way to commandeer any Facebook account from his mobile phone.
  • Crimelords: Stolen credit cards… keep ‘em. It’s all about banking logins now.
  • DEF CON to Feds.
  • Aeroplan Provides Proactive Customer Alerting
  • A Black Hat,


Tips on SQL Azure Security

@gepeto42 had a good post:

Windows Azure SQL Database, formerly known as SQL Azure, is Microsoft’s managed database platform in Azure. While it is based on Microsoft SQL Server, it has various limitations that can impact how you secure and manage it. It also has some features that can help improve security.

Helpful details. I am really hunting for more real-world cloud security examples, so please keep them coming…


API Gateways: Implementation

APIs go through a software lifecycle, just like any other application. The publisher of the API develops, tests, and manages code as before, but when they publish new versions the API gateway comes into play. The gateway is what implements operational policies for APIs – serving as a proxy to enforce security, application throttling, event logging, and routing of API requests. Exposing APIs and parameters, as the API owner grants access to developers, is a security risk in and of itself. Injection attacks, semantic attacks, and any other way for an attacker to manipulate API calls are fair game unless you filter requests. Today’s post focuses on implementation of security controls through the API gateway, and how the gateway protects the API.

Exposing APIs

What developers get access to is the first step in securing an API. Some API calls may not be suitable for developers – some features and functions are only appropriate for internal developers or specific partners. In other cases some versions of an API call are out of date, or use of internal features has been deprecated but must be retained for limited backward compatibility. The API gateway determines what a developer gets access to, based on their credentials. The gateway helps developers discover what API calls are available to them – with all the associated documentation, sample scripts, and validation tools. But behind the scenes it also constricts what each developer can see. The gateway exposes new and updated calls to developers, and acts as a proxy layer to reduce the API attack surface. The gateway may expose different API interfaces to developers depending on which credentials they provide and the authorization mapping provided by the API owner. Most gateway providers actually help with the entire production lifecycle of deployment, update, deprecation, and deletion – all based on security and access control settings.

URL whitelisting

We define ‘what’ an application developer can access when we provision the API – URL whitelisting defines ‘how’ it can be used. It is called a ‘whitelist’ because anything that matches it is allowed; unmatched requests are dropped. API gateways filter incoming requests according to the rules you set, validating that incoming requests meet formatting requirements. This checking catches and stops simple mistakes, and blocks unauthorized requests before they proceed. Whitelisting may also be used to restrict which capabilities are available to different groups of developers, as well as which features are accessible to external requests; the gateway also prevents direct access to back-end services. Incoming API calls run through a series of filters, checking general correctness of request headers and API call format. Calls that are too long, have missing parameters, or otherwise clearly fail to meet the specification are filtered out. Most whitelists are implemented as a series of filters, which allows the API owner to add checks as needed and tune how API calls are validated. The owner of the API can add or delete filters as desired. Each platform comes with its own pre-defined URL filters, but most customers create and add their own.

Parameter parsing (injection attacks: XML attacks, JSON attacks, CSRF)

Attackers target application parameters. This is a traditional way to bypass access controls and gain unauthorized access to back-end resources, so API gateways also provide capabilities to examine user-supplied content. “Parameter parsing” is examination of user-supplied content for specific attack signatures, which may identify attacks or API misuse. Content inspection works much like a blacklist to identify known malicious API usage. Tests typically include regular expression checks of headers and content for SQL injection and cross-site scripting. Parameters are checked sequentially, one rule at a time. Some platforms provide means to programmatically extend checking, altering both which checks are performed and how they are parsed, depending on the parameters of the API call. For example you might check the contents of an XML stream both for structure and to ensure it does not contain binary code. API gateways typically provide packaged policies with content signatures for known malicious parameters, but the API owner determines which policies are deployed. A minimal sketch of this whitelist-plus-blacklist filtering appears below.

Our next post will offer a selection guide – with specific comments on deployment models, evaluation checklists, and key technology differentiators.
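To illustrate the whitelist-plus-blacklist filtering described above – and this is only a sketch, not how any particular gateway product implements it – the snippet below checks a request against a list of exposed calls and allowed parameters, then runs parameter values through a few signature checks for obvious SQL injection and cross-site scripting patterns. The paths, parameter names, and regular expressions are hypothetical and far simpler than a production rule set.

```python
import re

# Hypothetical whitelist: which API calls are exposed, and which parameters
# each call may carry. Anything not listed here is dropped.
URL_WHITELIST = {
    "/api/v1/orders":   {"method": "GET",  "params": {"customer_id", "limit"}},
    "/api/v1/payments": {"method": "POST", "params": {"order_id", "amount"}},
}

# Hypothetical blacklist signatures -- simplified stand-ins for the packaged
# SQL injection / XSS policies a gateway would ship with.
ATTACK_SIGNATURES = [
    re.compile(r"('|\")\s*(or|and)\s+\d+=\d+", re.IGNORECASE),  # classic SQLi
    re.compile(r"<\s*script", re.IGNORECASE),                   # basic XSS
    re.compile(r"union\s+select", re.IGNORECASE),               # SQLi UNION
]

def filter_request(path, method, params):
    """Return True if the request passes both the whitelist and the
    parameter-parsing checks; otherwise reject it."""
    rule = URL_WHITELIST.get(path)
    if rule is None or rule["method"] != method:
        return False                      # not an exposed API call
    if not set(params) <= rule["params"]:
        return False                      # unexpected parameters
    for value in params.values():
        for sig in ATTACK_SIGNATURES:
            if sig.search(str(value)):
                return False              # matches a known attack pattern
    return True

# Example: an injection attempt in a whitelisted call is rejected.
print(filter_request("/api/v1/orders", "GET",
                     {"customer_id": "42' OR 1=1", "limit": "10"}))  # False
```

A real gateway chains many more filters – size limits, header checks, schema validation – and lets the API owner add, remove, and tune them per call, but the basic flow is the same: whitelist first, then content inspection.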


Continuous Security Monitoring: Classification

As we discussed in Defining CSM, identifying your critical assets and monitoring them continuously is a key success factor for your security program – at least if you are interested in figuring out what’s been compromised. But reality says you can’t watch everything all the time, even with these new security big data analytical thingies. So the success of your security program hinges on your ability to prioritize what to do. That was the main focus of our Vulnerability Management Evolution research last year. Prioritizing requires you to determine how different asset classes will be monitored, and you need a consistent process to classify assets. To define this process let’s borrow liberally from Mike’s Pragmatic CSO methodology – identifying what’s important to your organization is the critical first step.

So a critical step is to make sure you’ve got a clear idea about priorities and to get the senior management buy-in on what those priorities are. You don’t want to spend a lot of money protecting a system that has low perceived value to the business. That would be silly. – The Pragmatic CSO, p25

One of the hallmarks of a mature security program is having this elusive buy-in from all levels and areas of the organization. And that doesn’t happen by itself.

Business System Focus

When you talk to folks about their data leak prevention efforts, a big impediment to sustainable success is the ongoing complexity of classification. It’s just overwhelming to try putting all your organization’s data into buckets and then to maintain those buckets over time. The same issues apply to classifying computing assets. Does this server fit into that bucket? What about that network security device? And that smartphone? Multiply that by a couple hundred thousand servers, endpoints, and users and you start to understand the challenges of classification. An approach that can be very helpful here is to think about your computing devices in terms of the business systems they support. To understand what that means, let’s return to The Pragmatic CSO:

The key to any security program is to make sure that the most critical business systems are protected. You are not concerned about specific desktops, servers or switches. You need only be focused on fully functioning business systems. Obviously every fully functioning system consists of many servers, switches, databases, storage, and applications. All of these components need to be protected to ensure the safety of the system. – The Pragmatic CSO, p23

This requires aligning specific devices to the business systems they serve. Those devices then inherit the criticality of the business system. Simple, right? Components such as SANs and perimeter security gateways are used by multiple business systems, so they need to be classified with the most critical business system they serve. By the way, you are doing this already if you have any regulatory oversight. You know those in-scope assets for your PCI assessment? You associated those devices with PCI-relevant systems with access to protected data, and they require protection in accordance with the PCI-DSS guidance. Those efforts have been based on what you need to do to understand your PCI (or other mandate) scope, and we are talking about extending that mentality across your entire environment.

Limited Buckets

To understand the difficulty of managing all these combinations, consider the inability of many organizations to implement role-based access control on their key enterprise applications. That was largely because something like a general ledger application has hundreds of roles, with each role involving multiple access rules. Each employee may have multiple roles, so RBAC required managing A * R * E entitlements. Good luck with that. We suggest limiting the number of buckets used to classify business systems. Maybe it’s 2. You know, the stuff where you will get fired if breached, and the stuff where you won’t. Or maybe it’s 3 or 5. It’s no more than that. We are talking about monitoring devices in this series, but you need to implement and manage different security controls for each level. It’s the concept we called Vaulting a couple years ago, also commonly known as “security enclaves”. After identifying and classifying your business systems into a manageable number of buckets, you can start to think about how to monitor each class of devices according to its criticality. Be sure to build in triggers and catalysts to revisit your classifications – for example, if a business system is opened to trading partners, or you authorize a new device to access critical data. As long as you understand these classifications represent a point in time and need to be updated periodically, this process works well. Later in this series we will talk about different levels of security monitoring, based on the data sources and access you have to devices and the specific use case(s) you are trying to achieve.

Employees Count Too

We have been talking about business systems and the computing devices used to support them, but we cannot forget the weakest link in pretty much every organization: employees. You need to classify employees just like business systems. Do they have access to stuff that would be bad if it’s breached, and how are they accessing it – mobile vs. desktop, remote vs. on-network, etc.? The reality is that you can place very limited trust in endpoint devices. We see a new story of this 0-day or that breach daily, compounded by idiotic actions taken by some employee. It is no wonder no one trusts endpoints. And we have no issue with that stance. If that forces you to apply more discipline and tighter controls to devices, it’s all good. There is definitely a different risk profile for a low-level employee operating on a device sitting on the corporate network, compared to your CFO accessing unannounced financials on an Android tablet from a cafe in China. Part of your CSM process must be classifying, protecting, and monitoring employee devices. Where legally appropriate, of course.

Gaining Consensus

Now that you have bought into this classification discipline, you need to make it reality. This is where the fun begins – it requires buy-in within the organization, which is
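To make the classification idea above concrete, here is a minimal sketch – with hypothetical system and device names – of how devices can inherit criticality from the business systems they serve, with shared components (like a SAN) taking the classification of the most critical system they touch, using only two buckets.

```python
# Hypothetical business systems mapped into a small number of buckets:
# 1 = "get fired if breached", 2 = everything else.
BUSINESS_SYSTEMS = {
    "payment-processing": 1,
    "customer-portal":    1,
    "intranet-wiki":      2,
}

# Which business systems each device supports. Shared infrastructure
# (the SAN, the perimeter gateway) serves several systems at once.
DEVICE_MAP = {
    "db-server-01":  ["payment-processing"],
    "web-server-07": ["customer-portal"],
    "san-array-02":  ["payment-processing", "intranet-wiki"],
    "wiki-host-03":  ["intranet-wiki"],
}

def device_classification(device):
    """A device inherits the most critical (lowest-numbered) bucket among
    the business systems it serves."""
    systems = DEVICE_MAP[device]
    return min(BUSINESS_SYSTEMS[s] for s in systems)

for dev in DEVICE_MAP:
    print(dev, "-> bucket", device_classification(dev))
# san-array-02 lands in bucket 1 because it also serves payment-processing.
```

The value is not the code – it is forcing the device-to-business-system mapping to exist somewhere, so monitoring intensity can follow the bucket rather than the individual device.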


Living to fight another day…

Our man Dave Lewis has a great post on CSO Online, When Disaster Comes Calling, about the importance of making sure your disaster recovery plan actually can help you when you have, uh, a disaster. Folks don’t always remember that sometimes success is living to fight another day.

At one organization that I worked for the role of disaster recovery planning fell to an individual that had neither the interest nor the wherewithal to accomplish the task. This is a real problem for many companies and organizations. The fate of their operations can, at times, reside in the hands of someone who is disinclined to properly perform the task.

Sounds like a recipe for failure to me. I would say the same goes for incident response. Far too many organizations just don’t put in the time, effort, or urgency to make sure they are prepared. Until they get religion – when their business is down or their darkest secrets show up on a forum in Eastern Europe. Or you can get a bit more proactive by asking some questions and making sure someone in your organization knows the answers.

So what is the actionable take away to had from this post? Take some time to review your organizations disaster recovery plans. Are backups taken? Are they tested? Are they stored offsite? Does the disaster recovery plan even exist anywhere on paper? Has that plan been tested with the staff? No plan survives first contact with the “enemy” but, it is far better to be well trained and prepared than to be caught unawares.

What Dave said.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.