Securosis Research

Network Security in the Age of *Any* Computing: Containing Access

In the first post of this series, we talked about the risks inherent in this concept of any computing, where those crazy users want to get at critical data at any time, from anywhere, on any device. And we all know it’s not pretty. Sure, there are things we can do at the device layer to protect those devices and ensure proper configurations. But in this series we will focus on how to architect and secure the network to protect critical data. The first aspect of that is restricting access to key portions of your network to only those folks who need it.

Segmentation is your friend

There is an old saying, “out of sight, out of mind,” which could be rephrased for information security as “out of reach, out of BitTorrent.” By using a smart network segmentation strategy, you can keep critical data out of the clutches of attackers. OK, that’s an overstatement, but segmentation is the first step to protecting key data. We want to make it as hard as possible for the data to be compromised, so we put up as many obstacles as possible for attackers. Unless you are being specifically targeted, simply not being the path of least resistance is a decent strategy. The fewer folks who have access to something, the less likely that access will be abused, and the more quickly and effectively we can figure out who the bad actor is in case of malfeasance. Not that we believe the PCI-DSS v2.0 standards represent even a low bar for security controls, but they do advocate and require segmentation of cardholder data. Here is the specific language:

All systems must be protected from unauthorized access from untrusted networks, whether entering the system via the Internet as e-commerce, employee Internet access through desktop browsers, employee e-mail access, dedicated connections such as business-to-business connections, via wireless networks, or via other sources. Often, seemingly insignificant paths to and from untrusted networks can provide unprotected pathways into key systems.
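To make the segmentation idea concrete, here is a minimal, purely illustrative sketch of a default-deny access check between segments. The segment names and subnets are our assumptions for the example, not anything from a real deployment:

```python
import ipaddress

# Hypothetical segmentation policy: only admin jump hosts may reach the
# cardholder-data segment; any corporate host may reach internal apps.
# Anything not explicitly listed is denied (default deny).
SEGMENT_POLICY = {
    "cardholder_data": ["10.1.50.0/24"],   # admin jump hosts only
    "internal_apps":   ["10.1.0.0/16"],    # any corporate host
}

def access_allowed(src_ip: str, segment: str) -> bool:
    """Default-deny check: is src_ip inside an allowed subnet for segment?"""
    src = ipaddress.ip_address(src_ip)
    return any(src in ipaddress.ip_network(net)
               for net in SEGMENT_POLICY.get(segment, []))
```

In a real network this logic lives in firewall rules and VLAN ACLs rather than application code, but the principle is the same: fewer sources reachable means fewer paths of least resistance.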
One architectural construct for thinking about segmentation is the idea of vaults, which are really just a different way of thinking about segmentation of all data – not just cardholder data. This entails classifying data sources into a few tiers of sensitivity, and then designing a control set to ensure access only by those authorized. The goal behind classifying critical data sources is to ensure access is only provided to the right person, on the right device, from the right place, at the right time. Of course, that first involves defining rules for who can come in, from where, when, and on what device. And we cannot trivialize that effort, because it’s time consuming and difficult. But it needs to be done.

Once the data is classified and the network is segmented – which we will discuss in more depth as we progress through this series – we need to authenticate the user. An emerging means of restricting access to authorized devices is something like risk-based or adaptive authentication, where authentication isn’t just about two or more factors, but is instead dynamically evaluated based on any number of data points, including who you are, what you are doing, where you are connecting from, and when you are trying to gain access. This certainly works well for ensuring only the right folks get in, but what happens once they are in? The obvious weakness of a control structure focused purely on initial authentication is that a device could be compromised after entry – and then all the network controls are irrelevant because the device already has unfettered access. A deeper look at risk-based authentication is beyond our scope for this research project, but it warrants investigation as you design control structures. We also need to look very critically at how the network controls can be bypassed.
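The risk-based authentication idea can be sketched in a few lines. This is a toy model only – the signal names, weights, and thresholds are our illustrative assumptions, not any product’s actual logic:

```python
# Adaptive authentication sketch: score a login attempt on several
# contextual signals (who/what/where/when) and decide whether to allow,
# require a step-up factor, or deny. Weights/thresholds are assumptions.
RISK_WEIGHTS = {
    "unknown_device":   3,   # device not previously enrolled
    "foreign_network":  2,   # connecting from an unexpected location
    "off_hours":        1,   # outside normal working hours
    "sensitive_action": 2,   # touching a high-sensitivity data tier
}

def risk_score(signals: dict) -> int:
    """Sum the weights of all signals present on this attempt."""
    return sum(w for name, w in RISK_WEIGHTS.items() if signals.get(name))

def auth_decision(signals: dict) -> str:
    score = risk_score(signals)
    if score >= 5:
        return "deny"
    if score >= 3:
        return "step-up"   # require an additional authentication factor
    return "allow"
```

The point is that the decision is re-evaluated per attempt from context, rather than being a fixed count of factors checked once at login.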
If a machine is compromised after getting access, that’s a problem – unless you continuously scrutinize who has access. And yes, we’ll discuss that in the next post. You also need to worry about unauthorized physical access to your network, whether via a hijacked physical port or a rogue wireless access point. Either way, someone gets physical access to your network and bypasses the perimeter controls.

Architectural straw man

Now let’s talk about one architectural construct in terms of three different use models for your network, and how to architect the network in three segments depending on the use case for access.

Corporate network: This involves someone who has physical access to your corporate network, either via a wired connection or a wireless access point.

External mobile devices: These devices access corporate resources via an uncontrolled network. That includes home networks, public wireless networks, cellular (3G) networks, and even partner networks. If your network security team can’t see the entirety of the ingress path, you need to consider it an external connection and device.

Internal guest access: These are devices that just need to access the Internet from inside one of your facilities. Typically these are smartphones used by employees, but we must also factor in a use case for businesses (retail/restaurants, healthcare facilities, etc.) that provide access as a service.

We want to provide different (and increasing) numbers of hoops for users to jump through to get access to important data. The easiest to discuss is the third case (internal guest access), because you only need to provide an egress pipe for those folks. We recommend total physical isolation for these devices. That means a totally separate (overlay) wireless network, which uses a different pipe to the Internet. Yes, that’s more expensive. But you don’t want a savvy attacker figuring out a way to jump from the egress network to the internal network.
If the networks are totally separate, you eliminate that risk. The techniques to support your corporate network and external mobile devices are largely the same under the philosophy of “trust, but verify.” So we need to design the control sets to scrutinize users. The real question is how many more


Security Counter Culture

There’s nothing like a late-night phone call saying, “I think your email has been hacked,” to push a security professional over the edge. My wife called me during the RSA Conference to tell me this, because some emails she got from me were duplicates that refused to be deleted. Weirdness like that always makes me question my security, and when I found the WiFi still enabled on my phone, I had my yearly conference ‘Oh $#(!’ moment early. I consider it a BH/DefCon and RSA tradition, as it happens every year: seething paranoia. And this year the HBGary hack kept my paranoia amped up. The good news is that when I am in this state of mind I find mistakes. It not only makes me suspicious of my own work – I assume I screwed up, and that critical mindset helped me discover a couple flaws: a missed setting on a router, and leaving WiFi on when I went to SF. And there was another mistake in my understanding of how a third-party product worked, so I needed to rethink my approach to data security on that as well. Then I start thinking: if they got access to this email account, what would that enable an attacker to do? I don’t sleep for the rest of the night, thinking about different possibilities. Sleep deprivation makes it difficult to maintain this degree of focus long-term, but I always harbor the feeling that something is wrong. The bad news is that this state of mind does not go well with interpersonal relationships. Especially in the workplace. Suspicion, distrust, and a critical eye are great traits when looking at source code trying to find security flaws. They are not so great when talking to the IT team about the new system crossover they will be doing in 3 days (despite, of course, being several weeks behind on pre-migration tasks). Stressed out of their minds trying to make sure the servers won’t crash, nobody wants you to point out all the ways they failed to address security – and all the (time consuming) remediation they really should/must perform.
We take it out on those not tasked with security, because anyone who does not hold the security bar as high as we do must be an idiot. And God help those poor phone solicitors trying to sell IPS to me after RSA because they somehow managed to scan my conference badge – I now feel the need to educate them on all 99 ways their product sucks and how they don’t understand the threats. Do you have to have a crappy attitude to be effective in this job? Do we need to maintain a state of partial paranoia? I am unable to tell whether I simply had this type of personality, which led me into security; or whether the profession built up my “the glass is half-empty, cracked, and about to be stolen at any moment” attitude. I’d stop to smell the roses but I might suffer an allergic reaction, and I am certain those thorns would draw blood. Sometimes I feel like security professionals have become the NSA of the private sector – trust no one. We have gotten so tired of leading a charge no one follows that we have begun to shoot each other. Camaraderie from shared experiences brings us together, but a sense of distrust and disrespect causes more infighting than within any other profession I can think of. We have become a small corporate counterculture without a cool theme song.


What No One Is Saying about That Big HIPAA Fine

By now you have probably seen that the U.S. Department of Health and Human Services (HHS) fined Cignet healthcare a whopping $4.3M for, and I believe this is a legal term, being total egotistical assholes. (Because “willful neglect” just doesn’t have a good ring to it.) This is all over the security newsfeeds, despite having nothing to do with security. It’s so egregious that I suggest, if any vendor puts this number in their sales presentation, you simply stand up and walk out of the room. Don’t even bother to say anything – it’s better to leave them wondering. Where do I come up with this? The fine was due to Cignet pretty much telling HHS and a federal court to f* off when asked for materials to investigate some HIPAA complaints. To quote the ThreatPost article:

Following patient complaints, repeated efforts by HHS to inquire about the missing health records were ignored by Cignet, as was a subpoena granted to HHS’s Office of Civil Rights ordering Cignet to produce the records or defend itself in any way. When the health care provider was ordered by a court to respond to the requests, it disgorged not just the patient records in question, but 59 boxes of original medical records to the U.S. Department of Justice, which included the records of 11 individuals listed in the Office of Civil Rights subpoena, 30 other individuals who had complained about not receiving their medical records from Cignet, as well as records for 4,500 other individuals whose information was not requested by OCR.

No IT. No security breach. No mention of security issues whatsoever. Just big boxes of paper and a bad attitude.


On Science Projects

I think anyone who writes for a living sometimes neglects to provide the proper context before launching into some big thought. I plead guilty as charged on some aspects of the Risk Metrics Are Crap FireStarter earlier this week. As I responded to some of the comments, I used the term science project to describe some technologies like GRC, SIEM, and AppSec. Without context, some folks jumped on that. So let me explain a bit of what I mean.

Haves and Have Nots

At RSA, I was reminded of the gulf between the folks in our business who have and those who don’t. The ‘haves’ have sophisticated and complicated environments, invest in security, do risk assessment, periodically have auditors in their shorts, and are very likely to know their exposures. These tend to be large enterprise-class organizations – mostly because they can afford the requisite investment. Although you do see many smaller companies (especially those handling highly regulated information) that do a pretty good job on security, those folks are a small minority. The ‘have nots’ are exactly what it sounds like. They couldn’t care less about security, they want to write a check to make the auditor go away, and they resent any extra work they have to do. They may or may not be regulated, but it doesn’t really matter. They want to do their jobs and they don’t want to work hard at security. This tends to be the case more often at smaller companies, but we all know there are plenty of large enterprises in this bucket as well. We pundits, Twitterati, and bloggers tend to spend a lot of time with the haves. The have nots don’t know who Bruce Schneier is. They think AV keeps them secure. And they wonder why their bank account was looted by the Eastern Europeans.

Remember the Chasm

Lots of security folks never bothered to read Geoffrey Moore’s seminal book on technology adoption, Crossing the Chasm. It doesn’t help you penetrate a network or run an incident response, so it’s not interesting.
Au contraire: if you wonder why some product categories go away and others become things you must buy, you need to read the book. Without going too deeply into chasm vernacular, early markets are driven by early adopters. These are the customers who understand how to use an emerging technology to solve their business problem, and do much of the significant integration to get a new product to work. Sound familiar? Odds are, if you are reading our stuff, you represent folks at the early end of the adoption curve. Then there is the rest of the world. The have nots. These folks don’t want to do integration. They want products they buy to work. Just plug and play. Unless they can hit the Easy Button they aren’t interested. And since they represent the mass market (or mainstream in Moore’s lingo), unless a product/technology matures to this point, it’s unlikely to ever be a standalone, multi-billion-dollar business.

3rd Grade Science Fair

Time and again we see that this product needs tuning. Or that product requires integration. Or isn’t it great how Vendor A just opened up their API? It is if you are an early adopter, excited that you now have a project for the upcoming science fair. If you aren’t, you just shut down. You aren’t going to spend the time or the money to make something work. It’s too hard. You’ll just move on to the next issue, where you can solve a problem with a purchase order. SIEM is clearly a science project. Like all cool exploding volcanoes, circuit boards, and fighting Legos, value can be had from a SIEM deployment if you put in the work. And keep putting in the work, because these tools require ongoing, consistent care and feeding. Log management, on the other hand, is brain-dead simple. Point a syslog stream somewhere, generate a report, and you are done. Where do you think most customers needing to do security management start? Right, with log management. Over time some do make the investment to get to broader analysis (SIEM), but most don’t.
And they don’t need to. Remember – even though we don’t like it and we think they are wrong – these folks don’t care about security. They care about generating a report for the auditor, and log management does that just fine. And that’s what I mean when I call something a science project. To be clear, I love the science fair, and I’m sure many of you do as well. But it’s not for everyone.

Photo credit: “Science Projects: Volcanoes, Geysers, and Earthquakes” originally uploaded by Old Shoe Woman
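The “point a syslog stream somewhere, generate a report” workflow really can be that simple. Here is a hedged sketch – the sample log format and the regex assume classic BSD-style syslog lines, which is an assumption on our part:

```python
import collections
import re

# Extract the reporting host and process from a BSD-style syslog line,
# e.g. "Mar  4 10:01:02 fw01 sshd: Failed password for root".
LINE_RE = re.compile(r"^\w{3}\s+\d+ [\d:]+ (?P<host>\S+) (?P<proc>[\w\-/]+)")

def events_per_host(lines):
    """The whole 'report for the auditor': count events per host."""
    counts = collections.Counter()
    for line in lines:
        m = LINE_RE.match(line)
        if m:
            counts[m.group("host")] += 1
    return counts
```

That is roughly the level of effort the have nots are willing to invest; anything that needs tuning, correlation rules, and ongoing care (i.e., SIEM) is the science fair.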


Friday Summary: March 4, 2011

The Friday Summary is our chance to talk about whatever, and this week I am going to do just that. This week’s introduction has nothing to do with security, so skip it if you are offended by such things. I am a fan of basketball – despite being too slow, too short, and too encumbered by gravity to play well. Occasionally I still follow my ‘local’ Golden State Warriors despite their playoff-less futility for something like 19 of the last 20 years. Not that I know much about how to play the game, but I like watching it when I can. Since moving to Phoenix over 8 years ago it’s been tough to follow, but friends were talking last summer about the amazing rookie season performance of Stephen Curry and I was intrigued. I Googled him to find out what was going on, and found all the normal Bay Area sports blogs plus a few independents – little more than random guys talking basketball-related nonsense. But one of them – feltbot.com – was different. After following the blog for a while, an amazing thing happened: I noticed I could not stomach most of the mainstream media coverage of Warriors basketball. It not only changed my opinion on sports blogs, but cemented in my mind what I like about blogs in general – to the point that it’s making me rethink my own posts. The SF Bay Area has some great journalists, but it also has a number of people with great stature who lack talent, or the impetus to use their talent. These Bay Area personalities offer snapshots of local sports teams and lots of opinions, but very little analysis. They get lots of air but little substance. Feltbot – whoever he is – offers plenty of opinions, just like every other Bay Area sports blogger. And he has lots of biases, but they are in the open, such as being a Don Nelson fanboi. But his opinions are totally contrary to what I was reading and hearing on the radio. And he calls out everyone, from announcers to journalists, when he thinks they are off the mark.
What got me hooked was him going into great detail on why – including lots of analysis and many specific examples to back up his assertions. You read one mainstream sports blog that says one thing, and another guy who says exactly the opposite, and then goes into great detail as to why. And over the course of a basketball season, what seemed like outlandish statements in week one were dead on target by season’s end. This blog is embarrassing many of the local media folk, and downright eviscerating a few of them – making them look like clueless hacks. I started to realize how bad most of the other Bay Area sports blogs were (are); they provide minimal coverage and really poor analysis. Over time I have come to recognize the formulaic approach of the other major media personalities. You realize that most writers are not looking to analyze players, the coach, or the game – they are just looking for an inflammatory angle. Feltbot’s stuff is so much better than the other blogs I have run across that it makes me feel cheated. It’s like reading those late-career James Patterson novels where he is only looking for an emotional hook rather than trying to tell a decent story. For me, feltbot put into focus what I like to see in blogs – good analysis. Examples that illustrate the ideas. It helps a basketball noob like me understand the game. And a little drama is a good thing to stir up debate, but in excess it’s just clumsy shtick. Sometimes it takes getting outside security to remind me what’s important, so I’ll try to keep that in mind when I blog here. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Adrian’s Dark Reading post: DAM Market Observation.
Mort cited for talking about cloud security at Bsides.
Rich and Mike covered on the Tripwire blog.
Rich quoted on SearchSecurity.

Favorite Securosis Posts

Rich: Always Assume. This is a post I did a while back on how I think about threat/risk modeling.
In a post HBGary world, I think it’s worth a re-read.
Mike Rothman: What No One Is Saying about That Big HIPAA Fine. Sometimes you just need to scratch your head.
Adrian Lane: FireStarter: Risk Metrics Are Crap. Yeah, it was vague in places and intentionally incendiary, but it got the debate going. And the comments rock!

Other Securosis Posts

On Science Projects.
Random Thoughts on Securing Applications in the Cloud.
Network Security in the Age of Any Computing: the Risks.
Incite 3/2/2011: Agent Provocateur.
React Faster and Better: Index.
React Faster and Better: Piecing It Together.

Favorite Outside Posts

Rich: Numbers Good. Jeremiah’s been doing some awesome work on web stats for a while now, and this continues the trend.
Mike Rothman: Post-theft/loss Response & Recovery With Evernote. We need an IR plan for home as well. Bob does a good job of describing one way to make filing claims a lot easier.
Adrian Lane: Network Security Management-A Snapshot. Really nice overview by Shimmy!

Project Quant Posts

NSO Quant: Index of Posts.
NSO Quant: Health Metrics–Device Health.
NSO Quant: Manage Metrics–Monitor Issues/Tune IDS/IPS.
NSO Quant: Manage Metrics–Deploy and Audit/Validate.
NSO Quant: Manage Metrics–Process Change Request and Test/Approve.

Research Reports and Presentations

The Securosis 2010 Data Security Survey.
Monitoring up the Stack: Adding Value to SIEM.
Network Security Operations Quant Metrics Model.
Network Security Operations Quant Report.
Understanding and Selecting a DLP Solution.
White Paper: Understanding and Selecting an Enterprise Firewall.
Understanding and Selecting a Tokenization Solution.
Security + Agile = FAIL Presentation.

Top News and Posts

Alleged WikiLeaker could face death penalty.
SMS trojan author pleads guilty.
NIST SHA-3 Status Report.
Robert Graham Predicts Thunderbolt’s an Open Gateway.
Malware infects more than 50 Android apps.
Thoughts on Quitting Security.
Gh0stMarket operators sentenced.
Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Alex Hutton,


Incite 3/2/2011: Agent Provocateur

It’s been a while since I have gotten into a good old-fashioned Twitter fight. Actually, the concept behind FireStarter was to throw some controversial thought balloons out there and let the community pick our stuff apart and help find the break points in our research positions. As Jeremiah tweeted yesterday about my post Risk Metrics Are Crap: “whatever the case, mission accomplished. Firestarter!” It devolved pretty quickly into a bare-knuckled brawl with some of the vociferous risk metrics folks. After reading our Twitter exchanges yesterday and today, you might think that Alex Hutton and I don’t like each other. I can’t speak for him, but I like Alex a lot. He’s smart, well read, and passionate about risk metrics. I knew I’d raise his ire with the post, and it’s all good. It’s not the first time we’ve sparred on this topic, and it won’t be the last. Lord knows I make a trade of giving folks a hard time, so it would be truly hypocritical if I didn’t like the taste of my own medicine. And it don’t taste like chicken. Just remember: you won’t last in any business if you can’t welcome opposing perspectives and spirited debate. Though I do have to admit that Twitter has really screwed up the idea of a blog fight. In the good old days – you know, like 3 years ago – fights would be waged either in the comments or in alternating inflammatory blog posts. It was awesome and asynchronous. I wouldn’t lose part of an argument because I had to take a piss and was away from my keyboard for a minute. And I also wasn’t restricted to 140 characters, which makes it tough to discuss the finer points of security vs. risk metrics. But either way, I appreciate the willingness of Alex and other risk metrics zealots like Jack Jones and Chris Hayes to wade into the ThunderDome and do the intellectual tango. But hugging it out with these guys isn’t the point. I’ve always been lucky to have folks around to ask the hard questions, challenge assumptions, and make me think about my positions.
And I do that for my friends as well. One of them once called me a ‘provocateur’ – in a good way. He wanted to bring me into his shop to ask those questions, call their babies ugly, and not allow his team to settle for the status quo. Not without doing the work to make sure the status quo made sense moving forward. It doesn’t matter what side of the industry you play on. Users need someone to challenge their architectures, control sets, and priorities. Vendors need someone to stir the pot about product roadmap, positioning, and go-to-market strategies. Analysts and consultants need someone to tell them they are full of crap and must revisit their more hare-brained positions. The good news is I have folks, both inside and outside Securosis, lined up around the block to do just that. Where can you find these provocateurs? We at Securosis do a good bit of it, both formally and informally. And we’ll be doing a lot more when we launch the sekret project. You can also find plenty of folks at your security bitch sessions networking groups who will be happy to poke holes in your strategy. Or you can go to an ISSA meeting, and while trying to avoid a salesperson humping your leg you might run into someone who can help. They would much rather be talking to you than be a sales spunk repository, for sure. Also keep in mind that the provocateur isn’t just a work thing. I like it when folks give me pointers on child rearing, home projects, and anything else. I probably wouldn’t appreciate it if someone blogged that “Rothman’s Drywall Skills Are Crap” – not at first, at least. But maybe if they helped me see a different way of looking at the problem (maybe wallpaper, or paneling, or a nice fellow who does drywall for a living), it would be a welcome intrusion. Or maybe I’d just hit them with a bat. Not all provocateurs find a happy ending.
-Mike

Photo credits: “So pretty.” originally uploaded by cinderellasg

Incite 4 U

Ready for the onslaught of security migrants?: Last week I ranted a bit about giving up, and how some folks weren’t really prepared for the reality of the Bizarro World of security. Well, sports fans, it won’t be getting better. When the CareerBuilder folks call “Cyber security specialist” the top potential job, we are all screwed. Except SANS – they will continue running to the bank, certifying this new generation of IT migrants looking for the next harvest. But we probably shouldn’t bitch too much, given the skills shortage. But do think ahead about how your organization needs to evolve, given the inevitable skill decline when you hire n00bs. We all know a company’s security is only as good as its weakest link, and lots of these new folks will initially be weak. So check your change management processes now and make sure you adequately test any change. – MR

NSFW login: Every now and then an idea comes along that is so elegant, so divinely inspired, that it nearly makes me believe that perhaps there is more to this human experience than the daily grind of existence. I am, of course, talking about the Naked Password. Here’s how it works… you install the JavaScript on your site, and as users create passwords – the (ahem) ‘longer’ and ‘stronger’ the password, the less clothing on the 8-bit illustrated woman next to the password field. Forget the password strength meter, this is a model I can… really get my arms around. Mike Bailey said it best when he reminded us that, for all the time we spend learning about social engineering attacks, perhaps we should apply some of those principles to our own people. – RM

Old school cloud: When did Gmail & Hotmail become “The Cloud”? Seriously. Gmail goes down for a few hours – because of a bad patch – and that warrants


Network Security in the Age of *Any* Computing: the Risks

We are pleased to kick off the next of our research projects, which we call “Network Security in the Age of Any Computing.” It’s about reducing attack surface, now that those wacky users expect to connect to critical resources from any device, at any time, from anywhere in the world. Thus ‘any’ computing. Remember, in order to see our blog series (and the rest of our content) you’ll need to check out our Heavy feed. You can also subscribe to the Heavy feed via RSS.

Introduction

Everyone loves their iDevices and Androids. The computing power that millions now carry in their pockets would have required raised flooring and a large room full of big iron just 25 years ago. But that’s not the only impact we see from this wave of consumerization. Whatever control we (IT) thought we had over the environment is gone. End users pick their devices and demand access to critical information within the enterprise. And that’s not all. We also see demands for unfettered access from anywhere in the world, at any time of day. And though smartphones are the most visible devices, there are more: the ongoing tablet computing invasion (iPad for the win!), and a new generation of workers who either idolize Steve Jobs and will be using a Mac whether you like it or not, or are technically savvy and prefer Linux. Better yet, you aren’t in a position to dictate much of anything moving forward. It’s a great time to be a security professional, right? Sure, we could hearken back to the good old days. You know – the days of the BlackBerry, when we had some semblance of control. All mobile access happened through your BlackBerry Enterprise Server (BES). You could wipe the devices remotely and manage policy and access. Even better, you owned the devices, so you could dictate what happened on them. Those days are over. Deal with it.

The Risks of Any Computing

We call this concept any computing.
You are required to provide access to critical and sensitive information on any device, from anywhere, at any time. Right – it’s scary as hell. Let’s take a step back and quickly examine the risks. If you want more detail, check out our white paper on Mobile Device Security (PDF):

Lost Devices: Some numbnuts you work with manage to lose laptops, so imagine what they’ll do with these much smaller and more portable devices. They will lose them, with data on them. And be wary of device sales – folks will often use their own devices, copy your sensitive data to them, and eventually sell them. A few of these people will think to wipe their devices first, but you cannot rely on their memory or sense of responsibility.

Wireless Shenanigans: All of these any computing devices include WiFi radios, which means folks can connect to any network. And they do. So we need to worry about what they are connecting to, who is listening (man in the middle), and other meddling with network connectivity. And rogue access points aren’t only in airport clubs and coffee shops. Odds are NetStumbler can find some ‘unauthorized’ networks in your own shop. Plenty of folks use 3G cards to get a direct pipe to the Internet – bypassing your egress controls – and if they’re generous they might provide an unrestricted hotspot for their neighbors. Did I hear you say ubiquitous connectivity is a good thing?

Malware: Really? To be clear, malware isn’t much of an issue on smartphones now. But you can’t assume it never will be, can you? More importantly, consumer laptops may not be protected against today’s attacks and malware. Even better, many folks have jailbroken their devices to load that new shiny application – not noticing that they disabled many of their device’s built-in security features in the process. Awesome.

Configuration: Though not necessarily a security issue, you need to consider that many of these devices are not configured correctly.
They will load applications they don’t need and turn off key security controls, then connect to your customer database. So any computing creates clear and significant management issues as well. If not handled correctly, these will create vastly more attack surface. “Network Security in the Age of Any Computing” will look at these issues from a network-centric perspective. Why? You don’t control the devices, so you need to look at what types of environments/controls can provide some control at a layer you do control – the network. We’ll examine a few network architectures for dealing with these devices. We will also look at some network security technologies that can help protect critical information assets.

Business Justification

Finally, let’s deal with the third wheel of any security initiative: business justification. Ultimately you need to make the case to management that additional security technologies are worthwhile. Of course, you could default to the age-old justification of fear – wearing them down with all the bad things that could happen. But with any computing it doesn’t need to be that complicated.

List top line impact: First we need to pay attention to the top line, because that’s what the bean counters and senior execs are most interested in. So map out what new business processes can happen with support for these devices, and get agreement that the top line impact of these new processes is bigger than a breadbox. It will be hard (if not impossible) to estimate true revenue impact, so the goal is to get acknowledgement that the positive business impact is real.

New attack vectors: Next have a very unemotional discussion about all the new ways to compromise your critical information via these new processes. Again, you don’t need to throw FUD (fear, uncertainty, and doubt) bombs, because you have reality on your side. Any computing does make it harder to protect information.
  • Close (or not): Now you are in a position to close the loop and get funding – not by selling Armageddon, but by presenting a simple trade-off. The organization needs to support any computing for lots of business reasons. That introduces new attack vectors, putting critical data at risk. It will cost $X


Random Thoughts on Securing Applications in the Cloud

How do you secure data in the cloud? The answer is “it depends”. What type of cloud are you talking about – IaaS, PaaS, or SaaS? Public or private? What services or applications are you running? What data do you want to protect? Following up on things I learned at RSA, one statement I heard finally makes sense. A couple weeks ago Chris Hoff surprised me when, talking about data security in the cloud, he tweeted:

Really people need to be thinking more about app-level encryption.

Statements like that normally make the information-centric security proponent in me smile with glee. But this time I did not get his point. There are lots of different models of the cloud, and lots of ways to protect data, so why the emphatic statement? He answered the question during the Cloudiquantanomidatumcon presentation. Chris asked, “How do you secure data in two virtual machines running in the cloud?” The standard answer: PKI and SSL. Data at rest and data in motion are covered. With that model in your head, it does not look too complex. But during the presentation, especially in an IaaS context, you begin to realize this becomes a problem as you scale to many virtual machines with many users and dispersed infrastructure bits and pieces. As you multiply virtual machines and add users, you not only create a key management problem, but also lose the context of which users should be able to access the data. Encryption at the app layer keeps data secure both at rest and in motion, should reduce the key management burden, and helps address data usage security. App-layer encryption has just about the same level of complexity with two VMs, but its complexity scales up much more gradually as you expand the application across multiple servers, databases, storage devices, and whatnot. So Chris convinced me that application encryption is the way to scale, and this aligns with the research paper Rich and I produced on Database Encryption, but for slightly different reasons.
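To make the pattern concrete, here is a minimal, dependency-free sketch of what “encrypt at the app layer” means: the application holds the key and encrypts fields before they ever reach the database, disk, or wire, so the lower layers only ever see ciphertext. The HMAC-SHA256 counter-mode keystream here is purely illustrative so the example runs on a bare Python install – it is my stand-in, not anything Chris proposed, and a real deployment would use a vetted crypto library instead:

```python
import hashlib
import hmac
import os

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a keystream from HMAC-SHA256 in counter mode (demo only)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(8, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt_field(key: bytes, plaintext: bytes) -> bytes:
    """Encrypt inside the app; storage and network layers see only ciphertext."""
    nonce = os.urandom(16)  # fresh per record, stored alongside the ciphertext
    stream = _keystream(key, nonce, len(plaintext))
    return nonce + bytes(p ^ s for p, s in zip(plaintext, stream))

def decrypt_field(key: bytes, blob: bytes) -> bytes:
    """Reverse of encrypt_field: split off the nonce, regenerate the stream."""
    nonce, ciphertext = blob[:16], blob[16:]
    stream = _keystream(key, nonce, len(ciphertext))
    return bytes(c ^ s for c, s in zip(ciphertext, stream))

key = os.urandom(32)                  # held by the application, not the VM image
record = b"4111-1111-1111-1111"
stored = encrypt_field(key, record)   # what the database/VM actually persists
print(decrypt_field(key, stored))
```

The point of the sketch is the architecture, not the cipher: because the key lives with the application rather than with each VM or storage device, adding more VMs does not multiply the number of places the key (and the access-control decision) has to live.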
I can’t possibly cover all the nuances of this discussion in a short post, and this is big picture stuff. And honestly, it’s a model that theoretically makes a lot of sense – but then again so does DRM, and production deployments of that technology are rare as hen’s teeth. Hopefully this will make sense before you find yourself virtually knee deep in servers.


React Faster and Better: Index

With yesterday’s post, we have reached the end of the React Faster and Better series on advanced Incident Response. This series focuses a bit more on tools and tactics than Incident Response Fundamentals did. For some of you, this will be the first time you are seeing some of these posts. No, we aren’t cheating you. But we have moved our blog series to our Heavy Feed (http://securosis.com/blog/full) to keep the main feed focused on news and commentary. Over the next week or so, we’ll be turning the series into some white paper goodness, so stay tuned for that.

  • Introduction
  • Incident Response Gaps
  • New Data for New Attacks
  • Alerts & Triggers
  • Initial Incident Data
  • Organizing for Response
  • Kicking off a Response
  • Contain and Respond
  • Respond, Investigate, and Recover
  • Piecing It Together

Check it out.


FireStarter: Risk Metrics Are Crap

I recently got into a debate with someone about cyber-insurance. I know some companies are buying insurance to protect against a breach, or to contain risk, or for some other reason. In reality, these folks are flushing money down the toilet. Why? Because the insurance companies are charging too much. We’ve already had some brave soul admit that the insurers have no idea how to price these policies because they have no data, so they are making up the numbers. And I assure you, they are not going to put themselves at risk, so they err on the side of charging too much. Which means buyers of these policies are flushing money down the loo.

Of course, cyber-insurance is just one example of trying to quantify risk. And taking the chance that the ALE heads and my FAIR-weather friends will jump on my ass, let me bait the trolls and see what happens. I still hold that risk metrics are crap. Plenty of folks make up analyses in attempts to quantify something we really can’t. Risk means something different to everyone – even within your organization. I know FAIR attempts to standardize vernacular and get everyone on the same page (which is critical), but I am still missing the value of actually building the models and plugging made-up numbers in. I’m pretty sure modeling risk has failed miserably over time. Yet lots of folks continue to do so, with catastrophic results. They think generating a number makes them right. It doesn’t. If you don’t believe me, I have a tranche of sub-prime mortgages to sell you. There may be examples of risk quantification wins in security, but it’s hard to find them. Jack is right: The cost of non-compliance is zero* (*unless something goes wrong). I just snicker at the futility of trying to estimate the chance of something going wrong. And if a bean counter has ever torn apart your fancy spreadsheet estimating such risk, you know exactly what I’m talking about.
That said, I do think it’s very important to assess risk, as opposed to trying to quantify it. No, I’m not talking out of both sides of my mouth. We need to be able to categorize every decision into a number of risk buckets that can be used to compare the relative risk of any decision we make against other choices we could make. For example, we should be able to evaluate the risk of firing our trusted admin (probably pretty risky, unless your de-provisioning processes kick ass) versus not upgrading your perimeter with a fancy application-aware box (not as risky, because you already block Facebook and do network layer DLP). But you don’t need to be able to say the risk of firing the admin is 92, and the risk of not upgrading the perimeter is 25. Those numbers are crap, and smell as bad as the vendors who try to tie their security products to a specific ROI.

BTW, I’m not taking a dump on all quantification. I have always been a big fan of security (as opposed to risk) metrics. From an operational standpoint, we need to measure our activity and work to improve it. I have been an outspoken proponent of benchmarking, which requires sharing data (h/t to New School), and I expect to be kicking off a research project to dig into security benchmarking within the next few weeks. And we can always default to Shrdlu’s next-generation security metrics, which are awesome. But I think spending a lot of time trying to quantify risk continues to be a waste. I know you all make decisions every day because Symantec thinks today’s CyberCrime Index is 64 and that’s down 6%. Huh? WTF? I mean, that’s just making sh*t up.

So fire away, risk quantifiers. Why am I wrong? What am I missing? How have you achieved success quantifying risk? Or am I just picking on the short bus this morning?

Photo credits: “Smoking pile of sh*t – cropped” originally uploaded by David T Jones
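The bucket idea above can be sketched in a few lines – ordinal categories that support “riskier than” comparisons without pretending to a precise score. The bucket names, example decisions, and assignments here are mine, purely illustrative, not any formal Securosis framework:

```python
from enum import IntEnum

class Risk(IntEnum):
    """Ordinal buckets: they order decisions, they don't score them."""
    LOW = 1
    MODERATE = 2
    HIGH = 3

# Each decision gets a bucket from assessment and discussion, not a formula.
decisions = {
    "fire trusted admin (weak de-provisioning)": Risk.HIGH,
    "skip perimeter upgrade (Facebook already blocked)": Risk.LOW,
    "allow personal tablets on the production VLAN": Risk.MODERATE,
}

# Buckets support relative comparison -- which is all we claimed we could do.
riskiest = max(decisions, key=decisions.get)
print(f"Deal with this first: {riskiest}")
```

Note what the sketch can and cannot say: it ranks the admin decision above the perimeter one, but it never claims the first is a “92” and the second a “25” – the whole argument is that the ordering is defensible while the precise numbers are not.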


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.