Securosis

Research

Network Security in the Age of *Any* Computing: Containing Access

In the first post of this series, we talked about the risks inherent in this concept of any computing, where those crazy users want to get at critical data at any time, from anywhere, on any device. And we all know it’s not pretty. Sure, there are things we can do at the device layer to protect the devices and ensure proper configurations. But in this series we will focus on how to architect and secure the network to protect critical data. The first aspect of that is restricting access to key portions of your network to only those folks who need it.

Segmentation is your friend

There is an old saying, “out of sight, out of mind,” which could be rephrased for information security as, “out of reach, out of BitTorrent.” By using a smart network segmentation strategy, you can keep the critical data out of the clutches of attackers. OK, that’s an overstatement, but segmentation is the first step to protecting key data. We want to make it as hard as possible for the data to be compromised, and that’s why we put up as many obstacles as possible for attackers. Unless you are being specifically targeted, simply not being the path of least resistance is a decent strategy. The fewer folks who have access to something, the less likely that access will be abused, and the more quickly and effectively we can figure out who the bad actor is in case of malfeasance. Not that we believe the PCI-DSS v2.0 standards represent even a low bar for security controls, but they do advocate and require segmentation of cardholder data. Here is the specific language:

All systems must be protected from unauthorized access from untrusted networks, whether entering the system via the Internet as e-commerce, employee Internet access through desktop browsers, employee e-mail access, dedicated connections such as business-to-business connections, via wireless networks, or via other sources. Often, seemingly insignificant paths to and from untrusted networks can provide unprotected pathways into key systems.
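As a thought experiment, the access question segmentation enforces can be sketched in a few lines of Python. The data sources, sensitivity tiers, and segment names below are all invented for illustration – the point is that a segmented network reduces access to a simple, auditable question: can this segment reach this data source?

```python
# Hypothetical segmentation policy: data sources classified into sensitivity
# tiers, and each tier reachable only from designated network segments.
# All names here are illustrative, not from any real deployment.
TIER_OF_SOURCE = {
    "cardholder-db": "restricted",
    "hr-fileshare": "sensitive",
    "wiki": "general",
}

ALLOWED_SEGMENTS = {
    "restricted": {"pci-segment"},
    "sensitive": {"corp-lan", "pci-segment"},
    "general": {"corp-lan", "pci-segment", "guest-wlan"},
}

def access_permitted(source: str, segment: str) -> bool:
    """Return True only if the caller's segment may reach the data source."""
    # Unknown sources default to the most locked-down tier.
    tier = TIER_OF_SOURCE.get(source, "restricted")
    return segment in ALLOWED_SEGMENTS[tier]
```

Note the default: anything unclassified is treated as restricted, which is the fail-safe posture the segmentation argument implies.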
One architectural construct for thinking about segmentation is the idea of vaults, which is really just a different way of thinking about segmentation of all data – not just cardholder data. This entails classifying data sources into a few tiers of sensitivity and then designing a control set to restrict access to those authorized. The goal behind classifying critical data sources is to ensure access is provided only to the right person, on the right device, from the right place, at the right time. Of course, that first involves defining rules for who can come in, from where, when, and on what device. And we cannot trivialize that effort, because it’s time consuming and difficult. But it needs to be done. Once the data is classified and the network is segmented – which we will discuss in more depth as we progress through this series – we need to authenticate the user. An emerging means of enforcing access for only authorized devices is something like risk-based or adaptive authentication, where the authentication isn’t just about two or more factors, but instead is dynamically evaluated based on any number of data points, including who you are, what you are doing, where you are connecting from, and when you are trying to gain access. This certainly works well for ensuring only the right folks get in, but what happens once they are in? The obvious weakness of a control structure focused purely on initial authentication is that a device could be compromised after entry – and then all the network controls are irrelevant because the device already has unfettered access. A deeper look at risk-based authentication is beyond our scope for this research project, but it warrants investigation as you design control structures. We also need to look very critically at how the network controls can be bypassed.
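To make the adaptive authentication idea concrete, here is a minimal sketch of the scoring approach the paragraph describes. The signals, weights, and threshold are entirely invented – real products tune these against actual usage data – but the shape is the point: risk is accumulated across data points rather than counted in factors.

```python
# Illustrative risk-based authentication scoring. Every value below
# (known devices, usual countries, weights, threshold) is a made-up
# stand-in for what a real adaptive-auth product would learn or configure.
KNOWN_DEVICES = {"laptop-4411"}
USUAL_COUNTRIES = {"US"}
BUSINESS_HOURS = range(7, 20)  # 7am to 7pm

def auth_risk(device_id: str, country: str, hour: int, resource_tier: str) -> int:
    """Score a login attempt across several signals; higher means riskier."""
    score = 0
    if device_id not in KNOWN_DEVICES:
        score += 2   # unrecognized device
    if country not in USUAL_COUNTRIES:
        score += 2   # unusual location
    if hour not in BUSINESS_HOURS:
        score += 1   # odd hour
    if resource_tier == "restricted":
        score += 1   # more sensitive target raises the bar
    return score

def step_up_required(score: int) -> bool:
    """Above this threshold, demand an additional factor rather than deny outright."""
    return score >= 3
```

A known laptop connecting from the usual country during business hours scores zero and sails through; an unknown tablet from an unusual country at 3am, aimed at restricted data, trips the step-up. The same weakness the post flags applies here too: this gate only fires at login, so it says nothing about a device compromised afterward.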
If a machine is compromised after getting access, that is a problem unless you continuously scrutinize who has access. And yes, we’ll discuss that in the next post. You also need to worry about unauthorized physical access to your network. That could be a hijacked physical port or a rogue wireless access point. Either way, someone then gets physical access to your network and bypasses the perimeter controls.

Architectural straw man

Now let’s talk about one architectural construct: a network built in three segments, depending on the use model for access.

- Corporate network: This involves someone with physical access to your corporate network, via either a wired connection or a wireless access point.
- External mobile devices: These devices access corporate resources via an uncontrolled network. That includes home networks, public wireless networks, cellular (3G) networks, and even partner networks. If your network security team can’t see the entirety of the ingress path, then you need to consider it an external connection and device.
- Internal guest access: These are devices that just need to access the Internet from inside one of your facilities. Typically these are smartphones used by employees, but we must also factor in a use case for businesses (retail/restaurants, healthcare facilities, etc.) to provide access as a service.

We want to provide different (and increasing) numbers of hoops for users to jump through to get access to important data. The easiest to discuss is the third case (internal guest access), because you only need to provide an egress pipe for those folks. We recommend total physical isolation for these devices. That means a totally separate (overlay) wireless network, which uses a different pipe to the Internet. Yes, that’s more expensive. But you don’t want a savvy attacker figuring out a way to jump from the egress network to the internal network.
If the networks are totally separate, you eliminate that risk. The techniques to support your corporate network and external mobile devices are largely the same under the philosophy of “trust, but verify.” So we need to design the control sets to scrutinize users. The real question is how many more


On Science Projects

I think anyone who writes for a living sometimes neglects to provide the proper context before launching into some big thought. I plead guilty as charged on some aspects of the Risk Metrics Are Crap FireStarter earlier this week. As I responded to some of the comments, I used the term science project to describe some technologies like GRC, SIEM, and AppSec. Without context, some folks jumped on that. So let me explain a bit of what I mean.

Haves and Have Nots

At RSA, I was reminded of the gulf between the folks in our business who have and those who don’t. The ‘haves’ have sophisticated and complicated environments, invest in security, do risk assessment, periodically have auditors in their shorts, and are very likely to know their exposures. These tend to be large enterprise-class organizations – mostly because they can afford the requisite investment. Although you do see many smaller companies (especially if they handle highly regulated information) that do a pretty good job on security. These folks are a small minority. The ‘have nots’ are exactly what it sounds like. They couldn’t care less about security, they want to write a check to make the auditor go away, and they resent any extra work they have to do. They may or may not be regulated, but it doesn’t really matter. They want to do their jobs and they don’t want to work hard at security. This tends to be the case more often at smaller companies, but we all know there are plenty of large enterprises in this bucket as well. We pundits, Twitterati, and bloggers tend to spend a lot of time with the haves. The have nots don’t know who Bruce Schneier is. They think AV keeps them secure. And they wonder why their bank account was looted by the Eastern Europeans.

Remember the Chasm

Lots of security folks never bothered to read Geoffrey Moore’s seminal book on technology adoption, Crossing the Chasm. It doesn’t help you penetrate a network or run an incident response, so it’s not interesting.
Au contraire: if you wonder why some product categories go away and others become things you must buy, you need to read the book. Without going too deeply into chasm vernacular, early markets are driven by early adopters. These are the customers who understand how to use an emerging technology to solve their business problem, and who do much of the significant integration to get a new product to work. Sound familiar? Odds are, if you are reading our stuff, you represent folks at the early end of the adoption curve. Then there is the rest of the world. The have nots. These folks don’t want to do integration. They want the products they buy to work. Just plug and play. Unless they can hit the Easy Button they aren’t interested. And since they represent the mass market (or mainstream, in Moore’s lingo), unless a product/technology matures to this point it’s unlikely to ever be a standalone, multi-billion-dollar business.

3rd Grade Science Fair

Time and again we see that this product needs tuning. Or that product requires integration. Or isn’t it great how Vendor A just opened up their API. It is if you are an early adopter, excited that you now have a project for the upcoming science fair. If you aren’t, you just shut down. You aren’t going to spend the time or the money to make something work. It’s too hard. You’ll just move on to the next issue, where you can solve a problem with a purchase order. SIEM is clearly a science project. Like all cool exploding volcanoes, circuit boards, and fighting Legos, value can be had from a SIEM deployment if you put in the work. And keep putting in the work, because these tools require ongoing, consistent care and feeding. Log Management, on the other hand, is brain-dead simple. Point a syslog stream somewhere, generate a report, and you are done. Where do you think most customers needing to do security management start? Right, with log management. Over time a few do make the investment to get to broader analysis (SIEM), but most don’t.
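That syslog-to-report path really is about as complicated as log management gets. A hedged sketch – the log lines and the failed-login report below are invented for illustration, not any product’s actual output:

```python
# Minimal "Easy Button" log management: point a syslog file at a script,
# get a report. Log format and report are illustrative stand-ins.
from collections import Counter

def failed_login_report(lines):
    """Count failed SSH logins per host from syslog-style lines."""
    counts = Counter()
    for line in lines:
        if "Failed password" in line:
            # Crude parse: assume the hostname is the 4th whitespace field,
            # as in a typical "Mon DD HH:MM:SS host sshd[pid]: ..." line.
            counts[line.split()[3]] += 1
    return counts

sample = [
    "Mar  1 09:14:02 web01 sshd[311]: Failed password for root from 10.0.0.9",
    "Mar  1 09:14:05 web01 sshd[311]: Failed password for root from 10.0.0.9",
    "Mar  1 09:15:44 db02 sshd[912]: Accepted password for backup",
]
```

A few lines, one report, auditor satisfied – no tuning, no integration, no science fair.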
And they don’t need to. Remember – even though we don’t like it and we think they are wrong – these folks don’t care about security. They care about generating a report for the auditor, and log management does that just fine. And that’s what I mean when I call something a science project. To be clear, I love the science fair, and I’m sure many of you do as well. But it’s not for everyone. Photo credit: “Science Projects: Volcanoes, Geysers, and Earthquakes” originally uploaded by Old Shoe Woman


Incite 3/2/2011: Agent Provocateur

It’s been a while since I have gotten into a good old-fashioned Twitter fight. Actually, the concept behind FireStarter was to throw some controversial thought balloons out there and let the community pick our stuff apart and help find the break points in our research positions. As Jeremiah tweeted yesterday in response to my post Risk Metrics Are Crap: “whatever the case, mission accomplished. Firestarter!” It devolved pretty quickly into a bare-knuckled brawl with some of the more vociferous risk metrics folks. After reading our Twitter exchanges yesterday and today, you might think that Alex Hutton and I don’t like each other. I can’t speak for him, but I like Alex a lot. He’s smart, well read, and passionate about risk metrics. I knew I’d raise his ire with the post, and it’s all good. It’s not the first time we’ve sparred on this topic, and it won’t be the last. Lord knows I make a trade of giving folks a hard time, so it would be truly hypocritical if I didn’t like the taste of my own medicine. And it don’t taste like chicken. Just remember, you won’t last in any business if you can’t welcome opposing perspectives and spirited debate. Though I do have to admit that Twitter has really screwed up the idea of a blog fight. In the good old days – you know, like 3 years ago – fights would be waged either in the comments or by alternating inflammatory blog posts. It was awesome and asynchronous. I wouldn’t lose part of an argument because I had to take a piss and was away from my keyboard for a minute. And I also wasn’t restricted to 140 characters, which makes it tough to discuss the finer points of security vs. risk metrics. But either way, I appreciate the willingness of Alex and other risk metrics zealots like Jack Jones and Chris Hayes to wade into the ThunderDome and do the intellectual tango. But hugging it out with these guys isn’t the point. I’ve always been lucky to have folks around to ask the hard questions, challenge assumptions, and make me think about my positions.
And I do that for my friends as well. One of whom once called me a ‘provocateur’ – in a good way. He wanted to bring me into his shop to ask those questions, call their babies ugly, and not allow his team to settle for the status quo. Not without doing the work to make sure the status quo made sense moving forward. It doesn’t matter what side of the industry you play on. Users need someone to challenge their architectures, control sets, and priorities. Vendors need someone to stir the pot about product roadmap, positioning, and go-to-market strategies. Analysts and consultants need someone to tell them they are full of crap, and that they must revisit their more hare-brained positions. The good news is I have folks, both inside and outside Securosis, lined up around the block to do just that. I think that’s good news. Where can you find these provocateurs? We at Securosis do a good bit of it, both formally and informally. And we’ll be doing a lot more when we launch the sekret project. You can also find plenty of folks at your security bitch sessions networking groups who will be happy to poke holes in your strategy. Or you can go to an ISSA meeting, and while trying to avoid a sales person humping your leg you might run into someone who can help. They would much rather be talking to you than be a sales spunk repository, for sure. Also keep in mind that the provocateur isn’t just a work thing. I like when folks give me pointers on child rearing, home projects, and anything else. I probably wouldn’t appreciate it if someone blogged that “Rothman’s Drywall Skills Are Crap” – not at first, at least. But maybe if they helped me see a different way of looking at the problem (maybe wallpaper, or paneling, or a nice fellow who does drywall for a living), it would be a welcome intrusion. Or maybe I’d just hit them with a bat. Not all provocateurs find a happy ending.
-Mike

Photo credits: “So pretty.” originally uploaded by cinderellasg

Incite 4 U

Ready for the onslaught of security migrants?: Last week I ranted a bit about giving up, and how some folks weren’t really prepared for the reality of the Bizarro World of security. Well, sports fans, it won’t be getting better. When the CareerBuilder folks call “Cyber security specialist” the top potential job, we are all screwed. Except SANS – they will continue running to the bank, certifying this new generation of IT migrants looking for the next harvest. But we probably shouldn’t bitch too much, given the skills shortage. Do think ahead, though, about how your organization needs to evolve, given the inevitable skill decline when you hire n00bs. We all know a company’s security is only as good as its weakest link, and lots of these new folks will initially be weak. So check your change management processes now and make sure you adequately test every change. – MR

NSFW login: Every now and then an idea comes along that is so elegant, so divinely inspired, that it nearly makes me believe there is more to this human experience than the daily grind of existence. I am, of course, talking about the Naked Password. Here’s how it works… you install the JavaScript on your site, and as users create passwords – the (ahem) ‘longer’ and ‘stronger’ the password – the less clothing on the 8-bit illustrated woman next to the password field. Forget the password strength meter, this is a model I can… really get my arms around. Mike Bailey said it best when he reminded us that, for all the time we spend learning about social engineering attacks, perhaps we should apply some of those principles to our own people. – RM

Old school cloud: When did Gmail & Hotmail become “The Cloud”? Seriously. Gmail goes down for a few hours – because of a bad patch – and that warrants


Network Security in the Age of *Any* Computing: the Risks

We are pleased to kick off the next of our research projects, which we call “Network Security in the Age of Any Computing.” It’s about reducing attack surface, now that those wacky users expect to connect to critical resources from any device, at any time, from anywhere in the world. Thus ‘any’ computing. Remember, in order to see our blog series (and the rest of our content) you’ll need to check out our Heavy feed. You can also subscribe to the Heavy feed via RSS.

Introduction

Everyone loves their iDevices and Androids. The computing power that millions now carry in their pockets would have required raised flooring and a large room full of big iron just 25 years ago. But that’s not the only impact we see from this wave of consumerization. Whatever control we (IT) thought we had over the environment is gone. End users pick their devices and demand access to critical information within the enterprise. And that’s not all. We also face demands for unfettered access from anywhere in the world, at any time of day. And though smartphones are the most visible devices, there are more. We have the ongoing tablet computing invasion (iPad for the win!), and a new generation of workers who either idolize Steve Jobs and will be using a Mac whether you like it or not, or are technically savvy and prefer Linux. Better yet, you aren’t in a position to dictate much of anything moving forward. It’s a great time to be a security professional, right? Sure, we could hearken back to the good old days. You know – the days of the BlackBerry, when we had some semblance of control. All mobile access happened through your BlackBerry Enterprise Server (BES). You could wipe the devices remotely and manage policy and access. Even better, you owned the devices, so you could dictate what happened on them. Those days are over. Deal with it.

The Risks of Any Computing

We call this concept any computing.
You are required to provide access to critical and sensitive information on any device, from anywhere, at any time. Right – it’s scary as hell. Let’s take a step back and quickly examine the risks. If you want more detail, check out our white paper on Mobile Device Security (PDF):

- Lost Devices: Some numbnuts you work with manage to lose laptops, so imagine what they’ll do with these much smaller and more portable devices. They will lose them, with data on them. And be wary of device sales – folks will often use their own devices, copy your sensitive data to them, and eventually sell them. A few of these people will think to wipe their devices first, but you cannot rely on their memory or sense of responsibility.
- Wireless Shenanigans: All of these any computing devices include WiFi radios, which means folks can connect to any network. And they do. So we need to worry about what they are connecting to, who is listening (man in the middle), and who is otherwise messing with network connectivity. And rogue access points aren’t only in airport clubs and coffee shops. Odds are NetStumbler can find some ‘unauthorized’ networks in your own shop. Plenty of folks use 3G cards to get a direct pipe to the Internet – bypassing your egress controls – and if they’re generous they might provide an unrestricted hotspot for their neighbors. Did I hear you say ubiquitous connectivity is a good thing?
- Malware: Really? To be clear, malware isn’t much of an issue on smartphones now. But you can’t assume it never will be, can you? More importantly, consumer laptops may not be protected against today’s attacks and malware. Even better, many folks have jailbroken their devices to load that new shiny application – not noticing that in the process they disabled many of their device’s built-in security features. Awesome.
- Configuration: Though not necessarily a security issue, you need to consider that many of these devices are not configured correctly. They will load applications they don’t need and turn off key security controls, then connect to your customer database. So any computing creates clear and significant management issues as well. If not handled correctly, these will create vastly more attack surface.

“Network Security in the Age of Any Computing” will look at these issues from a network-centric perspective. Why? You don’t control the devices, so you need to look at what types of environments/controls can provide some control at a layer you do control – the network. We’ll examine a few network architectures to deal with these devices. We will also look at some network security technologies that can help protect critical information assets.

Business Justification

Finally, let’s just deal with the third wheel of any security initiative: business justification. Ultimately you need to make the case to management that additional security technologies are worthwhile. Of course, you could default to the age-old justification of fear – wearing them down with all the bad things that could happen. But with any computing it doesn’t need to be that complicated.

- List top line impact: First we need to pay attention to the top line, because that’s what the bean counters and senior execs are most interested in. So map out what new business processes can happen with support for these devices, and get agreement that the top line impact of these new processes is bigger than a breadbox. It will be hard (if not impossible) to estimate true revenue impact, so the goal is to get acknowledgement that the positive business impact is real.
- New attack vectors: Next have a very unemotional discussion about all the new ways to compromise your critical information via these new processes. Again, you don’t need to throw FUD (fear, uncertainty, and doubt) bombs, because you have reality on your side. Any computing does make it harder to protect information.
- Close (or not): Basically you are now in a position to close the loop and get funding – not by selling Armageddon, but by offering a simple trade-off. The organization needs to support any computing for lots of business reasons. That introduces new attack vectors, putting critical data at risk. It will cost $X


React Faster and Better: Index

With yesterday’s post, we have reached the end of the React Faster and Better series on advanced incident response. This series focuses a bit more on tools and tactics than Incident Response Fundamentals did. For some of you, this will be the first time you are seeing some of these posts. No, we aren’t cheating you. But we have moved our blog series to our Heavy Feed (http://securosis.com/blog/full) to keep the main feed focused on news and commentary. Over the next week or so, we’ll be turning the series into some white paper goodness, so stay tuned for that.

- Introduction
- Incident Response Gaps
- New Data for New Attacks
- Alerts & Triggers
- Initial Incident Data
- Organizing for Response
- Kicking off a Response
- Contain and Respond
- Respond, Investigate, and Recover
- Piecing It Together

Check it out.


FireStarter: Risk Metrics Are Crap

I recently got into a debate with someone about cyber-insurance. I know some companies are buying insurance to protect against a breach, or to contain risk, or for some other reason. In reality, these folks are flushing money down the toilet. Why? Because the insurance companies are charging too much. We’ve already had one brave soul admit that the insurers have no idea how to price these policies because they have no data, so they are making up the numbers. And I assure you, they are not going to put themselves at risk, so they are erring on the side of charging too much. Which means buyers of these policies are flushing money down the loo. Of course, cyber-insurance is just one example of trying to quantify risk. And taking the chance that the ALE heads and my FAIR-weather friends will jump on my ass, let me bait the trolls and see what happens. I still hold that risk metrics are crap. Plenty of folks make up analyses in attempts to quantify something we really can’t. Risk means something different to everyone – even within your organization. I know FAIR attempts to standardize vernacular and get everyone on the same page (which is critical), but I am still missing the value of actually building the models and plugging made-up numbers in. I’m pretty sure modeling risk has failed miserably over time. Yet lots of folks continue to do it, with catastrophic results. They think generating a number makes them right. It doesn’t. If you don’t believe me, I have a tranche of sub-prime mortgages to sell you. There may be examples of risk quantification wins in security, but it’s hard to find them. Jack is right: The cost of non-compliance is zero* (*unless something goes wrong). I just snicker at the futility of trying to estimate the chance of something going wrong. And if a bean counter has ever torn apart your fancy spreadsheet estimating such risk, you know exactly what I’m talking about.
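For readers who haven’t seen it, the ALE model those “ALE heads” lean on is just annualized rate of occurrence (ARO) times single loss expectancy (SLE) – which is exactly why made-up inputs yield confident-looking nonsense. A sketch, with numbers invented to make the point:

```python
# ALE = ARO x SLE. The arithmetic is trivial; the inputs are the problem.
# Both "expert" estimates below are invented, and equally defensible.
def ale(aro: float, sle: float) -> float:
    """Annualized loss expectancy: expected events/year times loss per event."""
    return aro * sle

# Two guesses at the same breach scenario:
optimist = ale(aro=0.1, sle=50_000)     # one breach a decade, modest loss
pessimist = ale(aro=2.0, sle=500_000)   # two breaches a year, big loss
# optimist -> 5,000; pessimist -> 1,000,000: a 200x spread from the
# same scenario. The output looks precise, but it's only as good as
# the made-up inputs.
```

That 200x spread is the spreadsheet the bean counter tears apart.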
That said, I do think it’s very important to assess risk, as opposed to trying to quantify it. No, I’m not talking out of both sides of my mouth. We need to be able to categorize every decision into a number of risk buckets that can be used to compare the relative risk of any decision against the other choices we could make. For example, we should be able to evaluate the risk of firing our trusted admin (probably pretty risky, unless your de-provisioning processes kick ass) versus not upgrading your perimeter with a fancy application-aware box (not as risky, because you already block Facebook and do network layer DLP). But you don’t need to be able to say the risk of firing the admin is 92 and the risk of not upgrading the perimeter is 25. Those numbers are crap, and smell as bad as the vendors who try to tie their security products to a specific ROI. BTW, I’m not taking a dump on all quantification. I have always been a big fan of security (as opposed to risk) metrics. From an operational standpoint, we need to measure our activity and work to improve it. I have been an outspoken proponent of benchmarking, which requires sharing data (h/t to New School), and I expect to kick off a research project to dig into security benchmarking within the next few weeks. And we can always default to Shrdlu’s next-generation security metrics, which are awesome. But I think spending a lot of time trying to quantify risk continues to be a waste. I know you all make decisions every day because Symantec thinks today’s CyberCrime Index is 64 and that’s down 6%. Huh? WTF? I mean, that’s just making sh*t up. So fire away, risk quantifiers. Why am I wrong? What am I missing? How have you achieved success quantifying risk? Or am I just picking on the short bus this morning? Photo credits: “Smoking pile of sh*t – cropped” originally uploaded by David T Jones


React Faster and Better: Piecing It Together

We have been through all the pieces of our advanced incident response method, React Faster and Better, so it is time to wrap up this series. The best way to do that is to run through a sample incident, with commentary to provide the context you need to apply the method to something tangible. It’s a bit like watching a movie while listening to the director’s commentary. But those guys are actually talented. For brevity we will use an extremely simple, high-level example of how the three response tiers evaluate, escalate, and manage incidents.

The alert

It’s Wednesday morning and the network analyst has already handled a dozen or so network/IDS/SIEM alerts. Most indicate probing from standard network script-kiddie tools and are quickly blocked and closed (often automatically). He handles those himself; just another day in the office. Then the network monitoring tool pings an alert for an outbound request on a high port to an IP range located in a country known for intellectual property theft. The analyst needs to validate the origin of the packet, so he looks and sees the source IP is in Engineering. Ruh-roh. The tier 1 analyst passes the information along to a tier 2 responder. Important intellectual property may be involved and he suspects malicious activity, so he also phones the on-call handler to confirm the potential seriousness of the incident. Tier 2 takes over, and the tier 1 analyst goes back to his normal duties. This is the first indication that something may be funky. Probing is nothing new, and tier 1 needs to handle that kind of activity itself. But the outbound request may very well indicate an exfiltration attempt. And tracing it back to a device that has access to sensitive data means it’s definitely something to investigate more closely. This kind of situation is why we believe egress monitoring and filtering are so important. Monitoring is generally the only way you can tell whether data is actually leaking.
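The kind of egress alert that kicked this incident off can be approximated in a few lines. This is a hedged sketch: the watchlisted range (a documentation-only prefix standing in for a real threat list), the high-port cutoff, and the flow record format are all invented for illustration.

```python
# Flag outbound flows to watchlisted IP ranges on high ports -- a toy
# version of the egress-monitoring rule in the walkthrough. The watchlist,
# port cutoff, and flow-record shape are illustrative assumptions.
import ipaddress

WATCHLIST = [ipaddress.ip_network("203.0.113.0/24")]  # stand-in "bad" range
HIGH_PORT = 1024

def suspicious(flow: dict) -> bool:
    """True if an outbound high-port flow targets a watchlisted range."""
    dst = ipaddress.ip_address(flow["dst_ip"])
    return (flow["direction"] == "outbound"
            and flow["dst_port"] > HIGH_PORT
            and any(dst in net for net in WATCHLIST))
```

The point of the rule matches the point of the post: the alert only tells you something left the building toward somewhere worrying; tracing the source IP back to Engineering is what turns it into an incident.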
At this point the tier 1 analyst should know he is in deep water. He has confirmed the issue and pinpointed the device in question. Now it’s time to hand it off to tier 2. Note that the tier 1 analyst follows up with a phone call to ensure the hand-off happens and there is no confusion.

How bad is bad?

The tier 2 analyst opens an investigation and begins a full analysis of network communications from the system in question. The system is no longer actively leaking data, but she blocks any traffic to that destination on the perimeter firewall by submitting a high-priority request to the firewall management team. After the change is made, she verifies that traffic is in fact being blocked. She sets an alert for any other network traffic from that system and calls or visits the user, who predictably denies knowing anything about it. She also learns that the system normally doesn’t have access to sensitive intellectual property, which may indicate privilege escalation – another bad sign. Endpoint protection platform (EPP) logs for the system don’t indicate any known malware. She notifies her tier 3 manager of the incident and begins a deeper investigation of previous network traffic using the network forensics data. She also starts looking into system logs to begin isolating the root cause. Once the responder notices outbound requests to a similar destination from other systems on the same subnet, she informs incident response leadership that they may be experiencing a serious compromise. Then she finds that the system in question connected to a sensitive file server it normally doesn’t access, and transferred/copied some entire directories. It’s going to be a long night. As we have been discussing, tier 2 tends to focus on network forensics because it’s usually the quickest way to pinpoint attack proliferation and severity. The first step is to contain the issue, which entails blocking traffic to the external IP – this should temporarily eliminate any data leakage.
Remember, you might not actually know the extent of the compromise, but that shouldn’t stop you from taking decisive action to contain the damage as quickly as possible. At this point tier 3 is notified – not necessarily to take action, but so they are aware there might be a more serious issue. It’s this kind of proactive communication that streamlines escalation between response tiers. Next, the tier 2 analyst needs to determine how far the issue has spread within the environment. So she searches through the logs and finds a similar source, which is not good. That means more than one device is compromised, and it could represent a major breach. Worse yet, she sees that at least one of the involved systems purposely connected to a sensitive file store and removed a big chunk of content. So it’s time to escalate and fully engage tier 3. Not that it hasn’t been fun thus far, but now the fun really begins.

Bring in the big guns

Tier 3 steps in and begins in-depth analysis of the involved endpoints and associated network activity. They identify the involvement of custom malware that initially infected a user’s system via drive-by download after the user clicked a phishing link. No wonder the user didn’t know anything – they didn’t have a chance against this kind of attack. An endpoint forensics analyst then discovers what appears to be the remains of an encrypted RAR file on one of the affected systems. The network analysis shows no evidence the file was transferred out. It seems they dodged a bullet and detected the command and control traffic before the data exfiltration took place. The decision is made to allow what appears to be encrypted command and control traffic over a non-standard port, while blocking all outbound file transfers (except those known to be part of normal business processes). Yes, they run the risk of blocking something legit, but senior management is now involved and has decided this is a worthwhile risk, given the breach in progress.
To limit potential data loss through the C&C channels left open, they


Incite 2/23/2011: Giving up

I’ve been in the security business a long time. I have ridden the up cycles through the peaks, and back down the slope to the inevitable troughs. One of my observations coming back from RSAC 2011 is the level of sheer frustration among many security professionals today. Frustration with management, frustration with users, frustration with vendors. Basically lots of folks are burnt out and mad at the world. Maybe it’s just the folks who show up at RSA, but I doubt it. This seems to be true across the industry. A rather blunt tweet from 0ph3lia sums up the way lots of you feel: Every day I’m filled with RAGE at this f***ing industry & the fact that I work in it. Maybe I’m just not cut out for the security industry.

This is a manifestation of many things. Tight budgets for a few years. The ongoing skills gap. Idiotic users and management. Lying vendors. All contribute to real job dissatisfaction on a broad scale. So do you just give up? Get a job at Starbucks or in a more general IT role? Leave the big company and go to a smaller one, or vice versa? Is the grass going to be greener somewhere else? Only you can answer that question. But many folks got into this business over the past 5 years because security offered assured employment. And they were right. There are tons of opportunities, but at a significant cost.

I joke that security is Bizarro World, where a good day is when nothing happens. You are never thanked for stopping the attack, but instead vilified when some wingnut leaves their laptop in a coffee shop or clicks on an obvious phish. You don’t control much of anything, have limited empowerment, and are still expected to protect everything that needs to be protected. For many folks, going to work is like lying on a bed of nails for 10-12 hours a day. So basically to be successful in security you need an attitude adjustment. Shack had a good riff on this yesterday. You can’t own the behaviors of the schmucks who work for your company.
Not and stay sane. Sure, you may be blamed when something bad happens, but you have to separate blame from responsibility. If you do your best, you should sleep well. If you can’t sleep, or are grumpy because security gets no love and you get blamed for user stupidity; or because you have to get a new job every 2-3 years; or for any of the million other reasons you may hate doing security; then it’s okay to give up. Your folks and/or your kids will still love you. Promise. I gave up being a marketing guy because I hated it. That’s right, I said it. I gave up. After my last marketing gig ended, I was done. Finito. No amount of money was worth coming home and snapping at my family because of a dickhead sales guy, failed lead generation campaign, or ethically suspect behavior from a competitor. My life is too short to do something I hate. So is yours. So do some soul searching. If security is no good for you, get out. Do something else. Change is good. Stagnation and anger are not. -Mike

Photo credits: “happiness is a warm gun” originally uploaded by badjonni

Domo Arigato

My gratitude knows no bounds regarding winning the “Most Entertaining Security Blog” award at the Social Security Blogger Awards last week. Really. Truly. Honestly. I’ve got to thank the Boss, because she’ll kick my ass if I don’t mention her first every time. Then I need to thank Rich and Adrian (and our extended contributor family), who put up with my nonsense every day. But most of all, I need to thank you. Every time you come up to me at a show and tell me you read my stuff (and actually like it), it means everything to me. I’m always telling you that I know how lucky I am. And it’s times like these, and getting awards like this, that make it real for me. So thanks again, and I’ll only promise that I’ll keep writing as long as you keep reading. -Mike

Incite 4 U

Marketecture does not solve security problems: That was my tweet regarding Cisco’s new marketecture SecureX.
The good news is that Cisco has nailed the issues – namely the proliferation of mobile devices and the requisite re-architecting of networks to address the onslaught of bandwidth-hogging video traffic. This will fundamentally alter how we provide ingress and egress, and that will require changes in our network security architectures. But what we don’t need is more PowerPoints of products in the pipeline, due at some point in the future. And that’s not even addressing the likelihood of data tagging actually working at scale. If Cisco had delivered on any of their other grand marketecture schemes (all of which looked great on paper), I’d have a little more patience, but they haven’t. Maybe Gillis and Co. have taken some kind of execution pill and will get something done. But until then I wouldn’t be budgeting for much. Is there a SKU for a marketecture? Cisco will probably have it first. – MR

You can’t secure a dead horse: Well, technically you can secure an actual deceased horse, but you know what I mean. Microsoft is getting ready to release Service Pack 1 for Windows 7, but nearly all organizations I talk with still rely on Windows XP to some degree. You know, the last operating system Microsoft produced before the Trustworthy Computing Initiative. The one that’s effectively impossible to secure. No matter what we do, we can’t possibly expect to secure something that was never built for our current threat environment. We’re hitting the point where the risks clearly outweigh the non-security justifications. FWIW, my new favorite saying is: “If you are more worried about the security risks of cloud computing and iOS devices than using XP


FireStarter: the New Cold War

It amuses me that folks were shocked by the latest treasure trove of goodies from the HBGary email spool. Basically these folks built custom malware on behalf of their government clients. Ars Technica digs in (with pretty impressive technical depth, I might add) and makes clear what you should already know: we are in the midst of another cold war. This war is not being fought with nuclear warheads, but with computer malware. It’s not visible to most people – and, honestly, most people don’t really care. They should, because the new attacks could knock down our power grids, contaminate our water supplies, and basically cause chaos.

You all know I’m no Chicken Little – and to be clear, I sleep very well at night. I wasn’t even a glimmer in my parents’ eyes when the Cuban Missile Crisis brought us to the brink, but the ramifications of an all-out cyber conflict are similar. Plenty of folks have semantic issues with calling computers attacking each other ‘war’, because no one actually bleeds (directly). And I agree with that, somewhat. Cyber conflict won’t result in a mushroom cloud or tens of thousands vaporized in a split second (not yet, anyway), but the potential for indirect damage is real.

But to make the point again, I sleep well at night because as much as it hurts to know there are foreign nations in our most critical stuff (yes, APT, I’m talking about you), we are in their stuff as well. Stuxnet, anyone? What makes you think we aren’t in all the major systems of our potential adversaries? Right, that would be a bad assumption. So we have a good old-fashioned standoff. Another Cold War. Mutually assured destruction is a pretty good deterrent to anyone actually initiating a cyber conflict. Why do you think the APT doesn’t bother to cover its tracks? They want us to know they are there. Duh.
Back in the days of the original Cold War, the private sector was engaged to improve our warheads, defend against enemy warheads (remember Star Wars?), and come up with other innovations to give us a snowball’s chance of surviving a nuclear conflict. In this Cold War, we have the private sector providing new weapons (read: malware) and new defenses (your very own security industry) to give us a snowball’s chance of surviving a cyber conflict. HBGary is not unique in this pursuit. Not by a long shot. There are no white hats or black hats in this game. You need to play both offense and defense. And clearly the US does. We never got the opportunity to see any of the Beltway bandits’ mail spools during the last Cold War, but I suspect we’d be similarly nauseated. But with that nausea comes a sense of relief that the best and the brightest (including Greg Hoglund) are working to protect our interests. Now I understand these weapons can just as easily be used against us, but that has always been the case. So I guess my message is: grow up, people. National security (whatever that means) is a messy business.


The Securosis Guide to RSA 2011: The Full Monty

With great pleasure we post the 2nd annual Securosis Guide to the RSA Conference, 2011 edition. We built last year’s guide as an experiment, but it has now effectively become an encyclopedia of all things RSA. As you’ve been seeing all week, we list our key themes and then break down each major section of the industry. In this complete version we include vendor lists for each section and a comprehensive vendor list (with URLs) for easy reference during the show. Rich summed it up best – it’s not really an RSA Guide; it’s more “What’s coming in the next year of security”, which happens to be published in time for RSA.

Below are links to both PDF and ePub versions. I loaded the Guide onto my iPad this morning and it looks great, so you may want to do that as well (in iBooks you may need to select the PDF tab). Enjoy and tell your friends. It’s free. Let’s just hope it doesn’t show up verbatim in the World’s #1 Hacker’s next book.

PDF version: Securosis-GuidetoRSAC2011.pdf
ePub version: Securosis-GuidetoRSAC2011.epub

PS: Vendors, if you are looking for a nice giveaway for one last blast to your prospect lists to give away all those valuable expo passes, feel free to distribute the Guide. No fees, no nothing. It’s our little Valentine’s Day gift to you.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
  • Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited to a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.