Securosis

Research

Incite 3/7/2012: Perspective

Life is a series of ebbs and flows. Highs and lows. Crests and troughs. It’s a yin/yang thing, and unfortunately most folks can’t appreciate that. Especially when they can’t see their way out of a down period. For a lot of security folks, the last two weeks have been such a contrast between those highs and lows that many are probably feeling whiplash. A lot of folks went to the RSA Conference last week and saw an industry thriving again after 3 years in the doldrums. We all felt good. Those who read blog posts and tweets from folks at the conference felt good. It was one of those highs, and I returned to ATL exhausted but in good spirits. Not necessarily feeling like the tide had turned, but that swimming upstream wouldn’t be as hard for a while – however brief.

Then the discussions about whether we are losing started early this week. Ben’s post on LiquidMatrix verbalized a lot of what we all feel from time to time. And the burnout, building brick by brick, which Rich described so eloquently, is a clear explanation of the phenomenon. Rich’s point is that we will always have bad days, just as we have good days. And those who can survive in security for a long time don’t take things personally – especially the bad days. They know (and appreciate) the futility of the game, and enjoy the battles. The learning. The teamwork. They don’t get bitter and angry about the stupidity or the politics or the apathy. Or they hit the wall. Hard.

Which is really the point. It’s not about winning or losing. It’s about enjoying the journey. You will lose some battles, just as you will win some. You may lose more than you win, but that’s because the game is rigged. Like Vegas. In the long run, math wins. It’s always been that way, and yet we (amazingly enough) still function. As Ranum says, the Internet will be as secure as it needs to be.

In the wake of the shocking news that Sabu was an informant (sound familiar? Gonzalez: the Sequel?) and he provided the smoking guns to take down LulzSec, some folks started gloating. That “good wins over evil” crap. But now is not the time to gloat. Nor is every compromise or incident the time to let despondency or depression creep in. If you get too high or too low you’ll burn out. Been there. Done that. Staying on an even keel requires perspective. Perspective that is hard to appreciate when you are in the trenches and on the front lines.

On the flight back from RSA we flew into a pretty nasty storm. The last 30 minutes of the flight were turbulent. Regardless of my understanding of statistics, which dictates that I’m as safe in the air during heavy turbulence as I am now – sitting in a coffee shop writing this missive – it’s still a bit unsettling. So I closed my eyes and visualized riding a roller coaster, which I love to do. The exhilaration, the perception of danger, the adrenaline rush – you get off a coaster feeling alive. Maybe a bit scared, but alive. And you want to do it again. That flight was a microcosm of life. Smooth and comfortable for a while, then not so much. Highs, lows, and everything in between. I enjoyed the flight because the bumpy air is part of the deal. You can’t avoid it – not entirely. So I chose to have perspective and enjoy the coaster.
I just wish more folks in security could appreciate the journey… –Mike

Photo credits: “Learning Perspective” originally uploaded by Yelnoc

Lazy Deal Analysis: Trustwave buys another laggard

We don’t care enough about the Trustwave/M86 merger to do a stand-alone post, but it does warrant at least a little snark… erm… analysis.

86-it: Trustwave announced today that they will be putting M86 out of its misery, acquiring the mixed-bag web and email security vendor for an undisclosed sum. For those with long memories, M86 was formed as the merger of creaky web security appliance vendor 8e6 with the seriously outdated Marshal mail security software. The resulting M86 then tried to acquire itself into relevance, making sage investments in Finjan’s secure web gateway software and Avinti’s behavior-based malware detection software. Yeah, 10 pounds of crap in a 5-pound bag. While those products were great additions, the core capabilities were several years behind the competition – and worse, never fully integrated. Details, details. While their Firefox secure browsing plugin was a fun toy, their ability to protect cloud data was suspect, and the product development roadmap seemed driven by the trend du jour rather than any holistic vision of web user security. Trustwave’s acquisition strategy has been reminiscent of the island of lost toys: buying laggards like Vericept, Mirage Networks, Breach Security, BitArmor, ControlPath, and Intellitactics. From that perspective M86 is a good fit with little overlap, but without really integrating the offerings, this is just more integration on the PO. More likely they will continue to target customers too lazy to perform a head-to-head comparison with class-leading products, and those trying to make audit deficiencies (found by Trustwave themselves, in an unholy alliance of audit and security product) go away. – AL & MR

Incite 4 U

Don’t be Lulzed into a false sense of security: By the time I submit this to Mike I’m sure someone else will slip in a link to the story about LulzSec getting nailed by the FBI with some good old-fashioned police work. You know, attempting to scare the crap out of the perp and turn him against his friends. Uh, like they did to Sabu. To be honest, the headlines don’t really matter that much to those of us in operational security (including me – someone has to keep Mike and Adrian safe) as we are pretty pragmatic about the media’s incentive to work everyone into a frenzy. Rafal Los does a great job pointing out how to handle headline hysteria. Raf’s point is to ignore the headlines, focus


Upcoming Cloud Security Training Courses

Our world domination tour continues. At least if you consider training for the Certificate of Cloud Security Knowledge (CCSK) part of your plan to know all things cloud security. As authors of the training curriculum, we are the only folks who can train and certify instructors to deliver the training, so a couple times a year we deliver the courses live and in person. We’ve got two courses coming up, one in San Jose and the other in Milan, Italy. If you want to become certified to teach, you’ll need to attend one of these courses. And if you aren’t interested in teaching, it’s also a good opportunity to get the training from the folks who built the course.

  • San Jose: March 27-29
  • Milan, Italy: April 2-4

Here is the description of each of the 3 days of training:

There is a lot of hype and uncertainty around cloud security, but this class will slice through the hyperbole and provide students with the practical knowledge they need to understand the real cloud security issues and solutions. The Certificate of Cloud Security Knowledge (CCSK) – Basic class provides a comprehensive one-day review of cloud security fundamentals and prepares students to take the Cloud Security Alliance CCSK certification exam. Starting with a detailed description of cloud computing, the course covers all major domains in the latest Guidance document from the Cloud Security Alliance, and the recommendations of the European Network and Information Security Agency (ENISA). The Basic class is geared toward security professionals, but is also useful for anyone looking to expand their knowledge of cloud security. (We recommend attendees have at least a basic understanding of security fundamentals, such as firewalls, secure development, encryption, and identity management.)

The CCSK-Plus class builds on the Basic class with expanded material and extensive hands-on activities during a second day of training. The Plus class enhances the classroom instruction with real-world cloud security labs. Students apply their knowledge through a series of exercises, completing a scenario that brings a fictional organization securely into the cloud. This second day includes additional lecture, although students spend most of their time assessing, building, and securing a cloud infrastructure during the exercises. Activities include creating and securing private clouds and public cloud instances, as well as encryption, applications, identity management, and much more.

The CCSK Instructor workshop adds a third day to train prospective trainers. It covers how to teach the course in more detail, takes a deeper look into the hands-on labs, and gives every trainer an opportunity to present a portion of the course.

Click here for more information on the CCSK Training Partner Program (PDF). We look forward to seeing you there.


Burnout

I feel fortunate that I’m not haunted by the images of what I have witnessed. If I don’t sleep well at night it’s due to stress at work or at home, not dark images from the years I spent working in emergency services. I realize I sometimes abuse my past as a paramedic in my security writings, but today it is far more relevant than usual.

I became an EMT at the age of 19, and was in paramedic school by 21. By 22 years of age, I was in charge of my ambulance – often with only an EMT as a partner. In retrospect, I was too young. People I’d meet, especially in college, would often ask what the worst thing I saw was. I’d laugh it off, but the answer was never blood, guts, or brains. Yes, I saw a lot of dead and dying of all ages in all circumstances, but for the most part real life isn’t as graphic as the movies, and professional detachment is something I have always been good at. The real horrors are the situations we, as a species, place ourselves in. It was seeing poverty so abject that it changed my political views. It was children without a future.

Public safety officials – paramedics, cops, firefighters – and our extended community of ER nurses and doctors, corrections officers, and other support positions… all suffer high rates of burnout and even suicide. Everyone hits the wall at some point – the question is whether you can move past it. Unless you have responded to some of the “big ones” that lead to PTSD, the wall isn’t often composed of particularly graphic memories. It is built, brick by brick, by pressure, stress, and, ultimately, futility. The knowledge that no matter how well you do your job, no matter how many people you help, nothing will change overall. Those who can’t handle the rough stuff usually leave the job early. It’s the cumulative effect of years or decades of despair that hammers the consciousness of those who easily slip past nightmares of any particular incident.

Working in the trenches of information security can be no less demanding and stressful. Like those of us in public safety, you gird for battle every day knowing that if you’re lucky nothing bad will happen and you will get to spend the day on paperwork. And if you aren’t, your employer ends up in the headlines and you end up living at your desk for a few days. Deep in the trenches, or on the streets, there’s no one else to call for help. You’re the last line; the one called in when all hell breaks loose and the situation is beyond the capacity of others to handle. What is often the single worst thing to happen to someone else is just another call for you. One day you realize there’s no winning. It won’t ever get better, and all your efforts and aspirations lead nowhere.

At least, that’s one way to look at it. But that’s not how the real professionals thrive on the job. You can focus on the futility, or thrive on the challenge and freedom. The challenge of never knowing exactly what the day holds. The freedom to explore and play in a domain few get to experience. And, in the process, you can make that terrible event just a little bit easier on the victim.

I nearly burned out in EMS at one point. From the start I knew I wasn’t any sort of hero; you don’t work the kinds of calls I did and believe that for long. But, eventually, even the thrill of the lights and sirens (and helicopters, and …) wears off. I realized that if I called out sick, someone else would take my place, and neither one of us would induce any macro changes. Instead I started focusing on the micro. On being a better paramedic/firefighter/rescuer. On being more compassionate while improving my skills. On treating even the 8th drunk with a head laceration that week like a human being. And then, on education. Because while I couldn’t save the human race, I might be able to help one person avoid needing me in the first place.

Playing defense all the time isn’t for everyone. No matter how well prepared you are mentally, you will eventually face the burnout wall. Probably more than once. I thrive on the unexpected and continual challenges. Always have. And yet I’ve hit the burnout wall in both my emergency services and security careers. And for those of you at the entry level – looking at firewall logs and SIEM consoles or compliance reports all day – it is especially challenging. I always manage to find something new I love about what I do and move forward. If you want to play the game, you learn to climb over the wall or slip around it. But don’t blame the wall for being there. It’s always been there, and if you can’t move past it you need to find another job before it kills you.

For the record, I’m not immune. Some of the things I have seen still hit me from time to time, but never in a way that interferes with enjoying my life. That’s the key.


Bringing Sexy back (to Security): Mike’s RSAC 2012 Wrap-up

Oh yeah. I’m back in the ATL after a week at the RSA Conference. Aside from severe sleep deprivation, major liver damage, and some con flu… I’m feeling great. It seems everyone else is as well. Something appeared at RSA that we haven’t seen for at least 3 years: smiles. Which I guess is to be expected, since in 2009 and 2010 everyone walked around with hard hats, expecting the sky to fall. In 2011 there were some positive signs but still a lot of skepticism, which was gone this year. Almost everyone I talked to was very optimistic for 2012 and beyond. As a contrarian, my first instinct was that we must be breathing our own exhaust. You point to two other guys and they say they are optimistic, and then it becomes the perception of optimism, rather than optimism you can pay your mortgage with. But even when challenged, everyone felt pretty good. Even the tools felt sexy. It didn’t help their hygiene much, but you can’t expect the world to change overnight, can you?

But to be clear, the idea of Bringing Sexy back (to Security) is not mine. Someone said it to me when I was in a drunken haze. I thought it was Rich, but he wouldn’t acknowledge it. So if you were the one who said it to me, thanks. It’s a great assessment of where we are, after years in the compliance-driven darkness.

Pendulum Swinging back to Security

Speaking of compliance, overt messaging around our least-favorite C word was pretty muted at the show this year. PCI is old news. HITECH enforcement is an unknown quantity, and for the most part, unless an organization has been sleeping for the past 7 years, it should be in decent shape regarding the low bar that a compliance mandate represents. Now actually securing something? That’s entirely different, and as such, the pendulum clearly swung back toward more of a security message on the floor this year. Which should warm the hearts of all you security folks nauseated by the game we have had to play to get our security projects paid for out of the compliance budget. So when you do next year’s holiday cards, send one to the Red Army and probably Anonymous. By then you’d expect both organizations to be Doxed, so you may even have an address. And they both probably own the USPS, so they can get their own mail as well, if they care to… Kidding aside, between high-profile targeted attacks and chaotic actors, it is now clear to most organizations that PCI isn’t good enough. And that means we need to start talking about security again. Be thankful that we’ve seen innovation in perimeter security gear (think NGFW) as well. Given the number of depreciated firewalls awaiting something interesting to drive a perimeter security renewal/re-architecture, having NGFW gear reach stability created a wave of buying that has also driven many of the public security companies. Those HP and IBM haven’t overpaid for yet, anyway. Honestly, it was great to actually talk security this week, and not weird funding strategies. Really great.

BigData Hype did not disappoint

As we highlighted in the RSA Guide 2012, it was obvious that BigData would be a big theme at the show. And it was. I ran into Joe Yeager from Lancope on my flight home and he joked that we should sell “Powered by Hadoop” stickers for $20K each. Given that every company needs to jump onto the BigData bandwagon, Joe is exactly right. Those would fly off the shelf. Apparently the marketers still haven’t figured out the difference between BigData and a lot of data, but that’s okay. Hyperbole rules the trade show floor (and some booth babes shaking their things), so it’s all good. But I suspect we’ll be seeing a lot of BigData at security conferences for the foreseeable future.

Cloud still prominent

It was also all cloud, all the time, at RSA this year. Again, not a surprise, and probably justified. Though there was a lot more SECaaS (SECurity as a Service) than actual cloud security. I’m sure Rich will want to expand on this a bit at some point, but we saw plenty of folks talking about encrypting data in the cloud, along with lots of focus on managing cloud instances and the security of those instances. And all that is great to see. Real innovation is happening in this space, and not a second too soon – folks are doing this cloud thing, and we need to figure out how to protect that stuff. Yes, we saw a bunch of cloudwashing, especially from the network security folks, who made a big deal about their VM instances that can run in the cloud. After hearing for years about how their hardware prowess makes their boxes great, it was kind of funny to hear them talk about how their stuff runs great in the cloud, but whatever. It’s a bandwagon, and RSA requires you to jump aboard or get left behind.

Good vibrations on BYOD

The other area we expected to hear a lot about was mobile security, specifically this BYOD stuff. At the e10+ session on Monday morning we did an entire section on BYOD and it spurred a great discussion. Here are some takeaways:

  • iOS is cool, Android is not, and BlackBerry is dead: That’s not to say BlackBerry is gone, but it’s just a matter of time, as almost everyone in the room was migrating to another platform. It’s also not that Android isn’t showing up on corporate networks – it is, but with caveats. We’ll get to that. iOS is generally accepted as okay, mostly because of the way the App Store screens applications prior to availability.
  • Everyone has policies. Most are not enforced: We spent a good portion of the session talking about policies, and everyone agreed that documenting policies is critical. Though enforcement of these policies is clearly lagging, especially for senior folks. But any employee seems to know


Objectivity Matters

I owe a tremendous amount to social media. I wasn’t early to either blogging or Twitter (as my friends remind me), but once I got there a whole new world of opportunities opened. I created a boutique business (Security Incite) on the back of a blog and email newsletter. I met so many great people – many of whom became close friends – and even found a business partner or two. But the edge of social media cuts both ways. ‘News’ organizations have emerged with, uh, distinctly unjournalistic methods of handling conflicts of interest. You need to read Hit men, click whores, and paid apologists: Welcome to the Silicon Cesspool by Dan Lyons, about the unholy alliance between some very high-profile tech bloggers and what they publish about companies they invest in. You sort of knew that stuff was going on, but to see it laid bare was eye-opening.

To be fair, none of these guys hide their investments in the companies they write about. Or that they leverage their audience to build brand and buzz for the chosen few who take their investment. Or that they strong-arm those who won’t or don’t. If you look hard enough you certainly can find the truth, but they certainly don’t publicize it. I don’t know. Maybe it’s me. Maybe I’m idealistic. Maybe I don’t understand how the world works. But that just seems wrong on so many levels. I guess I’m one of those guys who believes objectivity matters.

Listen – we all have biases. I’m no Pollyanna, thinking anyone can truly be unbiased. But we at Securosis are pretty up front about our biases. And none of those biases are economic in nature. None. One of the things that really attracted me to the business model Rich built was the Totally Transparent Research method. We do the work. We write what needs to be written. When we are done, and only then, do we license content for sponsorship. We do line up sponsors ahead of time, but we only offer a right of first refusal, and either party can walk away at any time. We have. And sponsors have. We cannot afford to be beholden to someone, to write what they want, because we already took a down payment on our integrity. By the way, this model sucks for cash flow. We do all the work. We take all the risk. Then we hope the sponsors still have the budget and inclination to license the content. I can’t pay my mortgage with a right of first refusal. But objectivity matters to us, and we don’t see any other way to write credible research.

Many folks who blog and tweet a lot about security will be out at the RSA Conference this week. You’ll likely be hearing about all sorts of shiny new objects, each one shinier than the next. But take every blog post and tweet with a grain of salt – even ours! The Internet can provide a wealth of information to help organizations make critical decisions, but it contains a tremendous amount of disinformation. Buyer beware – always. Understand who is writing what. Understand their biases and keep their point of view in mind. Most important: use all this information to get smarter and to zero in on the right questions to ask the right people. If you make buying decisions based on a blog post or a magic chart or anything other than your own research, then you (with all due respect) are an idiot.


Implementing DLP: Ongoing Management

Managing DLP tends not to be overly time consuming unless you are running off badly defined policies. Most of your time in the system is spent on incident handling, followed by policy management. To give you some numbers, the average organization can expect to need roughly the equivalent of one full-time person for every 10,000 monitored employees. This is just a rough starting point – we’ve seen ratios as low as 1/25,000 and as high as 1/1,000, depending on the nature and number of policies.

Managing Incidents

After deployment of the product and your initial policy set you will likely need fewer people to manage incidents. Even as you add policies you might not need additional people, since just having a DLP tool and managing incidents improves user education and reduces the number of incidents. Here is a typical process:

Manage the incident handling queue

The incident handling queue is the user interface for managing incidents. This is where the incident handlers start their day, and it should have some key features:

  • Ability to customize the incident view for the individual handler. Some handlers are more technical and want to see detailed IP addresses or machine names, while others focus on users and policies.
  • Pre-filtering of incidents based on the handler. In a larger organization this allows you to automatically assign incidents based on the type of policy, business unit involved, and so on.
  • Ability for the handler to sort and filter at will – especially by the type of policy or the severity of the incident (usually the number of violations – e.g. a million account numbers in a file versus 5).
  • Support for one-click dispositions to close, assign, or escalate incidents right from the queue, as opposed to having to open them individually.

Most organizations tend to distribute incident handling among a group of people as only part of their job. Incidents will be either automatically or manually routed around depending on the policy and the severity. Practically speaking, unless you are a large enterprise this could be a part-time responsibility for a single person, with some additional people in other departments like legal and human resources able to access the system or reports as needed for bigger incidents.

Initial investigation

Some incidents might be handled right from the initial incident queue, especially ones where a blocking action was triggered. But due to the nature of dealing with sensitive information, plenty of alerts will require at least a little initial investigation. Most DLP tools provide all the initial information you need when you drill down on a single incident. This may even include the email or file involved, with the policy violations highlighted in the text. The job of the handler is to determine whether this is a real incident, its severity, and how to handle it. Useful information at this point includes a history of other violations by that user and other violations of that policy, which helps you determine whether there is a bigger issue or trend. Technical details will help you reconstruct more of what actually happened, and all of this should be available on a single screen to reduce the effort needed to find the information you need. If the handler works for the security team, he or she can also dig into other data sources if needed, such as a SIEM or firewall logs. This isn’t something you should have to do often.
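To make those queue requirements concrete, here is a minimal sketch (in Python) of the data model and operations they imply: per-handler pre-filtering, severity sorting, and one-click dispositions. The field names and the violation-count severity heuristic are illustrative assumptions, not any particular DLP product’s schema.

    from dataclasses import dataclass
    from datetime import datetime
    from enum import Enum

    class Disposition(Enum):
        OPEN = "open"
        CLOSED = "closed"
        ASSIGNED = "assigned"
        ESCALATED = "escalated"

    @dataclass
    class Incident:
        policy: str             # e.g. "PCI - credit card numbers" (hypothetical name)
        business_unit: str
        user: str
        violation_count: int    # severity proxy: a million account numbers vs. 5
        detected_at: datetime
        disposition: Disposition = Disposition.OPEN
        assignee: str = ""

    class IncidentQueue:
        def __init__(self, incidents):
            self.incidents = list(incidents)

        def for_handler(self, policies=None, business_units=None):
            # Pre-filter so each handler sees only open incidents routed to
            # them by policy type and/or business unit, sorted by severity.
            return sorted(
                (i for i in self.incidents
                 if i.disposition is Disposition.OPEN
                 and (policies is None or i.policy in policies)
                 and (business_units is None or i.business_unit in business_units)),
                key=lambda i: i.violation_count,
                reverse=True,
            )

        # One-click dispositions: act on an incident without opening it.
        def close(self, incident):
            incident.disposition = Disposition.CLOSED

        def assign(self, incident, handler):
            incident.disposition = Disposition.ASSIGNED
            incident.assignee = handler

        def escalate(self, incident):
            incident.disposition = Disposition.ESCALATED

The point is that filtering and disposition are operations on the queue itself, so a handler can triage a day’s worth of incidents without opening each one – which is what keeps the staffing ratios above realistic.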
Initial disposition

Based on the initial investigation, the handler closes the incident, assigns it to someone else, escalates it to a higher authority, or marks it for a deeper investigation.

Escalation and Case Management

Anyone who deploys DLP will eventually find incidents that require a deeper investigation and escalation. And by “eventually” we mean “within hours” for some of you. DLP, by its nature, will find problems that require investigating your own employees. That’s why we emphasize having a good incident handling process from the start – these cases might lead to someone being fired. When you escalate, consider involving legal and human resources. Many DLP tools include case management features so you can upload supporting documentation, produce needed reports, and track your investigative activities.

Close

The last (incredibly obvious) step is to close the incident. You’ll need to determine a retention policy, and if your DLP tool doesn’t support your retention needs you can always output a report with all the salient incident details. As with much of what we’ve discussed, you’ll probably handle most incidents within minutes (or less) in the DLP tool, but we’ve detailed a common process for those times you need to dig in deeper.

Archive

Most DLP systems keep old incidents in the database, which will obviously fill it up over time. Periodically archiving old incidents (such as anything a year or older) is a good practice, especially since you might need to restore the records as part of a future investigation.

Managing Policies

Anytime you look at adding a significant new policy you should follow the Full Deployment process we described above, but there are still plenty of day-to-day policy maintenance activities. These tend not to take much time, but if you skip them for too long you might find your policy set getting stale – either not offering enough security, or causing other issues because it is out of date.

Policy distribution

If you manage multiple DLP components or regions, you will need to ensure policies are properly distributed and tuned for the destination environment. If you distribute policies across national boundaries this is especially important, since there might be legal considerations that mandate adjusting the policy. This includes any changes to policies. For example, if you adjust a US-centric policy that’s been adapted to other regions, you’ll then need to update those regional policies to maintain consistency. If you manage remote offices with their own network connections you want to make sure policy updates are pushed out properly and are


RSA Conference 2012 Guide: Cloud Security

We’ve renamed this section from “Virtualization and Cloud Security” to simply “Cloud Security” because, if you listen to any of the marketing messages, you can’t tell the difference – even though it’s a big one. And virtualization is a hassle to type, so buh bye! Overall, as we mentioned in the key themes post, cloud security will be one of the biggest trends to watch during the conference. It also happens to be one area where you should focus, since there is some real innovation, and you probably have real problems that need some help.

New Kids on the Cloud Security Block (NKOTCSB)

Hiding in the corners will be some smaller vendors you need to pay attention to. Instead of building off existing security tools designed for traditional infrastructure (we’re looking at you, Big Security), they’ve created new products built from the ground up specifically for the cloud. Each of them focuses on a different cloud computing problem that’s hard to manage using existing tools – identity management (federated identity gateways), instance security, encryption, and administrative access. Many of these have a SaaS component, but if you corner them in a back room and have enough cash they’ll usually sell you a stand-alone server you can manage yourself. NKOTCSB FTW.

Cloudwashing vs. the Extreme Cloud Makeover

If you haven’t heard the term before, “cloudwashing” refers to making a virtual appliance version of a product ready to run on Amazon Web Services, VMware, or some other cloud platform, without really changing much in the product. This is especially amusing when it comes from vendors who spent years touting the special hardware secret sauce in their physical appliances. Consider these transitional products, typically better suited for private cloud IaaS. They might help, but in the long run you really need to focus on cloud-specific security controls. Some vendors are pushing deeper and truly adapting for cloud computing. It might be better use of cloud APIs, redesigning software to use a cloud architectural model, or extending an existing product to address a cloud-specific security issue that isn’t otherwise covered. The best way to sniff the cloudwashing shampoo is to see whether there are any differences between the traditional product and the virtual appliance version. Then ask, “do you use the cloud platform’s APIs or offer any new APIs in the product?” and see if their faces melt.

Virtual Private Data

We also cover this one in the data security post, so we won’t go into much detail here, but suffice it to say data security is pretty high on the list of things people moving to the cloud need to look at. Most encryption vendors are starting to support cloud computing with agents that run on cloud platforms as an extension of their existing management systems (thus requiring a hybrid model), but a couple are more cloud-specific and can deploy stand-alone in a public cloud.

CloudOps

Most of the practical cloud-specific security, especially for Infrastructure as a Service, comes from the (relatively) new group of cloud management vendors. Some might be at RSA, but not all of them, since they sell to data center operations teams, not CISOs. Why? Well, it just might be the big wads of cash that Ops teams have in comparison. Keep an eye on these folks because, aside from helping with configuration management automation, some are adding features like CloudAudit support, data protection/encryption, and network security (implemented on a virtualized host).

While the NKOTCSB are totally focused on security innovation, the management and operations platforms concentrate on cloud operational innovation, which obviously has a big security component. We’ll be posting the assembled guide within the next day or so, so you’ll have it in plenty of time for your pilgrimage to San Francisco.


The Last Friday before the 2012 RSA Conference

It’s here. No, not the new iPad. Not those test results. And most definitely not that other thing you were thinking about. We’re talking about RSA. The majority of you who don’t run to the Moscone Center every February or March may not care. But love it or hate it, the RSA Conference is the main event for our industry, and a whole lot of things get tied up with it that have nothing to do with sessions and panels. Our friends Josh Corman and Andrew Hay have written up their survival guides, and after this preamble I’m going to link you to our 2012 Securosis Guide to RSA, with an insane amount of information in it – much of which has more to do with what you will see in our industry over the next 12 months than with the conference itself.

The RSA Conference is the World Series of Security Insider Baseball. The truth is most of you don’t need to care about any of that stuff. Sure, a lot of people will be on Twitter talking about parties and the hallway track, but that’s all a bunch of crap. They’re fun, and I enjoy seeing my friends, but none of it really matters if you are trying to keep the bad guys out. So here’s my advice for RSA 2012 – whether you attend or not:

  • If you don’t go to RSA there are still important things you can pick up. A lot of the better presentations end up online, and many vendors release major updates of products you might have… or at least announce their strategies. Even the marketing fluff can be useful, by giving you an idea of what’s coming over the next year (or two – shipping dates always slip).
  • The hallway track is for social butterflies and business development – not security professionals. Not all sessions are of the same quality, but there is plenty of good content, and you are better served checking out product demos or finding some of the better presentations.
  • Skip most of the panels. If it starts with bios that last more than a few lines, walk out. If any panelist tries to show their own slides rather than the preset decks RSA requires, walk faster.
  • Not all vendor presentations suck, but many of them do. Given a choice, try to find end users talking about something they’ve done in the real world.
  • If a presentation description starts with “we will examine the risks of…” skip it. You don’t need more FUD.
  • Most presentations on policies and governance also suck. But as a techie I’m biased.
  • Ignore the party scene. Yes, the parties can be fun and I enjoy hanging out with my friends, but that’s because I have a lot of people I consider real friends who are scattered across the world and work for different companies. If you aren’t tied into that social group, or roaming with a pack of friends, you are drinking alone in a room full of strangers. It wouldn’t bother me one bit if most of the parties stopped and I could have a few quiet dinners with people I enjoy chatting with.
  • Use the expo floor. You will never have a better opportunity to see so many product demos. Never sit in one of the mini-auditoriums with a hired actor giving a pitch – seek out the engineers hovering by the demo stations. You can learn a hell of a lot very quickly there. Get rid of the sales guy by asking a very technical question, and he or she will usually find the person you can dig in with. Never let anyone scan your badge unless you want the sales call – which you may.
  • You are there to work. I’m there to work. Even at the social events I tend to moderate so I can function well the next day. I won’t say I’m perfect, but I can’t afford to sleep in past 6:30 or 7am or take a break during the day. Go to sessions. Talk to vendors. Have meetings. You’re there for that, nothing else. The rest is what Defcon is for 🙂

It’s really easy to be turned off by a combination of all the insider garbage you see on blogs like ours and the insanity of car giveaways on the show floor. But peel the superficial layers off and you have a show floor full of engineers, sessions full of security pros working every day to keep the bad guys out, and maybe even a self-described expert spouting random advice and buying you a free breakfast… or three. –Rich

On to the Summary:

Where to see us at the RSA Conference

We keep busy schedules at RSA each year, but the good news is that we do a number of speaking sessions and make other appearances throughout the week. Here is where you can find us:

Speaking Sessions

  • DAS-108: Big Data and Security: Rich (Tuesday, Feb 28, 12:30pm)
  • EXP-304: Grilling Cloudicorns: Rich (Thursday, March 1, 12:45pm)

Flash Talks Powered by PechaKucha

  • Mike will be presenting “A Day in the Life of a CISO, as told by Shakespeare” (Thursday, March 1, 5:30pm)

Other Events

  • e10+: Rich, Mike, and Adrian are the hosts and facilitators of the RSA Conference’s e10+ program, targeting CISO types. That’s Monday (Feb 27) from 8:30am until noon.
  • America’s Growth Capital Conference: Mike will be moderating a panel at the AGC Conference on cloud management and security, with folks from Afore Solutions, CipherCloud, Dome9, HyTrust, and Verizon. The session is Monday afternoon, Feb 27, at 2:15pm.
  • And the 2012 Disaster Recovery Breakfast.

Don’t forget to download the entire Securosis Guide to the RSA Conference 2012.

Webcasts, Podcasts, Outside Writing, and Conferences

  • The RSA Network Security Podcast.

Other Securosis Posts

  • Implementing DLP: Ongoing Management.
  • Implementing DLP: Deploy.
  • Implementing DLP: Deploying Storage and Endpoint.
  • RSA Conference 2012 Guide: Cloud Security.
  • RSA Conference 2012 Guide: Data Security.
  • RSA Conference 2012 Guide: Security Management and Compliance.
  • RSA Conference 2012 Guide: Email & Web Security.
  • RSA Conference Guide 2012:



Implementing DLP: Deploy

Up until this point we’ve focused on all the preparatory work before you finally flip the switch and start using your DLP tool in production. While it seems like a lot, in practice (assuming you know your priorities) you can usually be up and running with basic monitoring in a few days. With the pieces in place, it’s now time to configure and deploy policies to start your real monitoring and enforcement.

Earlier we defined the differences between the Quick Wins and Full Deployment processes. The easy way to think about it is that Quick Wins is more about information gathering and refining priorities and policies, while Full Deployment is all about enforcement. With the Full Deployment option you respond to and investigate every incident and alert. With Quick Wins you focus more on the big picture. To review:

  • The Quick Wins process is best for initial deployments. Your focus is on rapid deployment and information gathering rather than enforcement, to help guide your full deployment. We previously detailed this process in a white paper and will only briefly review it in this series.
  • The Full Deployment process is what you’ll use for the long haul. It’s a methodical series of steps for full enforcement policies. Since the goal is enforcement (even if enforcement is alert/response rather than automated blocking/filtering), we spend more time tuning policies to produce the desired results.

We generally recommend you start with the Quick Wins process, since it gives you a lot more information before jumping into a full deployment, and in some cases might realign your priorities based on what you find. No matter which approach you take, it helps to follow the DLP Cycle – the four high-level phases of any DLP project:

  • Define: Define the data or information you want to discover, monitor, and protect. Definition starts with a statement like “protect credit card numbers”, but then needs to be converted into a granular definition capable of being loaded into a DLP tool (a hedged sketch of what that conversion can look like appears at the end of this excerpt).
  • Discover: Find the information in storage or on your network. Content discovery is determining where the defined data resides, network discovery determines where it’s currently being moved around on the network, and endpoint discovery is like content discovery but on employee computers. Depending on your project priorities, you will want to start with a surveillance project to figure out where things are and how they are being used. This phase may involve working with business units and users to change habits before you go into full enforcement mode.
  • Monitor: Ongoing monitoring, with policy violations generating incidents for investigation. In Discover you focus on what should be allowed and set a baseline; in Monitor you start capturing incidents that deviate from that baseline.
  • Protect: Instead of identifying and manually handling incidents, you start implementing real-time automated enforcement, such as blocking network connections, automatically encrypting or quarantining emails, blocking files from moving to USB, or removing files from unapproved servers.

Define Reports

Before you jump into your deployment we suggest defining your initial report set. You’ll need these to show progress, demonstrate value, and communicate with other stakeholders. Here are a few starter ideas:

  • Compliance reports are a no-brainer and are often included in the products. For example, showing you scanned all endpoints or servers for unencrypted credit card data could save significant time and resources by reducing scope for a PCI assessment.
  • Since our policies are content based, reports showing violation types by policy help figure out what data is most at risk or most in use (depending on how you have your policies set). These are very useful to show management, to align your other data security controls and education efforts.
  • Incidents by business unit are another great tool, even if focused on a single policy, for identifying hot spots.
  • Trend reports are extremely valuable in showing the value of the tool and how well it helps with risk reduction. Most organizations we talk with who generate these reports see big reductions over time, especially when they notify employees of policy violations.

Never underestimate the political value of a good report.

Quick Wins Process

We previously covered Quick Wins deployments in depth in a dedicated whitepaper, but here is the core of the process. The differences between a long-term DLP deployment and our “Quick Wins” approach are goals and scope. With a Full Deployment we focus on comprehensive monitoring and protection of very specific data types. We know what we want to protect (at a granular level) and how we want to protect it, and we can focus on comprehensive policies with low false positives and a robust workflow. Every policy violation is reviewed to determine whether it’s an incident that requires a response. In the Quick Wins approach we are less concerned about incident management, and more about gaining a rapid understanding of how information is used within our organization. There are two flavors to this approach – one where we focus on a narrow data type, typically as an early step in a full enforcement process or to support a compliance need, and the other where we cast a wide net to help us understand general data usage and prioritize our efforts. Long-term deployments and Quick Wins are not mutually exclusive – each targets a different goal, and both can run concurrently or sequentially, depending on your resources. Remember: even though we aren’t talking about a full enforcement process, it is absolutely essential that your incident management workflow be ready to go when you encounter violations that demand immediate action!

Choose Your Flavor

The first step is to decide which of two general approaches to take:

  • Single Type: In some organizations the primary driver behind the DLP deployment is protection of a single data type, often due to compliance requirements. This approach focuses only on that data type.
  • Information Usage: This approach casts a wide net to help characterize how the organization uses information, and identify patterns of both legitimate use and abuse. This information is often very useful for prioritizing and informing additional data security efforts.

Choose
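As promised under the Define phase above, here is a minimal sketch (in Python) of what converting “protect credit card numbers” into a granular, testable definition can look like. Real DLP products use their own policy languages and far more robust content analysis, so treat the candidate pattern and the severity threshold below as illustrative assumptions:

    import re

    # Candidate pattern: 13-16 digit card numbers with optional space/dash
    # separators. Illustrative only - real content analysis is more robust.
    CARD_CANDIDATE = re.compile(r"\b(?:\d[ -]?){12,15}\d\b")

    def luhn_valid(digits):
        # The Luhn checksum discards most random digit strings, which is
        # the difference between a usable definition and a noisy one.
        total, parity = 0, len(digits) % 2
        for i, ch in enumerate(digits):
            d = int(ch)
            if i % 2 == parity:
                d *= 2
                if d > 9:
                    d -= 9
            total += d
        return total % 10 == 0

    def find_card_numbers(text):
        hits = []
        for match in CARD_CANDIDATE.finditer(text):
            digits = re.sub(r"[ -]", "", match.group())
            if 13 <= len(digits) <= 16 and luhn_valid(digits):
                hits.append(digits)
        return hits

    # A policy might then treat, say, 5+ matches in one file or email as a
    # high-severity violation (the threshold is an assumption, not a rule).
    print(find_card_numbers("order ref 4111 1111 1111 1111 confirmed"))

This is the jump the Define phase describes: from a plain-English statement to something a tool can actually evaluate, and that you can tune during the Full Deployment process.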


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.