Securosis

Research

Friday Summary: August 31, 2012

Rich here. Yesterday I published an article over at Macworld on the new Java exploits, and why Mac users likely aren’t at risk. As with many previous articles on Mac security, I’m getting really positive feedback. Heck, I have even had people tell me that I’m currently writing the best stuff out there on Apple security overall. (Probably not true, but I’ll take it.) When I asked some people privately about this, they told me they like my articles because they are accurate, hype free, and practical. The thing is, there really isn’t anything special about how I write this stuff up. Some days I feel like it’s some of the easiest prose I put on the screen. I think there is one compelling reason there are so many bad security articles out there in general (when we write about attacks), and even more crap about Apple products: page views. Anytime anything remotely related to security and Apple comes up, there is a bum rush to snag as many mouse clicks as possible, which forces those writers to break pretty much every rule I have for writing on the issue. Here’s how I approach these pieces:

- Know the platform.
- Don’t hype.
- Research, and don’t single source.
- Accurately assess the risk.
- Accurately report the facts.

This isn’t hard. It really comes down to understanding the facts and writing without unnecessary hype. From what I can tell, this also results in solid page views. I don’t see my Macworld or TidBITS stats, but from what they tell me the articles do pretty well, even if they come a day after everyone else. Why? Because many of the other articles suck, but also because users will seek out information that helps them understand an issue, rather than an article that just scares them. These are the articles that last, as opposed to the crap that’s merely thinly-disguised plagiarism of a blog post. I get it. If it bleeds, it leads. But I would rather have a reputation for accuracy than for page views.
There are also a bunch of articles (especially from AV vendors) that are technically accurate but grossly exaggerate the risk. Take all the calls for the impending Mac Malware Epidemic… by my count there have only been two large infections in the past two years, neither of which resulted in financial losses to consumers. I really don’t care how much Elbonian porn is back-doored with trojans. (I have been waiting 5 years to write that sentence.) Anyway, for those of you who read these articles rather than writing them, here are a few warning signs that should raise your skepticism:

- Are all the quotes from representatives of security companies with something to gain from scaring you?
- Does the headline end in a question mark?
- Is it cross-platform, but ‘Mac’ or ‘iPhone’ got shoehorned into the headline to snag page views?
- Is more than one source cited? Multiple blog posts which all refer back to the same original source don’t count.
- Does the article provide a risk assessment in the lead, or only in the conclusion?
- Does it use phrases like “naive Apple users”?

Then again, I don’t get paid by hit counts. Or maybe I just underestimate how many people download Elbonian porn. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

- Mike quoted in this Silicon Angle series on CyberWars. Probably too much hype and overuse of buzzwords, but decent perspectives on the attackers. Part 1, Part 2, Part 3
- Mike’s Dark Reading column on making tough choices.
- Rich covered the Java exploit for Macworld.

Favorite Securosis Posts

- Adrian Lane: Ur C0de Sux. Seeing as we have not been able to blog much lately, I pulled out an old favorite.
- Mike Rothman: Always on the Run. When I read my intro to this week’s Incite again, I realized it’s pretty good advice. To me. So I’ll bookmark it and get back to it every time I start dragging a bit. Just keep running and good things happen.
- David Mortman: Pragmatic WAF Management – Securing the WAF.
Other Securosis Posts

Slow week – and to be honest, next week will also be slow thanks to way too much travel. We promise to get back to annoying you more consistently soon. Maybe.

Favorite Outside Posts

- Mike Rothman: How to set up two-step verification on Dropbox. You probably use Dropbox. You probably don’t want anyone else in that file store. You probably should use two-step authentication. If it works as cleanly and easily as Gmail 2FA, this is a no-brainer. I’ll be testing it over the weekend.
- Adrian Lane: Schneier on Security Engineering. “‘Security’ is now a catch-all excuse for all sorts of authoritarianism, as well as for boondoggles and corporate profiteering.” Excellent post.
- Dave Lewis: Identity is Center Stage in Mobile Security Venn.
- David Mortman: Don’t build a database of ruin.
- Rich: The rise of data-driven security. Scott Crawford is A Very Smart Dude and has been tracking this issue longer than any other analyst. The report is for-pay only, but there is a lot of good info in the long (and free) intro post.

Research Reports and Presentations

- Understanding and Selecting Data Masking Solutions.
- Evolving Endpoint Malware Detection: Dealing with Advanced and Targeted Attacks.
- Implementing and Managing a Data Loss Prevention Solution.
- Defending Data on iOS.
- Malware Analysis Quant Report.
- Report: Understanding and Selecting a Database Security Platform.
- Vulnerability Management Evolution: From Tactical Scanner to Strategic Platform.

Top News and Posts

- New Java 0day. With Maynor statements like “This is about as bad a bug as I’ve ever seen,” and “This exploit is awesome,” you know it’s good.
- German police buy stolen data, accuse Swiss of aiding tax evaders. It sounds like German investigators not only sought out stolen financial data, but will continue to do so.
- New Trojan Discovered.
- Chrome: Blocked Plug-ins. For those who want a little more granularity than ‘On’ or ‘Off’.
- ISC(2) Board Petition Snafu. Oh, why am I not surprised?
- Project Viglio

Share:
Read Post

Incite 8/29/2012: Always on the Run

Wake up. Get the kids ready for school. Exercise (maybe). Drink some coffee. Write. Make calls. Eat (sometimes too much). Write some more. Make more calls. Drink more coffee. Think some big thoughts. Pick up the kids from some activity. Have dinner. Get the kids to bed. Maybe get back to writing. Maybe watch a little TV. Go to bed much too late. Wake up and do it again. That’s an oversimplified view of my life, but it’s not far off. But that isn’t a bad thing – I really enjoy what I do. I reflect at least daily on the deal I cut with Satan to be able to actually make a living as a professional pontificator. But I am always on the run. Until I’m not, because there are times when my frontal lobe just shuts down and I sit in a mostly vegetative state or pass out on our couch. There doesn’t seem to be much in between. Is it healthy? You know, running as fast as you can until you collapse and then getting up and running full tilt again? I’m no runner, but it doesn’t seem to be a prudent way to train or live. A mentor always told me, “It’s not a sprint, it’s a marathon.” With ‘it’ being basically everything. Intuitively I understand the message. But that doesn’t mean anything changes. I still run at the razor’s edge of burnout and implosion, and every so often the machine fails. Yet I still find myself running. Every day. Consulting my lists and getting agitated when there isn’t structure to what needs to get done, especially at home. I’m constantly badgering the Boss for my list of house tasks every Saturday morning, so I can get running. Yet if I’m being honest with myself, I like my lists. More specifically, I like checking things off my lists. I like to feel productive and useful and getting things done helps with that. Again, that doesn’t mean that at the end of a long day or on Sunday afternoon I’m not slipping into that vegetative state. That’s how I recharge and get ready for the next day. This run, collapse, repeat cycle works for me. At least it does for now. 
In another 15 years, when the kids are out of college and fending for themselves, maybe I’ll have a different opinion. Maybe I’ll want to play golf, lounge by the pool, or sit in a cafe all day and read the newspaper. Or read whatever delivers news to me at that point in time, which is unlikely to be paper. Maybe I’ll just chill out, stop running, and enjoy the fruits of my labor. Then again maybe not. As I look back, I’ve been running at this kind of pace as long as I can remember. But it’s different now. Over the past couple of years I stopped worrying about where I’m running to. I just get up every morning and run. Obviously I know the general direction my efforts are pointed in, but I no longer fixate on when I’m going to get there. Or if I’ll ever get there. As long as I’m having fun, it’s all good. And then a funny thing happened. I realized that I have a shot at hitting some of those goals I set many years ago. To actually get to the place I thought I was running to all this time. That’s kind of weird. What happens now? Do I set new goals? Do I slow down? Do I savor my accomplishments and take a bow? I’ll take D) None of the above. I think I’ll just keep running and wind up where I wind up. Seems to have worked out okay for me so far. –Mike

Photo credits: Running originally uploaded by zebble

Heavy Research

We’re back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.

Endpoint Security Management Buyer’s Guide

- Summary: 10 Questions to Ask Your Endpoint Security Management Vendor
- Platform Buying Considerations

Pragmatic WAF Management

- Securing the WAF
- Application Lifecycle Integration
- Policy Management

Incite 4 U

Massive unpatched Java flaw being exploited: First, just the facts. There is a massive remotely exploitable cross-platform flaw in the latest version of Java.
How exploitable? Just read David Maynor’s description of owning everything including OS X, Windows, and Linux. This is as bad as it gets, folks. Here’s the drama: after FireEye posted some info based on real-world exploitations, the attack was quickly added to Metasploit, and now any script kiddie can compromise nearly any vulnerable system they can get their hands on. I’m generally not thrilled when Metasploit adds exploit code for 0days without giving defenders any chance in hell of blocking or otherwise mitigating the problem. On the latest Network Security Podcast my co-host Zach mentioned that the exploit itself may have leaked from Immunity, which frequently includes 0days in its pen testing product and doesn’t notify vendors or wait for patches. Once again, we are shooting ourselves in the head as an industry because someone doesn’t like the smell of our feet. – RM

Epic security research fail: You know those times when you aren’t paying attention to where you’re walking and you run into a pole? And when you get up you look around and hope no one is watching. That happened to FireEye’s research team last week when they inadvertently stumbled upon a honeypot set up by Kaspersky and made a big stink about a change in attacker tactics. It didn’t take long for the Kaspersky researchers to call them out, and within a few hours FireEye issued a retraction. As my kids say, whoopsie! But this is a manifestation of the race for something newsworthy to fill the media sites with fodder to

Share:
Read Post

Pragmatic WAF Management: Securing the WAF

WAFs themselves are an application, and as such they provide additional attack surface for your adversaries. Their goal isn’t necessarily to compromise the WAF itself (though that’s sometimes a bonus) – the short-term need is evasion. If attackers can figure out how to get around your WAF, many of its protections become moot. Your WAF needs to be secured, just like any other device sitting out there and accessible to attackers. So let’s start by discussing device security, including deployment and provisioning entitlements. Then we can get into some evasion tactics you are likely to see, before wrapping up with a discussion of the importance of testing your WAFs on an ongoing basis.

Deployment Options

Managing your WAF pragmatically starts when you plug in the device, which requires you to first figure out how you are going to deploy it. You can deploy WAFs either inline or out of band. Inline entails installing the WAF in front of a web app to block attacks directly. Alternatively, as with Network Access Control (NAC) devices, some vendors provide an out-of-band option to assess application traffic via a network tap or spanning port, and then use indirect methods (TCP resets, network device integration, etc.) to shut down attack sessions. Obviously there are both advantages and disadvantages to having a WAF inline, and we certainly don’t judge folks who opt for out-of-band deployment rather than risking impact to applications. But as with NAC evasion, out-of-band enforcement can be evaded and presents an additional risk to the application. Balancing risks, such as reduced application protection against possible application disruption, is why you get the big bucks, right? You will also need to consider high availability (HA) deployment architectures. If a WAF device fails and takes your applications with it, that’s a bad day all around.
So make sure you can deploy multiple boxes with consistent policy and utilize some kind of non-disruptive failover option (active/active, active/passive, load balancer front-end, etc.). Of course some folks opt for a managed WAF service, so the device doesn’t even sit in their data center. This offloads responsibility for scaling up the implementation, providing high availability, and managing devices (patching, etc.) to the service provider. Additionally, the service provider can offer some obfuscation of your IP addresses, complicating attacker reconnaissance and making WAF evasion harder. Depending on how the service is packaged, the service provider may also provide resources to manage policies. Of course they cannot offload accountability for protecting applications, and a service provider cannot be expected to interface directly with your developers. You should also understand the background of your WAF provider. Are they a security company? Does the WAF provide full application security features, or is it a glorified content distribution network (CDN)? Obviously a service provider isn’t likely to offer the full granular capabilities and policy options of a device in your data center, so you need to balance the security capabilities of a managed WAF service against what you can do yourself.

Other Security Considerations

Obviously you need to keep attackers away from the physical devices, so ensuring physical security of devices is the first step, and hopefully already largely covered by existing security measures. After that you need to ensure all credentials stored on the device are protected, including the SSL private keys used for SSL interception. You will also need to exercise good security hygiene on the device, which means detailed logging of any changes to device configuration and/or policies.
Hopefully the logs will be aggregated on an external aggregation system (a log management server) to prevent tampering, and alerts should be sent if logging is turned off. That also means keeping the underlying operating system (for software-based WAFs) and the WAF itself patched and up to date. This is no different than what you should do for every other security device in your environment. Again, a managed WAF service gets you out of having to update devices and/or WAF software, but make sure you can get access to the appropriate WAF activity logs. Make sure you have sufficient access for forensic investigation, if and when you need to go there. Finally, keep in mind that Denial of Service (DoS) attacks continue to be problematic, targeting applications with layer 7 attacks, in addition to simpler volume-based attacks. Make sure you have sufficient bandwidth to deal with any kind of DoS attack, a sufficiently hearty WAF implementation to deal with the flood, and a DDoS-focused service provider on call to handle additional traffic if necessary. Protecting against DoS attacks is a discipline unto itself, and we plan a series on that in the near future.

Provisioning and Managing Entitlements

Once you have secured the device, next make sure the policies and device configurations are protected. Take steps to control provisioning and management of entitlements. Given the sensitivity of the WAF, it makes sense to get back to the 3 A’s. Yeah, man, old school.

- Authorization: Who is allowed to access the WAF? Can they set up policies? Change configurations? Is this a group or set of individuals?
- Authentication: Once you know who can legitimately get into the device, how will you ensure it’s really them? Passwords? 2-factor authentication? Digital certificates? Retinal scans? Okay, that last one was a joke, but this question isn’t.
- Audit: You want an audit event every time a WAF policy, configuration, entitlement, or anything else is changed.
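To make the audit requirement concrete, here is a minimal sketch of wrapping every change operation so it cannot run without emitting an audit event. All function names and event fields are hypothetical, and a real deployment would ship events to an external log management server rather than an in-memory list:

```python
import json
import os
import time
from functools import wraps

AUDIT_LOG = []  # stand-in for an external, append-only log management server

def audited(action):
    """Emit an audit event every time a change operation runs."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            event = {
                "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
                "actor": os.environ.get("USER", "unknown"),
                "action": action,
                "details": {"args": list(args), "kwargs": kwargs},
            }
            AUDIT_LOG.append(json.dumps(event))  # record first, then change
            return func(*args, **kwargs)
        return wrapper
    return decorator

@audited("waf.policy.update")
def update_policy(name, mode):
    # A hypothetical policy change -- the decorator guarantees it is audited
    return f"policy {name} set to {mode}"

result = update_policy("sql-injection", mode="block")
```

Routing every administrative entry point through a wrapper like this makes the audit trail complete by construction, rather than relying on each administrator to remember to log their changes.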
Note that a managed WAF service will complicate your ability to manage entitlements. The service provider will have the ability to change policies and may even be responsible for managing them. Ensure you adequately vet folks who will have access to your policies, with an audit trail. We know we are beating the audit horse, but it’s particularly important in this context. An alternative method for managing access to WAF devices is Privileged User Management (PUM). In this scenario administrators log into some sort of proxy, which manages credentials and provides access only to the WAFs each administrator has authorization for. That’s just one of

Share:
Read Post

Friday Summary: August 24, 2012

This will probably sound weird, but for the first time in many years I am bummed that summer is ending. This is odd because I’m not really into vacations. I have only taken a real vacation – which I define as my wife and myself leaving the house together for more than 24 hours – twice in the last twelve years. And one of those vacations was a disaster I would not care to relive – drunken friends and crashing houseboats onto rocks is something I can do without. Anyway, vacations are just not something we really do. And when you have as many critters as we do – each needing regular attention – going anywhere gets a bit difficult. I travel a lot as part of this job, so I have no need to “get away” for its own sake. I’m happy to putter around the house, and I have made my home a great place to take time off. This year a close friend and I ventured up to south Lake Tahoe and visited Echo Lake. It’s a place my friend has been going with his parents since he was born, but both his parents have now passed, so we decided to keep the tradition alive. We planned a couple days hanging out and not catching fish. The trip started with a few bad omens: both on the way there and back, we got stuck in several traffic jams – including a high speed chase/rollover accident that stranded us for a few hours in the hot Oakland sun. But that did not matter. Sitting in traffic and sitting in the boat, I had a freaking great time! In fact I really did not want to come back. There was hiking I wanted to do but we ran out of time. And kayaking – no time. And swimming. And they had a Sailfish one-design regatta – I wanted in on that! Drinking Scotch with total strangers and just watching the sun set. And more fishing. I wanted to see if I could get my mountain bike back into the wilderness trails. I wanted a summer vacation, the three month kind I have not had since early high school. I started to fantasize about a tiny cabin on the water to help make all this happen. 
I could have stayed three months without a second thought. Honestly, I was like a little kid on the last week of summer. I really did not want to come back. I know about all the studies that say you need time off work to be mentally healthy and invigorate yourself. I see a blog post every year on the need for time off and the importance of vacations. And I have seen the benefits of employees regularly taking time off. Whatever. That’s for other people. Not me. Or it was. Now I want a real vacation. It was damn fun, and even if it doesn’t help me beat burnout or reinvigorate me mentally – although this trip did – I just want to go do that again. It was odd feeling that urge to get away for the first time in a very long time. And here I find myself looking at listings for vacation properties – weird. I included a boatload of news this week, so check it out. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

- Rich participated in Protecting Your Digital Life at TidBITS.
- Adrian joined Rich and Martin on The Network Security Podcast, episode 285.
- Adrian won the Nimby Award for Best Identity Forecast Blog.

Favorite Securosis Posts

- Adrian Lane: Endpoint Security Management Buyer’s Guide. I’m betting this is the most practical and helpful part for end users.
- Mike Rothman: Endpoint Security Management Buyer’s Guide – 10 Questions. Okay, it’s my post, so I’m a homer. But I love distilling down a bunch of content into only 10 questions. Makes you focus on what’s important.
- Rich: Force Attacker Perfection. This is an older post of mine, but I think it is becoming increasingly relevant now that we are seeing more interest in active countermeasures, which can really enhance the concept.

Other Securosis Posts

- Incite 8/22/2012: Cassette Legends.
- [New White Paper] Understanding and Selecting Data Masking Solutions.
- Friday Summary: August 17, 2012.

Favorite Outside Posts

- Dave Lewis: Identity is Center Stage in Mobile Security Venn.
- Mike Rothman: VOTE FOR DAVE!!!! Hey CISSPs! Our very own Dave Lewis is running for the ISC2 board, so if you have that (worthless) piece of paper, then get off your hind section and sign Dave’s petition. Significant and much-needed change is coming to the ISC2. And they don’t know what they are in for. It will start with the Brick of Enlightenment.
- Adrian Lane: Hacker Camp Recount. Very cool!
- Rich: Bill Brenner slams vendors for their useless briefings. I hope all marketing people read this. But keep in mind that the needs of a journalist are different than those of an analyst, which are different than those of a prospect in a sales situation. Tune the deck for the audience.

Project Quant Posts

- Malware Analysis Quant: Index of Posts.
- Malware Analysis Quant: Metrics – Monitor for Reinfection.
- Malware Analysis Quant: Metrics – Remediate.
- Malware Analysis Quant: Metrics – Find Infected Devices.
- Malware Analysis Quant: Metrics – Define Rules and Search Queries.

Research Reports and Presentations

- Understanding and Selecting Data Masking Solutions.
- Evolving Endpoint Malware Detection: Dealing with Advanced and Targeted Attacks.
- Implementing and Managing a Data Loss Prevention Solution.
- Defending Data on iOS.
- Malware Analysis Quant Report.
- Report: Understanding and Selecting a Database Security Platform.
- Vulnerability Management Evolution: From Tactical Scanner to Strategic Platform.

Top News and Posts

- Hoff on SDN. It’s possible Rich and Hoff will team up again for RSA, and perhaps they will cover this material and combine it with Rich’s data and app-level automation research. Maybe.
- Amazon Glacier. $.01 per GB. Holy. Crap.
- McAfee update breaks computers.
- FBI surveillance backdoor might be open to hackers.
- New agnostic malware

Share:
Read Post

Incite 8/22/2012: Cassette Legends

The impact of technology cannot be overstated. Not compared to when I was a kid. So we were having dinner over the weekend and XX2 started changing the lyrics to Michael Jackson’s Beat It, by crooning out “Eat It.” Of course, I mentioned that she was creative but hardly original and that Weird Al Yankovic recorded that exact song some 20 years ago. Then the Boy piped in with the chorus to Weird Al’s other Michael Jackson parody, “Fat.” Wait, what? The Boss and I were amazed that he not only knew who Weird Al was, but another of his songs. Upon further interrogation, he admitted that a friend showed him Weird Al’s videos on the Internet. Then I launched into a story about how in the olden days, when MTV played actual music videos, you had to wait by the TV for a video you liked. That the first video I ever saw was the J. Geils Band’s “Centerfold”. I didn’t leave my room for a week after that. Not like today, where they just search YouTube and listen to what they want when they want. Then the Boss talked about how she had to sit by the radio with her little cassette recorder in hand, waiting for her favorite songs. The art was in hitting the Record button (or more likely Record and Play simultaneously) at the perfect time. Not too early or you got a bunch of DJ gibberish, and not too late or you’d miss the first few bars of the song. Stopping recording was a similar high-wire act. Then we described the magic of the double cassette deck/recorder and how that made life a zillion times easier, so we could dub tapes from our friends. I guess now I need to expect a retroactive Cease and Desist letter from the RIAA for 30 years ago, eh? The kid’s response was classic. What’s a cassette, Mommy? It’s hard to comprehend, but these kids have never actually seen a cassette tape. Well, they probably have, but had no idea what it was. I just traded in my old Acura that actually had a cassette player, but I last used it 7 years ago. 
They have no need to understand what a cassette is. Nor the hoops we jumped through to access the music we wanted. I didn’t have the heart to further complicate things by describing the setup my brother and I had to record music, which included an old condenser mic and a reel-to-reel tape deck. I saved up for months to buy a blank reel-to-reel tape, and I remember recording from Casey Kasem’s Top 40 every Sunday. Then I got my portable Panasonic cassette recorder, bought Kiss Alive II, and was forever changed. Then we told the story of the first Walkman units, and how liberating it was to be able to play cassettes without having to carry around a 30-pound boom box on your shoulder. And believe me – my boom box was huge, loud, and cool – requiring 8 D batteries. I’d get a hernia just lugging around extra batteries for that beast. Looking back, the Walkman was truly transformative. When I made the analogy to the iPod, but bigger and requiring tapes you could only fit 60 minutes of music on, they kind of got it. But not really. Replaying that conversation in my mind makes me excited for the kinds of crazy stories our kids will tell their kids about those iPods and iPhones back in the olden days. And it also makes me feel old. Really, really old. But then again, I can’t even imagine what my folks feel like, remembering when they first got TV… –Mike

Photo credits: Cassette Player originally uploaded by grundkonzept

Heavy Research

We’re back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.
Endpoint Security Management Buyer’s Guide

- Summary: 10 Questions to Ask Your Endpoint Security Management Vendor
- Platform Buying Considerations
- Ongoing Controls – File Integrity Monitoring
- Ongoing Controls – Device Control

Pragmatic WAF Management

- Application Lifecycle Integration
- Policy Management
- The WAF Management Process

Incite 4 U

Another vendor ranking grid. Oh, joy! Our friends at NSS Labs have introduced a new way to compare security vendors, specifically the network security folks: their security value map. One axis is Block Rate, and the other is price per protected Mbps. No, I don’t get it either. Actually I do, but I suspect most customers will find this chart of limited value. Especially when 80% of the products are in the ‘good’ quadrant. They must know that way too many users use the quadrant charts to make decisions for them. This chart might help compare devices, but it doesn’t help make decisions. In fairness, I really like the work NSS does. Hardly anyone else really tests devices objectively, and I applaud their efforts to remove the bias of vendor-sponsored tests. I also understand the need to have a chart that vendors will license, and the genius of setting up the tolerances so the greatest percentage of vendors land in the right quadrant to license the report. And their research is very useful to customers who do the work and actually need to understand how devices work. But the other 95% of their audience will ask how they can ‘short’ list just about everything. – MR

Fixing a problem that doesn’t exist: Most users don’t regard mobile security problems as a big threat, notes Ben Wood, Director of Research at CCS Insight. No kidding. Even on Android, viruses and malware are not generally considered a big threat. Antivirus vendors would like to point out the one or two instances where malware has appeared, hoping the FUD will drive a new wave of adoption. AV has had it good for a

Share:
Read Post

Endpoint Security Management Buyer’s Guide: 10 Questions

Normally we wrap up each blog series with a nice summary that goes through the high points of our research and summarizes what you need to know. But this is a Buyer’s Guide, so we figured it would be more useful to summarize with 10 questions. With apologies to Alex Trebek, here are the 10 key questions we would ask if we were buying an endpoint security management product or service.

1. What specific controls do you offer for endpoint management? Can the policies for all controls be managed via your management console?
2. Does your organization have an in-house research team? How does their work make your endpoint security management product better?
3. What products, devices, and applications are supported by your endpoint security management offerings?
4. What standards and/or benchmarks are offered out of the box for your configuration management offering?
5. What kind of agentry is required for your products? Is the agent persistent or dissolvable? How are updates distributed to managed devices? What is done to ensure the agents are not tampered with? How do you handle remote/disconnected devices?
6. What is your plan to extend your offering to mobile devices and/or virtual desktops (VDI)?
7. Where does your management console run? Do we need a dedicated appliance? What kind of hierarchical management does your environment support?
8. How customizable is the management interface?
9. What kind of reports are available out of the box? What is involved in customizing specific reports?
10. What have you done to ensure the security of your endpoint security management platform? Is strong authentication supported? Have you done an application pen test on your console? Does your engineering team use any kind of secure software development process?

Of course we could have written another 10 questions. But these hit the highlights of device and application coverage, research/intelligence, platform consistency/integration, and management console capabilities.
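Once vendors have answered, you still need a way to compare them. As an illustration only (the weights, question keys, and 0-5 rating scale below are all hypothetical, not part of our research), the checklist can be turned into a simple weighted scoring matrix:

```python
# Hypothetical weights: how much each of the 10 question areas matters to you
WEIGHTS = {
    "controls_and_console": 3,
    "research_team": 2,
    "platform_coverage": 3,
    "benchmarks": 1,
    "agent_design": 3,
    "mobile_vdi_roadmap": 1,
    "console_deployment": 2,
    "ui_customization": 1,
    "reporting": 2,
    "platform_security": 3,
}

def score_vendor(ratings, weights=WEIGHTS):
    """ratings: question key -> 0-5 rating from your RFI review.

    Missing answers score zero, which conveniently penalizes vendors
    who dodge a question.
    """
    return sum(weight * ratings.get(key, 0) for key, weight in weights.items())

# Two hypothetical vendors rated on a few of the areas
vendor_a = {"controls_and_console": 4, "agent_design": 5, "platform_security": 4}
vendor_b = {"controls_and_console": 5, "agent_design": 2, "platform_security": 5}
```

The point of the exercise isn't the arithmetic; it's forcing you to write down, before the sales pitches start, which of the 10 areas actually matter most to your environment.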
This list cannot replace a more comprehensive RFI/RFP, but it can give you a quick idea of whether a vendor’s product family can meet your requirements. The one aspect of buying endpoint security management that we haven’t really discussed appears in question 5 (agents) and question 10 – the security of the management capability itself. Attacking the management plane is like robbing a bank rather than individual account holders. If an attacker can gain control of the endpoint security management system, they can apply malicious patches, change configurations, drop or block file integrity monitoring alerts, and allow bulk file transfers to thumb drives. And that’s just the beginning of the risks if your management environment is compromised. We focused on the management aspects of endpoint security in this series, but remember that we are talking about endpoint security, which means making sure the environment remains secure – at both the management console and agent levels. The endpoint security management components are all mature technology – so look less at specific feature/capability differentiation and more at policy integration, console leverage, and user experience. Can you get pricing leverage by adding capabilities from an existing vendor?


Endpoint Security Management Buyer’s Guide: Platform Buying Considerations

As we wrap up the Endpoint Security Management Buyer's Guide, we have already looked at the business impact of managing endpoint security and the endpoint security management lifecycle, and dug into the periodic controls (patch and configuration management) and ongoing controls (device control and file integrity monitoring). We have alluded to the platform throughout the posts, but what exactly does that mean? What do you need the platform to do?

Platform Selection

As with most other technology categories (at least in security), the management console (or 'platform', as we like to call it) connects the sensors, agents, appliances, and any other security controls. Let's list the platform capabilities you need.

Dashboard: The dashboard provides the primary exposure to the technology, so you will want user-selectable elements and defaults for both technical and non-technical users. You will want to be able to show certain elements, policies, and/or alerts only to authorized users or groups, with the entitlements typically stored in the enterprise directory. Given the current state of widget-based interface design, you can expect a highly customizable environment, letting each user configure what they need and how they want to see it.

Discovery: You can't protect an endpoint (or any other device, for that matter) if you don't know it exists. So once you get past the dashboard, the first key feature of the platform is discovery. The enemy of the security professional is surprise, so make sure you know about new devices as quickly as possible – including mobile devices.

Asset Repository Integration: Closely related to discovery is the ability to integrate with an enterprise asset management system/CMDB to get a heads-up whenever a new device is provisioned. This is essential for monitoring and enforcing policies. You can learn about new devices proactively via integration or reactively via discovery, but either way you need to know what's out there.
Alert Management: A security team is only as good as its last incident response, so alert management is key. This allows administrators to monitor and manage policy violations which could represent a breach. Time is of the essence during any response, so the ability to provide deeper detail via drill-down, and to send information into an incident response process, is critical. The interface should be concise, customizable, and easy to read at a glance. When an administrator drills down into an alert, the display should cleanly and concisely summarize the reason for the alert, the policy violated, the user(s) involved, and any other information helpful for assessing the criticality and severity of the situation. This is important, so we dig deeper into it below.

Policy Creation and Management: Alerts are driven by the policies you implement in the system, so policy creation and management is also critical. We delve further into this below as well.

System Administration: You can expect the standard system status and administration capabilities within the platform, including user and group administration. Keep in mind that for a larger, more distributed environment you will want some kind of role-based access control (RBAC) and hierarchical management to manage access and entitlements for a variety of administrators with varied responsibilities within your environment.

Reporting: As we mentioned when discussing the specific controls, compliance tends to fund and drive these investments, so substantiating their efficacy is necessary. Look for a mixture of customizable pre-built reports and tools to facilitate ad hoc reporting – both at the specific control level and across the entire platform.

In light of the importance of managing your policy base and dealing with the resulting alerts – which could represent attacks and/or breaches – let's go deeper into each of those functions.
Policy Creation and Management

Once you know what endpoint devices are out there, assessing their policy compliance (and remediating as necessary) is where the platform provides value. The resource cost to validate and assess each alert makes filtering for relevant alerts critical to successful endpoint security management. So policy creation and management can be the most difficult part of managing endpoint security. The policy creation interface should be accessible to both technical and non-technical users, although creating heavily customized policies almost always requires technical skill. For policy creation the system should provide some baselines to get you started. For patching you might start with a list of common devices and then configure the assessment and patching cycles accordingly; this works for the other controls as well. Every environment has its own unique characteristics, but the platform vendor should provide out-of-the-box policies to make customization easier and faster, and all policies should be usable as templates for new policies. We are big fans of wizards to walk administrators through the initial setup process, but more sophisticated users need an "Advanced" tab or equivalent to set up more granular policies for more sophisticated requirements. Not all policies are created equal, so the platform should be able to grade the sensitivity of each alert and support severity thresholds. Most administrators prefer interfaces that use clear, graphical layouts for policies – preferably with an easy-to-read grid showing the relevant information for each policy. The more complex a policy, the easier it is to create internal discrepancies or accidentally define an incorrect remediation. Remember that every policy needs some level of tuning, and a good tool will enable you to create a policy in test mode to see how it would react in production, without firing all sorts of alerts or requiring remediation.
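To make the test-mode idea concrete, here is a minimal Python sketch. It is purely illustrative – the names (`Policy`, `fire_alert`, the device fields) are hypothetical, not any vendor's actual API:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

ALERT_QUEUE: List[str] = []  # stand-in for a real alerting/remediation pipeline

def fire_alert(policy_name: str, device_id: str) -> None:
    ALERT_QUEUE.append(f"{policy_name}:{device_id}")

@dataclass
class Policy:
    name: str
    check: Callable[[Dict], bool]   # returns True when a device is compliant
    test_mode: bool = True          # start in test mode; flip after tuning
    would_alert: List[str] = field(default_factory=list)

    def evaluate(self, device: Dict) -> bool:
        compliant = self.check(device)
        if not compliant:
            if self.test_mode:
                # Record what *would* have fired -- no pages, no remediation
                self.would_alert.append(device["id"])
            else:
                fire_alert(self.name, device["id"])
        return compliant

# Run the policy in test mode against the fleet to gauge noise before enforcing
patch_policy = Policy("critical-patch-level", check=lambda d: d["patched"])
fleet = [{"id": "pos-01", "patched": False}, {"id": "kiosk-07", "patched": True}]
for dev in fleet:
    patch_policy.evaluate(dev)
print(patch_policy.would_alert)  # review this list and tune before enforcing
```

The point of the pattern is the review step: only after the would-have-fired list looks sane do you flip `test_mode` off and let the policy drive real alerts and remediation.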
Alert Management

Security folks earn their keep when bad things happen, so you will want all your tools to accelerate and facilitate triage, investigation, root cause analysis, and the process by which you respond to alerts. On a day-to-day basis admins spend most of their time working through the various alerts generated by the platform, so alert management/workflow is the most heavily used part of the endpoint security management platform. When assessing the alert management capabilities of any product or service, first evaluate them in terms of supporting


[New White Paper] Understanding and Selecting Data Masking Solutions

Today we are launching a new research paper on Understanding and Selecting Data Masking Solutions. As we spoke with vendors, customers, and data security professionals over the last 18 months, we saw big changes occurring in masking products. We received many new customer inquiries regarding masking, often for use cases outside classic test data creation. We wanted to discuss these changes and share what we see with the community. Our goal has been to ensure the research addresses common questions from both technical and non-technical audiences. We did our best to cover the business applications of masking in a non-technical, jargon-free way. Not everyone interested in data security has a degree in data management or security, so we geared the first third of the paper to problems you can reasonably expect to solve with masking technologies. Those of you interested in the nuts and bolts need not fear – we drill into the myriad technical variables later in the paper. The following excerpt offers an overview of what the paper covers: Data masking technology provides data security by replacing sensitive information with a non-sensitive proxy, but doing so in such a way that the copy looks – and acts – like the original. This means non-sensitive data can be used in business processes without changing the supporting applications or data storage facilities. You remove the risk without breaking the business! In the most common use case, masking limits the propagation of sensitive data within IT systems by distributing surrogate data sets for testing and analysis. In other cases, masking dynamically provides masked content when a user's request for sensitive information is deemed 'risky'. We are particularly proud of this paper – it is the result of a lot of research, and it took a great deal of time to refine.
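The "looks and acts like the original" idea from the excerpt can be illustrated with a toy sketch of deterministic, format-preserving substitution. Real masking products do far more (referential integrity across databases, reversible tokens, dynamic masking); this Python fragment is only a conceptual illustration, and `mask_value` and its keyed-hash seeding are our own invention:

```python
import hashlib
import random
import string

def mask_value(value: str, secret: str = "per-project-secret") -> str:
    """Replace each letter/digit with a surrogate of the same character class.
    Seeding the RNG from a keyed hash makes the mapping deterministic: the same
    input always masks to the same surrogate, so joins across masked data sets
    still work, but the original values are gone."""
    rng = random.Random(hashlib.sha256((secret + value).encode()).digest())
    out = []
    for ch in value:
        if ch.isdigit():
            out.append(rng.choice(string.digits))
        elif ch.isalpha():
            pool = string.ascii_uppercase if ch.isupper() else string.ascii_lowercase
            out.append(rng.choice(pool))
        else:
            out.append(ch)  # keep separators, so the format survives
    return "".join(out)

record = {"name": "Alice Smith", "ssn": "078-05-1120"}
masked = {k: mask_value(v) for k, v in record.items()}
print(masked)  # same shape as the original record, but surrogate data
```

A masked SSN still parses as an SSN and a masked name still sorts like a name, which is exactly why downstream applications keep working against the surrogate data set.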
We are not aware of any other research paper that fully captures the breadth of technology options available, or that discusses evolving uses for the technology. With the rapid expansion of the data masking market, many people are looking for a handle on what's possible with masking, and that convinced us to do a deep research paper. We quickly discovered a couple of issues when we started the research. Masking is such a generic term that most people think they have a handle on how it works, but it turns out they are typically aware of only a small sliver of the available options. Additionally, the use cases for masking have grown far beyond creating test data, evolving into a general data protection and management framework. As masking techniques and deployment options evolve, we see the vocabulary changing to describe the variations. We hope this research will enhance your understanding of masking systems. Finally, we would like to thank the companies who chose to sponsor this research: IBM and Informatica. Without sponsors like these, who contribute to the work we do, we could not offer this quality research free of charge to the community. Please visit their sites to download the paper, or you can find a copy in our research library: Understanding and Selecting Data Masking Solutions.


Pragmatic WAF Management: Application Lifecycle Integration

As we have mentioned throughout this series, the purpose of a WAF is to protect web-facing applications from attacks. We can debate build-security-in versus bolt-security-on ad infinitum, but ultimately the answer is both. In the last post we discussed how to build and maintain WAF policies to protect applications, but you also need to adapt your development process to incorporate knowledge of typical attack tactics into code development practices to address application vulnerabilities over time. This involves a two-way discussion between WAF administrators and developers. Developers do their part by helping security folks understand the applications, what input values should look like, and what changes are expected in upcoming releases. This ensures the WAF rules remain in sync with the application. Granted, not every WAF user will want to integrate their application development and WAF management processes. But separation limits the effectiveness of the WAF and puts the application at risk. At a minimum, developers (or the DevOps group) should have ongoing communications with WAF managers, to avoid having the WAF complicate application deployment and to keep the WAF from interfering with normal application functions. This collaboration is critical, so let's dig into how this lifecycle should work. Web applications change constantly, especially given increasingly 'agile' development teams – some push web application changes multiple times per week. In fact, many web application development teams don't even attempt to follow formal release cycles, effectively running an "eternal beta cycle". The team's focus and incentives center on introducing new features as quickly as possible to increase customer engagement. But this doesn't help secure applications, and it poses a serious challenge to efforts to keep WAF policies current and effective. The greater the rate of application change, the harder it is to maintain WAF policies.
This simple relationship seriously complicates one of the major WAF selling points: its ability to implement positive security policies based on acceptable application behavior. As we explained in the last post, whitelist policies enumerate acceptable commands and their associated parameters. The WAF learns about the web applications it protects by monitoring user activity and/or by crawling application pages, determining which pages need protection, which serve static and/or dynamic content, the data types and value ranges for page variables, and other aspects of user sessions. If the application undergoes constant change, the WAF will always be behind, introducing a risky gap between learning and protecting. New application behavior isn't reflected in WAF policies, which means some new legitimate requests will be blocked, and some illegal requests will be ignored. That makes the WAF much less useful. To mitigate these issues we have identified a set of critical success factors for integrating with the SDLC (software development lifecycle). When we originally outlined this process, we mentioned the friction between developers and security teams, and how it adversely affects their working relationship. Our goal here is to help set you on the right path and prevent the various groups from feuding like the Hatfields and McCoys – trust us, it happens all too often. Here's what we recommend:

Executive Sponsorship: If, as an organization, you can't get developers and operations in the room with security, the WAF administrators are stuck on a deserted island. It's up to them to figure out what each application is supposed to do and how it should be protected, and they cannot keep up without sufficient insight or visibility. Similarly, if the development team can say 'no' to the security team's requests, they usually will – they are paid to ship new features, not to provide security. So either security *is* important, or it isn't. To move past a compliance-only WAF, security folks need someone up the food chain – the CISO, the CIO, or even the CEO – to agree that the velocity of feature evolution must give some ground to address operational security. Once management has made that commitment, developers can justify improving security as part of their job. It is also possible – and in some organizational cultures advisable – to build some security into the application specification. This helps guarantee code does not ship until it meets minimum security requirements – either in the app or in the WAF.

Establish Expectations: Here all parties learn what's required and expected to get their jobs done with minimum fuss and maximum security. We suggest you arrange a sit-down with all stakeholders (operations, development, and security) to establish some guidelines on what really needs to happen, and what would be nice to have. Most developers want to know about broken links and critical bugs in the code, but they get surly when you send them thousands of changes via email, and downright pissed when all the requests relate to the same non-critical issue. It's essential to get agreement on what constitutes a critical issue and how critical issues will be addressed among the pile of competing critical requirements. Set guidelines in advance so there are no arguments when issues arise. Similarly, security people hate it when a new application enters production on a site they didn't know existed, or when significant changes to the network or application infrastructure break the WAF configuration. Technically speaking, each party removes work the other does not want to do, or does not have time to do, so position these discussions as mutually beneficial. A true win-win – or at least a reduction in aggravation and wasted time.

Security/Developer Integration Points: The integration points define how the parties share data and solve problems together. Establish rules of engagement for how DevOps works with the WAF team, when they meet, and what automated tools will be used to facilitate communication. You might choose to invite security to development scrums, or a member of the development team could attend security meetings. You need to agree upon a communication medium that's easy to use, establish a method for getting urgent requests addressed, and define a means of escalation for when they are not. Logical and documented notification processes need to be integrated into the application development lifecycle to ensure


Endpoint Security Management Buyer’s Guide: Ongoing Controls—File Integrity Monitoring

After hitting the first of the ongoing controls, device control, we now turn to File Integrity Monitoring (FIM). Also called change monitoring, FIM entails watching files to detect if and when they change. This capability is important for endpoint security management. Here are a few scenarios where FIM is particularly useful:

Malware detection: Malware does many bad things to your devices. It can load software and change configurations and registry settings. But another common technique is to change system files. For instance, a compromised IP stack could be installed to direct all your traffic to a server in Eastern Europe, and you might be none the wiser.

Unauthorized changes: These may not be malicious but can still cause serious problems. They can be caused by many things, including operational failure and bad patches – ill intent is not necessary for exposure.

PCI compliance: Requirement 11.5 in our favorite prescriptive regulatory mandate, the PCI-DSS, requires file integrity monitoring to alert personnel to unauthorized modification of critical system files, configuration files, or content files.

So there you have it – you can justify the expenditure with the compliance hammer, but remember that security is about more than checking the compliance box, so we will focus on getting value from the investment as well.

FIM Process

Again we start with a process that can be used to implement file integrity monitoring. Technology controls for endpoint security management don't work well without appropriate supporting processes.

Set policy: Start by defining your policy, identifying which files on which devices need to be monitored. There are tens of millions of files in your environment, so you need to be pretty savvy about limiting monitoring to the most sensitive files on the most sensitive devices.

Baseline files: Then ensure the files you assess are in a known good state. This may involve evaluating version, creation and modification dates, or any other file attribute to provide assurance that the file is legitimate. If you declare something malicious to be normal and allowed, things go downhill quickly. The good news is that FIM vendors maintain databases of these attributes for billions of known good and bad files, and that intelligence is a key part of their products.

Monitor: Next you actually monitor use of the files. This is easier said than done, because you may see hundreds of file changes on a normal day, so telling a good change from a bad change is essential. You need a way to minimize false positives from legitimate changes to avoid wasting everyone's time.

Alert: When an unauthorized change is detected, you need to let someone know.

Report: FIM is required for PCI compliance, and you will likely use that budget to buy it, so you need to be able to substantiate effective use for your assessor. That means generating reports. Good times.

Technology Considerations

Now that you have the process in place, you need some technology to implement FIM. Here are some things to think about when looking at these tools:

Device and application support: Obviously the first order of business is to make sure the vendor supports the devices and applications you need to protect. We talk about this more under research and intelligence, below.

Policy Granularity: You will want to make sure the product can support different policies per device. For example, a POS device in a store (within PCI scope) needs to have certain files under control, while an information kiosk on a segmented Internet-only network in your lobby may not need the same level of oversight. You will also want to be able to set up policies based on groups of users and device types (locking down Windows XP tighter, for example, as it lacks the newer protections in Windows 7).
Small footprint agent: To implement FIM you will need an agent on each protected device. Of course there are different definitions of what an 'agent' is, and whether one needs to be persistent or can be downloaded as needed to check the file system and then removed – a "dissolvable agent". You will need sufficient platform support, as well as some kind of tamper proofing of the agent. You don't want an attacker to turn off or otherwise compromise the agent's ability to monitor files – or even worse, to return tampered results.

Frequency of monitoring: Related to the persistent vs. dissolvable agent question, you need to determine whether you require continuous monitoring of files or batch assessment is acceptable. Before you respond "Duh! Of course we want to monitor files at all times!" remember that to take full advantage of continuous monitoring, you must be able to respond immediately to every alert. Do you have 24/7 ops staff ready to pounce on every change notification? No? Then perhaps a batch process could work.

Research & Intelligence: A large part of successful FIM is knowing a good change from a potentially bad change. That requires some kind of research and intelligence capability to do the legwork. The last thing you want your expensive and resource-constrained operations folks doing is assembling monthly lists of file changes for a patch cycle. Your vendor needs to do that. But it's a bit more complicated, so here are some other notes on detecting bad file changes.

Change detection algorithm: Is a change detected based on file hash, version, creation date, modification date, or privileges? Or all of the above? Understanding how the vendor determines that a file has changed enables you to ensure all your threat models are factored in.

Version control: Remember that even a legitimate file may not be the right one. Let's say you are updating a system file, but an older legitimate version is installed. Is that a big deal? If the file is vulnerable to an attack it could be, so ensuring that versions are managed by integrating with patch information is also a must.

Risk assessment: It's also helpful if the vendor can assess different kinds of changes


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.