Securosis

Research

Endpoint Security Management Buyer’s Guide: 10 Questions

Normally we wrap up each blog series with a nice summary that goes through the high points of our research and summarizes what you need to know. But this is a Buyer’s Guide, so we figured it would be more useful to summarize with 10 questions. With apologies to Alex Trebek, here are the 10 key questions we would ask if we were buying an endpoint security management product or service.

1. What specific controls do you offer for endpoint management? Can the policies for all controls be managed via your management console?
2. Does your organization have an in-house research team? How does their work make your endpoint security management product better?
3. What products, devices, and applications are supported by your endpoint security management offerings?
4. What standards and/or benchmarks are offered out of the box for your configuration management offering?
5. What kind of agentry is required for your products? Is the agent persistent or dissolvable? How are updates distributed to managed devices? What is done to ensure the agents are not tampered with?
6. How do you handle remote/disconnected devices?
7. What is your plan to extend your offering to mobile devices and/or virtual desktops (VDI)?
8. Where does your management console run? Do we need a dedicated appliance? What kind of hierarchical management does your environment support? How customizable is the management interface?
9. What kind of reports are available out of the box? What is involved in customizing specific reports?
10. What have you done to ensure the security of your endpoint security management platform? Is strong authentication supported? Have you done an application pen test on your console? Does your engineering team use any kind of secure software development process?

Of course we could have written another 10 questions. But these hit the highlights of device and application coverage, research/intelligence, platform consistency/integration, and management console capabilities.
This list cannot replace a more comprehensive RFI/RFP, but it can give you a quick idea of whether a vendor’s product family can meet your requirements. The one aspect of buying endpoint security management that we haven’t really discussed appears in question 5 (agents) and question 10 – the security of the management capability itself. Attacking the management plane is like robbing a bank rather than individual account holders. If the attacker can gain control of the endpoint security management system, they can apply malicious patches, change configurations, drop or block file integrity monitoring alerts, and allow bulk file transfers to thumb drives. And that’s just the beginning of the risks if your management environment is compromised. We focused on the management aspects of endpoint security in this series, but remember that we are talking about endpoint security, which means making sure the environment remains secure – at both the management console and agent levels. The endpoint security management components are all mature technology – so look less at specific feature/capability differentiation and more at policy integration, console leverage, and user experience. Can you get pricing leverage by adding capabilities from an existing vendor?


Endpoint Security Management Buyer’s Guide: Platform Buying Considerations

As we wrap up the Endpoint Security Management Buyer’s Guide, we have already looked at the business impact of managing endpoint security and the endpoint security management lifecycle, and dug into the periodic controls (patch and configuration management) and ongoing controls (device control and file integrity monitoring). We have alluded to the platform throughout the posts, but what exactly does that mean? What do you need the platform to do?

Platform Selection

As with most other technology categories (at least in security), the management console (or ‘platform’, as we like to call it) connects the sensors, agents, appliances, and any other security controls. Let’s list the platform capabilities you need.

Dashboard: The dashboard provides the primary exposure to the technology, so you will want to have user-selectable elements and defaults for technical and non-technical users. You will want to be able to show certain elements, policies, and/or alerts only to authorized users or groups, with the entitlements typically stored in the enterprise directory. Nowadays with the state of widget-based interface design, you can expect a highly customizable environment, letting each user configure what they need and how they want to see it.

Discovery: You can’t protect an endpoint (or any other device, for that matter) if you don’t know it exists. So once you get past the dashboard, the first key feature of the platform is discovery. The enemy of the security professional is surprise, so make sure you know about new devices as quickly as possible – including mobile devices.

Asset Repository Integration: Closely related to discovery is the ability to integrate with an enterprise asset management system/CMDB to get a heads-up whenever a new device is provisioned. This is essential for monitoring and enforcing policies. You can learn about new devices proactively via integration or reactively via discovery. But either way you need to know what’s out there.
Alert Management: A security team is only as good as its last incident response, so alert management is key. This allows administrators to monitor and manage policy violations which could represent a breach. Time is of the essence during any response, so the ability to provide deeper detail via drill-down and to send information into an incident response process is critical. The interface should be concise, customizable, and easy to read at a glance – response time is critical. When an administrator drills down into an alert, the display should cleanly and concisely summarize the reason for the alert, the policy violated, the user(s) involved, and any other information helpful for assessing the criticality and severity of the situation. This is important, so we will dig deeper into it later.

Policy Creation and Management: Alerts are driven by the policies you implement in the system, so policy creation and management is also critical. We will delve further into this later.

System Administration: You can expect the standard system status and administration capabilities within the platform, including user and group administration. Keep in mind that for a larger, more distributed environment you will want some kind of role-based access control (RBAC) and hierarchical management to manage access and entitlements for a variety of administrators with varied responsibilities within your environment.

Reporting: As we mentioned when discussing the specific controls, compliance tends to fund and drive these investments, so substantiating their efficacy is necessary. Look for a mixture of customizable pre-built reports and tools to facilitate ad hoc reporting – both at the specific control level and across the entire platform.

In light of the importance of managing your policy base and dealing with the resulting alerts – which could represent attacks and/or breaches – let’s go deeper into each of those functions.
Policy Creation and Management

Once you know what endpoint devices are out there, assessing their policy compliance (and remediating as necessary) is where the platform provides value. The resource cost to validate and assess each alert makes filtering for relevant alerts critical to successful endpoint security management. So policy creation and management can be the most difficult part of managing endpoint security. The policy creation interface should be accessible to both technical and non-technical users, although creation of heavily customized policies almost always requires technical skill. For policy creation the system should provide some baselines to get you started. For patching you might start with a list of common devices and then configure the assessment and patching cycles accordingly. This works for the other controls as well. Every environment has its own unique characteristics, but the platform vendor should provide out-of-the-box policies to make customization easier and faster. All policies should be usable as templates for new policies. We are big fans of wizards to walk administrators through this initial setup process, but more sophisticated users need an “Advanced” tab or equivalent to set up more granular policies for more sophisticated requirements. Not all policies are created equal, so the platform should be able to grade the sensitivity of each alert and support severity thresholds. Most administrators tend to prefer interfaces that use clear, graphical layouts for policies – preferably with an easy-to-read grid showing the relevant information for each policy. The more complex a policy, the easier it is to create internal discrepancies or accidentally define an incorrect remediation. Remember that every policy needs some level of tuning, and a good tool will enable you to create a policy in test mode to see how it would react in production, without firing all sorts of alerts or requiring remediation.
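The test-mode idea above can be sketched as a flag on the policy object: violations of a policy in test mode are recorded for tuning rather than alerted on. This is a minimal illustration – the names (`Policy`, `evaluate`, the example rules) are hypothetical, not any particular vendor’s implementation.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Policy:
    name: str
    check: Callable          # returns True when an event violates the policy
    severity: str = "medium"
    test_mode: bool = True   # new policies start in test mode by default

def evaluate(policies, event):
    """Run an event through each policy. Violations of policies in test
    mode are collected separately for tuning, instead of firing alerts."""
    alerts, test_hits = [], []
    for p in policies:
        if p.check(event):
            (test_hits if p.test_mode else alerts).append((p.name, p.severity))
    return alerts, test_hits
```

Once a test-mode policy stops producing false positives against production activity, flipping `test_mode` to `False` promotes it to a real alerting policy.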
Alert Management

Security folks earn their keep when bad things happen. So you will want all your tools to accelerate and facilitate triage, investigation, root cause analysis, and the process by which you respond to alerts. On a day-to-day basis admins will spend most of their time working through the various alerts generated by the platform. So alert management/workflow is the most heavily used part of the endpoint security management platform. When assessing the alert management capabilities of any product or service, first evaluate them in terms of supporting


[New White Paper] Understanding and Selecting Data Masking Solutions

Today we are launching a new research paper on Understanding and Selecting Data Masking Solutions. As we spoke with vendors, customers, and data security professionals over the last 18 months, we felt big changes occurring with masking products. We received many new customer inquiries regarding masking, often for use cases outside classic test data creation. We wanted to discuss these changes and share what we see with the community. Our goal has been to ensure the research addresses common questions from both technical and non-technical audiences. We did our best to cover the business applications of masking in a non-technical, jargon-free way. Not everyone who is interested in data security has a degree in data management or security, so we geared the first third of the paper to problems you can reasonably expect to solve with masking technologies. Those of you interested in the nuts and bolts need not fear – we drill into the myriad technical variables later in the paper. The following excerpt offers an overview of what the paper covers: Data masking technology provides data security by replacing sensitive information with a non-sensitive proxy, but doing so in such a way that the copy looks – and acts – like the original. This means non-sensitive data can be used in business processes without changing the supporting applications or data storage facilities. You remove the risk without breaking the business! In the most common use case, masking limits the propagation of sensitive data within IT systems by distributing surrogate data sets for testing and analysis. In other cases, masking will dynamically provide masked content if a user’s request for sensitive information is deemed ‘risky’. We are particularly proud of this paper – it is the result of a lot of research, and it took a great deal of time to refine the data.
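The “looks – and acts – like the original” property can be illustrated with a deliberately trivial substitution routine that preserves length and formatting while replacing the sensitive digits. Real masking products go much further (referential integrity, valid checksums, statistical properties), so treat this only as a sketch; the function name is our own.

```python
import random

def mask_digits(value, seed=None):
    """Replace every digit with a random digit, preserving length,
    punctuation, and overall format. Pass a seed for repeatable output
    across masked data sets (a crude stand-in for consistent masking)."""
    rng = random.Random(seed)
    return "".join(str(rng.randint(0, 9)) if ch.isdigit() else ch
                   for ch in value)
```

For example, masking a card-number-shaped string keeps the `NNNN-NNNN-NNNN-NNNN` shape, so test applications that validate format continue to work against the surrogate data.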
We are not aware of any other research paper that fully captures the breadth of technology options available, or anything else that discusses evolving uses for the technology. With the rapid expansion of the data masking market, many people are looking for a handle on what’s possible with masking, and that convinced us to do a deep research paper. We quickly discovered a couple of issues when we started the research. Masking is such a generic term that most people think they have a handle on how it works, but it turns out they are typically aware of only a small sliver of the available options. Additionally, the use cases for masking have grown far beyond creating test data, evolving into a general data protection and management framework. As the masking techniques and deployment options evolve, we see a change in the vocabulary used to describe the variations. We hope this research will enhance your understanding of masking systems. Finally, we would like to thank the companies who chose to sponsor this research: IBM and Informatica. Without sponsors like these who contribute to the work we do, we could not offer this quality research free of charge to the community. Please visit their sites to download the paper, or you can find a copy in our research library: Understanding and Selecting Data Masking Solutions.


Pragmatic WAF Management: Application Lifecycle Integration

As we have mentioned throughout this series, the purpose of a WAF is to protect web-facing applications from attacks. We can debate build-security-in versus bolt-security-on ad infinitum, but ultimately the answer is both. In the last post we discussed how to build and maintain WAF policies to protect applications, but you also need to adapt your development process to incorporate knowledge of typical attack tactics into code development practices to address application vulnerabilities over time. This involves a two-way discussion between WAF administrators and developers. Developers do their part by helping security folks understand applications, what input values should look like, and what changes are expected in upcoming releases. This ensures the WAF rules remain in sync with the application. Granted, not every WAF user will want to integrate their application development and WAF management processes. But separation limits the effectiveness of the WAF and puts the application at risk. At a minimum, developers (or the DevOps group) should have ongoing communications with WAF managers to avoid having the WAF complicate application deployment and to keep the WAF from interfering with normal application functions. This collaboration is critical, so let’s dig into how this lifecycle should work. Web applications change constantly, especially given increasingly ‘agile’ development teams – some pushing web application changes multiple times per week. In fact many web application development teams don’t even attempt to follow formal release cycles – effectively running an “eternal beta cycle”. The team’s focus and incentives are on introducing new features as quickly as possible to increase customer engagement. But this doesn’t help secure applications, and it poses a serious challenge to efforts to keep WAF policies current and effective. The greater the rate of application change, the harder it is to maintain WAF policies.
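To make that maintenance burden concrete, here is a toy sketch of a positive security (whitelist) policy: the WAF allows only requests matching a learned profile of paths, parameters, and value ranges. The endpoint and parameter names here are invented for illustration; note that any application change that adds a parameter or widens a range immediately invalidates part of such a profile, which is exactly why rapid release cycles and whitelist policies fight each other.

```python
# Learned profile for one endpoint: allowed parameters, expected types,
# and value ranges. The path and parameter names are hypothetical.
PROFILE = {
    "/checkout": {
        "qty":    {"type": int, "min": 1, "max": 99},
        "coupon": {"type": str, "max_len": 12},
    }
}

def validate_request(path, params):
    """Return a list of violations; an empty list means the request
    matches the learned profile and would be allowed."""
    allowed = PROFILE.get(path)
    if allowed is None:
        return ["unknown path: " + path]
    violations = []
    for name, value in params.items():
        rule = allowed.get(name)
        if rule is None:
            violations.append("unexpected parameter: " + name)
        elif rule["type"] is int:
            if not isinstance(value, int) or not (rule["min"] <= value <= rule["max"]):
                violations.append("out-of-range value for " + name)
        elif rule["type"] is str:
            if not isinstance(value, str) or len(value) > rule["max_len"]:
                violations.append("invalid value for " + name)
    return violations
```

If the next release adds a `gift_wrap` parameter to `/checkout` and nobody updates the profile, every legitimate request using it gets flagged – the “learning gap” described below in miniature.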
This simple relationship seriously complicates one of the major WAF selling points: its ability to implement positive security policies based on acceptable application behavior. As we explained in the last post, whitelist policies enumerate acceptable commands and their associated parameters. The WAF learns about the web applications it protects by monitoring user activity and/or by crawling application pages, determining which pages need protection, which serve static and/or dynamic content, the data types and value ranges for page variables, and other aspects of user sessions. If the application undergoes constant change, the WAF will always be behind, introducing a risky gap between learning and protecting. New application behavior isn’t reflected in WAF policies, which means some new legitimate requests will be blocked, and some illegal requests will be ignored. That makes the WAF much less useful. To mitigate these issues we have identified a set of critical success factors for integrating with the SDLC (software development lifecycle). When we originally outlined this process, we mentioned the friction between developers and security teams and how it adversely affects their working relationship. Our goal here is to help set you on the right path and prevent the various groups from feuding like the Hatfields and McCoys – trust us, it happens all too often. Here’s what we recommend:

Executive Sponsorship: If, as an organization, you can’t get developers and operations in the room with security, the WAF administrators are stuck on a deserted island. It’s up to them to figure out what each application is supposed to do and how it should be protected, and they cannot keep up without sufficient insight or visibility. Similarly, if the development team can say ‘no’ to the security team’s requests, they usually will – they are paid to ship new features, not to provide security.
So either security is important, or it isn’t. To move past a compliance-only WAF, security folks need someone up the food chain – the CISO, the CIO, or even the CEO – to agree that the velocity of feature evolution must give some ground to address operational security. Once management has made that commitment, developers can justify improving security as part of their job. It is also possible – and in some organizational cultures advisable – to include some security requirements in the application specification. This helps guarantee code does not ship until it meets minimum security requirements – either in the app or in the WAF.

Establish Expectations: Here all parties learn what’s required and expected to get their jobs done with minimum fuss and maximum security. We suggest you arrange a sit-down with all stakeholders (operations, development, and security) to establish some guidelines on what really needs to happen, and what would be nice to have. Most developers want to know about broken links and critical bugs in the code, but they get surly when you send them thousands of changes via email, and downright pissed when all the requests relate to the same non-critical issue. It’s essential to get agreement on what constitutes a critical issue and how critical issues will be addressed among the pile of competing critical requirements. Set guidelines in advance so there are no arguments when issues arise. Similarly, security people hate it when a new application enters production on a site they didn’t know existed, or significant changes to the network or application infrastructure break the WAF configuration. Technically speaking, each party removes work the other does not want to do, or does not have time to do, so position these discussions as mutually beneficial. A true win-win – or at least a reduction in aggravation and wasted time.

Security/Developer Integration Points: The integration points define how the parties share data and solve problems together.
Establish rules of engagement for how DevOps works with the WAF team, when they meet, and what automated tools will be used to facilitate communication. You might choose to invite security to development scrums, or a member of the development team could attend security meetings. You need to agree upon a communication medium that’s easy to use, establish a method for getting urgent requests addressed, and define a means of escalation for when they are not. Logical and documented notification processes need to be integrated in the application development lifecycle to ensure


Endpoint Security Management Buyer’s Guide: Ongoing Controls—File Integrity Monitoring

After hitting on the first of the ongoing controls, device control, we now turn to File Integrity Monitoring (FIM). Also called change monitoring, FIM entails watching files to detect if and when they change. This capability is important for endpoint security management. Here are a few scenarios where FIM is particularly useful:

Malware detection: Malware does many bad things to your devices. It can load software, and change configurations and registry settings. But another common technique is to change system files. For instance, a compromised IP stack could be installed to direct all your traffic to a server in Eastern Europe, and you might be none the wiser.

Unauthorized changes: These may not be malicious but can still cause serious problems. They can be caused by many things, including operational failure and bad patches, but ill intent is not necessary for exposure.

PCI compliance: Requirement 11.5 in our favorite prescriptive regulatory mandate, the PCI-DSS, requires file integrity monitoring to alert personnel to unauthorized modification of critical system files, configuration files, or content files.

So there you have it – you can justify the expenditure with the compliance hammer, but remember that security is about more than checking the compliance box, so we will focus on getting value from the investment as well.

FIM Process

Again we start with a process that can be used to implement file integrity monitoring. Technology controls for endpoint security management don’t work well without appropriate supporting processes.

Set policy: Start by defining your policy, identifying which files on which devices need to be monitored. There are tens of millions of files in your environment, so you need to be pretty savvy to limit monitoring to the most sensitive files on the most sensitive devices.

Baseline files: Then ensure the files you assess are in a known good state.
This may involve evaluating version, creation and modification date, or any other file attribute to provide assurance that the file is legitimate. If you declare something malicious to be normal and allowed, things go downhill quickly. The good news is that FIM vendors have databases of these attributes for billions of known good and bad files, and that intelligence is a key part of their products.

Monitor: Next you actually monitor use of the files. This is easier said than done, because you may see hundreds of file changes on a normal day. So telling a good change from a bad change is essential. You need a way to minimize false positives from flagging legitimate changes, to avoid wasting everyone’s time.

Alert: When an unauthorized change is detected, you need to let someone know.

Report: FIM is required for PCI compliance, and you will likely use that budget to buy it. So you need to be able to substantiate effective use for your assessor. That means generating reports. Good times.

Technology Considerations

Now that you have the process in place, you need some technology to implement FIM. Here are some things to think about when looking at these tools:

Device and application support: Obviously the first order of business is to make sure the vendor supports the devices and applications you need to protect. We will talk about this more under research and intelligence, below.

Policy granularity: You will want to make sure your product can support different policies by device. For example, a POS device in a store (within PCI scope) needs to have certain files under control, while an information kiosk on a segmented Internet-only network in your lobby may not need the same level of oversight. You will also want to be able to set up those policies based on groups of users and device types (locking down Windows XP tighter, for example, as it lacks the newer protections in Windows 7).
Small footprint agent: In order to implement FIM you will need an agent on each protected device. Of course there are different definitions of what an ‘agent’ is, and whether one needs to be persistent or can be downloaded as needed to check the file system and then removed – a “dissolvable agent”. You will need sufficient platform support, as well as some kind of tamper-proofing of the agent. You don’t want an attacker to turn off or otherwise compromise the agent’s ability to monitor files – or even worse, to return tampered results.

Frequency of monitoring: Related to the persistent vs. dissolvable agent question, you need to determine whether you require continuous monitoring of files or batch assessment is acceptable. Before you respond “Duh! Of course we want to monitor files at all times!” remember that to take full advantage of continuous monitoring, you must be able to respond immediately to every alert. Do you have 24/7 ops staff ready to pounce on every change notification? No? Then perhaps a batch process could work.

Research & intelligence: A large part of successful FIM is knowing a good change from a potentially bad change. That requires some kind of research and intelligence capability to do the legwork. The last thing you want your expensive and resource-constrained operations folks doing is assembling monthly lists of file changes for a patch cycle. Your vendor needs to do that. But it’s a bit more complicated, so here are some other notes on detecting bad file changes.

Change detection algorithm: Is a change detected based on file hash, version, creation date, modification date, or privileges? Or all of the above? Understanding how the vendor determines a file has changed enables you to ensure all your threat models are factored in.

Version control: Remember that even a legitimate file may not be the right one. Let’s say you are updating a system file, but an older legitimate version is installed. Is that a big deal?
If the file is vulnerable to an attack it could be, so ensuring that versions are managed by integrating with patch information is also a must. Risk assessment: It’s also helpful if the vendor can assess different kinds of changes
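The change detection ideas discussed above – comparing hashes and file attributes against a known-good baseline – can be sketched with a simple baseline-and-compare loop. This is a minimal illustration, not a product-grade agent: real FIM tools also compare version, size, timestamps, and privileges, draw on vendor intelligence about known-good files, and protect the agent itself from tampering.

```python
import hashlib
import os

def baseline(paths):
    """Record a known-good fingerprint (SHA-256 hash plus size and
    modification time) for each monitored file."""
    state = {}
    for path in paths:
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        st = os.stat(path)
        state[path] = {"sha256": digest, "size": st.st_size, "mtime": st.st_mtime}
    return state

def detect_changes(state):
    """Compare current file content hashes against the baseline and
    report missing or modified files. A real tool would also check the
    other recorded attributes and rate the severity of each change."""
    alerts = []
    for path, known in state.items():
        if not os.path.exists(path):
            alerts.append((path, "missing"))
            continue
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        if digest != known["sha256"]:
            alerts.append((path, "content changed"))
    return alerts
```

Run `baseline()` once against known-good files, then run `detect_changes()` on whatever schedule your monitoring frequency decision (continuous vs. batch) dictates.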


Friday Summary: August 17, 2012

Rich here… Some weeks I can’t decide if I should write something personal, professional, or technical in the Summary intro. Especially when I’m absolutely slammed and haven’t been blogging. This week I’ll err on the side of personal, and I’m sure you all will give me a little feedback if you prefer the geeky. Last summer and fall I had a bit of a health scare, when I thought I was, well, dying, at breakfast with Mr. Rothman. Now I know many people think they’ve felt that way while dining with Mike, but I seriously thought I was going down for the count. Many months and medical tests later, they think it was just something with my stomach, and beyond a more-than-fair share of indigestion, I have moved on with life. I have always tried to be a relatively self-aware individual (well, not counting most of my life before age 27 or so). Some people blow off scares like that, but I figured it was a good way to review my life priorities. I came up with a simple list:

1. Family/Health
2. Fitness
3. Work
…
4. Everything else

Yes, in that order. I can’t be there for my family if I’m not healthy, and my wife and kids matter a lot more to me than anything else. I don’t begrudge people who prioritize differently – I know plenty of people who put work/career in front of family (although few admit it to themselves). That’s their decision to make, and some of them see financial health the way I look at physical health. That’s also why I put fitness ahead of work. For me, it is intrinsically tied to health, but the difference is that I will skip a workout if necessary to be there for my family. But aside from the health benefits (not counting the injuries), I want to be very active and very old someday, which I can’t do without working out. A lot. And it keeps me sane. Then comes work. As the CEO of a startup facing its most important year ever, work has to come before everything else.
This Securosis thing isn’t just a paycheck – it is long-term financial security for my family. I can’t afford to screw it up. The upside of The List is that it makes decisions simple. Can I carve out 2 hours for a workout during business hours so I can be home with my family that night? Yes. Do I skip a Saturday morning ride because I have been traveling a lot and need to spend time with the kids? Yes. Can I travel for that conference during a birthday? No. The downside of my list is that “everything else” is a distant fourth to the top three items. That means less contact with friends; dropping most of my hobbies that don’t involve pools, bikes, or running shoes; and not having the diversity in my life that I enjoyed before my family. Other than the part about friends, I am okay with those sacrifices. When the kids are older I can start woodworking, recreational hacking, and playing with the soldering iron again. I don’t have much of a social life and I work at home, which is pretty isolating, and maybe I will figure that out as the kids get older. It also means I dropped nearly all my emergency services work, and for the first time since I was 18 I might drop it completely for a few years. And, in case you were wondering, The List is the reason I haven’t been writing as much. We are completing some seriously important long-term projects for the company and those need to take priority over the short stuff. But I like algorithms. Keeps things simple, especially when you need to make the hard choices. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Quiet. Must be summer.

Favorite Securosis Posts

Adrian Lane: Pragmatic WAF Management: Policy Management. I don’t normally pick my own posts but I like this one. And there was so much ground to cover I am afraid I might have left something out, so I would appreciate feedback.
Mike Rothman: Pragmatic WAF Management: Policy Management. WAFs still ain’t easy, nor are they going to be. But understanding how to build and manage your policies is the first step. Adrian lays out what you need to know.
Rich: Always Assume. This is an older post, but it came up in an internal discussion today and I think it is still very relevant.

Other Securosis Posts

Pragmatic WAF Management: Application Lifecycle Integration.
Incite 8/15/2012: Fear (of the Unknown).
Endpoint Security Management Buyer’s Guide: Ongoing Controls – Device Control.
Endpoint Security Management Buyer’s Guide: Ongoing Controls – File Integrity Monitoring.

Favorite Outside Posts

Mike Rothman: Triple DDoS vs. krebsonsecurity. You know you have friends in high places when a one-man operation is blasted by all sorts of bad folks. Krebs describes his battle against DDoS in this post and it’s illuminating. We will do some research on DDoS in the fall – we think this is an attack tactic you will need to pay much more attention to.
Adrian Lane: Software Runs the World. I’m not certain we can say software is totally to blame for the Knight Capital issue, but this is a thought-provoking piece in the mainstream media. Although I am certain those of you who have read Daemon are unimpressed.
Rich: Gunnar’s interview with Jason Chan of Netflix security. A gold mine for cloud security nuggets.

Project Quant Posts

Malware Analysis Quant: Index of Posts.

Research Reports and Presentations

Understanding and Selecting Data Masking Solutions.
Evolving Endpoint Malware Detection: Dealing with Advanced and Targeted Attacks.
Implementing and Managing a Data Loss Prevention Solution.
Defending Data on iOS.
Malware Analysis Quant Report.
Report: Understanding and Selecting a Database Security Platform.
Vulnerability Management Evolution: From Tactical Scanner to Strategic Platform.
Watching the Watchers: Guarding the Keys to the


Incite 8/15/2012: Fear (of the Unknown)

FDR was right. We have nothing to fear but fear itself. Of course, that doesn’t help much when you face the unknown and are scared. XX1 started middle school on Monday, so as you can imagine she was a bit anxious on Sunday night. The good news is that she made it through the first day. She even had a good attitude when her bus was over an hour late because of some issue at the high school. She could have walked the 3 miles home in a lot less time. But when she was similarly anxious on Monday night, even after a successful first day, it was time for a little chat with Dad. She asked if I was scared when I started middle school. Uh, I can’t remember what I had for breakfast yesterday, so my odds of remembering an emotion from 30+ years ago are pretty small. But I did admit to having a little anxiety before a high-profile speech or a meeting with folks who really know what they’re talking about. It’s basically fear of the unknown. You don’t know what’s going to happen, and that can be scary. I found this quote when looking at Flickr for the image above, and I liked it: Fear is a question: What are you afraid of, and why? Just as the seed of health is in illness, because illness contains information, your fears are a treasure house of self-knowledge if you explore them. ~ Marilyn Ferguson Of course that’s a bit deep for an 11 (almost 12) year old, so I had to take a different approach. We chatted about a couple of strategies to deal with the anxiety, not let it make her sick, and allow her to function – maybe even function a bit better with that anxiety-fueled edge. First we played the “What’s the worst that could happen?” game. So she gets lost. Or forgets the combination to her locker. Or isn’t friends with every single person in all her classes. We went through a couple of things, and I kept asking, “What’s the worst thing that could happen?” That seemed to help, as she realized that whatever happens, it will be okay.
Then we moved on to “You’re not alone.” Remember, when you’re young and experiencing new feelings, you think you might be the only person in the world who feels that way. Turns out most of her friends were similarly anxious. Even the boys. Then we discussed the fact that whatever she’s dealing with will be over before she knows it. I reiterated that I get nervous sometimes before a big meeting. But then I remember that before I know it, it’ll be over. And sure enough it is. We have heard from pretty much everyone that it takes kids about two weeks to adjust to the new reality of middle school. To get comfortable dealing with 7 teachers, in 7 different classrooms, with 7 different teaching styles. It takes a while to not be freaked out in the Phys Ed locker room, where they have to put on their gym uniforms. It may be a few weeks before she makes some new friends and finds a few folks she’s comfortable with. She knows about a third of the school from elementary school. But that’s still a lot of new people to meet. Of course she’ll adjust and she’ll thrive. I have all the confidence in the world. That doesn’t make seeing her anxious any easier, just like it was difficult back when I helped her deal with mean people and setbacks and other challenges in elementary school. But those obstacles make us the people we become, and she will eventually look back at her anxiety and laugh. Most likely within a month. And it will be great. At least for a few years until she goes off to high school. Then I’m sure the anxiety engine will kick into full gear again. Wash, rinse, repeat. That’s life, folks. –Mike Photo credits: The Question of Fear originally uploaded by elycefeliz Heavy Research We’re back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too. 
Endpoint Security Management Buyer’s Guide Ongoing Controls – Device Control Periodic Controls The ESM Lifecycle Pragmatic WAF Management Policy Management The WAF Management Process New Series: Pragmatic WAF Management Incite 4 U Even foes can be friends: You have to think the New School guys really enjoy reading reports of increased collaboration, even among serious competitors. Obviously some of the more mature (relative to security) industries have been using ISAC (Information Sharing and Analysis Center) groups for a long time. But now the folks at GA Tech are looking to build a better collaboration environment. Maybe they can help close the gap between the threat intelligence haves (with their own security research capabilities) and the have-nots (everyone else). With the Titan environment, getting access to specific malware samples should be easier. Of course folks still need to know what to do with that information, which remains a non-trivial issue. But as with OpenIOC, sharing more information about what malware does is a great thing. Just like in the playground sandbox: when we don’t share, we all lose. I guess we did learn almost everything we need to know back in kindergarten. – MR Operational effectiveness: I have done dozens of panels, seminars, and security management round tables over the past 12 years, and the question of how security folks should present their value to peers and upper management has been a topic at most of them. I am always hot to participate in these panels because I fell into the trap of positioning security as a roadblock early in my career. I was “Dr. No” long before Mordac: Preventer of Information Services was conceived. I


Endpoint Security Management Buyer’s Guide: Ongoing Controls—Device Control

As we discussed in the Endpoint Security Management Lifecycle, there are controls you run periodically and others you need to use on an ongoing basis. We tackled the periodic controls in the previous post, so now let’s turn to ongoing controls, which include device control and file integrity monitoring. The periodic controls post was pretty long, so we decided to break ongoing controls into two pieces. We will tackle device control in this post. Device Control Device control technology provides the ability to enforce policy on what you can and can’t do with devices. That includes locking down ports to prevent copying data (primarily via removable media), as well as protecting against hardware keyloggers and ensuring any data allowed onto removable media is encrypted. Early in this technology’s adoption cycle we joked that the alternative to device control involves supergluing the USB ports shut. Which isn’t altogether wrong. Obviously superglue doesn’t provide sufficient granularity in the face of employees’ increasing need to collaborate and share data using removable media, but it would at least prevent many breaches. So let’s get a bit more specific with device control use cases: Data leakage: You want to prevent users from connecting their phones or USB sticks and grabbing your customer database. You would also like to allow them to connect USB sticks, but not copy email or databases, or perhaps limit them to copying 200MB per day. Don’t let your intellectual property escape on removable media. Encryption: Obviously there are real business needs for USB ports, or else we would all have stopped at superglue. If you need to support moving data to removable media, make sure it’s encrypted. If you think losing a phone is easy, USB sticks are even easier – and if one has unencrypted and unprotected sensitive data, you will get a chance to dust off your customer notification process. Malware proliferation: The final use case to mention gets back to the future. 
Remember how the first computer viruses spread via floppy disks? Back in the day sneakernet was a big problem, and this generation’s sneakernet is the found USB stick that happens to carry malware. You will want to protect against that attack without resorting to superglue. Device Control Process As we have mentioned throughout this series, implementing technology controls for endpoint security management without the proper underlying processes never works well, so let’s quickly offer a reasonable device control process: Define target devices: Which devices pose a risk to your environment? It’s probably not all of them, so start by figuring out which devices need to be protected. Build threat models: Next put on your attacker hat and figure out how those devices are likely to be attacked. Are you worried about data leakage? Malware? Build models to represent how you would attack your environment. Then take the threat models to the next level. Maybe the marketing folks should be able to share big files via their devices, but folks in engineering (with access to source code) shouldn’t. You can get pretty granular with your policies, so you can do the same with threat models. Define policies: With the threat models you can define policies. Any technology you select should be able to support the policies you need. Discovery: Yes, you will need to keep an eye on your environment, checking for new devices and managing the devices you already know about. There is no reason to reinvent the wheel, so you are likely to rely on an existing asset repository (within the endpoint security management platform, or perhaps a CMDB). Enforcement: Now we get to the operational part of endpoint security management: deploying agents and enforcing policies on devices. Reporting: We security folks like to think we implement these controls to protect our environments, but don’t forget that at least a portion of our tools are funded by compliance. 
So we need some reports to demonstrate that we’re protecting data and are compliant. Technology Considerations Now that you have the process in place, you need some technology to implement the controls. Here are some things to think about when looking at these tools: Device support: Obviously the first order of business is to make sure the vendor supports the devices you need to protect. That means ensuring operating system support, as well as the media types (removable storage, DVD/CDs, tape drives, printers, etc.) you want to define policies for. Additionally make sure the product supports all ports on your devices, including USB, FireWire, serial, parallel, and Bluetooth. Some offerings can also implement policies on data sent via the network driver, though that begins to blur into endpoint DLP, which we will discuss later. Policy granularity: You will want to make sure your product can support different policies by device. For example, this allows you to set a policy to let an employee download any data to an IronKey but only non-critical data onto an iPhone. You will also want to be able to set up different policies for different classes of users and groups, as well as by type of data (email vs. spreadsheets vs. databases). You may want to limit the amount of data that can be copied by some users. This list isn’t exhaustive, but make sure your product can support the policies you need. Encryption algorithm support: If you are going to encrypt data on removable media, make sure your product supports your preferred encryption algorithms and/or hooks to your central key management environment. You may also be interested in certifications such as EAL (Common Criteria), FIPS 140-2, etc. Small footprint agent: To implement device control you will need to implement an agent on each protected device. You’ll need sufficient platform support (discussed above), as well as some kind of tamper resistance for the agent. 
You don’t want an attacker to turn off or compromise the agent’s ability to enforce policies. Hardware keylogger protection: It’s old school, but from time to time we still see hardware keyloggers which plug into a device port.
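To make the granularity discussion concrete, a device control policy set boils down to a rules table keyed by user group and device type, with copy limits and encryption requirements attached. Here is a minimal sketch; the group names, device types, and the `evaluate` helper are all hypothetical illustrations, not any vendor’s actual policy language:

```python
from dataclasses import dataclass

# Hypothetical policy model, for illustration only.
@dataclass
class DevicePolicy:
    user_group: str           # e.g. "marketing", "engineering"
    device_type: str          # e.g. "usb_storage", "smartphone"
    allow_copy: bool          # can this group copy data at all?
    require_encryption: bool  # must the target media be encrypted?
    daily_copy_limit_mb: int  # per-day cap on copied data

POLICIES = [
    # Marketing can share files on encrypted USB sticks, up to 200MB/day.
    DevicePolicy("marketing", "usb_storage", True, True, 200),
    # Engineering (source code access) cannot copy to removable media at all.
    DevicePolicy("engineering", "usb_storage", False, True, 0),
]

def evaluate(user_group, device_type, copied_today_mb, media_encrypted):
    """Return True if a copy operation should be allowed under POLICIES."""
    for p in POLICIES:
        if p.user_group == user_group and p.device_type == device_type:
            if not p.allow_copy:
                return False
            if p.require_encryption and not media_encrypted:
                return False
            return copied_today_mb < p.daily_copy_limit_mb
    # Default deny: unknown group/device combinations are blocked.
    return False
```

Note the default-deny fallthrough: a device or group with no matching policy is blocked, which mirrors how the threat models above drive the policy definitions rather than the other way around.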


Pragmatic WAF Management: Policy Management

To get value out of your WAF investment – which means blocking threats, keeping unwanted requests and malware from hitting applications, and virtually patching known vulnerabilities in the application stack – the WAF must be tuned regularly. As we mentioned in our introduction, WAF is not a “set and forget” tool – it’s a security platform which requires adjustment for new and evolving threats. To flesh out the process presented in the WAF Management Process, let’s dig into policy management – specifically how to tune policies to defend your site. But first it’s worth discussing the different types of policies at your disposal. Policies fall into two categories: blacklists of stuff you don’t want – attacks you know about – and whitelists of activities that are permissible for specific applications. These negative and positive security models complement one another to fully protect applications. Negative Security Negative security models should be familiar – at least from Intrusion Prevention Systems. The model works by detecting patterns of known malicious behavior. Things like site scraping, injection attacks, XML attacks, suspected botnets, Tor nodes, and even blog spam, are universal application attacks that affect all sites. Most of these policies come “out of the box” from vendors, who research and develop signatures for their customers. Each signature explicitly describes an attack, and they are typically used to identify attacks such as SQL injection and buffer overflows. The downside of this method is its fragility – any variation of the attack will no longer match the signature, and will thus bypass the WAF. So signatures are only suitable when you can reliably and deterministically describe an attack, and don’t expect the signature to immediately become invalid. So vendors provide a myriad of other detection options, such as heuristics, reputation scoring, detection of evasion techniques, and several proprietary methods used to qualitatively detect attacks. 
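To make the fragility point concrete, here is a toy blacklist check in Python. The patterns are deliberately simplistic and purely illustrative; real WAF signatures involve request parsing, normalization, and far more robust matching than bare regular expressions:

```python
import re

# Toy blacklist signatures, illustrative only. Production WAF rules
# combine parsing, normalization, heuristics, and reputation scoring.
SIGNATURES = {
    "sql_injection": re.compile(
        r"(?i)\b(union\s+select|or\s+1\s*=\s*1|drop\s+table)\b"),
    "path_traversal": re.compile(r"\.\./"),
    "xss": re.compile(r"(?i)<script\b"),
}

def match_signatures(value: str) -> list:
    """Return the names of blacklist signatures the input matches."""
    return [name for name, pattern in SIGNATURES.items()
            if pattern.search(value)]
```

The fragility is easy to demonstrate: `' OR 1=1 --` trips the SQL injection pattern, but the functionally identical `' OR 2=2 --` sails straight past it, which is exactly why vendors layer heuristics and other detection methods on top of signatures.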
Each method has its own strengths and weaknesses, and use cases for which it is more or less well suited. They can be combined with each other to provide a risk score for incoming requests, in order to block requests that look too suspicious. But the devil is in the details: there are literally thousands of attack variations, and figuring out how to apply policies to detect and stop attacks is quite difficult. Finally, fraud detection, business logic attack detection, and data leakage policies need to be adapted to the specific use models of your web applications to be effective. The attacks are designed to find flaws in the way application developers code, targeting gaps in the ways they enforce process and transaction state. Examples include issuing order and cancellation requests in rapid succession to confuse the web server or database into revealing or altering shopping cart information, replaying attacks, and changing the order of events. You generally need to develop your own fraud detection policies. They are constructed with the same analytic techniques, but rather than focusing on the structure and use of HTTP and XML grammars, they examine user behavior as it relates to the type of transaction being performed. These policies require an understanding of how your web application works, as well as appropriate detection techniques. Positive Security The other side of this coin is the positive security model: ‘whitelisting’. Yes, this is the metaphor implemented in firewalls. First catalog legitimate application traffic, ensure you do not include any attacks in your ‘clean’ baseline, and set up policies to block anything not on the list of valid behaviors. The good news is that this approach is very effective at catching malicious requests you have never seen before (0-day attacks) without having to explicitly code signatures for everything. 
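A minimal sketch of the whitelist model, assuming a hypothetical catalog of two application endpoints; everything not in the baseline is rejected by default:

```python
# Hypothetical positive-security baseline: (method, path) -> allowed
# parameter names. The endpoints are made up for illustration; a real
# WAF learns or imports this catalog from observed legitimate traffic.
WHITELIST = {
    ("GET", "/products"): {"page", "sort"},
    ("POST", "/cart"): {"item_id", "quantity"},
}

def is_allowed(method: str, path: str, params: set) -> bool:
    """Default deny: permit a request only if its endpoint is cataloged
    and it carries no parameters beyond the recorded baseline."""
    allowed = WHITELIST.get((method, path))
    if allowed is None:
        return False           # unknown endpoint: block
    return params <= allowed   # any unexpected parameter: block
```

A never-before-seen attack probing `/products` with an unexpected `debug` parameter fails the check without any signature, but so does a legitimate `filter` parameter added by the developers, until the baseline is updated to match the new release.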
This is also an excellent way to pare down the universe of all threats into a smaller, more manageable subset of specific threats to account for with a blacklist – basically ways authorized actions such as GET and POST can be gamed. The bad news is that applications are dynamic and change regularly, so unless you update your whitelist with each application update, the WAF will effectively disable new application features. Regardless, you will use both approaches in tandem – without both approaches workload goes up and security suffers. People Manage Policies There is another requirement that must be addressed before adjusting policies: assigning someone to manage them. In-house construction of new WAF signatures, especially at small and medium businesses, is not common. Most organizations depend on the WAF vendor to do the research and update policies accordingly. It’s a bit like anti-virus: companies could theoretically write their own AV signatures, but they don’t. They don’t monitor CERT advisories or other sources for issues to protect applications against. They rarely have the in-house expertise to write these policies even if they wanted to. And if you want your WAF to perform better than AV, which generally addresses about 30% of viruses encountered, you need to adjust your policies to your environment. So you need someone who can understand the rule ‘grammars’ and how web protocols work. That person must also understand what type of information should not leave the company, what constitutes bad behavior, and the risks your web applications pose to the business. Having someone skilled enough to write and manage WAF policies is a prerequisite for success. It could be an employee or a third party, or you might even pay the vendor to assist, but you need a skilled resource to manage WAF policies on an ongoing basis. 
There is really no shortcut here – either you have someone knowledgeable and dedicated to this task, or you depend on the canned policies that come with the WAF, and they just aren’t good enough. So the critical success factor in managing policies is to find at least one person who can manage the WAF, get them training if need be, and give them enough time to keep the policies up to date. What does this person need to do? Let’s break it down: Baseline Application Traffic The first step


Friday Summary: August 10, 2012

This Summary is a short rant on how most firms appear baffled about how to handle mobile and cloud computing. Companies tend to view the cloud and mobile computing as wonderful new advancements, but unfortunately without thinking critically about how customers want to use these technologies – instead they tend to project their own desires onto the technology. Just as I imagine early automobiles were saddled with legacy holdovers from horse-drawn carriages, when they were in fact something new. We are in that rough transition period, where people are still adjusting to these new technologies, and thinking of them in old and outmoded terms. My current beef is with web sites that block users who appear to be coming from cloud services. Right – why on earth would legitimate users come from a cloud? At least that appears to be their train of thought. How many of you ran into a problem buying stuff with PayPal when connected through a cloud provider like Amazon, Rackspace, or Azure? PayPal was blocking all traffic from cloud provider IP addresses. Many sites simply block all traffic from cloud service providers. I assume it’s because they think no legitimate user would do this – only hackers. But some apps route traffic through their cloud services, and some users leverage Amazon as a web proxy for security and privacy. Chris Hoff predicted in The Frogs Who Desired a King that attackers could leverage cloud computing and stolen credit cards to turn “The mechanical Turk into a maniacal jerk”, but there are far more legitimate users doing this than malicious ones. Forbidding legitimate mobile apps and users from leveraging cloud proxies is essentially saying, “You are strange and scary, so go away.” Have you noticed how many web sites, if they discover you are using a mobile device, screw up their web pages? And I mean totally hose things up. 
The San Jose Mercury News is one example – after a 30-second promotional iPad “BANG – Get our iPad app NOW” page, you get locked into an infinite ‘django-SJMercury’ refresh loop and you can never get to the actual site. The San Francisco Chronicle is no better – every page transition gives you two full pages of white space sandwiching their “Get our App” banner, and somewhere after that you find the real site. That is if you actually scroll past their pages of white space instead of giving up and just going elsewhere. Clearly they don’t use these platforms to view their own sites. Two media publications that cover Silicon Valley appear incapable of grasping media advances that came out of their own back yard. I won’t even go into how crappy some of their apps are (looking at you, Wall Street Journal) at leveraging the advantages of the new medium – but I do need to ask why major web sites think you can only use an app on a mobile device or a browser from a PC? Finally, I have a modicum of sympathy for Mat Honan after attackers wiped out his data, and I understand my rant in this week’s Incite rubbed some the wrong way. I still think he should have taken more personal responsibility and done less blame-casting. I think Chris Hoff sees eye to eye with me on this, but Chris did a much better job of describing how the real issues in this case were obfuscated by rhetoric and attention seeking. I’d go one step further, to say that cloud and mobile computing demonstrate the futility of passwords. We have reached a point where we need to evolve past this primitive form of authentication for mobile and cloud computing. And the early attempts, a password + a mobile device, are no better. If this incident was not proof enough that passwords need to be dead, wait till Near Field Payments from mobile apps hit – cloned and stolen phones will be the new cash machines for hackers. 
I could go on, but I am betting you will notice if you haven’t already how poorly firms cope with cloud and mobile technologies. Their bumbling does more harm than good. On to the Summary: Webcasts, Podcasts, Outside Writing, and Conferences Adrian quoted in CIO. Adrian in SIEM replacement video available this week. Rich’s Dark Reading post on Black Hat’s Future. Rich quoted on iOS Security. Favorite Securosis Posts Adrian Lane: The TdF edition of the Friday Summary. Just because that would be friggin’ awesome! Mike Rothman: Friday Summary, TdF Edition. It’s not a job, it’s an adventure. As Rich experienced with his Tour de France trip. But the message about never getting complacent and working hard, even when no one is looking, really resonated with me. Rich: Mike slams the media. My bet is this is everyone’s favorite this week. And not only because we barely posted anything else. Other Securosis Posts Endpoint Security Management Buyer’s Guide: Periodic Controls. Incite 8/8/2012: The Other 10 Months. Pragmatic WAF Management: the WAF Management Process. Favorite Outside Posts Adrian Lane: Software Runs the World. I’m not certain we can say software is totally to blame for the Knight Capital issue, but this is a thought-provoking piece in mainstream media. I am certain those of you who have read Daemon are not impressed. Mike Rothman: NinjaTel, the hacker cellphone network. Don’t think you can build your own cellular network? Think again – here’s how the Ninjas did it for Defcon. Do these folks have day jobs? Rich: How the world’s largest spam botnet was brought down. I love these success stories, especially when so many people keep claiming we are failing. Project Quant Posts Malware Analysis Quant: Index of Posts. Malware Analysis Quant: Metrics – Monitor for Reinfection. Malware Analysis Quant: Metrics – Remediate. Malware Analysis Quant: Metrics – Find Infected Devices. Malware Analysis Quant: Metrics – Define Rules and Search Queries. 
Malware Analysis Quant: Metrics – The Malware Profile. Malware Analysis Quant: Metrics – Dynamic Analysis. Research Reports and Presentations Evolving Endpoint Malware


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.