Securosis

Research

Less Innovation Please

It happens every time we have a series of breaches. The ‘innovators’ get press coverage with some brand-new idea for how to stop hackers and catch malicious employees trying to steal data. We are seeing yet another cycle right now, which Rich discussed yesterday in FireStarter: Now What? The sheer idiocy of Wired Magazine’s Paranoia Meter made me laugh out loud. Not that monitoring shouldn’t be done, but monitoring users’ physical traits to identify bad behavior is far more effort and far more error-prone. Looking at posture, mouse movements, and keystrokes to judge state of mind, then using that to predict data theft? Who could believe in that? It baffles me. User behavior in the IT realm does not need to be measured in terms of eye movement, typing speed, or shifting in one’s seat – if it did, we would need to round up all the 3rd graders in the world, because we’d have a serious problem. Worse, the pitch is clearly a marketing attempt to capitalize on WikiLeaks and HBGary – the whole thing reminds me more than a little of South Park’s ‘It’. Behavior analysis of resource usage is quite feasible without spy cameras and shoving probes where they don’t belong. We can collect just about every action a user takes on the network – and, if we choose, from endpoints and applications as well – all of which is simpler, more reliable, and cheaper than adding physical sensors and interpreting their output. It’s completely feasible to analyze actual (electronic) user actions – rather than vague traits with unclear meaning – to identify behavioral patterns that indicate known attacks and misuse. Today we mostly see attribute-based analysis (time, location, document type, etc.), but behavioral profiles can be derived and used as templates for identifying good or bad acts, then used to validate current activity. How well this all works depends more on your requirements and available time than on the capabilities of particular tools.
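To make attribute-based analysis concrete, here is a hedged sketch – not modeled on any specific product – of building a per-user baseline from access attributes (hour of day, document type) and flagging activity that falls outside it:

```python
from collections import defaultdict

def build_profile(events):
    """Derive a per-user baseline from historical (user, hour, doc_type) events."""
    profile = defaultdict(lambda: {"hours": set(), "doc_types": set()})
    for user, hour, doc_type in events:
        profile[user]["hours"].add(hour)
        profile[user]["doc_types"].add(doc_type)
    return profile

def is_anomalous(profile, user, hour, doc_type):
    """Flag activity whose attributes fall outside the user's baseline."""
    base = profile.get(user)
    if base is None:
        return True  # no baseline at all: surface for review
    return hour not in base["hours"] or doc_type not in base["doc_types"]

history = [("alice", 9, "invoice"), ("alice", 10, "invoice"), ("alice", 14, "report")]
profile = build_profile(history)
print(is_anomalous(profile, "alice", 10, "invoice"))     # False: matches baseline
print(is_anomalous(profile, "alice", 3, "source_code"))  # True: off-hours, new type
```

Real tools layer statistics and context on top of this, but the point stands: the inputs are ordinary electronic events, not eye movements.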
What angers me here is the complete lack of discussion of SIEM, File Activity Monitoring, Data Loss Prevention, or Database Activity Monitoring – four technologies that exist today and don’t rely on bizarre data collection techniques or pseudoscience to predict crime. Four techniques with flexible analysis capabilities built on tangible metrics. Four techniques that have been proven to detect misuse in different ways. We don’t really need more ‘innovative’ security technologies, as Wired suggests. We need to use what we have. Often we need it to be easier to use, but we already have good capabilities for solving these problems. Many of these tools have been demonstrated to work. The impediments are cost and effort – not lack of capability.


Security Benchmarking, Going Beyond Metrics: Collecting Data Systematically

Once you have figured out what you want to count (security metrics), the next question is how to collect the data. Remember, we look for metrics that are a) consistently and objectively measurable, and b) cheap to gather. That means some things we want to count may not be feasible. So let’s go through each bucket of metrics and list the places we can get that data.

Quantitative Metrics

These metrics are pretty straightforward to collect (under the huge assumption that you are already using some management tool to handle the function). That means some kind of console for things like patching, vulnerabilities, configurations, and change management. Without one, aggregating metrics (and benchmarking against other companies) is likely too advanced and too much effort. Walk before you run, and automate/manage these key functions before you worry about counting.

Incident Management: These metrics tend to be generated as part of the post-mortem/quality assurance step after closing the incident. Any post-mortem should be performed by a team, with the results communicated up the management stack, so you should have consensus/buy-in on metrics like incident cost, time to discover, and time to recover. We are looking for numbers with official units (like any good metric).

Vulnerability, Patch, Configuration, and Change Management: These metrics should be stored by whatever tool you use for the specific function. The respective consoles should provide reports that can be exported (usually in XML or CSV format). Unless you use a metrics/benchmarking system that integrates with your tool, you’ll need to map its output into a format you can normalize, report on, and compare to peers. Make sure each console gets a full view of the entire process, including remediation, and that every change, scan, and patch is logged in the system, so you can track the (mean) time to perform each function.
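As a minimal sketch of turning a console’s CSV export into a metric with official units, here is mean time-to-patch in days. The column names are made up for illustration; map them to whatever your tool actually exports:

```python
import csv
import io
from datetime import datetime

def mean_days_to_patch(csv_text):
    """Mean time-to-patch, in days, from a console's CSV export.
    The column names here are hypothetical; adjust to your tool's format."""
    reader = csv.DictReader(io.StringIO(csv_text))
    deltas = [
        (datetime.strptime(row["applied"], "%Y-%m-%d")
         - datetime.strptime(row["released"], "%Y-%m-%d")).days
        for row in reader
    ]
    return sum(deltas) / len(deltas)

export = """patch_id,released,applied
MS11-021,2011-04-12,2011-04-19
MS11-022,2011-04-12,2011-04-26
"""
print(mean_days_to_patch(export))  # 10.5 (a 7-day and a 14-day delta)
```

The same pattern works for time-to-remediate vulnerabilities or time-to-complete changes – anything with a start and end date in the export.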
Application Security: The metrics for application security tend to be a little more subjective than we’d prefer (like % of critical applications), but ultimately things like security test coverage can be derived from whatever tools are used to implement the application security process. This is especially true for web application security scanning, QA, and other processes that tend to be tool driven – as opposed to more amorphous functions such as threat modeling and code review.

Financial: Hopefully you have a good relationship with your CFO and finance team, because they will have metrics on what you spend. You can gather direct costs such as software and personnel, but indirect costs are more challenging. Depending on the sophistication of your internal cost allocation, you may have very detailed information on how to allocate shared overhead, but more likely you will need to work with the finance team to estimate. Remember that precision is less important than consistency. As long as you estimate the allocations consistently, you can get valid trend data; if you’re comparing to peers you’ll need to be a bit more careful about your definitions.

For the other areas we mentioned, including identity, network security, and endpoint protection, the data will be stored in the respective management consoles. As a rule of thumb, the more mature the product (think endpoint protection and firewalls), the more comprehensive the data. Most vendors have already fielded requests to export data, or have built more sophisticated management reporting/dashboards for large-scale deployments. But that’s not always the case – some consoles make it harder than others to export data to other analysis tools. These management consoles – especially the big IT management stacks – are all about aggregating information from lots of places, not necessarily integrating with other analysis tools.
That means as your metrics/benchmarking efforts mature, a key selection criterion will be the presence of an open interface to get data both in and out.

Qualitative Metrics

As discussed in the last post, qualitative metrics are squishy by definition and cannot meet the definition of a “good” metric. The numbers for awareness metrics should reside somewhere, probably in HR, but it’s not clear they are aggregated. The percentage of incidents due to employee error is clearly subjective; it should be assessed as part of the incident response process and recorded for later collection. We recommend including that judgment in the general incident reporting process. Attitude is much squishier – basically you ask your users what they think of your organization. The best way to do that is an online survey tool. Tons of companies offer online services for this (we use SurveyMonkey, but there are plenty). Odds are your marketing folks already have one you can piggyback on, and they aren’t expensive anyway. You’ll want to survey your employees at least a couple times a year and track the trends. The good news is these services all make it very easy to get the data out.

Systematic Collection

This is the point in the series where we remind you that gathering metrics and benchmarking are not one-time activities. They are an ongoing adventure. So you need to scope out the effort as a repeatable process, and make sure you have the necessary resources and automation to collect this data over time. Collecting metrics on an ad hoc basis defeats their purpose, unless you are just looking for a binary (yes/no) answer. You need to collect data consistently and systematically to get real value from it. Without getting overly specific about data repository designs and the like, you’ll need a central place to store the information.
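A minimal sketch of what such a central store might look like – an in-memory SQLite table here, with an entirely illustrative schema – shows why a structured repository beats scattered spreadsheets: the trend query falls out for free.

```python
import sqlite3

# In-memory stand-in for a central metrics repository; schema is illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE metrics (period TEXT, name TEXT, value REAL, unit TEXT)")
conn.executemany(
    "INSERT INTO metrics VALUES (?, ?, ?, ?)",
    [
        ("2011-Q1", "mean_time_to_patch", 10.5, "days"),
        ("2011-Q2", "mean_time_to_patch", 8.0, "days"),
    ],
)
# The payoff: the same metric queried across periods, ready for trend reporting.
trend = conn.execute(
    "SELECT period, value FROM metrics WHERE name = ? ORDER BY period",
    ("mean_time_to_patch",),
).fetchall()
print(trend)  # [('2011-Q1', 10.5), ('2011-Q2', 8.0)]
```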
That could be as simple as a spreadsheet or database, a more sophisticated business intelligence/analysis tool, or even an online service designed to collect metrics and present data. Obviously, the more specific a tool is to security metrics, the less customization you’ll need to generate the dashboards and reports required to use these metrics as a management tool. Now that you have a system in place for metrics collection, we get to the meat of the series: benchmarking your metrics against a peer group. Over the next couple posts we’ll dig into exactly what that means, including how to


FireStarter: Now What?

I have always believed that security – both physical and digital – is a self-correcting system. No one wants to invest any more into security than they need to. Locks, passwords, firewalls, well-armed ninja – they all take money, time, and effort we’d rather spend getting our jobs done, with our families, or on personal pursuits. Only the security geeks and the paranoid actually enjoy spending on security. So the world only invests the minimum needed to keep things (mostly) humming. Then, when things get really bad, the balance shifts and security moves back up the list. Not forever, not necessarily in the right order, and not usually to the top, but far enough that the system corrects itself enough to get back to business as usual. Or, far more frequently, until people perceive that the system has corrected itself – even if the cancer at the center merely moves or hides. Security never wins or loses – it merely moves up or down relative to an arbitrary line we call ‘acceptable’. Usually just below, and sometimes far below. We never fail as a whole – but sometimes we don’t succeed as well as we should in that moment. Over the past year we have gotten increasing visibility into a rash of breaches and incidents that have actually been going on for at least 5 years. From RSA and Comodo, to Epsilon, Nasdaq, and WikiLeaks. Everyone – from major governments, to trading platforms, to banks, to security companies, to grandma – has made the press. Google, Facebook, NASA, and HBGary Federal. We are besieged from China, Eastern Europe, and Anonymous mid-life men pretending to be teenage girls on 4chan. So we need to ask ourselves: Now what? The essential question we as security professionals need to ask is: is the quantum dot on the wave function of security deviating far enough from acceptable that we can institute the next round of changes? We know we can do more, and security professionals always believe we should do more, but does the world want us to do more? 
Will they let us? Because this is not a decision we ever get to make ourselves. The first big wave in modern IT security hit with LOVELETTER, Code Red, and Slammer. Forget the occasional website defacement – it was mass malware, and the resulting large-scale email and web outages, that drove our multi-billion-dollar addiction to firewalls and antivirus. And so the up-and-down ride began. The last time we were in a similar position was right around the time many of the current trends originated. Thanks to California SB1386, ChoicePoint became the first company to disclose a major breach back in 2005. This was followed by a rash of organizations suddenly losing laptops and backup tapes, and the occasional major breach credited to Albert Gonzalez. PCI deadlines hit, HIPAA made a big splash (in vendor presentations), and the defense industry started quietly realizing they might be in a wee bit of trouble, as those in the know noticed things like plans for top secret weapons and components leaking out. And there were many annual predictions that this year we’d see the big SCADA hack. The combined result was a more than incremental improvement in security. And a more than incremental increase in the capabilities of the bad guys. Never underestimate the work ethic of someone too lazy to get a legitimate job. In the midst of the current public rash of incidents, we have also seen far more than an incremental increase in the cost and complexity of the tools we use – not that they necessarily deliver commensurate value. And everyone still rotates user passwords every 90 days, without one iota of proof that any of the current breaches would have been stymied if someone had added another ! to the end of their kid’s birthday. 89 days ago. Are we deep into the next valley? Have things swung so far from acceptable that it will shift the market and our focus? My gut suspicion is that we are close, but the present is unevenly distributed — never mind the future.


Quick Wins with DLP Light: The Process

The objective of the Quick Wins process is to get results and show value as quickly as possible, while setting yourself up for long-term success. Quick Wins for DLP Light is related to the Quick Wins for DLP process, but heavily modified to deal with both the technical differences and the different organizational goals we see in DLP Light projects. Keep this process in perspective – many of you will already be pretty far down your DLP Light path and might not need all these steps. Take what you need and ignore the rest.

Prepare

There are two preparatory steps before kicking off the project:

Establish Your Process

Nearly every DLP customer we talk with discovers actionable offenses committed by employees as soon as they turn the tool on. Some of these require little more than contacting a business unit to change a bad process, but quite a few result in security guards escorting people out of the building, or even legal action. Even if you aren’t planning to move straight to enforcement mode, you need a process in place to manage the issues that will crop up once you activate your tool. You should set up processes to handle the three common incident categories:

Business Process Failures: DLP violations often result from poor business processes, such as retaining sensitive customer data or emailing unencrypted healthcare information to insurance providers. This process is about working with the business unit to fix the problem.

Employee Violations: These are often accidental, but most DLP deployments identify some malicious activity as well. Your process should focus on education to avoid future accidents, plus working with business unit managers, HR, and legal to handle malicious activity.

Security Incidents: Traditional security incidents, usually from an external source, which require response and investigation.
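The three-way split above amounts to a triage table. A hedged sketch (handler names are illustrative, not from any product) shows how simple the routing logic can be once the categories and owners are agreed in advance:

```python
# Hypothetical triage table mapping the three incident categories to a
# handling process; the handler descriptions are illustrative only.
HANDLERS = {
    "business_process_failure": "work with business unit to fix the process",
    "employee_violation": "educate user; escalate to HR/legal if malicious",
    "security_incident": "open incident response and investigation",
}

def triage(category):
    """Return the agreed handling process for an incident category."""
    if category not in HANDLERS:
        raise ValueError(f"unknown incident category: {category}")
    return HANDLERS[category]

print(triage("employee_violation"))  # educate user; escalate to HR/legal if malicious
```

The hard part is not the code – it is getting HR, legal, and the business units to sign off on the table before the first incident fires.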
Determine Existing DLP Capabilities

The next step is to determine which DLP Light capabilities you have in-house, even if the project is driven by a particular tool. You might find you already have more capability than you realize. Check for existing DLP features in the main technology areas covered in our last post. It’s also worth reviewing whether you are current on product versions, as DLP features might be cheap or even free if you upgrade (discounting upgrade costs, of course). Build a list of the DLP Light tools and features you have available, with the following information:

The tool/feature
Where it’s deployed
Protected “channels”: network protocols, storage locations, endpoints, etc.
Content analysis capabilities/categories
Workflow capabilities: DLP-specific vs. general-purpose; ability to integrate with SIEM and other management tools

This shouldn’t take long, and will help you choose the best path for implementation.

Determine Objective

The next step is to determine your goal. Are you more concerned with protecting a specific type of data? Or do you want to look more broadly at overall information usage? While the full-DLP Quick Wins process always focuses on information gathering rather than enforcement, this isn’t necessarily the case in a DLP Light project. No matter your specific motivation, we find that individual projects fall into three main categories:

Focused Monitoring: The goal is to track usage of, and generate alerts on, a specific kind of information. This is most often credit card numbers, healthcare data, or other personally identifiable information.

Focused Enforcement: You concentrate on the same limited data types as above, but instead of merely alerting you plan to enforce policies and block activity.

General Information Gathering: Rather than focusing on a single type of data, you use tools to get a better sense of information usage throughout the organization.
You turn on as many policies to monitor information of interest as possible.

Choose Deployment Type

This is a three-step process for making the final decisions required to deploy:

Map desired coverage channels: Determine where you want to monitor and/or enforce – email, endpoints (USB), etc. List every place you want to cover against what you know you can already cover with your existing capabilities. This also needs to map to your objective and content analysis requirements.

Match desired to existing coverage: Now figure out what you have and where the gaps are.

Fill the gaps: Obtain any additional products or licenses needed to meet your objectives.

Your entire project might be as simple as, “we want to catch credit card numbers in email using our existing tool”, in which case this entire process up to now probably took about 10 seconds. But if you need a little more guidance, this will help.

Implement and Monitor

Now it’s time to integrate the product (if needed), turn it on, and collect results. The steps are:

Select content analysis policies: For a focused deployment, this means only the policy that targets the specific data you want to protect, although if you use multiple products that aren’t integrated you will use the most appropriate policies in each tool. For a general deployment, turn on every policy of interest (without wrecking performance – check with your vendor).

Install (if needed)

Integrate with other tools/workflow: If you need to integrate multiple components, or connect to a central workflow or incident management tool, do that now.

Turn on monitoring

We have a few hints to improve your chance of success: Don’t enable enforcement yet – even if enforcement is your immediate goal, start with monitoring. Understand how the tool will impact workflow first, as we will discuss next. Don’t try to handle every incident at first.
You will likely need to tune policies and educate users over time before you have the capacity to handle every incident – depending on your focus. Handle the most egregious events now, and accept that you will handle the rest later. Leverage user education: users often don’t know they are violating policies. One excellent way to reduce your incident volume is to send them automated notifications based on policy violations. This has the added advantage of helping you identify the egregious violators later on.

Analyze

At this point you have focused your project, picked your tools, set your policies, and started monitoring. Now it’s


Fool us once… EMC/RSA Buys NetWitness

To no one’s surprise (after NetworkWorld spilled the beans two weeks ago), RSA/EMC formalized its acquisition of NetWitness. I guess they don’t want to get fooled again the next time an APT comes to visit. Kidding aside, we have long been big fans of full packet capture, and believe it’s a critical technology moving forward. On that basis alone, this deal looks good for RSA/EMC.

Deal Rationale

APT, of course. Isn’t that the rationale for everything nowadays? Yes, that’s a bit tongue in cheek (okay, a lot), but for a long time we have been saying that you can’t stop a determined attacker, so you need to focus on reacting faster and better. The reality remains that the faster you figure out what happened and remediate (as much as you can), the more effectively you contain the damage. NetWitness gear helps organizations do that. We should also tip our collective hats to Amit Yoran and the rest of the NetWitness team for a big economic win, though we don’t know for sure how big. NetWitness was early into this market and did pretty much all the heavy lifting to establish the need, stand up an enterprise-class solution, and show the value within a real attack context. They also showed that having a llama at a conference party can work for lead generation. We can’t minimize the effect that will have on trade shows moving forward. So how does this help EMC/RSA? First of all, full packet capture solves a serious problem for obvious targets of determined attackers. Regardless of whether the attack is a targeted phish with an Adobe 0-day or something Stuxnet-like, you need to be able to figure out what happened, and having the actual network traffic helps the forensics folks put the pieces together. Large enterprises and governments have figured this out, and we expect them to buy more of this gear this year than last. Probably a lot more. So EMC/RSA is buying into a rapidly growing market early. But that’s not all.
There is a decent amount of synergy with the rest of RSA’s security management offerings. Though you may hear some SIEM vendors pounding their chests over this deal, NetWitness is not SIEM. Full packet capture can do some of the same things (including alerting on possible attacks), but its analysis is based on what’s in the network traffic – not logs and events. More to the point, the technologies are complementary – most customers pump NetWitness alerts into a SIEM for deeper correlation with other data sources. Additionally, some of NetWitness’ new visualization and malware analysis capabilities supplement the analysis you can do with SIEM. Not coincidentally, this is how RSA positioned the deal in the release, with NetWitness and enVision data being sent over to Archer for GRC (whatever that means). Speaking of enVision, this deal may take some of the pressure off that debacle. Customers now have a shiny new object to look at, and may focus a little less on moving off the RSA log aggregation platform. It’s no secret that RSA is working on the next generation of the technology, and being able to offer NetWitness to unhappy enVision customers may stop the bleeding until the next version ships. A side benefit is that the sheer amount of network traffic to store will drive some back-end storage sales as well. For now NetWitness is a stand-alone platform, but it wouldn’t be much of a stretch to see some storage/archival integration with EMC products. EMC wouldn’t buy technology like NetWitness just to drive more storage demand, but it won’t hurt.

Too Little, Too Late (to Stop the Breach)

Lots of folks drew the wrong conclusion: that RSA bought NetWitness because of their recent breach. These deals don’t happen overnight, so this acquisition had been in the works for quite a while. But what could better justify buying a technology than it helping to detect a major breach? I’m sure EMC is pretty happy to control that technology.
The trolls and haters focus on the fact that the breach still happened, so the technology couldn’t work that well, right? Actually, the bigger issue is that EMC didn’t have NetWitness deployed widely enough in their environment. They might have caught the breach earlier if the technology had been more broadly deployed. Then again, maybe not – you never know how effective any control will be at any given time against any particular attack – but EMC/RSA can definitely make the case that they could have reacted faster with NetWitness everywhere. And now they likely will.

Competitive Impact

The full packet capture market is still very young. There are only a handful of direct competitors to NetWitness, all of whom should see their valuations skyrocket as a result of this deal. Folks like Solera Networks are likely grinning from ear to ear today. We also expect a number of folks in adjacent businesses (such as SIEM) to start dipping their toes into this water. Speaking of SIEM, NetWitness had partnerships with the major SIEM providers to send them data, and this deal is unlikely to change much in the short term. But we expect a lot more integration down the road between NetWitness, enVision Next, and Archer, which could create a competitive wedge for RSA/EMC in large enterprises. So we expect the big SIEM players to either buy or build this capability over the next 18 months to keep pace. Not that they aren’t all over the APT marketing already.

Bottom Line

This is a good deal for RSA/EMC – acquiring NetWitness provides strong, differentiated technology in what we believe will be an important emerging market. But given RSA’s mixed results in leveraging acquired technology, it’s not clear they will remain the leader in two years. If they deliver some level of real integration in that timeframe, they will have a very compelling set of products for security/compliance management. This is also a good


Quick Wins with DLP Light: Technologies and Architectures

DLP Light tools cover a wide range of technologies, architectures, and integration points. We can’t highlight them all, so here are the core features and common architectures, organized by key features and deployment location (network, endpoint, etc.).

Content Analysis and Workflow

Content analysis support is the single defining element of Data Loss Prevention – “Light” or otherwise. Without content analysis we don’t consider a tool or feature DLP, even if it helps to “prevent data loss”. Most DLP Light tools start with some form of rule/pattern matching – usually regular expressions, often with some additional contextual analysis. This base feature covers everything from keywords to credit card numbers. Most customers don’t want to build their own rules, so the tools come with pre-built policies, which are sometimes updated as part of a maintenance contract or license renewal. The most common policies identify credit card data for PCI compliance, because that drives a large portion of the market. We also see plenty of PII detection, followed by healthcare/HIPAA data discovery – both to meet clear compliance requirements. DLP Light tools and features may or may not have their own workflow engine and user interface for managing incidents. Most don’t provide dedicated workflow for DLP, instead integrating policy alerts into whatever existing console and workflow the tool uses for its primary function. This isn’t necessarily better or worse – it depends on your requirements.

Network Features and Integration

DLP features are increasingly integrated into existing network security tools, especially email security gateways. The most common examples are:

Email Security Gateways: These were the first non-DLP tools to include content analysis, and tend to offer the broadest policy/category coverage. Many of you already deploy some level of content-based email filtering.
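A hedged sketch of the rule/pattern matching described above – a regular expression to find candidate card numbers, plus a Luhn checksum to cut false positives. This is illustrative only; real products layer on far more context and validation:

```python
import re

CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(digits):
    """Luhn checksum; weeds out random digit strings that merely look like cards."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text):
    """Return candidate card numbers that pass both the regex and the checksum."""
    hits = []
    for match in CARD_RE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_valid(digits):
            hits.append(digits)
    return hits

sample = "order ref 4111 1111 1111 1111, ticket 1234567890123"
print(find_card_numbers(sample))  # ['4111111111111111']
```

Note how the checksum rejects the 13-digit ticket number that the regex alone would have flagged – exactly the kind of contextual check that separates DLP content analysis from naive grep.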
Email gateways are also one of the main integration points with full DLP solutions: all the policies and workflow are managed on the DLP side, but analysis and enforcement are integrated directly with the gateway rather than requiring a separate mail hop. Depending on your specific tool, internal email may or may not be covered.

Web Security Gateways: Some web gateways now directly enforce DLP policies on the content they proxy, for example preventing files with credit card numbers from being uploaded to webmail and social networking services. Web proxies are the second most common integration point for DLP solutions because, as we described in the Technical Architecture section, they proxy web and FTP traffic and make a perfect filtering and enforcement point. These are also the tools you will use to intercept SSL connections to monitor those encrypted communications, which is also a necessity for scanning and blocking inbound malicious content.

Unified Threat Management: UTMs provide broad network security coverage, including at least firewall and IPS capabilities, but usually also web filtering, an email security gateway, remote access, and web content filtering (antivirus). They provide a natural location for adding network DLP coverage.

Intrusion Detection and Prevention Systems: IDS/IPS tools already perform content inspection, so they are a natural location for additional DLP analysis. This is usually basic analysis integrated into existing policy sets, rather than a full new content analysis engine.

SIEM and Log Management: All major SIEM tools can accept alerts from DLP solutions, and many can correlate them with other collected activity. Some SIEM tools also offer DLP features, depending on what kinds of activity they can collect for content analysis. We have placed these in the network section because that’s what they most commonly integrate with, but they can also work with other DLP deployment locations.
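To illustrate that SIEM integration point, here is a hedged sketch that formats a DLP alert as a CEF-style syslog line. The vendor, product, and field names are made up; a real integration would follow the schema your particular SIEM documents:

```python
def cef_alert(signature, name, severity, extensions):
    """Build a CEF-style event string:
    CEF:version|vendor|product|version|signature|name|severity|extensions"""
    ext = " ".join(f"{key}={value}" for key, value in extensions.items())
    return f"CEF:0|ExampleVendor|DLPLight|1.0|{signature}|{name}|{severity}|{ext}"

msg = cef_alert("dlp-101", "Credit card number in outbound email", 7,
                {"suser": "alice", "dhost": "mail-gw", "cnt": 3})
print(msg)
# Shipping it is then one UDP syslog datagram to the SIEM collector, e.g.:
# import socket
# socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(
#     msg.encode(), ("siem.example.com", 514))
```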
Log management tools tend to be more passive, but increasingly include basic DLP-like features for analyzing data.

Endpoint Features and Integration

DLP features have appeared in various endpoint tools – aside from dedicated DLP products – since practically before there was a DLP market. This presence continues to expand, especially as interest grows in controlling USB usage without unacceptable business impact.

Endpoint Protection Platforms: EPP is the term for comprehensive endpoint suites that start with antivirus, and may also include portable device control, intrusion prevention, anti-spam, remote access, Network Admission Control, application whitelisting, etc. Many EPP vendors have added basic DLP features – most often for monitoring local files or transfers of sensitive information to storage, and some with support for network monitoring and enforcement.

USB/Portable Device Control: Some of these tools offer basic DLP capabilities, and we are seeing others evolve toward fairly extensive endpoint DLP coverage – with multiple detection techniques, multivariate policies, and even dedicated workflow. When evaluating this option, keep in mind that some tools position themselves as offering DLP capabilities but lack any content analysis – relying instead on metadata or other context.

‘Non-Antivirus’ EPP: Some endpoint security platforms address more than just portable device control, but are not designed around antivirus like other EPP tools. This category covers a range of tools, but the features offered are generally comparable to the other offerings.

Overall, most people deploying DLP features on an endpoint (without a dedicated DLP solution) focus on scanning the local hard drive and/or monitoring/filtering file transfers to portable storage. But as we described earlier, you might also see anything from network filtering to application control integrated into endpoint tools.
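A hedged sketch of that local-drive discovery scan, using an SSN-shaped regex purely for illustration – real endpoint agents use far richer content analysis, file-type awareness, and throttling:

```python
import os
import re
import tempfile

PII_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # SSN-shaped pattern, illustration only

def scan_tree(root):
    """Walk a directory tree and report files matching the PII pattern,
    roughly what an endpoint agent's local file discovery does."""
    findings = []
    for dirpath, _dirs, filenames in os.walk(root):
        for filename in filenames:
            path = os.path.join(dirpath, filename)
            try:
                with open(path, errors="ignore") as fh:
                    if PII_RE.search(fh.read()):
                        findings.append(path)
            except OSError:
                continue  # unreadable file: skip; a real agent would log it
    return findings

with tempfile.TemporaryDirectory() as tmp:
    with open(os.path.join(tmp, "hr_notes.txt"), "w") as fh:
        fh.write("new hire SSN: 123-45-6789")
    with open(os.path.join(tmp, "readme.txt"), "w") as fh:
        fh.write("nothing sensitive here")
    print([os.path.basename(p) for p in scan_tree(tmp)])  # ['hr_notes.txt']
```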
Storage Features and Integration

We don’t see nearly as much DLP Light in storage as in networking and endpoints – in large part because there aren’t as many clear security integration points. Fewer organizations have any sort of storage security monitoring, whereas nearly every organization performs network and endpoint monitoring of some sort. But while we see less DLP Light here, as we have already discussed, we see extensive integration on the full-DLP side for different types of storage repositories.

Database Activity Monitoring and Vulnerability Assessment: DAM products, many of which now include or integrate with database vulnerability assessment tools, sometimes include content analysis capabilities.

Vulnerability Assessment: Some vulnerability assessment tools can scan for basic DLP policy violations if they include the ability to passively monitor network traffic or scan storage.

Content Classification, Forensics, and Electronic Discovery: These tools aren’t dedicated to


White Paper: Network Security in the Age of *Any* Computing

We all know about the challenges for security professionals posed by mobile devices, and by the need to connect to anything from anywhere. We have done some research on how to start securing those mobile devices, and have broadened that research with a network-centric perspective on these issues. Let’s set the stage for this paper: Everyone loves their iDevices and Androids. The computing power that millions now carry in their pockets would have required a raised floor and a large room full of big iron just 25 years ago. But that’s not the only impact we see from this wave of consumerization, the influx of consumer devices requiring access to corporate networks. Whatever control you thought you had over the devices in the IT environment is gone. End users pick their devices and demand access to critical information within the enterprise. Whether you like it or not. And that’s not all. We also have demands for unfettered access from anywhere in the world at any time of day. And though smart phones are the most visible devices, there are more. We have the ongoing tablet computing invasion (iPad for the win!); and a new generation of workers who demand the ability to choose their computers, mobile devices, and applications. Even better, you aren’t in a position to dictate much of anything moving forward. It’s a great time to be a security professional, right? In this paper we focus on the network architectures and technologies that can help you protect critical corporate data, given your requirements to provide users with access to critical and sensitive information on any device, from anywhere, at any time. A special thanks to ForeScout for sponsoring the research. Find it in the research library or download the PDF directly: Network Security in the Age of Any Computing: Risks and Options to Control Mobile, Wireless, and Endpoint Devices.


Friday Summary: April 1, 2011

Okay folks – raise your hands for this one. How many of you get an obvious spam message from a friend or family member on a weekly basis? For me it’s more like monthly, but it sure is annoying. The problem is that when I get these things I have a tendency to try to run them down to figure out exactly what was compromised. Do the headers show it came from their computer? Or maybe their web-based email account? Or is it just random spoofing from a botnet… which could mean any sort of compromise? Then, assuming I can even figure that part out, I email or call them up to let them know they’ve been hacked. Which instantly turns me into their tech support. This is when things start to suck. Because, for the average person, there isn’t much they can do. They expect their antivirus to work, and the initial reaction is usually “I ran a scan and it says I’m clean”. Then I have to tell them that AV doesn’t always work. Which goes over great, as they tell me how much they spent on it. Depending on what I can pick up from the email headers we then get to cover the finer points of changing webmail passwords, checking for silent forwards, and setting recovery accounts. Or maybe I tell them their computer is owned for sure and they need to nuke it from orbit (backup data, wipe it, reinstall, scan data, restore data). None of that is remotely possible for most people, which means they may have to spend more than their PoS is worth paying the Geek Squad to come out, steal their drunken naked pictures, and lose the rest of their data. After which I might still get spam, if the attacker sniffed their address book and shoveled it onto some zombie PC(s). Or they ignore me. I had a lawyer friend do that once. On a computer used sometimes for work email. Sigh.
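Running down where a spam actually entered the mail stream mostly means reading the Received headers from the bottom up – each relay prepends one, so the last header is the hop closest to the sender. A minimal sketch with Python’s stdlib email parser; every host, IP, and address here is made up for illustration:

```python
import email
from email import policy

# A fictional spam message with two relay hops recorded in its headers.
raw = b"""Received: from mail.example.net (mail.example.net [203.0.113.7])
 by mx.example.org with ESMTP; Fri, 1 Apr 2011 09:00:02 -0700
Received: from unknown-host ([198.51.100.23])
 by mail.example.net with ESMTPA; Fri, 1 Apr 2011 09:00:00 -0700
From: friend@example.net
To: you@example.org
Subject: you have to see this

totally legit link, trust me
"""

msg = email.message_from_bytes(raw, policy=policy.default)
hops = msg.get_all("Received")

# Received headers are prepended at each hop, so the LAST one is the
# closest to the original sender -- that's where the clues live.
origin = hops[-1]

# "with ESMTPA" (authenticated submission) at the origin hop suggests
# the friend's account was actually used, rather than a botnet merely
# spoofing the From: line -- which changes the advice you give them.
print("account compromise likely" if "ESMTPA" in origin else "likely spoofed")
```

Received headers are trivially forgeable below the first relay you trust, so treat this as a hint, not proof.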
There’s really no good answer unless you have a ton of spare time to spend hunting down the compromise… which technically might not be them anyway (no need to send the spam from the person you compromised if another name in the social network might also do the trick). For immediate family I will go fairly deep to run things down (including getting support from vendor friends on occasion), but I have trained most of them. For everyone else? I limit myself to a notification and some basic advice. Then I add them to my spam filter list, because as long as they can still read email and access Facebook they don’t really care. On to the Summary: Webcasts, Podcasts, Outside Writing, and Conferences Mike quoted on metrics in Dark Reading. Adrian quoted in ComputerWorld on McAfee’s acquisition of Sentrigo. Favorite Securosis Posts Rich: PROREALITY: Security is rarely a differentiator. There’s a bare minimum line you need to keep customer trust. Anything more than that rarely matters. Adrian Lane: Captain Obvious Speaks: You Need Layers. Mike Rothman: File Activity Monitoring: Index. You’ll be hearing a lot about FAM in the near future. And you heard it here first. Other Securosis Posts White Paper: Network Security in the Age of Any Computing. Incite 3/30/2011: The Silent Clipper. Comments on Ponemon’s “What Auditors think about Crypto”. Quick Wins with DLP Light. FAM: Policy Creation, Workflow, and Reporting. FAM: Selection Process. Security Benchmarking, Going Beyond Metrics: Introduction. Security Benchmarking, Going Beyond Metrics: Security Metrics (from 40,000 feet). Favorite Outside Posts Rich: Errata Security: “Cybersecurity” and “hacker”: I’m taking them back. If I try to describe what I do (security analyst) they think I’m from Wall St. If I say “cybersecurity analyst” they get it right away. To be honest, I really don’t know why people in the industry hate “cyber”. You dislike Neuromancer or something? Adrian Lane: The 93,000 Firewall Rule Problem. 
Mike Rothman: The New Corporate Perimeter. If you missed this one, read it. Now. GP is way ahead on thinking about how security architecture must evolve in this mobile/cloud reality. The world is changing, folks – disregard it and I’ve got a front end processor to sell you. Rich: BONUS LINK: The writing process. Oh my. Oh my my my. If you ever write on deadline and word count, you need to read this. Research Reports and Presentations Network Security in the Age of Any Computing. The Securosis 2010 Data Security Survey. Monitoring up the Stack: Adding Value to SIEM. Network Security Operations Quant Metrics Model. Network Security Operations Quant Report. Understanding and Selecting a DLP Solution. White Paper: Understanding and Selecting an Enterprise Firewall. Understanding and Selecting a Tokenization Solution. Top News and Posts European Parliament computer network breached. BP loses laptop with private info on 13,000 people. BP Spills Data Too. The DataLossDB project welcomes Dissent! As we mentioned in the intro, you should support this project. GoGrid Security Breach. Restaurant chain fined under Mass privacy law. Mass SQL Injection Attack. NSA Investigates NASDAQ Hack. Dozens of exploits released for popular SCADA programs. Twitter, JavaScript Defeat NYT’s $40m Paywall. Blog Comment of the Week For the past couple years we’ve been donating to Hackers for Charity, but in honor of Dissent joining the DataLossDB project we are directing this week’s donation ($100) to The Open Security Foundation. This week’s best comment goes to SomeSecGuy, in response to PROREALITY: Security is rarely a differentiator. TJ Maxx’s revenues went UP after their big breach. What mattered more to its customers than security? A good deal on clothes, I guess. There probably is a market segment that cares more about security than other factors but I don’t know what it is. Price is typically the primary driver even for business decisions. Share:


PROREALITY: Security is rarely a differentiator

I’ve been in this business a long time – longer than most, though not as long as some. That longevity provides perspective, and has allowed me to observe the pendulum swinging back and forth more than once. This particular pendulum is the security as an enabler concept – you know, positioning security not as an overhead function but as a revenue driver (either direct or indirect). Jeremiah’s post earlier this week, PROTIP: Security as a Differentiator, brought back that periodic (and ultimately fruitless) discussion. His general contention is that security can differentiate an offering, ultimately leading to security being a vehicle that drives revenue. So before we start down this path again, let me squash it like the cockroach it is. First we examine one of Jeremiah’s contentions: When security is made visible (i.e. help customers be and feel safe), the customer may be more inclined to do business with those who clearly take the matter seriously over others who don’t. That’s not entirely false. But the situations (or in marketing speak, segments) where that is true are very limited. Banks have been telling me for years that churn increases after a breach is publicized, and that the ones which say they are secure gain customers. I still don’t buy it, mostly because the data always seems to come from some vendor pushing their product to protect bank customer data. The reality is that behavior does not follow words when it comes to security. Whether you sit on the vendor side or the user side you know this. When you ask someone if they are worried about security, of course they say yes. Every single time. But when you ask them to change their behavior – or more specifically not to do something they want to because it’s a security risk – you see the reality. The vast majority of people don’t care about security enough to do (or not do) anything. Jeremiah is dreaming – if he were describing reality, everyone related to the security business would benefit.
Unfortunately it’s more of a PRODREAM than a PROTIP. Or maybe even a PROHALLUCINATION. He’s not high on peyote or anything. Jer is high on the echo chamber. When you hang around all day with people who care about security, you tend to think the echo chamber reflects the mass market. It doesn’t – not by a long shot. So spending a crapload of money on really being secure is a good thing to do. To be clear, I would like you to do that. But don’t do it to win more business – you won’t, and you’ll be disappointed – or your bosses will be disappointed in you for failing to deliver. Invest in security because it’s the right thing to do. For your customers and for the sustainability of your business. You may not get a lot of revenue upside from being secure, but you can avoid revenue downside. I believe this to be true for most businesses, but not all. Cloud service providers absolutely can differentiate based on security. That will matter to some customers and possibly ease their migration to the cloud. There are other examples of this as well, but not many. I really wish Jeremiah were right. It would be great for everyone. But I’d be irresponsible if I didn’t point out the cold, hard reality. Photo credit: “3 1 10 Bearman Cartoon Cannabis Skunk Hallucinations” originally uploaded by Bearman2007


On Preboot Authentication and Encryption

I am working on an encryption project – evaluating an upcoming product feature for a vendor – and the research is more interesting than I expected. Not that the feature is uninteresting, but I thought I knew all the answers going into this project. I was wrong. I have been talking with folks on the Twitters and in private interviews, and have discovered that far more organizations than I suspected are configuring their systems to automatically skip preboot authentication and simply boot up into Windows or Mac OS X (yes, for real, a bunch are using disk encryption on Macs). For those of you who don’t know, with most drive encryption you have a mini operating system that boots first, so you can authenticate the user. Then it decrypts and loads the main operating system (Windows, Mac OS X, Linux, etc.). Skipping the mini OS requires you to configure it to automatically authenticate and load the operating system without a password prompt. Organizations tend to do this for a few reasons: So users don’t have to log in twice. So you don’t have to deal with managing and synchronizing two sets of credentials (preboot and OS). To reduce support headaches. But the convenience factor is the real reason. The problem with skipping preboot authentication is that you then rely completely on OS authentication to protect the device. My pentester friends tell me they can pretty much always bypass OS authentication and get at the encrypted data. This may also be true for a running/sleeping/hibernating system, depending on how you have encryption configured (and how your product works). In other words – if you skip preboot, the encryption generally adds no real security value. In the Twitter discussion about advanced pen testering, our very own David Mortman asked: @rmogull Sure but how many lost/stolen laptops are likely to be attacked in that scenario vs the extra costs of pre-boot? Which is an excellent point. What are the odds of an attacker knowing how to bypass the encryption when preboot isn’t used?
And then I realized that in that scenario, the “attacker” is most likely someone picking up a “misplaced” laptop, and even basic (non-encryption) OS security is good enough. Which leads to the following decision tree:

1. Are you worried about attackers who can bypass OS authentication? If so, encrypt with preboot authentication; if not, continue to step 2.
2. Do you need to encrypt only for compliance (meaning security isn’t a priority)? If so, encrypt and disable preboot; if not, continue to step 3.
3. Encrypt with preboot authentication.

In other words, encrypt if you worry about data loss due to lost media or are required by compliance. If you encrypt for compliance and don’t care about data loss, then you can skip preboot.
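The decision tree above is simple enough to state as code – a throwaway sketch where the function and parameter names are ours, not from any product:

```python
def preboot_advice(fear_os_bypass: bool, compliance_only: bool) -> str:
    """Walk the three-step decision tree for disk encryption deployment."""
    # Step 1: worried about attackers who can bypass OS authentication?
    if fear_os_bypass:
        return "encrypt with preboot authentication"
    # Step 2: encrypting only to satisfy compliance?
    if compliance_only:
        return "encrypt and disable preboot"
    # Step 3: default case -- preboot stays on.
    return "encrypt with preboot authentication"

print(preboot_advice(fear_os_bypass=False, compliance_only=True))
# → encrypt and disable preboot
```

Note that only one path through the tree disables preboot, which matches the post's conclusion: skipping it is defensible only when compliance is the sole driver.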


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments – just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.