Securosis

Research

The Myth of the Security-Smug Mac User

I still consider myself a relative newcomer to the Mac community. Despite being the Security Editor at TidBITS and an occasional contributor to Macworld (print and online), and having spoken at Macworld Expo a couple of times, I only really switched to Macs back in 2005. To keep this in perspective, TidBITS has been published electronically since 1990.

Coming from the security world, I had certain expectations of the Mac community. I thought they were naive and smug about security, and living in their own isolated world. That couldn’t have been further from the truth. Over the past 7 years, especially the past 5+ since I left Gartner and could start writing for Mac publications, I have learned that Mac users care about security every bit as much as Windows users. I haven’t met a single Mac pundit who ever dismissed Mac security issues or the potential for malware, or who thought their Mac ‘immune’. From Gruber, to Macworld, to TidBITS, and even The Macalope (a close personal friend when he isn’t busy shedding on my couch, drinking my beer out of the cat’s water bowl, or ripping up my drapes with his antlers), not one person I’ve met or worked with has expressed any of the “security smugness” attributed to them by articles like the following:

- Are MACS Safer then PCs
- Flashback Mac Trojan Shakes Apple Rep of Invulnerability
- Widespread Virus Proves Macs Are No Longer Safe From Hackers
- Expert: Mac users more vulnerable than Windows users

And countless tweets and other articles. Worse yet, the vast majority of Mac users worry about security. When I first started getting out into the Mac community people didn’t say, “Well, we don’t need to worry about security.” They asked, “What do I need to worry about?” Typical Mac users from all walks of life knew they weren’t being exploited on a daily basis, but were generally worried that there might be something they were missing. Especially relatively recent converts who had spent years running Windows XP.
This is anecdotal, and I don’t have survey numbers to back it up, but I’ve been probably the most prominent writer on Mac security for the past 5 years, and I talk to a ton of people in person and over email. Nearly universally, Mac users are, and have been, concerned about security and malware. So where does this myth come from? I think it comes from three sources:

1. An overly vocal minority who fill up the comments on blog posts and news articles. Yep – a big chunk of them are trolls and asshats. There are zealots like this for every technology, cause, and meme on the face of the planet. They don’t represent our community, no matter how many Apple stickers are on the backs of their cars and work-mandated Windows laptops.

2. One single advertisement where Apple made fun of the sick PC. One. Single. Singular. Unique. Apple only ever made that joke once, and it was in a single “I’m a Mac” spot. And it was 100% accurate at the time – there was no significant Mac malware then. But since then we have seen countless claims that Apple is ‘misleading’ users. Did Apple downplay security issues? Certainly… but nearly exclusively during a period when people weren’t being exploited. I’m not going to apologize for Apple’s security failings (especially their patching issues, which led to the current Flashback issue), but those are very different from actively misleading users. Okay – one of the Securosis staff believes there may have been some print references from pre-2005, but we are still talking small numbers and nothing current.

3. Antivirus vendors. I need to tread cautiously here because I have many friends at these companies who do very good work. Top-tier researchers who are vital to our community. But they have a contingent, just like the Mac4EVER zealots, who think people are stupid or naive if they don’t use AV. These are the same people who want Apple to remove iOS security so they can run their AV products on your phones.
Who took out full-page advertisements against Microsoft when MS was going to lock down parts of the Windows kernel (breaking their products) for better security. Who issue report after report designed only to frighten you into using their products. Who have been claiming that this year really will be the year of mobile malware (eventually they’ll be right, if we wait long enough).

Here’s the thing. The very worst quotes and articles attacking smug Mac users usually use a line similar to the following: Mac users think they are immune because they don’t install antivirus. Which is a logical fallacy of the highest order. These people promote AV as providing the same immunity they say Mac zealots claim for ‘unprotected’ Macs. They gloss over the limited effectiveness of AV products. How even the AV vendors didn’t have signatures for Flashfake until weeks after the infections started. How Windows users are constantly infected despite using AV, to the point where most enterprise security pros I work with see desktop antivirus as more a compliance tool and high-level filter than a reliable security control.

I’m not anti-AV. It plays a role, and some of the newer products (especially on the enterprise side) which rely less on signatures are showing better effectiveness (if you aren’t individually targeted). Plus most of those products include other security features, ranging from encryption to data loss prevention, that can be useful. I also recommend AV extensively for email and network filtering. Even on Macs, sometimes you need AV. But I am far more concerned about the false sense of immunity claimed by antivirus vendors than about smug Mac users. Because the security-smug Mac user community is a myth, but the claims of the pro-AV community (mostly AV vendors) are very real, and backed by large marketing budgets.
Update: Andrew Jaquith nailed this issue a while ago over at SecurityWeek: Note to readers: whenever you see or hear an author voicing contempt for customers by calling them arrogant, smug, complacent, oblivious, shiny-shiny obsessed members of a cabal, “living in a false paradise,” or


Responsible or Irresponsible Disclosure?—NFL Style

It’s funny to contrast this April to last April, at least as an NFL fan. Last year the lockout was in force, the negotiations stalled, and fans wondered how billionaires could argue with millionaires when the economy was in the crapper. Between the Peyton Manning lottery, the upcoming draft, and the Saints Bounty situation, there hasn’t been a dull moment for pro football fans since the Super Bowl ended. Speaking of the Saints, even after suspensions and fines, more nasty aspects of the story keep surfacing. Last week, we actually heard Gregg Williams, Defensive Coordinator of the Saints, implore his guys to target injured players, ‘affect’ the head, and twist ankles in the pile. Kind of nauseating. OK, very nauseating. I guess it’s true that most folks don’t want to see how the sausage is made – they just want to enjoy the taste.

But the disclosure was anything but clean. Sean Pamphilon, the director who posted the audio, did not have permission to post it. He was a guest of a guest at that meeting, there to capture the life of former Saints player Steve Gleason, who is afflicted with ALS. The director argues he had the right. The player (and the Saints) insist he didn’t. Clearly the audio put the bounty situation in a different light for fans of the game. Before, it was deplorable but abstract. After listening to the tape, it was real. He really said that stuff. Really paid money for his team to intentionally hurt opponents. Just terrible.

But there is still the dilemma of posting the tape without permission. Smart folks come down on both sides of this discussion. Many believe Pamphilon should have abided by the wishes of his host and not posted the audio. He wouldn’t have been there if not for the graciousness of both Steve Gleason and the Saints. But he was, and he clearly felt the public had a right to know, given the history of the NFL burying audio and video evidence of wrongdoing (Spygate, anyone?).
Legalities aside, this is a much higher profile example of the same responsible disclosure debate we security folks have every week. Does the public have a need to know? Is the disclosure of a zero-day attack doing a public service? Or should the researcher wait until the patch goes live, when they get to enjoy a credit buried in the patch notice? Cynically, some folks disclosing zero-days are in it for the publicity. Sure, they can blame unresponsive vendors, but at the end of the day, some folks seek the spotlight by breaking a juicy zero-day. Likewise, you can make a case that Pamphilon was able to draw a lot of attention to himself and his projects (past, current, and future) by posting the audio. Obviously you can’t buy press coverage like that. Does that make it wrong – that the discloser gets the benefit of notoriety?

There is no right or wrong answer here. There are just differing opinions. I’m not trying to open Pandora’s box and entertain a lot of discussion on responsible disclosure. Smart people have differing opinions and nothing I say will change that. My point was to draw the parallel between the Saints bounty tape disclosure and disclosing zero-day attacks. Hopefully that provides some additional context for the moral struggles of researchers deciding whether or not to go public with their findings.


Pain Comes Instantly—Fixes Come Later

Mary Ann Davidson’s recent post Pain Comes Instantly has been generating a lot of press. It’s being miscast by some media outlets as trashing the PCI Data Security Standard, but it’s really about the rules for vendors who want to certify commercial payment software and related products. The debate is worth considering, so I recommend giving it a read. It’s a long post, but I encourage you to read it all the way through before forming opinions, as she makes many arguments and provides some allegories along the way.

In essence she challenges the PCI Council on a particular requirement in the Payment Application Vendor Release Agreement (VRA), part of each vendor’s contractual agreement with the PCI Council to get their applications certified as PCI compliant. The issue is software vulnerability disclosure. Paraphrasing the issue at hand: let’s say Oracle becomes aware of a security bug. Under the terms of the agreement, Oracle must disseminate the information to the Council as part of the required information disclosure process. Her complaint is that the PCI Council insists on its right to leak (‘share’) this information even when Oracle has not yet provided a fix. Mary Ann argues that in this case the PCI Council is harming Oracle’s customers (who are also PCI Council customers) by making the vulnerability public. Hackers will of course exploit the vulnerability and try to breach the payment systems. The real point of contention is that the PCI Council may decide to share this information with QSAs, partners, and other organizations, so those security experts can better protect themselves and PCI customers. Oracle’s position is, first, that the QSAs and others who may receive information from the Council are not qualified to make use of it. And second, the more people who know about the vulnerability, the more likely it is to leak. I don’t have a problem with those points.
I totally agree that if you tell thousands of people about the vulnerability, it’s as good as public knowledge. And it’s probably safe to wager that only a small percentage of Oracle customers have the initiative or knowledge to take vulnerability information and craft it into effective protection. Even if a customer has Oracle’s database firewall, they won’t be able to turn this vulnerability information into a rule that protects the database. So from that perspective, I agree. But it’s a limited perspective. Just because few Oracle customers can generate a fix or a workaround doesn’t mean that a fix won’t or can’t be made available. Oracle customers have contributed workarounds in the past. Even if an individual customer can’t help themselves, others can – and have.

But here’s my real problem with the post: I am having trouble finding a substantial difference between her argument and the whole responsible disclosure debate. What’s the real difference from a security researcher finding an Oracle vulnerability? The information is outside Oracle’s control in both cases, and there is a likelihood of public disclosure. It’s something a determined hacker may discover, or have already discovered. It’s in Oracle’s best interest to fix the problem fast, before the rest of the world finds out. Historically the problem is that vendors, unless they have been publicly shamed into action, don’t react quickly to security issues. Oracle, among other vendors, has often been accused of sitting on vulnerabilities for months – even years – before addressing them. Security researchers for years told basically the same story about Oracle flaws they found, which goes something like this: We have discovered a security flaw in Oracle. We told Oracle about it, and gave them details on how to reproduce it and some suggestions for how to fix it.
Oracle a) never fixed it, b) produced a half-assed fix that caused other issues, or c) waited 9, 12, or 18 months before patching the issue – and that was only after I announced the bug to the world at the RSA/DefCon/Black Hat/OWASP conference. I gave Oracle information that anyone could discover, did not ask for any compensation, and Oracle tried to sue me when I disclosed the vulnerability after 12 months.

I’m not Oracle bashing here – it’s an industry-wide issue – but my point is that with disclosure, timing matters… a lot. Since the Payment Application Vendor Release Agreement simply states that you will ‘promptly’ inform the PCI Council of vulnerabilities, Oracle has a bit of leeway. Maybe ‘prompt’ means 30 days. Heck, maybe 60. That should be enough time to get a patch to those customers using certified payment products – or whatever term the PCI Council uses for vetted but not guaranteed software. If a vendor is a bit tardy getting detailed information to the PCI Council while they code and test a fix, I don’t think the Council will complain too much, so long as they are protected from liability. But make no mistake – timing is a critical part of this whole issue. Timing – particularly the lack of ‘prompt’ responses from Oracle – is why the security research community remains pissed off and critical to this day.


Understanding and Selecting DSP: Administration

Today’s post focuses on administering Database Security Platforms. Conceptually DSP is pretty simple: collect data from databases, analyze it according to established rules, and react when a rule has been violated. The administrative component of every DSP follows these three basic tasks: data management, policy management, and workflow management. In addition to these three basic functions, we also need to administer the platform itself, as we do with any other application platform. As we described in our earlier post on DSP technical architecture, DSP sends all collected data to a central server. The DAM precursors evolved from single servers, to two-tiered architectures, and finally into a hierarchical model, in order to scale up to enterprise environments. The good news is that system maintenance, data storage, and policy management are all available from a single console. While administration is now usually through a browser, the web application server that performs the work is built into the central management server. Unlike some other security products, not much glue code or browser trickery is required to stitch things together.

System Management

User Management: With access to many different databases, most of them filtering and reporting on sensitive data, user management is critical for security. Establishing who can make changes to policies, read collected data, or administer the platform are all specialized tasks, and these groups of users are typically kept separate. All DSP solutions offer different methods for segregating users into different groups, each with differing granularity. Most of the platforms offer integration with directory services to aid in user provisioning and assignment of roles.

Collector/Sensor/Target Database Management: Agents and data collectors are managed from the central server.
While data and policies are stored centrally, the collectors – which often enforce policy on the remote database – must periodically sync with the central server to update rules and settings. Some systems require the administrator to ‘push’ rules out to agents or remote servers, while others sync automatically.

Systems Management: DSP is, in and of itself, an application platform. It has web interfaces, automated services, and databases, like most enterprise applications. As such it requires some tweaking, patching, and configuration to perform at its best. For example, the supporting database may need pruning to clear out older data, vendor assessment rules require updates, and the system may need additional resources for data storage and reports. The system management interface is provided via a web browser, but is only available to authorized administrators.

Data Aggregation & Correlation

The one characteristic Database Activity Monitoring solutions share with log management, and even Security Information and Event Management, tools is their ability to collect disparate activity logs from a variety of database management systems. They tend to exceed the capabilities of related technologies in their ability to go “up the stack” to gather deeper database activity and application layer data, and in their ability to correlate information. Like SIEM, DSP aggregates, normalizes, and correlates events across many heterogeneous sources. Some platforms even provide an optional ‘enrichment’ capability, linking audit, identity, and assessment data to event records – for example, providing both ‘before’ and ‘after’ data values for a suspect query. Despite central management and correlation features, the similarities with SIEM end there. By understanding the Structured Query Language (SQL) of each database platform, these platforms can interpret queries and understand their meaning.
While a simple SELECT statement might mean the same thing across different database platforms, each database management system (DBMS) is full of its own particular syntax. DSP understands the SQL for each platform and is able to normalize events, so the user doesn’t need to know the ins and outs of each DBMS. For example, if you want to review all privilege escalations on all covered systems, a DSP solution will recognize those events, regardless of platform, and present a complete report without you having to understand the SQL. A more advanced feature is to correlate activity across different transactions and platforms, rather than looking only at single events. For example, some platforms recognize a higher than normal transaction volume by a particular user, or (as we’ll consider under policies) can link a privilege escalation event with a large SELECT query on sensitive data, which could indicate an attack. All activity is also centrally collected in a secure repository, to prevent tampering or a breach of the repository itself. Since they collect massive amounts of data, DSPs must support automatic archiving. Archiving should support separate backups of system activity, configuration, policies, alerts, and case management, each encrypted under separate keys to support separation of duties.

Policy Management

All platforms come with sets of pre-packaged policies for security and compliance. For example, every product contains hundreds, if not thousands, of assessment policies that identify vulnerabilities. Most platforms come with pre-defined policies for monitoring standard deployments of databases behind major applications such as Oracle Financials and SAP. Built-in policies for PCI, SOX, and other generic compliance requirements are also available to help you jump-start the process and save many hours of policy building.
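To make the contextual-policy idea concrete, here is a rough sketch of what one monitoring policy might look like once a platform’s policy-creation tools translate it into executable form. Everything here is hypothetical – the field names, group names, table name, and threshold are invented for illustration, not any vendor’s actual policy format.

```python
# A hypothetical DSP activity-monitoring policy: flag after-hours bulk
# reads of sensitive data by a particular user group. Field names and
# values are illustrative only.
from datetime import time

POLICY = {
    "name": "after-hours bulk read of cardholder data",
    "group": "contractors",
    "window": (time(19, 0), time(7, 0)),   # outside business hours
    "statement": "SELECT",
    "table": "cardholder_data",
    "min_rows": 1000,                       # bulk-read threshold
    "action": "alert",
}

def violates(policy, event):
    """Check one normalized audit event against the policy's attributes."""
    start, end = policy["window"]
    after_hours = event["time"] >= start or event["time"] <= end
    return (
        event["group"] == policy["group"]
        and after_hours
        and event["statement"] == policy["statement"]
        and event["table"] == policy["table"]
        and event["rows"] >= policy["min_rows"]
    )

event = {"group": "contractors", "time": time(23, 30),
         "statement": "SELECT", "table": "cardholder_data", "rows": 250_000}
assert violates(POLICY, event)  # late-night bulk read -> alert fires
```

The point of tools like this is exactly what the text describes: the compliance team expresses the rule in terms of users, time of day, and data, without writing SQL.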
Every policy has the built-in capability of generating an alert when the rule is violated – usually through email, instant message, or some other messaging capability. Note that every user needs to tune or customize a subset of the pre-existing policies to match their environment, and create others to address specific risks to their data. Even so, the pre-built policies are still far better than starting from scratch. Activity monitoring policies include user/group, time of day, source/destination, and other important contextual options. These policies should offer different analysis techniques based on attributes, heuristics, context, and content analysis. They should also support advanced definitions, such as complex multi-level nesting and combinations. If a policy violation occurs you can specify any number of alerting, event handling, and reactive actions. Ideally, the platform will include policy creation tools that limit the need to write everything out in SQL or some other definition language; it’s much better if your compliance team does not need to learn SQL programming to create policies. You can’t avoid having to do some things


How to Tell If Your Cloud Provider Can Read Your Data (Hint: They Can)

Over at TidBITS today I published a non-security-geek oriented article on how to tell whether your cloud provider can read your data. Since many of you are security geeks, here’s the short version (mostly cut and paste) and some more technical info. The short version? If you don’t encrypt it and manage the keys yourself, of course someone on their side can read it (99+% of the time). There are three easy indicators that your cloud provider (especially a SaaS provider) can read your data:

1. If you can see your data in a web browser after entering only your account password, the odds are extremely high that your provider can read it as well. The only way you could see your data in a web browser and still have it hidden from your provider would require complex (fragile) JavaScript code, or a Flash/Java/ActiveX control, to decrypt and display the data locally.

2. If the service offers both web access and a desktop application, and you can access your data in both with the same account password, the odds are high that your provider can read your data. The common access indicates that your account password is probably being used to protect your data (usually your password is used to unlock your encryption key). While your provider could architect things so the same password is used in different ways to both encrypt data and allow web access, that doesn’t really happen.

3. If you can access the cloud service from a new device or application by simply providing your user name and password, your provider can probably read your data.

This is how I knew Dropbox could read my files long before that story hit the press. Once I saw that I could log in and see my files, or view them on my iPad, without using an encryption key other than my account password, I knew that my data was encrypted with a key that Dropbox manages. The same goes for the enterprise-focused file sharing service Box (even though it’s hard to tell from reading their site).
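The second indicator can be illustrated with a quick sketch. If the key protecting your data is derived from the same password the server receives at login, the provider necessarily handles enough to derive that key itself. This uses Python’s standard-library PBKDF2 purely as a stand-in for whatever key derivation a real service uses; the password and salt are, obviously, made up.

```python
# Illustrative only - not any provider's actual scheme. When one account
# password both logs you in and unlocks your encryption key, anything the
# client can derive from that password, the server can derive too.
import hashlib

password = b"correct horse battery staple"
salt = b"per-user-salt"  # hypothetical; a real service stores this server-side

# What the client derives locally to unlock its data...
client_key = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)

# ...the server can derive identically the moment it receives the
# password at login. So the data is readable on the provider's side.
server_key = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)

assert client_key == server_key
```

Only a key that never leaves your devices (and is never derivable from credentials the server sees) keeps the provider out of your data.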
Of course, since Dropbox stores just files, you can apply your own encryption before Dropbox ever sees your data, as I explained last year. And iCloud? With iCloud I have a single user name and password. Apple offers a rich and well-designed web interface where I can manage individual email messages, calendar entries, and more. I can register new devices and computers with the same user name and password I use on the web site. So it has always been clear that Apple could read my content, just as Ars Technica reported recently (with quotes from me).

That doesn’t mean that Dropbox, iCloud, and similar services are insecure. They generally have extensive controls – both technical and policy restrictions – to keep employees from snooping. But such services aren’t suitable for all users in all cases – especially for businesses or governmental organizations that are contractually or legally obligated to keep certain data private.

Now let’s think beyond consumer services, about the enterprise side. Salesforce? Yep – of course they can read your data (unless you add an encryption proxy). SaaS services nearly always can – that’s how they do stuff with your data. PaaS? Same deal (again, unless you do the encryption yourself). IaaS? Of course – your instance needs to boot up somehow, and if you want attached volumes to be encrypted you have to do it yourself. The main thing for Securosis readers to understand is that the vast majority of consumer and enterprise cloud services that mention encryption or offer encryption options manage your keys for you, and have full access to your data.

Why offer encryption at all, then, if it doesn’t really improve security? Compliance. It wipes out one risk (lost hard drives) and reduces compliance scope for physical handling of the storage media. It also looks good on a checklist. Take Amazon S3 – Amazon is really clear that although you can encrypt data, they can still read it.
I suppose the only reason I wrote this post and the article is that I’m sick of the “iWhatever service can read your data” non-stories that seem to crop up all the time. Duh.


Vulnerability Management Evolution: Scanning the Application Layer

In our last Vulnerability Management Evolution post we discussed scanning infrastructure, which remains an important part of vulnerability management. But we recognize that most attacks target applications directly, so we can no longer just scan the infrastructure and be done with it. We need to climb the stack and pay attention to the application layer, looking for vulnerabilities in applications as well as their supporting components. But that requires us to define an ‘application’, which is surprisingly difficult. A few years ago, the definition of an application was fairly straightforward. Even in an N-tier app, with a variety of application servers and data stores, you largely controlled all the components of the application. Nowadays, not so much. Pre-assembled web stacks, open source application servers, third-party crypto libraries, and cloud-provided services all make for quick application development, but blur the line between your application and the supporting infrastructure. You have little visibility into what’s going on behind the curtain, but you’re still responsible for securing it.

For the purposes of our vulnerability/threat management discussion, we define the app as presentation plus infrastructure. The presentation layer focuses on assembling information from a number of different sources – either internal or external to your enterprise. The user of the application couldn’t care less about where the data comes from. So from a threat standpoint you need to assess the presentation code for issues that put devices at risk. But your focus on reducing the attack surface of applications also requires you to pay attention to the infrastructure: the application servers, interfaces, and databases that assemble the data presented by the application. So you scan application servers and databases to find problems. Let’s dig into the two aspects of the application layer to assess: databases and application infrastructure.
Database Layer

Assessing databases is more similar to scanning infrastructure than to scanning applications – you look for vulnerabilities in the DBMS (database management system). As with other infrastructure devices, databases can be misconfigured and might have improper entitlements, all of which pose risks to your environment. So assessment needs to focus on whether appropriate database patches have been installed, the configuration of the database, improper access control, entitlements, etc. Let’s work through the key steps in database assessment:

Discovery: First you need to know where your databases are. That means a discovery process, preferably automated, to find both known and unknown databases. You need to be wary of shadow IT, where lines of business and other groups build their own data stores – perhaps without the operational mojo of your data center group. You should also make sure you are continuously searching for new databases, because they can pop up anywhere, at any time, just like rogue access points – and they do.

Vulnerabilities: You will also look for vulnerabilities in your DBMS platform, which requires up-to-date tests for database issues. Your DB assessment provider should have a research team to keep track of the newest and latest attacks on whatever database platforms you use. Once something is found, information about exposure, workarounds, and remediation is critical for making your job easier.

Configurations: Configuration checking a DBMS is slightly different – you are mostly assessing internals. Be sure to check the database both with credentials (as an authorized user) and without credentials (which more accurately represents a typical outside attacker). Both scenarios are common in database attacks, so make sure your configuration is sufficiently locked down against both of them.
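The automated discovery step described above can be sketched as a simple probe of default listener ports. A real product goes much further (service fingerprinting, non-default ports, credentialed checks, continuous rescanning); the host addresses in the commented usage are hypothetical.

```python
# A minimal sketch of automated database discovery: probe hosts for the
# default listener ports of common DBMS platforms. Illustrative only.
import socket

DB_PORTS = {
    1433: "Microsoft SQL Server",
    1521: "Oracle",
    3306: "MySQL",
    5432: "PostgreSQL",
}

def discover(host, timeout=0.5):
    """Return (port, platform) pairs for database listeners found on a host."""
    found = []
    for port, name in DB_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                found.append((port, name))
    return found

# for host in ["10.0.0.5", "10.0.0.6"]:  # hypothetical address range
#     print(host, discover(host))
```

Even this toy version shows why continuous discovery matters: a shadow MySQL instance answers on 3306 whether or not anyone registered it with the data center group.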
Access Rights and Entitlements: Aside from default accounts and passwords, focus your efforts on making sure no users (neither humans nor applications) have additional entitlements that put the database platform at risk. For example, you need to ensure the credentials of de-provisioned users have been removed, and that accounts which only need read access don’t have the ability to DROP TABLES. And you need to verify that users – especially administrators – cannot ‘backdoor’ the database through local system privileges. Part of this is housekeeping, but you need to pay attention – make sure your databases are configured correctly to avoid unnecessary risk. Finally, we know this research focuses more on vulnerability/threat identification and assessment, but over time you will see even tighter integration between evolved vulnerability/threat management platforms and tactics to remediate problems. We have written a detailed research report on Database Assessment, and you should track our Database Security Platform research closely, so you can shorten your exposure window by catching problems and taking action more quickly.

Application Layer

Application assessment (especially of web applications) is a different animal, mostly because you have to actually ‘attack’ the application to find vulnerabilities, which might exist within the application code or the infrastructure components it is built on. Obviously you need to crawl through the app to find the issues to fix. There are several different types of app security testing (as discussed in Building a Web App Security Program), so we will just summarize here:

Platform Vulnerabilities: This is the stuff we check for when scanning infrastructure and databases. Applications aren’t ‘stand-alone’ – they depend on infrastructure and inherit vulnerabilities from their underlying components.
The clearest example is a content management system, where a web app built on Drupal inherits all the vulnerabilities of Drupal, unless they are somehow patched or worked around.

Static Application Security Testing (SAST): Also called “white box testing”, SAST involves developers analyzing source code to identify coding errors. This is not normally handled by security teams – it is normally part of a secure development lifecycle (SDLC).

Dynamic Application Security Testing (DAST): Also known as “black box testing”, DAST is the attempt to find application defects using bad inputs, fuzzing, and other techniques. This doesn’t require access to the source code, so some security teams get involved in DAST, but it is still largely seen as a development responsibility, because thorough DAST testing can be destructive to the app and so shouldn’t be used on production applications.

Web App Scanners

But the technology most relevant to the evolution of vulnerability management is the web application scanner. Many of the available vulnerability management offerings include an add-on capability to scan applications and their underlying infrastructures to identify


Watching the Watchers: Monitor Privileged Users

As we continue our march through the Privileged User Lifecycle, we have locked down privileged accounts as tightly as needed. But that’s not the whole story, and the lifecycle ends with a traditional audit, because verifying what administrators do with their privileges is just as important as the other steps. Admittedly, some organizations have a significant cultural issue with granular user monitoring, because they actually want to trust their employees. Silly organizations, right? But in this case there is no monitoring slippery slope – we aren’t talking about recording an employee’s personal Facebook interactions or checking out pictures of Grandma. We’re talking about capturing what an administrator has done on a specific device. Before we get into the how of privileged user monitoring, let’s look at why you would monitor admins. There are two main reasons: Forensics: In the event of a breach, you need to know what happened on the device, quickly. A detailed record of what an administrator did on a device can be instrumental in putting the pieces together – especially in the event of an inside job. Of course privileged user monitoring is not a panacea for forensics – there are a zillion other ways to get compromised – but if the breach began with administrator activity, you would have a record of what happened, and the proverbial smoking gun. Audit: Another use is to make your auditor happy. Imagine the difference between showing the auditor a policy saying how you do things, and showing a screen capture of an account being provisioned or a change being committed. Monitoring logs are powerful for showing that the controls are in place. Sold? Good, but how do you move from concept to reality? You have a couple of options, including: SIEM/Log Management: As part of your other compliance efforts, you likely send most events from sensitive devices to a central aggregation point. This SIEM/Log Management work can also be used to monitor privileged users.
By setting up some reports and correlation rules for administrator activity you can effectively figure out what administrators are doing. In fact, this is one of the main use cases for SIEM and log management. Configuration Management: A similar approach is to pull data out of a configuration management platform which tracks changes on managed devices. One difference from the SIEM approach is the ability to go beyond monitoring and actually block unauthorized changes. Screen Capture If a picture is worth a thousand words, how much is a video worth? An advantage of routing your administrative sessions through a proxy is the ability to capture exactly what admins are doing on every device. With a video screen capture of the session and the associated keystrokes, there can be no question of intent – no inference of what actually happened. You’ll know what happened – you just need to watch the playback. For screen capture you can deploy an agent on the managed device or route sessions through a proxy. We started discussing the P-User Lifecycle by focusing on how to restrict access to sensitive devices. After discussing a number of options, we explained why proxies make a lot of sense for making sure only the right administrators access the correct devices at the right times. So it’s appropriate that we come full circle and end our lifecycle discussion in a similar position. Let’s look at performance and scale first. Video is compute intensive, and consumes a tremendous amount of storage. The good news is that an administrative session doesn’t require HD quality to catch a bad apple red-handed, so substantial compression is feasible, saving a significant chunk of storage – whether you capture with an agent or through a proxy. But there is a major difference in device impact between these approaches.
An agent takes resources for screen capture from the managed device, which impacts the server’s performance – probably significantly. With a proxy, the resources are consumed by the proxy server rather than the managed device. The other issue is the security of the video – ensuring there is no tampering with the capture. Either way you can protect the video with secure storage and/or other means of making tampering evident, such as cryptographic hashing. The main question is how you get the video into secure storage. Using an agent, the system needs a secure transport between the device and the storage. Using a proxy approach, the storage can be integrated into (or placed very close to) the proxy device. We believe a proxy-based approach to monitoring privileged users makes the most sense, but there are certainly cases where an agent could suffice. And with that we have completed our journey through the Privileged User Lifecycle, but we aren’t done yet. This “cloud computing” thing threatens to dramatically complicate how all devices are managed, with substantial impact on how privileged users need to be managed. So in the next post we will delve into the impact of the cloud on privileged users.
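To illustrate the SIEM/log management option described above, here is a toy ‘correlation rule’ that flags privileged-account activity outside business hours. The event fields, account names, and the business-hours window are assumptions for the sketch, not any particular SIEM’s schema or policy.

```python
# Toy correlation rule over collected log events: flag commands run by
# privileged accounts outside business hours. Event format is assumed.

from datetime import datetime

PRIVILEGED = {"root", "dba_admin"}  # hypothetical privileged accounts

def out_of_hours_admin_events(events, start_hour=8, end_hour=18):
    """events: dicts with 'user', 'timestamp' (ISO 8601), and 'command'."""
    flagged = []
    for e in events:
        if e["user"] not in PRIVILEGED:
            continue  # only correlate privileged-account activity
        hour = datetime.fromisoformat(e["timestamp"]).hour
        if hour < start_hour or hour >= end_hour:
            flagged.append(e)
    return flagged

events = [
    {"user": "root", "timestamp": "2012-04-06T02:14:00", "command": "useradd eve"},
    {"user": "jsmith", "timestamp": "2012-04-06T02:20:00", "command": "ls"},
    {"user": "dba_admin", "timestamp": "2012-04-06T10:05:00", "command": "backup"},
]
for e in out_of_hours_admin_events(events):
    print(e["user"], e["command"])
```

A real deployment would layer many such rules, and feed them from the same aggregation point you already use for compliance reporting.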


Watching the Watchers: Enforce Entitlements

So far we have described the Restrict Access and Protect Credentials aspects of the Privileged User Lifecycle. At this point any administrator managing a device is authorized to be there and uses strong credentials. But what happens when they get there? Do they get free rein? Should you just give them root or full Administrator rights and be done with it? What could possibly go wrong with that? Clearly you should make sure administrators only perform authorized functions on managed devices. This protects against a couple of scenarios you probably need to worry about: Insider Threat: A privileged user is the ultimate insider, with the skills and knowledge to compromise a system, take what they want, cover their tracks, etc. So it makes sense to provide a bit more specificity over what admins and groups can do, and block them from doing everything else. Separation of Duties: Related to the Insider Threat, you should ideally make sure no one person has the ability to take down your environment. So you can logically separate duties, where one group can manage the servers but not the storage, or one admin can provision a new server but can’t move data onto it. Compromised Endpoints: You also can’t assume any endpoint is free of compromise, so even an authenticated and authorized user may not be who you think they are. You can protect yourself from this scenario by restricting what the administrator can do. So even in the worst case, where an intruder is in your system as an admin, they can’t wreck everything. Smaller organizations may lack the resources to define administrator roles with real granularity, but the more a large enterprise can restrict administrators to particular functions, the harder it becomes for a bad apple to take everything down. Policy Granularity You need to define roles and responsibilities – what administrators can and can’t do – with sufficient granularity.
We won’t go into detail on the process of setting policies, but you will either adopt a whitelist approach – defining legitimate commands and blocking everything else – or a blacklist, blocking specific commands, such as restricting folks in the network admin group from deleting or snapshotting volumes in the data center. Depending on your needs, you could also define far more granular policies, similar to the policy options available for controlling access to the password vault. For example you might specify that a sysadmin can only add user accounts to devices during business hours, but can add and remove volumes at any time. Or you could define the specific types of commands authorized to flow from an application to the back-end database, to prevent unauthorized data dumps. But granularity brings complexity. In a rapidly changing environment it can be hard to truly nail down a legitimate set of allowable actions for specific administrators. So getting too granular is a problem too – similar to the issues with application whitelisting. And the higher up the application stack you go, the more integration is required, as homegrown and highly customized applications need to be manually integrated into the privileged user management system. Location, Location, Location As much fun as it is to sit around and set up policies, the reality is that nothing is protected until the entitlements are enforced. There are two main approaches to enforcing entitlements. The first implements a proxy between the admin and the system, which acts as a man in the middle to interpret and then either allow or block each command. Alternatively, entitlements can be enforced on the end devices via agents that intercept commands and enforce policy locally. We aren’t religious about either approach, and each has pros and cons.
The proxy implementation is simpler – you don’t need to install agents on every device, so you don’t have to worry about OS compatibility (as long as the command syntax remains consistent) or deal with incompatibilities every time an underlying OS is updated. Another advantage is that unauthorized commands are blocked before reaching the managed device, so even if an attacker has elevated privileges, management commands can only come through the proxy. On the other hand the proxy serves as a choke point, which may introduce a single point of failure. An agent-based approach, in contrast, offers advantages such as preventing attackers from back-dooring devices by defeating the proxy or gaining physical access to the devices. The agent runs on each device, so even being at the keyboard doesn’t kill it. But agents require management, and consume processing resources on the managed systems. Pick the approach that makes the most sense for your environment, culture, and operational capabilities. At this point in the lifecycle privileged users should be pretty well locked down. But as a card-carrying security professional you don’t trust anything. Keep an eye on exactly what the admins are doing – we will cover privileged user monitoring next.
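A rough sketch of how a whitelist-style policy check might look, whether enforced at a proxy or by a local agent. The roles, commands, and business-hours window are made up for illustration; the key property is that anything not explicitly listed is blocked.

```python
# Hypothetical whitelist entitlement check: each role gets an allowed
# command set, optionally restricted to business hours (9am-5pm here).
# Roles, commands, and hours are illustrative assumptions.

from datetime import datetime

POLICY = {
    "sysadmin": {
        "useradd":       {"business_hours_only": True},
        "volume_add":    {"business_hours_only": False},
        "volume_remove": {"business_hours_only": False},
    },
    "netadmin": {
        "show_config":   {"business_hours_only": False},
    },
}

def is_allowed(role, command, when):
    rule = POLICY.get(role, {}).get(command)
    if rule is None:
        return False  # whitelist: anything not listed is blocked
    if rule["business_hours_only"] and not (9 <= when.hour < 17):
        return False  # right command, wrong time
    return True

print(is_allowed("sysadmin", "useradd", datetime(2012, 4, 6, 22, 0)))     # blocked: after hours
print(is_allowed("sysadmin", "volume_add", datetime(2012, 4, 6, 22, 0)))  # allowed any time
print(is_allowed("netadmin", "volume_remove", datetime(2012, 4, 6, 10, 0)))  # blocked: not whitelisted
```

Note how even this tiny policy table shows the complexity trade-off discussed above: every new role or command needs an explicit entry, or it gets blocked.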


Vulnerability Management Evolution: Scanning the Infrastructure

As we discussed in the Vulnerability Management Evolution introduction, traditional vulnerability scanners, focused purely on infrastructure devices, do not provide enough context to help organizations prioritize their efforts. Those traditional scanners are the plumbing of threat management. You don’t appreciate the scanner until your proverbial toilet is overflowing with attackers and you have no idea what they are targeting. We will spend most of this series on the case for transcending device scanning, but infrastructure scanning remains a core component of any evolved threat management platform. So let’s look at some key aspects of a traditional scanner. Core Features As a mature technology, pretty much all the commercial scanners have a core set of functions that work well. Of course different scanners have different strengths and weaknesses, but for the most part they all do the following: Discovery: You can’t protect something (or know it’s vulnerable) if you don’t know it exists. So the first key feature is discovery. Surprise is the enemy of the security professional, so you want to learn about new devices as quickly as possible, including rogue wireless access points and other mobile devices. Given the need to perform discovery continuously, passive scanning and/or network flow analysis can be a useful complement to active device discovery. Device/Protocol Support: Once you have found a device, you need to figure out its security posture. Compliance demands that we scan all devices with access to private/sensitive/protected data, so any scanner should assess the variety of network and security devices running in your environment, as well as servers on all relevant operating systems. Of course databases and applications are important too, but we’ll discuss those later in this series. And be careful scanning brittle systems like SCADA – knocking down production devices doesn’t make any friends in the Ops group.
Inside/Out and Outside/In: You can’t assume adversaries are only external or only internal, so you need the ability to assess your devices from both inside and outside your network. Some kind of scanner appliance (which could be virtualized) is needed to scan the innards of your environment. You’ll also want to monitor your IP space from the outside to identify new Internet-facing devices, find open ports, etc. Accuracy: Unless you enjoy wild goose chases, you’ll come to appreciate a scanner that minimizes false positives by focusing on accuracy. Accessible Vulnerability Information: With every vulnerability found, decisions must be made about the severity of the issue, so it’s very helpful to have information on the vulnerability from the vendor’s research team or other third parties, directly within the scanning console. Appropriate Scale: Adding capabilities to the evolved platform makes scale a much more serious issue. But first things first: the scanner must be able to scan your environment quickly and effectively, whether that means 200 or 200,000 devices. The point is to ensure the scanner is extensible to what you’ll need as you add devices, databases, apps, virtual instances, etc. over time. We will discuss platform technical architectures later in this series, but for now suffice it to say there will be a lot more data in the vulnerability management platform, and the underlying architecture needs to keep up. New & Updated Tests: Organizations face new attacks constantly, and existing attacks evolve, so your scanner needs to stay current to test for the latest attacks. Exploit code based on patches and public vulnerability disclosures typically appears within a day, so time is of the essence. Expect your platform provider to make significant investments in research to track new vulnerabilities, attacks, and exploits.
Scanners need to be updated almost daily, so you will need the ability to update them transparently with new tests – whether they run on-premises or in the cloud. Additional Capabilities But that’s not all. Today’s infrastructure scanners also offer value-added functions that have become increasingly critical. These include: Configuration Assessment: There really shouldn’t be a distinction between scanning for a vulnerability and checking for a bad configuration. Either situation provides an opportunity for device compromise. For example, a patched firewall with an any-to-any policy doesn’t protect much – completely aside from any vulnerability defects. But unfortunately the industry’s focus on vulnerabilities means this capability is usually considered a scanner add-on. Over time these distinctions will fade away, as we expect both vulnerability scanning and configuration assessment to emerge as critical components of the platform. Further evolution will add the ability to monitor system files for changes and integrity – it is the same underlying technology. Patch Validation: As we described in Patch Management Quant, validating patches is an integral part of the process. With some strategic integration between patch and configuration management, the threat management platform can (and should) verify installed patches to confirm that each vulnerability has been remediated. Further integration involves sending information to and from IT Ops systems to close the loop between security and Operations. Cloud/Virtualization Support: With the increasing adoption of virtualization in data centers, you need to factor in the rapid addition and removal of virtual machines. This means not only assessing hypervisors as part of your attack surface, but also integrating information from the virtualization management console (vCenter, etc.) to discover which devices are in use and which are not.
You’ll also want to verify the information coming from the virtualization console – you learned not to trust anything in security pre-school, didn’t you? Leveraging Collection So how do all these capabilities differ from what you already have? It’s all about making 1 + 1 = 3 by integrating data to derive information and drive priorities. We have seen some value-added capabilities (configuration assessment, patch validation, etc.) integrated into infrastructure scanners to good effect. This positions the vulnerability/threat management platform as another source of intelligence for security professionals. And we are only getting started – there are plenty of other data types to incorporate into this discussion. Next we will climb the proverbial stack and evaluate how database and application scanning play into the evolved platform story.
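As a toy illustration of the active discovery discussed earlier, the following probes a handful of TCP ports on a single host. Real scanners add service fingerprinting, passive discovery, throttling, and much more; the host and port list here are placeholders.

```python
# Minimal active discovery sketch: attempt TCP connections to a few
# common ports and report which accepted. Host/ports are placeholders.

import socket

def probe(host, ports, timeout=0.5):
    open_ports = []
    for port in ports:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)  # keep scans fast; brittle hosts hate long probes
        try:
            # connect_ex returns 0 on success instead of raising
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
        finally:
            s.close()
    return open_ports

# Example: probe the local host for a couple of common service ports.
print(probe("127.0.0.1", [22, 80, 443]))
```

Even this trivial version shows why accuracy and scale matter: multiply a half-second timeout by thousands of ports and devices, and naive scanning quickly becomes impractical without parallelism and smarter target selection.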


Friday Summary: April 6, 2012

Rich here… Normally I like to open the Summary with a bit of something from my personal life. Some sort of anecdote with a message. In other words, I blatantly ripped off Mike’s format for the Security Incite… long before he took over half the company. (With Mike, even a partnership can probably be defined as a hostile takeover, based solely on his gruff voice and honesty of opinion). Heck, I can’t even remember any good anecdotes from the CCSK cloud security class Adrian and I taught last week in San Jose. Even when we hooked up with Richard Baker and our own James Arlen for dinner, I think half the conversation was about my and Jamie’s recent family trips to dinner. And that stripmall Thai place is probably better than the fanciest one here in Phoenix. I don’t even have any good workout anecdotes. I’m back on the triathlon wagon and chugging along. Although I did get a really cool new heart rate monitor/GPS that I’m totally in love with. (The Garmin 910XT, which is friggin’ amazing). I probably need to pick a race to prep for, but am otherwise enjoying being healthy and relatively uninjured, and not getting run over by cars on my bike rides. The kids are still cute and the older one is finally getting addicted to the iPad (which I encourage, although it is making normal computers really frustrating for her to use). They talk a lot, are growing too fast, and are far more interesting than anything else in my life. But nope, no major life lessons in the past few weeks that I can remember. Although there are some clear analogies between having kids and advanced persistent threats. Especially if you have daughters. And work? The only lesson there is to be careful what you wish for, as I fail, on a daily basis, to keep up with my inbox. Never mind my actual projects. But business is good, some very cool research is on the way, and it’s nice to have a paycheck. And I swear the Nexus isn’t vaporware.
It’s actually all torn apart as we hammer in a ton of updates based on the initial beta feedback. In other words… life doesn’t suck. I actually enjoy it, and am amazed I get to write this on my iPad while sitting outside in perfect weather at a local restaurant. Besides, this is a security blog – if you’re reading it for life messages you need to get out more. On to the Summary: Webcasts, Podcasts, Outside Writing, and Conferences Rich quoted by Ars Technica on iCloud privacy and security. Rich, again over at Ars, but this time on iPhone forensics. Favorite Securosis Posts Adrian Lane: iOS Data Security: Managed Devices. Both the post and the banter are quality. Mike Rothman: Defining Your iOS Data Security Strategy. Really liked this series by Rich. Great work and very timely. BYOD and other mobile security issues are the #1 concern of the folks I’m talking to during my travels. Rich: Vulnerability Management Evolution: Scanning the Infrastructure. Yes, we still have to deal with this stuff in 2012. Other Securosis Posts Incite 4/4/2012: Travel the Barbarian. Watching the Watchers: Protect Credentials. Vulnerability Management Evolution: Introduction. iOS Data Security: Securing Data on Partially-Managed Devices. Understanding and Selecting DSP: Core Features. Understanding and Selecting DSP: Extended Features. Favorite Outside Posts Adrian Lane: Hash Length Extension Attacks. Injection attack on MAC check. Interesting. Mike Rothman: Choosing Between Making Money and Doing What You Love. The answer? Both. Even if you can’t make your passion a full time gig, working at it a little every day seems to make folks happy. Good to know. Dave Lewis: Too many passwords? Just one does the trick. Rich: DNS Changer. Possibly the most important thing you’ll read this year. Research Reports and Presentations Network-Based Malware Detection: Filling the Gaps of AV. Tokenization Guidance Analysis: Jan 2012. Applied Network Security Analysis: Moving from Data to Information. 
Tokenization Guidance. Security Management 2.0: Time to Replace Your SIEM? Fact-Based Network Security: Metrics and the Pursuit of Prioritization. Tokenization vs. Encryption: Options for Compliance. Top News and Posts VMware High-Bandwidth Backdoor ROM Overwrite Privilege Elevation. Wig Wam Bam. Citrix and CloudStack: Citrix intends to join and contribute to the Apache Software Foundation. This isn’t security specific, but it is big. Global Payments: Rumor and Innuendo. GPN is saying there was no POS or merchant account hacking, so this was a breach of their systems. Flashback Trojan Compromises Macs. Dear FBI, Who Lost $1 Billion? Oh my goodness, does Adam nail it with this one. Major VMware vulnerability. Incredible research here. An only semi-blatant advertisement for our friend Mr. Mortman at EnStratus. ZeuS botnet targets USAirways passengers. (No, not while they’re on the plane… yet). Blog Comment of the Week Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Ryan, in response to iOS Data Security: Managed Devices. Is it nicer to say “captive network” or “traffic backhauling”? That said, nice post, and definitely part of a strategy I’ve seen work, although the example that leaps to mind is actually a security products company


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.