Securosis

Research

Watching the Watchers: Protect Credentials

As we continue our march through the Privileged User Lifecycle, we have provisioned the privileged users and restricted access to only the devices they are authorized to manage. The next risk to address is the keys or credentials of these privileged users (P-Users) falling into the wrong hands. The best access and entitlements security controls fail if someone can impersonate a P-User. But the worst risk isn’t even compromised credentials. It’s not having unique credentials in the first place. You must have seen the old admin password sharing scheme, right? It was used, mostly out of necessity, many moons ago. Administrators needed access to the devices they managed. But at times they needed help, so they asked a buddy to take care of something, and just gave him/her the credentials. What could possibly go wrong?

We covered a lot of that in the Keys to the Kingdom. Shared administrative credentials open Pandora’s box. Once the credentials are in circulation you can’t get them back – which is a problem when an admin leaves the company or no longer has those particular privileges. You can’t deprovision shared credentials, so you need to change them. PCI, as the low bar for security (just ask Global Payments), recognizes the issues with sharing IDs, so Requirement 8 is all about making sure anyone with access to protected data uses a unique ID, and that their use is audited – so you can attribute every action to a particular user.

But that’s not all! (in my best infomercial voice) What about the fact that some endpoints could be compromised? Even administrative endpoints. So sending admin credentials to such an endpoint might not be safe. And what happens when developers hard-code credentials into an application? Why go through the hassle of secure coding – just embed the password right into the application! That password never changes anyway, so what’s the risk? So we need to protect credentials as much as whatever they control.
Credential Lockdown

How can we protect these credentials? Locking the credentials away in a vault meets many of the requirements described above. First, if the credentials are stored in a vault, it is harder for admins to share them. Let’s not put the cart before the horse, but this makes it pretty easy (and transparent) to change the password after every access, eliminating the sticky-note-under-keyboard risk. Going through the vault for every administrative credential access means you have an audit trail of who used which credentials (and presumably which specific devices they were managing) and when. That kind of stuff makes auditors happy. Depending on the deployment of the vault, the administrator may never even see the credentials, as they can be automatically entered on the server if you use a proxy approach to restricting access. This also provides single sign-on to all managed devices, as the administrator authenticates (presumably using multiple factors) to the proxy, which interfaces directly with the vault – again, transparently to the user. So even an administrative device teeming with malware cannot expose critical credentials. Similarly, an application can make a call to the vault, rather than hard-coding credentials into the app. Yes, the credentials still end up on the application server, but that’s still much better than hard-coding the password. So are you sold yet? If you worry about credentials being accessed and misused, a password vault provides a good mechanism for protecting them.

Define Policies

As with most things in security, using a vault involves both technology and process. We will tackle the process first, because without a good process even the best technology has no chance. So before you implement anything you need to define the rules of (credential) engagement. You need to answer some questions. Which systems and devices need to be involved in the password management system?
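The core vault mechanics – check-out, check-in, rotation after every access, and an audit trail – can be sketched in a few lines. This is a toy illustration of the concept, not any vendor’s implementation; all names here are invented:

```python
import secrets
import string
from datetime import datetime, timezone

class CredentialVault:
    """Toy credential vault: check-out/check-in with rotation and an audit trail."""

    def __init__(self):
        self._passwords = {}    # target -> current password
        self._checked_out = {}  # target -> admin currently holding the credential
        self.audit_log = []     # (timestamp, admin, target, action)

    def register(self, target):
        self._passwords[target] = self._generate()

    def _generate(self, length=24):
        alphabet = string.ascii_letters + string.digits
        return "".join(secrets.choice(alphabet) for _ in range(length))

    def _audit(self, admin, target, action):
        self.audit_log.append((datetime.now(timezone.utc), admin, target, action))

    def check_out(self, admin, target):
        # One holder at a time; every access is attributed to a unique admin.
        if self._checked_out.get(target):
            raise RuntimeError(f"{target} is already checked out")
        self._checked_out[target] = admin
        self._audit(admin, target, "check_out")
        return self._passwords[target]

    def check_in(self, admin, target):
        if self._checked_out.get(target) != admin:
            raise RuntimeError(f"{admin} does not hold {target}")
        del self._checked_out[target]
        # Rotate after every use – the shared sticky-note password dies here.
        self._passwords[target] = self._generate()
        self._audit(admin, target, "check_in_and_rotate")
```

Because the password is rotated at check-in, whatever an admin saw (or a compromised endpoint captured) is worthless after the session ends, and the audit log ties every credential use to a named person.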
This may involve servers (physical and/or virtual), network & security devices, infrastructure services (DNS, Directory, mail, etc.), databases, and/or applications. Ideally your vault will natively support most of your targets, but broad protection is likely to require some integration work on your end. So make sure any solution you look at has some kind of API to facilitate this integration.

How does each target use the vault?

Then you need to decide who (likely by group) can access each target, how long they are allowed to use the credentials (and manage the device), and whether they need to present additional authentication factors to access the device. You’ll also define whether multiple administrators can access managed devices simultaneously and whether to change the password after each check-in/check-out cycle. Finally, you may need to support external administrators (for third-party management or business partner integration), so keep that in mind as you work through these decisions.

What kind of administrator experience makes sense?

Then you need to figure out the P-User interaction with the system. Will it be via a proxy login, where the user never sees the credentials, or will there be a secure agent on the device to receive and protect the credential? Figure out how the vault supports application-to-database and application-to-application interaction, as those are different than supporting human admins. You’ll also want to specify which activities are audited and how long audit logs are kept.

Securing the Vault

If you are putting the keys to the kingdom in this vault, make sure it’s secure. You probably will not bring a product in and set your application pen-test ninjas loose on it, so you are more likely to rely on what we call the sniff test. Ask questions to see whether the vendor has done their homework to protect the vault. You should understand the security architecture of the vault.
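Stepping back to the policy questions above – who can access each target, for how long, with which additional factors, and whether external admins are allowed – the answers ultimately become machine-readable policy. A hypothetical sketch, with all group names and fields invented for illustration:

```python
# Invented policy table: each target group maps to the answers from the
# "rules of (credential) engagement" questions.
POLICIES = {
    "prod-db-servers": {
        "allowed_groups": ["dba"],
        "max_session_minutes": 60,
        "require_mfa": True,
        "allow_concurrent_sessions": False,
        "rotate_on_checkin": True,
        "external_admins_allowed": False,
    },
    "branch-routers": {
        "allowed_groups": ["netops", "msp-partner"],  # third-party management
        "max_session_minutes": 30,
        "require_mfa": True,
        "allow_concurrent_sessions": True,
        "rotate_on_checkin": True,
        "external_admins_allowed": True,
    },
}

def may_access(target_group, admin_groups, mfa_passed):
    """Return True if an admin in admin_groups may check out credentials."""
    policy = POLICIES[target_group]
    if policy["require_mfa"] and not mfa_passed:
        return False
    return any(g in policy["allowed_groups"] for g in admin_groups)
```

Writing the answers down in this form – whatever format your vault actually uses – is what turns the process discussion into something the technology can enforce.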
Yes, you may have to sign a non-disclosure agreement to see the details, but it’s worth it. You need to know how they protect things. Discuss the threat model(s) the vendor uses to implement that security architecture. Make sure they didn’t miss any obvious attack vectors. You also need to poke around their development process a bit and make sure they have a proper SDLC and actually test for security defects before

Share:
Read Post

Vulnerability Management Evolution: Introduction

Back when The Pragmatic CSO was published in 2007, I put together a set of tips for being a better CISO. In fact you can still get the tips (sent one per day for five days) if you register on the Pragmatic CSO site. Not to steal any thunder, but Tip #2 is Prioritize Fiercely. Let’s take a look at what I wrote back then. Tip #2 is all about the need to prioritize. The fact is you can’t get everything done. Not by a long shot. So you have a choice. You can just not get to things and hope you don’t end up overly exposed. Or you can think about what’s important to your business and act to protect those systems first. Which do you think is the better approach? The fact is that any exposure can create problems. But you dramatically reduce the odds of a career-limiting incident if you focus most of your time on the highest profile systems. Maybe it’s not good old Pareto’s 80/20 rule, but you should be spending the bulk of your time focused on the systems that are most important to your business. Or hope the bad guys don’t know which is which.

Five years later that tip still makes perfect sense. No organization, including the biggest of the big, has enough resources. Which means you must make tough choices. Things won’t be done when they need to be. Some things won’t get done at all. So how do you choose? Unfortunately most organizations don’t choose at all. They do whatever is next on the list, without much rhyme or reason determining where things land on it. It’s the path of least resistance for a tactically oriented environment. Oil the squeakiest wheel. Keep your job. It’s all very understandable, but not very effective. Optimally, resources are allocated and priorities set based upon value to the business. In a security context, that means the next thing done should reduce the most risk to your organization. Of course calculating that risk is where things get sticky.
Regardless of your specific risk quantification religion, we can all agree that you need data to accurately evaluate these risks and answer the prioritization question. Last year we did a project called Fact-Based Network Security: Metrics and the Pursuit of Prioritization which dealt with one aspect of this problem: how to make decisions based on network metrics. But the issue is bigger than that. Network exposure is only one factor in the decision-making process. You need to factor in a lot of other data – including vulnerability scans, device configurations, attack paths, application and database posture, security intelligence, benchmarks, and lots of other stuff – to get a full view of the environment, evaluate the risk, and make appropriate prioritization decisions. Historically, vulnerability scanners have provided a piece of that data, telling you which devices were vulnerable to what attacks. The scanners didn’t tell you whether the devices were really at risk – only whether they were vulnerable.

From Tactical to Strategic

Organizations have traditionally viewed vulnerability scanners as a tactical product, largely commoditized, and only providing value around audit time. How useful is a 100-page vulnerability report to an operations person trying to figure out what to fix next? Though the 100-page report did make the auditor smile, as it provided a nice listing of all the audit deficiencies to address in the findings of fact. At the recent RSA Conference 2012, we definitely saw a shift from largely compliance-driven messaging to a more security-centric view. It’s widely acknowledged that compliance provides a low (okay – very low) bar for security, and it just isn’t high enough. So more strategic security organizations need better optics. They need the ability to pull in a lot of threat-related data, cross-reference it with an understanding of what is vulnerable, and figure out what is at risk.
Yesterday’s vulnerability scanners are evolving to meet this need, and are emerging as a much more strategic component of an organization’s control set than in the past. So we are starting a new series to tackle this evolution – we call it Vulnerability Management Evolution. As with last year’s SIEM Replacement research, we believe it is now time to revisit your threat management/vulnerability scanning strategy. Not necessarily to swap out products, services, or vendors, but to ensure your capabilities map to what you need now and in the future. We will start by covering the traditional scanning technologies and then quickly go on to some advanced capabilities you will need to start leveraging these platforms for decision support. Yes, decision support is the fancy term for helping you prioritize.

Platform Emergence

As we’ve discussed, you need more than just a set of tactical scans to generate a huge list of things you’ll never get to. You need information that helps you decide how to allocate resources and prioritize efforts. We believe what used to be called a “vulnerability scanner” is evolving into a threat management platform. Sounds spiffy, eh? When someone says platform, that usually indicates use of a common data model as the foundation, with a number of different applications riding on top, to deliver value to customers. You don’t buy a platform per se. You buy applications that leverage a platform to provide value to solve the problems you have. That’s exactly what we are talking about here. But traditional scanning technology isn’t a platform in any sense of the word. So this vulnerability management evolution requires a definite technology evolution. We are talking about growth from a single-purpose product into a multi-function platform. This evolved platform encompasses a number of different capabilities, starting with the tried and true device scanner and extending to database and application scanning and risk scoring.
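To make “decision support” concrete: instead of ranking findings by raw scanner severity alone, an evolved platform weighs severity against asset value and threat intelligence. The weighting scheme below is invented purely for illustration – real products use their own models:

```python
# Hypothetical prioritization: weight raw severity by the business value of
# the asset and by whether a working exploit is circulating in the wild.
def risk_score(finding):
    score = finding["cvss"]            # raw severity from the scanner (0-10)
    score *= finding["asset_value"]    # business-value multiplier
    if finding["exploit_available"]:   # threat intelligence input
        score *= 2
    return score

findings = [
    {"host": "lab-test-box", "cvss": 9.8, "asset_value": 0.2, "exploit_available": False},
    {"host": "billing-db",   "cvss": 6.5, "asset_value": 3.0, "exploit_available": True},
]

# The lower-severity finding on the critical asset jumps to the top of the list.
prioritized = sorted(findings, key=risk_score, reverse=True)
```

The point is the shape of the calculation, not the numbers: a 9.8 on a throwaway lab box should not outrank a 6.5 with a live exploit on the billing database.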
But we don’t want to spoil the fun today – we will describe not just the core technology that enables the platform, but also the critical enterprise integration points and bundled value-added technologies (such as attack path analysis, automated pen testing, benchmarking, et al.) that differentiate a tactical product decision from a strategic platform deployment. We will also talk about the enterprise features you need from a platform, including

Share:
Read Post

Incite 3/28/2012: Gone Tomorrow

A recent Tweet from Shack was pretty jarring. Old friend from college died today. Got some insane rare lung disease out of nowhere, destroyed them. Terrifying. 37 years old. :/ Here today. Gone tomorrow.

It’s been a while since I have ranted about the importance of enjoying (most) every day. About spending time with the people who matter to you. People who make you better, not break you down. Working at something you like, not something you tolerate. Basically making the most of each day, which most of us don’t do very well. Myself included. This requires a change in perspective. Enjoying not just the good days but also the bad ones. I know the idea of enjoying a bad day sounds weird. It’s kind of like sales. Great sales folks have convinced themselves that every no is one step closer to a yes. Are they right? Inevitably, at some point they will sell something to someone, so they are in fact closer to a ‘yes’ with every ‘no’. So a bad day means you are closer to a good day. That little change in perspective can have a huge impact on your morale.

The challenge is that you have to live through bad days to appreciate good days. It takes a few cycles through the ebbs and flows to realize that this too shall pass. Whatever it is. It’s hard to have that patience when you are young. Everything is magnified. The highs are really high. And the lows, well, you know. You tend to remember the lows a lot longer than the highs. So a decade passes and you wonder what happened? You question all the time you wasted. The decisions you made. The decisions you didn’t. How did you turn 30? Where did the time go? The time is gone. And it gets worse. My 30s were a blur. 3 kids. Multiple jobs. A relocation. I was so busy chasing things I didn’t have, I forgot to enjoy the things I did. I’m only now starting to appreciate the path I’m on. To realize I needed the hard times. And to enjoy the small victories and have a short memory about the minor defeats.
I was a guest speaker at Kennesaw State yesterday, talking to a bunch of students studying security. There were some older folks there. You know, like 30. But mostly I saw kids, just starting out. I didn’t spend a lot of time talking about perspective because kids don’t appreciate experience. They still think they know it all. Most kids anyway. These kids need to screw up a lot of things. And soon. They need to get on with bungling anything and everything. I didn’t say that, but I should have. Because actually all these kids have is time. Time to gain the experience they’ll need to realize they don’t know everything. Dave’s college friend doesn’t have any more time. He’s gone. If you are reading this you are not. Enjoy today, even if it’s a crappy day. Because the crappy days make you appreciate the good days to come. –Mike

Photo credits: “Free Beer Tomorrow Neon Sign” originally uploaded by Lore SR

Heavy Research

We’re back at work on a variety of our blog series. So here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can access all of our content in its unabridged glory.

Defending iOS Data: Securing Data on Partially-Managed Devices
Watching the Watchers (Privileged User Management): The Privileged User Lifecycle; Restrict Access
Understanding and Selecting DSP: Technical Architecture

Incite 4 U

This sounds strangely familiar… It seems our friend Richard Bejtlich spent some time on Capitol Hill recently, and had a Groundhog Day experience. You know, the new regime asking him questions he answered back in 2007. Like politicians are going to remember anything from 2007. Ha! They can’t even remember their campaign promises from two years ago (yup, I’ll be here all week). So he went back into the archives to remind everyone what he’s been saying for years. You know, reduce attack surface by identifying all egress points and figuring out which ones need to be protected.
And monitor both those egress paths and allegedly friendly networks. Though I think over the past 5 years we have learned that no networks are friendly. Not for long, anyway. Finally, Richard also recommended a Federal I/R team be established. All novel ideas. None really implemented. But on the good news front, the US Government spends a lot of money each year on security products. – MR

Perverse economics: I’m going to go out on a limb and make a statement about vulnerability disclosure. After years of watching, and sometimes participating in, the debate, I finally think I have the answer. There is only one kind of responsible disclosure, and the economics are so screwed up that it might as well be a cruddy plot device in a bad science fiction novel. Researchers should disclose vulnerabilities privately to vendors. Vendors are then responsible for creating timely patches. Users are then responsible for patching their systems within a reasonable period. Pretty much anything else screws at minimum users, and likely plenty of other folks. (And this doesn’t apply if something is already in the wild.) But as Dennis Fisher highlights, the real world never works that way. Today it’s more economically viable for researchers to sell their exploits to governments, which will use them against some other country, if not their own citizens. It’s more economically viable for vendors to keep vulnerabilities quiet so they don’t have to patch. And users? Well, no one seems to care much about them, but scrambling to patch sure isn’t in their economic interest. It seems ‘responsible’ means ‘altruistic’, and we all know where human nature takes us from there. – RM

Scoring credit: Hackers have been stealing credit reports and financial data from – where else? – credit scoring agencies and selling the data to the highest bidder. Shocking, I know. Seems they are abusing the sooper-secure credit score user validation system; asking “which bank holds

Share:
Read Post

iOS Data Security: Securing Data on Partially-Managed Devices

Our last two posts covered iOS data security options on unmanaged devices; now it’s time to discuss partially managed devices. Our definition is: Devices that use a configuration profile or Exchange ActiveSync policies to manage certain settings, but the user is otherwise still in control of the device. The device is the user’s, but they agree to some level of corporate management.

The following policies are typically deployed onto partially-managed devices via Exchange ActiveSync:

Enforce passcode lock.
Disable simple passcode.
Enable remote wipe.

This, in turn, enables Data Protection on supporting hardware (including all models currently for sale). In addition, you can also add the following using iOS configuration profiles – which can also enforce all the previous policies except remote wiping, unless you also use a remote wipe server tool:

On-demand VPN for specific domains (not all traffic, but all enterprise traffic).
Manual VPN for access to corporate resources.
Digital certificates for access to corporate resources (VPN or SSL).
Installation of custom enterprise applications.
Automatic wipe on failed passcode attempts (the number of attempts can be specified, unlike the user setting in the Settings app, which is simply ON/OFF for wipe after 10 failures).

The key differences between partially and fully managed devices are a) the user can still install arbitrary applications and make settings changes, and b) not all traffic is routed through a mandatory full-time VPN. One key point of administering managed policies on a user-owned device is to ensure that you obtain the user’s consent and notify them of what will happen. The user should sign a document saying they understand that although they own the device, by accessing corporate resources they are allowing management, which may include remote wiping a lost or stolen device. And that the user is responsible for their own backups of personal data.
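For reference, the passcode policies above correspond to keys in a configuration profile payload. Here is a stripped-down, illustrative fragment of a .mobileconfig passcode payload – identifiers are placeholders, and you should consult Apple’s Configuration Profile Reference for the full schema:

```xml
<!-- Illustrative passcode payload fragment; PayloadIdentifier is a placeholder. -->
<dict>
    <key>PayloadType</key>
    <string>com.apple.mobiledevice.passwordpolicy</string>
    <key>PayloadIdentifier</key>
    <string>com.example.passcodepolicy</string>
    <key>PayloadVersion</key>
    <integer>1</integer>
    <key>forcePIN</key>
    <true/>              <!-- enforce passcode lock -->
    <key>allowSimple</key>
    <false/>             <!-- disable simple passcode -->
    <key>maxFailedAttempts</key>
    <integer>8</integer> <!-- wipe after 8 failed attempts -->
</dict>
```

Note that maxFailedAttempts illustrates the difference called out above: the profile specifies the exact number of attempts, while the user-facing setting is a simple on/off for wipe after 10 failures.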
Enhanced security for existing options

Most of the previous options we have discussed are significantly enhanced when digital certificate, passcode, and Data Protection policies are enforced. This is especially true of all the sandboxed app options – and, in fact, many vendors in those categories generally don’t support use of their tools without a configuration profile to require at least a passcode.

Managed Exchange ActiveSync (or equivalent)

Microsoft’s ActiveSync protocol, despite its name, is separate from the Exchange mail server and included with alternate products, including some that compete with Exchange. iOS natively supports it, so it is the backbone for managed email on iDevices when a sandboxed messaging app isn’t used. By setting the policies listed above, all email is encrypted under the user’s passcode using Data Protection. Other content is not protected, but remote wipe is supported.

Custom enterprise sandboxed application

Now that you can install an enterprise digital certificate onto the device and guarantee Data Protection is active, you can also deploy custom enterprise applications that leverage this built-in encryption. This option allows you to use the built-in iOS document viewer within your application’s sandbox, which enables you to fairly easily deploy a custom application that provides fully sandboxed and encrypted access to enterprise documents. Combine it with an on-demand VPN tied to the domain name of the server or a manual VPN, and you have data encrypted both in transit and in storage. Today a few vendors provide toolkits to build this sort of application. Some are adding document annotation for PDF files, and based on recent announcements we expect to see full editing capabilities also added for MS Office document formats.

Share:
Read Post

Watching the Watchers: Restrict Access

As we discussed in the Privileged User Lifecycle post, there are a number of aspects to Watching the Watchers. Our first today is Restricting Access. This comes first mostly because it reduces your attack surface. We want controls to ensure administrators only access devices they are authorized to manage. There are a few ways to handle restriction:

Device-centricity (Status Quo): Far too many organizations rely on their existing controls, which include authentication and other server-based access control mechanisms.
Network-based Isolation: Tried and true network segmentation approaches enable you to isolate devices (typically by group) and only allow authorized administrators access to the networks on which they live.
PUM Proxy: This entails routing all management communications through a privileged user management proxy server or service which enforces access policies. The devices only accept management connections from the proxy server, and do not allow direct management access.

There are benefits and issues to each approach, so ultimately you’ll be making some kind of compromise. So let’s dig into each approach and highlight what’s good and what’s not so good.

Device-centricity (Status Quo)

There are really two levels of status quo; the first is common authentication, which we understand in this context is not really “restricting access” effectively. Obviously you could do a bit to make the authentication more difficult, including strong passwords and/or multi-factor authentication. You would also integrate with an existing identity management platform (IDM) to keep entitlements current. But ultimately you are relying on credentials as a way to keep unauthorized folks from managing your critical devices. And basic credentials can be defeated. Many other organizations use server access control capabilities, which are fairly mature. This involves loading an agent onto each managed device and enforcing the access policy on the device.
The agent-based approach offers rather solid security – the risk becomes compromise of the (security) agent. Of course there is management overhead to distribute and manage the agents, as well as the additional computational load imposed by the agent. But any device-based approach is in opposition to one of our core philosophies: “If you can’t see it, it’s much harder to compromise.” Device-centric access approaches don’t affect visibility at all. This is suboptimal because in the real world new vulnerabilities appear every month on all operating systems – and many of them can be exploited via zero-day attacks. And those attacks provide a “back door” into servers, giving attackers control without requiring legitimate credentials – regardless of agentry on the device. So any device-based method fails if the device is rooted somehow.

Network Segmentation

This entails using network-layer technologies such as virtual LANs (VLANs) and network access control (NAC) to isolate devices and restrict access based on who can connect to specific protected networks. The good news is that many organizations (especially those subject to PCI) have already implemented some level of segmentation. It’s just a matter of building another enclave, or trust zone, for each group of servers to protect. As mentioned, it’s much harder to break something you can’t see. Segmentation requires the attacker to know exactly what they are looking for and where it resides, and to have a mechanism for gaining access to the protected segment. Of course this is possible – there have been ways to defeat VLANs for years – but vendors have closed most of the very easy loopholes. More problematic to us is that this relies on the networking operations team. Managing entitlements and keeping devices on the proper segment in a dynamic environment, such as your data center, can be challenging.
It is definitely possible, but it’s also difficult, and it puts direct responsibility for access restriction in the hands of the network ops team. That can and does work for some organizations, but organizationally this is complicated and somewhat fragile. The other serious complication for this approach is cloud computing – including both private and public clouds. The cloud is key and everybody is jumping on the bandwagon, but unfortunately it largely removes visibility at the physical layer. If you don’t really know where specific instances are running, this approach becomes difficult or completely unworkable. We will discuss this in detail later in the series, when we discuss the cloud in general.

PUM Proxy

This approach routes all management traffic through a proxy server. Administrators authenticate to the PUM proxy, presumably using strong authentication. The authenticated administrator gets a view of the devices they can manage, and establishes a management session directly to the device. Another possible layer of security involves loading a lightweight agent on every managed device to handle the handshake & mutual authentication with the PUM proxy, and to block management connections from unauthorized sources. This approach is familiar to anyone who has managed cloud computing resources via vCenter (in VMware land) or a cloud console such as Amazon Web Services. You log in and see the devices/instances you can manage, and proceed accordingly. This fits our preference for providing visibility to only those devices that can legitimately be managed. It also provides significant control over granular administrative functions, as commands can be blocked in real time (it is a man in the middle, after all). Another side benefit is what we call the deterrent effect: administrators know all their activity is running through a central device and is typically heavily monitored – as we will discuss in depth later.
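The proxy model boils down to two checks: entitlement-scoped visibility, and real-time command filtering. A toy sketch – the admin names, device names, and blocked-command list are all invented for illustration:

```python
# Invented entitlement table: admin -> devices they may manage.
ENTITLEMENTS = {
    "alice": {"web01", "web02"},
    "bob": {"db01"},
}

# Commands the proxy refuses to relay, enforced in real time mid-session.
BLOCKED_COMMANDS = {"rm -rf /", "userdel root"}

def visible_devices(admin):
    # Visibility is scoped to entitlements: "if you can't see it,
    # it's much harder to compromise."
    return sorted(ENTITLEMENTS.get(admin, set()))

def forward_command(admin, device, command):
    """Relay a management command, enforcing entitlement and command policy."""
    if device not in ENTITLEMENTS.get(admin, set()):
        raise PermissionError(f"{admin} may not manage {device}")
    if command in BLOCKED_COMMANDS:
        raise PermissionError(f"command blocked by policy: {command}")
    return f"[{device}] {command}"  # a real proxy would relay this to the device
```

Because every session passes through these two checks, the proxy naturally produces the scoped console view and the audit/deterrent effect described above.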
But any proxy presents issues, including a possible single point of failure, and additional latency for management sessions. Some additional design & architecture work is required to ensure high availability and reasonable efficiency. It’s a bad day for the security team if ops can’t do their jobs. And periodic latency testing is called for, to make sure the proxy doesn’t impair productivity. And finally: as with virtualization and cloud consoles, if you own the proxy server, you own everything in the environment. So the security of the proxy is paramount. All these approaches are best in different environments, and each entails its own compromises. For those just starting to experiment with privileged user management, a PUM proxy is typically the path of least

Share:
Read Post

Friday Summary: March 23, 2012

This should not matter: The Square Register. But it does. What do I mean by that? Check out the picture: There’s something catchy and slick about the set-up of an iPad cash register and the simple Square device. It looks like something Apple would produce. It seems right at home with – almost a natural extension of – the iPad. I run into small shop owners and independent business people who are using Square everywhere. It’s at Target, right next to the Apple products, and the salesperson said they have been flying off the shelves. People say “Wow, that’s cool.” And that’s how Square is going to win this part of the burgeoning personal payment space. The new competitor, PayPal’s Here, is marketing the superiority of their device, better service, and lower costs. Much of that ‘superiority’ is in the device’s security features – such as encrypting data inside the device – which the early Square devices currently deployed lack. That’s a significant security advantage. But it won’t matter – next to its competitor, ‘Here’ looks about as modern and relevant as a Zip drive. Being in the field of security, and having designed mobile payment systems and digital wallets in the past, I care a great deal about the security of these systems. So I hate to admit that marketing the security of Here is doomed to fail. Simplicity, approachability, and ease of use are more important to winning the customers Square and PayPal are targeting. The tiny cost savings offered by PayPal do not matter to small merchants, and they’re not great enough to make a difference to many mid-sized merchants. A fast, friendly shopping experience is. I’m sure PayPal’s position in the market will help a lot to drag along sales, but they need to focus more on experience and less on technical features if they want to win in this space. While I’m sharing my stream of consciousness, there’s something else I want to share with readers that’s not security related.
As someone who writes for a living these days, I appreciate good writers more than ever. Not just skilled use of English, but styles of presentation and the ability to blend facts, quality analysis, and humor. When I ran across Bill Simmons’ post on How to Annoy Fans in 60 Easy Steps on the Grantland web site I was riveted to the story. I confess to being one of the long-suffering fans he discusses – in fact it was the Run TMC Warriors teams, circa 1992, that started my interest in sports. But even if you’re not a Warriors fan, this is a great read for anyone who likes basketball. If you’re a statistician you understand what a special kind of FAIL it is when you consistently snatch defeat from the jaws of victory – for 35 years. It’s a great piece – like a narration of a train wreck in slow motion – and highly recommended. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Rich quoted on the 2012 DBIR report.
Rich quoted in IT Security News.

Favorite Securosis Posts

Adrian Lane: Incite 3/21/2012: Wheel Refresh. I’ve been there. Twice. My wife was so frustrated with my waffling that she bought me a car.
Mike Rothman: Last week’s Friday Summary. Rich shows he’s human, and not just a Tweetbot automaton. Kidding aside, anyone with kids will understand exactly where Rich is coming from.
Rich: Watching the Watchers: The Privileged User Lifecycle. Mike’s new series is on Privileged User Management – which is becoming a major issue with the increasing complexity of our environments. Not that it wasn’t a big issue before.

Other Securosis Posts

How to Read and Act on the 2012 Verizon Data Breach Investigations Report (DBIR).
Understanding and Selecting DSP: Technical Architecture.
iOS Data Security: Protecting Data on Unmanaged Devices.
iOS Data Security: Secure File Apps for Unmanaged Devices.
Talkin’ Tokenization.

Favorite Outside Posts

Dave Lewis: Too many passwords? Just one does the trick.
Adrian Lane: The NSA Is Building the Country’s Biggest Spy Center (Watch What You Say). There is so much interesting stuff in this article that I don’t know where to start. Great read. Mike Rothman: Give it five minutes. This is great advice from 37Signals’ Jason Fried. People rarely remember you because of how smart you are. But they definitely remember you if you are a know-it-all, and not in a good way. Rich: Verizon DBIR 2012: Automated large-scale attacks taking down SMBs. Mike Mimoso’s article on the DBIR. He provides a little more context, and the report is a must-read. Project Quant Posts Malware Analysis Quant: Metrics–Monitor for Reinfection. Malware Analysis Quant: Metrics–Remediate. Malware Analysis Quant: Metrics–Find Infected Devices. Malware Analysis Quant: Metrics–Define Rules and Search Queries. Malware Analysis Quant: Metrics–The Malware Profile. Malware Analysis Quant: Metrics–Dynamic Analysis. Malware Analysis Quant: Metrics–Static Analysis. Research Reports and Presentations Network-Based Malware Detection: Filling the Gaps of AV. Tokenization Guidance Analysis: Jan 2012. Applied Network Security Analysis: Moving from Data to Information. Tokenization Guidance. Security Management 2.0: Time to Replace Your SIEM? Fact-Based Network Security: Metrics and the Pursuit of Prioritization. Tokenization vs. Encryption: Options for Compliance. Top News and Posts Google Hands Out $4500 in Rewards for Chrome 17.0.963.83. Adam’s analysis of 1Password findings in the Secure Password Managers report. Report: Hacktivists Out-Stole Cybercriminals in 2011. Three times during my career I have heard “20XX was the year of the breach.” And for 2011 that again looks like a legitimate statement. Bredolab Botmaster ‘Birdie’ Still at Large via Krebs. Microsoft Donates Software To Protect Exploited Children. NSA Chief Denies Domestic Spying But Whistleblowers Say Otherwise. Confirm nothing, deny everything, and make counter-accusations.
When you see this from a government, you know you hit the nail on the head. BBC attacked by Iran? Blog Comment of the Week Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Ryan, in response to iOS Data Security: Secure File Apps for Unmanaged Devices. Great post, Rich. Another thing to note about mobile EDRM is that the better solutions will allow you to


Watching the Watchers: The Privileged User Lifecycle

As we described in the Introduction to this series, organizations can’t afford to ignore the issue of privileged users (P-Users) any more. A compromised P-User (PUPwned) can cause all sorts of damage, and so needs to be actively managed. In the last post we presented the business drivers and threats – now let’s talk about solutions. Like most analysts, we favor a model to describe things, so we’ll call ours the Privileged User Lifecycle. In this post we will describe each aspect of the lifecycle at a high level. But before the colorful lifecycle diagram, let’s scope the effort. Our lifecycle starts when the privileged user receives escalated privileges, and ends when they are no longer privileged or leave the organization, whichever comes first. So here is the whole lifecycle: Provisioning Entitlements The Privileged User Management lifecycle starts when you determine someone gets escalated privileges. That means you need both control and an audit trail for granting these entitlements. Identity Management is a science all by itself, so this series won’t tackle it in any depth – we will just point out the connections between (de-)provisioning escalated privileges, and the beginning and end of the lifecycle. And keep in mind that these privileged users have the keys to the kingdom, so you need tight controls over their provisioning process, including separation of duties and a defined workflow which includes adequate authorization. Identity management is repository-centric, so any controls you implement throughout the lifecycle need native integration with the user repository. It doesn’t work well to store user credentials multiple times in multiple places. Another aspect of this provisioning process involves defining the roles and entitlements for each administrator, or more likely for groups of administrators.
We favor a default deny model, which basically denies any management capabilities to administrators, assigns capabilities by an explicit authorization to manage device(s), and defines what they can do on each specific device. Although the technology to enforce entitlements can be complicated (we will get to that later in this series), defining the roles and assigning administrators to the proper groups can be even more challenging. This typically involves gaining a significant consensus among the operations team (which is always fun), but is on the critical path for P-User management. Now we get to the fun stuff: actively managing what specific administrators can do. In order to gain administrative rights to a device, an attacker (or rogue administrator) needs access, entitlements, and credentials. So the next aspects of our lifecycle address these issues. Restrict Access Let’s first tackle restricting access to devices. The key is to allow administrators access only to devices they are entitled to manage. Any other device should be blocked to that specific P-User. That’s what default deny means in this context. This is one of the oldest network defense tactics: segmentation. If a P-User can’t logically get to a device, they can’t manage it nefariously. There are quite a few ways to isolate devices, both physically and logically, including proxy gateways and device-based agents. We will discuss a number of these tactics later in the series. When restricting access, you also need to factor in authentication, as logging into a proxy gateway and/or managing particularly sensitive devices should require multiple factors. Obviously integrating private and public cloud instances into the P-User management environment requires different tactics, as you don’t necessarily have physical access to the network to govern access. But the attractiveness of the cloud means you cannot simply avoid it.
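To make the default deny idea concrete, here is a minimal sketch. All the names and grants are hypothetical, and a real deployment would enforce this in a proxy gateway or device agent rather than application code – but the core logic is just an explicit allow list, with everything else denied:

```python
# Hypothetical illustration -- the grant table and names are made up.
# Explicit grants map admin -> device -> permitted commands.
# Anything absent from the table is denied by default.
P_USER_GRANTS = {
    "alice": {"db-prod-01": {"backup", "restart"}},
    "bob": {"web-01": {"deploy", "restart"}, "web-02": {"deploy"}},
}

def is_allowed(user: str, device: str, command: str) -> bool:
    """Default deny: permit only an explicit (user, device, command) grant."""
    return command in P_USER_GRANTS.get(user, {}).get(device, set())

# alice can back up her database server, but can't touch web-01 at all.
assert is_allowed("alice", "db-prod-01", "backup")
assert not is_allowed("alice", "web-01", "restart")
assert not is_allowed("mallory", "db-prod-01", "backup")
```

Note that the hard part isn’t the lookup – it’s agreeing on what goes in the table, which is exactly the consensus-building challenge described above.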
We will also delve into tactics to restrict access to cloud-specific and hybrid environments later. Protect Credentials Once a P-User has network access to a device, they still need credentials to manage it. Thus administrator credentials need appropriate protection. The next step in the lifecycle typically involves setting up a password vault to store administrator credentials and provide a system for one-time use. There are a number of architectural decisions involved in vaulting administrator passwords that impact the other controls in place: restricting access and enforcing entitlements. Enforce Entitlements If an administrator has access and the credentials, the final aspect of controls involves determining what they can do. Many organizations opt for a carte blanche policy, providing root access and allowing P-Users to do whatever they want. Others take a finer-grained approach, defining the specific commands the P-User can perform on any class of device. For instance, you may allow the administrator to update the device or load software, but not delete a logical volume or load an application. As we mentioned above, the granularity enforced here depends on the granularity you use to provision the entitlements. Technically, this approach requires some kind of agent capability on the managed device, or running sessions through a proxy gateway which can intercept and block commands as necessary. We will discuss architectures later in the series when we dig into this control. Privileged User Monitoring Finally, keep a close eye on what all the P-Users do when they access devices. That’s why we call this series “Watching the Watchers”, as the lifecycle doesn’t end after implementing the controls. Privileged User Monitoring can mean a number of different things, from collecting detailed audit logs on every transaction to actually capturing video of each session. There are multiple benefits to detailed monitoring, including forensics and compliance.
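The one-time-use idea behind the password vault mentioned above can be sketched roughly as follows. This is a toy illustration (class and device names are made up), and a real vault would also push the rotated password to the managed device and keep an audit trail – both omitted here:

```python
import secrets

class PasswordVault:
    """Toy vault: hands out the current admin password for a device,
    then immediately rotates it so the credential is single-use.
    (A real vault would also push the new password to the device
    and log the checkout for audit purposes.)"""

    def __init__(self):
        self._passwords = {}

    def enroll(self, device):
        self._passwords[device] = secrets.token_urlsafe(24)

    def checkout(self, device):
        current = self._passwords[device]
        self._passwords[device] = secrets.token_urlsafe(24)  # rotate on use
        return current

vault = PasswordVault()
vault.enroll("db-prod-01")
first = vault.checkout("db-prod-01")
second = vault.checkout("db-prod-01")
assert first != second  # a leaked credential is already stale
```

Rotation on every checkout is what makes shared or intercepted credentials far less dangerous – by the time anyone reuses one, it no longer works.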
We should also mention the deterrent benefits of privileged user monitoring. Human nature dictates that people are more diligent when they know someone is watching. So Rich can be happy that human nature hasn’t changed. Yet. When administrators know they are being watched they are more likely to behave properly – not just from a security standpoint but also from an operational standpoint. No Panacea Of course this privileged user lifecycle is not a panacea. A determined attacker will find a path to compromise your systems, regardless of how tightly you manage privileged users. No control is foolproof, and there are ways to gain access to protected devices, and to defeat password vaults. So we will examine the weaknesses in each of these tactics later in this series. As with


How to Read and Act on the 2012 Verizon Data Breach Investigations Report (DBIR)

Verizon just published their excellent 2012 Data Breach Investigations Report, and as usual, it’s full of statistical goodness. (We will link to it once it’s formally released – we are writing this based on our preview copy). As we did last year, we will focus on how to read the DBIR, what it teaches us, and how it should change what you do – we’ll leave the headline fodder for others to rehash. If you happen to check back to our old post you might notice a bit of cut and paste, because once we reach the advice section, many things are unchanged since last year. I also decided to stick with the structure I used last year because it got a lot of positive feedback. How to read the DBIR Before jumping into the trends, there are five key points to keep in mind while reading the report (which covers 855 incidents): This is a breach report, not a generic cybercrime or attack report. The DBIR only includes data from incidents where data was stolen. If no data was exfiltrated it doesn’t count and wasn’t included. All those LOIC attacks DDoSing your servers aren’t in here. Definitions matter. Throughout the DBIR the authors try to be extremely clear on how they define aspects of the data they analyze, such as direct vs. participatory factors. These are really important to understand. Know where the data comes from. The 2012 report includes data from 855 incidents investigated by Verizon, the US Secret Service, the Dutch National High Tech Crime Unit, the Australian Federal Police, the Irish Reporting & Information Security Service, and the Police Central e-Crime Unit of the London Metropolitan Police. In some places only Verizon data is used (and the authors are clear when they do this). There is definitely some sample bias, but that doesn’t reduce the value of this report in any way. For example, if we correlate these findings with the Mandiant M-Trends report (registration, unfortunately, required) we see consistency in trends.
This is despite the differences in client base, focus, and investigative techniques. Verizon finally broke out large vs. small organizations. This was always my biggest wish, and for many of the numbers we can compare between organizations of more than 1,000 employees and smaller ones. (I actually consider 1,000 to be mid-sized, but it’s still a useful demarcation). And now for my subjective interpretation of the top trends in the report: The industrialization of attacks continues: The majority of breaches targeted smaller organizations, used automated tools, and targeted credit cards. This doesn’t mean these were the most harmful breaches, but they certainly constituted the greatest volume. Hacktivism and mega breaches are back, and target larger organizations: Of the 174 million records lost, 100 million were the result of hacktivism against large organizations. This was only 21% of breaches against large organizations, but accounted for 61% of records lost. Larger organizations may be better at security, but still get breached: A variety of statistics through the report seem to show that large organizations are less prone to compromise by industrialized, automated attacks… but they are also more likely to be targeted by serious attackers. Remote services are the biggest vector for small organizations, and web applications for large ones: This is on page 32, and should set off alarm bells. Malware is everywhere: 61% of incidents involved malware + hacking, 69% of incidents included malware alone, but that accounted for 95% of lost records. Here are some additional highlights and areas to pay special attention to, in no particular order: Ignore the massive increase in records lost. This is really hard to accurately quantify, and a few outliers always have a big impact. Besides, knowing how many records were lost doesn’t help you defend yourself in any way! Focus on the attack and defense trends, not the incident sizes.
Besides, if anything, this trend is a regression to the mean (see page 45). Ignore the fact that 96% of breached organizations weren’t PCI compliant. Most of those were level 4 merchants. This shows a change in targets, not necessarily a change in the value (or lack thereof) of PCI. Outsourcers are a major contributing factor, especially for smaller organizations. There are endless low-end IT services companies, and very few of them appear to follow good security practices, even when PCI compliance is involved. Small businesses don’t run their own payment systems, and these are still being heavily compromised via poorly secured remote access software. I’m sure pcAnywhere being totally pwned had nothing to do with this 🙂 Page 25 provides a good sense of how large organizations face a more diverse range of attacks. This is likely due to both being more targeted, and having better perimeter defenses against automated attacks. It’s hard to have an unsecured remote access server facing the Internet when you are required to get quarterly vulnerability scans (even cheap ones). Attackers always use the minimum effort necessary! If they don’t need to take a lot of time and burn an 0day, why bother? They don’t become bad guys because of a strong work ethic. So the breach statistics naturally skew towards simpler attack techniques. This is particularly important because big data sets like this don’t necessarily reflect either the defenses or attack techniques in sophisticated situations. Larger organizations are better at managing default passwords, but experience higher levels of phishing and credential compromises. This, again, makes a lot of sense. Smaller companies, especially those relying on service providers, are less likely to look for or have processes in place to manage default credentials. Since larger organizations tend to knock off this low-hanging fruit, the bad guys move up a level and focus on attacking the larger employee population to compromise credentials. 
Small organizations are more likely to be the direct victims of phone-based social engineering (page 33). I have personally received some of these calls and can see how someone could fall for it. Servers are compromised more often than endpoints (user devices), and when endpoints are compromised it’s to jump off and attack servers. Take a look


Understanding and Selecting DSP: Technical Architecture

One of the key strengths of DSP is its ability to scan and monitor multiple databases running on multiple database management systems (DBMSs) across multiple platforms (Windows, Unix, etc.). The DSP tool aggregates information from multiple collectors to a secure central server. In some cases the central server/management console also collects information, while in other cases it serves merely as a repository for data from collectors. This creates three options for deployment, depending on organizational requirements: Single Server/Appliance: A single server, appliance, or software agent serves as both the sensor/collection point and management console. This mode is typically used for smaller deployments. Two-tier Architecture: This option consists of a central management server and remote collection points/sensors. The central server does no direct monitoring, but aggregates information from remote systems, manages policies, and generates alerts. It may also perform assessment functions directly. The remote collectors may use any of the collection techniques. Hierarchical Architecture: Collection points/sensors/scanners aggregate to business-level or geographically distributed management servers, which in turn report to an enterprise management server. Hierarchical deployments are best suited for large enterprises, which may have different business unit or geographic needs. They can also be configured to only pass certain kinds of data between the tiers to manage large volumes of information or maintain unit/geographic privacy, and to satisfy policy requirements. This can be confusing because each server or appliance can manage multiple assessment scanners, network collectors, and agent-based collectors, and may also perform some monitoring directly. But a typical deployment includes a central management server (or cluster) handling all the management functions, with collectors spread out to handle activity monitoring on the databases.
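To illustrate the hierarchical option, here is a hypothetical sketch of a regional tier forwarding only selected event types up to the enterprise tier. The regions, event types, and policy are made up for illustration – the point is the selective pass-through between tiers described above:

```python
# Hypothetical illustration -- regions, event types, and policy are made up.
# Each regional management server forwards only selected event types upstream,
# keeping full query logs local for volume and privacy reasons.
REGIONAL_FORWARD_POLICY = {
    "emea": {"alert", "policy_violation"},
    "apac": {"alert"},
}

def forward_to_enterprise(region, events):
    """Filter a regional tier's events down to what the enterprise tier sees."""
    allowed = REGIONAL_FORWARD_POLICY.get(region, set())
    return [e for e in events if e["type"] in allowed]

events = [
    {"type": "alert", "db": "crm-emea"},
    {"type": "full_query_log", "db": "crm-emea"},  # stays regional
]
assert forward_to_enterprise("emea", events) == [{"type": "alert", "db": "crm-emea"}]
```

This kind of tier policy is how large enterprises keep data volumes manageable and satisfy per-region privacy requirements.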
Blocking architecture options There are two different ways to block queries, depending on your deployment architecture and choice of collection agents. Agent-based Blocking: The software agent is able to directly block queries – the actual technique varies with the vendor’s agent implementation. Agents may block inbound queries, returned results, or both. Proxy-based Blocking: Instead of connecting directly to the database, all connections are to a local or network-based proxy (which can be a separate server/appliance or local software). The proxy analyzes queries before passing them to the database, and can block by policy. We will go into more detail on blocking later in this series, but the important point is that if you want to block, you need to either deploy some sort of software agent or proxy the database connection. Next we will recap the core features of DAM and show the subtle additions to DSP.
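As a rough illustration of proxy-based blocking, here is a hypothetical sketch of the policy check a proxy might run before forwarding a query. The patterns are made-up examples, not any vendor’s policy language:

```python
import re

# Hypothetical policy -- the patterns are illustrative, not a vendor's syntax.
BLOCKED_PATTERNS = [
    re.compile(r"^\s*DROP\s+TABLE", re.IGNORECASE),
    re.compile(r"^\s*DELETE\s+FROM\s+\w+\s*;?\s*$", re.IGNORECASE),  # unqualified DELETE
]

def proxy_check(query):
    """Return True if the proxy may forward the query to the database."""
    return not any(p.search(query) for p in BLOCKED_PATTERNS)

assert proxy_check("SELECT name FROM customers WHERE id = 7")
assert not proxy_check("DROP TABLE customers")
assert not proxy_check("delete from customers;")
assert proxy_check("DELETE FROM customers WHERE id = 7")  # qualified DELETE passes
```

The agent-based approach applies the same kind of policy decision, but on the database host itself rather than in-line on the network path.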


Incite 3/21/2012: Wheel Refresh

It seems like a lifetime ago. June of 1999. Actually it was more than XX1’s lifetime ago. The Boss and I still lived in Northern Virginia. I was close to the top of the world. I started a software company, we raised a bunch of VC money, and the Internet Revolution was booming. The lease on my crappy 1996 Pathfinder was up, and I wanted some spiffy new wheels. Given my unadulterated arrogance at that time in my life, I’m surprised I didn’t go buy a 911, since that’s always been my dream car. But in a fit of logic, I figured there was plenty of time for fancy cars and planes once we took the company public. But I did want something a bit sportier than a truck, so I bought a 1999 Acura TL. It had 225 horses, lots of leather, and cool rims. In fact, I still feel pretty good about it almost 13 years later. I’m still driving my trusty TL. Well, I guess the term driving is relative. I drive about 7,500 miles a year. Maybe. With three kids, we don’t take trips in the TL any more, so basically I use it to go to/from Starbucks and the airport. At almost 100,000 miles, it’s starting to show its age. It’s all dented up from some scrapes with my garage (thanks Grandma!) and countless nights spent in an airport parking lot. But I can’t complain – it’s been a great car. But the TL is at the end of the road and my spidey sense is tingling. That model is notorious for transmission failures. So far I’ve been lucky, but I fear my luck is about to run out. The car just doesn’t feel right, which means it’s probably time for a pre-emptive strike to refresh my wheels. What to buy? I’m not a car guy, but my super-ego (the proverbial devil on my shoulder) looks longingly at a 911 Carrera Convertible. That’s sweet. Or maybe a BMW or Lexus gunship. A man of my stature, at least in my own mind, deserves some hot wheels like that. Then my practical side kicks in (the angel on my other shoulder) and notes that I frequently need to put the 3 kids in the car, and the kids aren’t getting smaller. 
No SmartCar for me. I also want something that gets decent gas mileage, since it’s clear that gas prices aren’t coming down anytime soon. But it’s so boring and lame to be practical, says the devil on my shoulder. We know how that ended up for Pinto in Animal House, but what will happen with me? I can’t really pull off the sports car right now, so maybe I should get an ass kicking truck. One of those huge trucks with the Yosemite Sam mud flaps and a gun rack. It will come in handy when I need to cart all that mulch from Home Depot back to my house. Oh right, I don’t cart mulch. My landscaper does that. Again, the practical side kicks in – reminding me that folks needing to make obvious statements about their badassitude usually have major self-esteem problems. What happened to me? Years ago, this decision would have been easy. I’d get the sports car or the truck and not think twice. Until I got my gas bill or had to tie one of the kids to the roof to get anywhere. But that’s not the way I’m going. I’m (in all likelihood) going to get a Prius V. Really. A hybrid station wagon, and I’ll probably get the wood paneling stickers, just to make the full transformation into Clark Griswold. Though if I tied Grandma to the roof, I wouldn’t be too popular in my house. Even better, the Prius will make a great starter car when XX1 starts to drive 4-5 years from now. That will work out great, as by then it’ll be time for my mid-life crisis and the 911 convertible… -Mike Photo credits: “porsche 911 hot wheels” originally uploaded by Guillermo Vasquez Heavy Research We’re back at work on a variety of blog series. Here is the research currently underway. Remember you can get our Heavy Feed via RSS, where you can access all our content in its unabridged glory.
Defending iOS Data Introduction iOS Security and Data Protection Data Flow on iOS Protecting Data on Unmanaged Devices Secure File Apps for Unmanaged Devices Watching the Watchers (Privileged User Management) Access to the Keys (to the Kingdom) Understanding and Selecting DSP Data and Event Collection Incite 4 U Assuming the worst is not new: It’s pretty funny that our pals at Dark Reading are now talking about Security’s New Reality: Assuming the Worst – meaning you need to assume compromise and act accordingly. Duh. Gosh, I’ve been talking about Reacting Faster since early 2007 (I actually checked and the term first appeared on Security Incite in December of 2006. Praise the Google.), and it’s not like I have been the only one, but it is pretty cool to see everyone else jumping on the you’re screwed bandwagon. I was talking to a freelance writer Monday, and she asked what kind of skills I thought people getting into security need to work on, and I said forensics. Obviously there are a lot of fundamentals that need to be in place to understand how to figure out something is wrong, but it’s clear that capable incident responders will be in high demand for a long time. And even incapable incident responders will be busy, as companies in the middle of coping with breaches can’t afford to be too picky. – MR Password Manager Kinda-fail: Elcomsoft conducted a security review of 17 different personal password managers, examining their encryption and key management. The full report (PDF) contains most of the interesting information. The problem is that the report is not very well written. The attacks they discuss all depend on having physical access to the device, or being able to gain access to the device backups – a power-station hack on


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.