Incite 3/28/2012: Gone Tomorrow

A recent Tweet from Shack was pretty jarring. Old friend from college died today. Got some insane rare lung disease out of nowhere, destroyed them. Terrifying. 37 years old. :/ Here today. Gone tomorrow.

It’s been a while since I have ranted about the importance of enjoying (most) every day. About spending time with the people who matter to you. People who make you better, not break you down. Working at something you like, not something you tolerate. Basically making the most of each day, which most of us don’t do very well. Myself included. This requires a change in perspective. Enjoying not just the good days but also the bad ones. I know the idea of enjoying a bad day sounds weird. It’s kind of like sales. Great sales folks have convinced themselves that every no is one step closer to a yes. Are they right? Inevitably, at some point they will sell something to someone, so they are in fact closer to a ‘yes’ with every ‘no’. So a bad day means you are closer to a good day. That little change in perspective can have a huge impact on your morale.

The challenge is that you have to live through bad days to appreciate good days. It takes a few cycles through the ebbs and flows to realize that this too shall pass. Whatever it is. It’s hard to have that patience when you are young. Everything is magnified. The highs are really high. And the lows, well, you know. You tend to remember the lows a lot longer than the highs. So a decade passes and you wonder what happened. You question all the time you wasted. The decisions you made. The decisions you didn’t. How did you turn 30? Where did the time go? The time is gone. And it gets worse. My 30s were a blur. 3 kids. Multiple jobs. A relocation. I was so busy chasing things I didn’t have, I forgot to enjoy the things I did. I’m only now starting to appreciate the path I’m on. To realize I needed the hard times. And to enjoy the small victories and have a short memory about the minor defeats.

I was a guest speaker at Kennesaw State yesterday, talking to a bunch of students studying security. There were some older folks there. You know, like 30. But mostly I saw kids, just starting out. I didn’t spend a lot of time talking about perspective because kids don’t appreciate experience. They still think they know it all. Most kids anyway. These kids need to screw up a lot of things. And soon. They need to get on with bungling anything and everything. I didn’t say that, but I should have. Because all these kids really have is time. Time to gain the experience they’ll need to realize they don’t know everything. Dave’s college friend doesn’t have any more time. He’s gone. If you are reading this, you are not. Enjoy today, even if it’s a crappy day. Because the crappy days make you appreciate the good days to come. –Mike

Photo credits: “Free Beer Tomorrow Neon Sign” originally uploaded by Lore SR

Heavy Research

We’re back at work on a variety of our blog series. So here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can access all of our content in its unabridged glory.

• Defending iOS Data: Securing Data on Partially-Managed Devices
• Watching the Watchers (Privileged User Management): The Privileged User Lifecycle; Restrict Access
• Understanding and Selecting DSP: Technical Architecture

Incite 4 U

This sounds strangely familiar… It seems our friend Richard Bejtlich spent some time on Capitol Hill recently, and had a Groundhog Day experience. You know, the new regime asking him questions he answered back in 2007.
Like politicians are going to remember anything from 2007. Ha! They can’t even remember their campaign promises from two years ago (yup, I’ll be here all week). So he went back into the archives to remind everyone what he’s been saying for years. You know, reduce attack surface by identifying all egress points and figuring out which ones need to be protected. And monitor both those egress paths and allegedly friendly networks. Though I think over the past 5 years we have learned that no networks are friendly. Not for long, anyway. Finally, Richard also recommended a Federal I/R team be established. All novel ideas. None really implemented. But on the good news front, the US Government spends a lot of money each year on security products. – MR

Perverse economics: I’m going to go out on a limb and make a statement about vulnerability disclosure. After years of watching, and sometimes participating in, the debate, I finally think I have the answer. There is only one kind of responsible disclosure, and the economics are so screwed up that it might as well be a cruddy plot device in a bad science fiction novel. Researchers should disclose vulnerabilities privately to vendors. Vendors are then responsible for creating timely patches. Users are then responsible for patching their systems within a reasonable period. Pretty much anything else screws users at a minimum, and likely plenty of other folks as well. (And this doesn’t apply if something is already in the wild.) But as Dennis Fisher highlights, the real world never works that way. Today it’s more economically viable for researchers to sell their exploits to governments, which will use them against some other country, if not their own citizens. It’s more economically viable for vendors to keep vulnerabilities quiet so they don’t have to patch. And users? Well, no one seems to care much about them, but scrambling to patch sure isn’t in their economic interest. It seems ‘responsible’ means ‘altruistic’, and we all know where human nature takes us from there. – RM

Scoring credit: Hackers have been stealing credit reports and financial data from – where else? – credit scoring agencies and selling the data to the highest bidder. Shocking, I know. Seems they are abusing the sooper-secure credit score user validation system; asking “which bank holds


iOS Data Security: Securing Data on Partially-Managed Devices

Our last two posts covered iOS data security options on unmanaged devices; now it’s time to discuss partially managed devices. Our definition is: Devices that use a configuration profile or Exchange ActiveSync policies to manage certain settings, but the user is otherwise still in control of the device. The device is the user’s, but they agree to some level of corporate management.

The following policies are typically deployed onto partially-managed devices via Exchange ActiveSync:

• Enforce passcode lock.
• Disable simple passcode.
• Enable remote wipe.

This, in turn, enables Data Protection on supporting hardware (including all models currently for sale). In addition, you can add the following using iOS configuration profiles – which can also enforce all the previous policies except remote wiping, unless you also use a remote wipe server tool:

• On-demand VPN for specific domains (not all traffic, but all enterprise traffic).
• Manual VPN for access to corporate resources.
• Digital certificates for access to corporate resources (VPN or SSL).
• Installation of custom enterprise applications.
• Automatic wipe on failed passcode attempts (the number of attempts can be specified, unlike the user-facing setting in the Settings app, which is simply ON/OFF for wipe after 10 failures).

The key differences between partially and fully managed devices are a) the user can still install arbitrary applications and make settings changes, and b) not all traffic is routed through a mandatory full-time VPN.

One key point when administering managed policies on a user-owned device is to ensure that you obtain the user’s consent and notify them of what will happen. The user should sign a document saying they understand that although they own the device, by accessing corporate resources they are allowing management, which may include remote wiping a lost or stolen device. And that the user is responsible for their own backups of personal data.

Enhanced security for existing options

Most of the previous options we have discussed are significantly enhanced when digital certificate, passcode, and Data Protection policies are enforced. This is especially true of all the sandboxed app options – and, in fact, many vendors in those categories generally don’t support use of their tools without a configuration profile to require at least a passcode.

Managed Exchange ActiveSync (or equivalent)

Microsoft’s ActiveSync protocol, despite its name, is separate from the Exchange mail server and included with alternate products, including some that compete with Exchange. iOS natively supports it, so it is the backbone for managed email on iDevices when a sandboxed messaging app isn’t used. By setting the policies listed above, all email is encrypted under the user’s passcode using Data Protection. Other content is not protected, but remote wipe is supported.

Custom enterprise sandboxed application

Now that you can install an enterprise digital certificate onto the device and guarantee Data Protection is active, you can also deploy custom enterprise applications that leverage this built-in encryption. This option allows you to use the built-in iOS document viewer within your application’s sandbox, which enables you to fairly easily deploy a custom application that provides fully sandboxed and encrypted access to enterprise documents. Combine it with an on-demand VPN tied to the domain name of the server or a manual VPN, and you have data encrypted both in transit and in storage.
Today a few vendors provide toolkits to build this sort of application. Some are adding document annotation for PDF files, and based on recent announcements we expect to see full editing capabilities also added for MS Office document formats.
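To make the configuration profile option more concrete, here is a minimal sketch, in Python, that generates an unsigned .mobileconfig file enforcing the passcode policies listed above (profiles are just XML property lists). The payload keys follow Apple's passcode-policy payload, but verify them against Apple's Configuration Profile Reference before relying on this; the identifiers and filename are hypothetical.

# Minimal sketch: generate an unsigned iOS configuration profile (.mobileconfig)
# that enforces a passcode policy. Identifiers below are hypothetical examples.
import plistlib
import uuid

passcode_payload = {
    "PayloadType": "com.apple.mobiledevice.passwordpolicy",
    "PayloadVersion": 1,
    "PayloadIdentifier": "com.example.profile.passcode",   # hypothetical
    "PayloadUUID": str(uuid.uuid4()).upper(),
    "PayloadDisplayName": "Passcode Policy",
    "forcePIN": True,            # enforce passcode lock
    "allowSimple": False,        # disable simple passcodes
    "maxFailedAttempts": 10,     # wipe after N failed attempts
    "maxInactivity": 5,          # minutes before auto-lock
}

profile = {
    "PayloadType": "Configuration",
    "PayloadVersion": 1,
    "PayloadIdentifier": "com.example.profile",             # hypothetical
    "PayloadUUID": str(uuid.uuid4()).upper(),
    "PayloadDisplayName": "Example Partially-Managed Device Profile",
    "PayloadContent": [passcode_payload],
}

with open("partially_managed.mobileconfig", "wb") as f:
    plistlib.dump(profile, f)

In practice you would sign the profile, add VPN and certificate payloads, and deliver remote wipe through ActiveSync or an MDM tool rather than the profile itself, as noted above.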


Watching the Watchers: Restrict Access

As we discussed in the Privileged User Lifecycle post, there are a number of aspects to Watching the Watchers. Our first today is Restrict Access, mostly because it reduces your attack surface. We want controls to ensure administrators only access devices they are authorized to manage. There are a few ways to handle restriction:

• Device-centricity (Status Quo): Far too many organizations rely on their existing controls, which include authentication and other server-based access control mechanisms.
• Network-based Isolation: Tried and true network segmentation approaches enable you to isolate devices (typically by group) and only allow authorized administrators access to the networks on which they live.
• PUM Proxy: This entails routing all management communications through a privileged user management proxy server or service which enforces access policies. The devices only accept management connections from the proxy server, and do not allow direct management access.

There are benefits and issues with each approach, so ultimately you’ll be making some kind of compromise. So let’s dig into each approach and highlight what’s good and what’s not so good.

Device-centricity (Status Quo)

There are really two levels of status quo; the first is common authentication, which in this context is not really “restricting access” effectively. Obviously you could do a bit to make the authentication more difficult, including strong passwords and/or multi-factor authentication. You would also integrate with an existing identity management platform (IDM) to keep entitlements current. But ultimately you are relying on credentials as a way to keep unauthorized folks from managing your critical devices. And basic credentials can be defeated.

Many other organizations use server access control capabilities, which are fairly mature. This involves loading an agent onto each managed device and enforcing the access policy on the device. The agent-based approach offers rather solid security – the risk becomes compromise of the (security) agent. Of course there is management overhead to distribute and manage the agents, as well as the additional computational load imposed by the agent. But any device-based approach is in opposition to one of our core philosophies: “If you can’t see it, it’s much harder to compromise.” Device-centric access approaches don’t affect visibility at all. This is suboptimal because in the real world new vulnerabilities appear every month on all operating systems – and many of them can be exploited via zero-day attacks. And those attacks provide a “back door” into servers, giving attackers control without requiring legitimate credentials – regardless of agentry on the device. So any device-based method fails if the device is rooted somehow.

Network Segmentation

This entails using network-layer technologies such as virtual LANs (VLANs) and network access control (NAC) to isolate devices and restrict access based on who can connect to specific protected networks. The good news is that many organizations (especially those subject to PCI) have already implemented some level of segmentation. It’s just a matter of building another enclave, or trust zone, for each group of servers to protect. As mentioned, it’s much harder to break something you can’t see. Segmentation requires the attacker to know exactly what they are looking for and where it resides, and to have a mechanism for gaining access to the protected segment.
Of course this is possible – there have been ways to defeat VLANs for years – but vendors have closed most of the very easy loopholes. More problematic to us is that this relies on the networking operations team. Managing entitlements and keeping devices on the proper segment in a dynamic environment, such as your data center, can be challenging. It is definitely possible, but it’s also difficult, and it puts direct responsibility for access restriction in the hands of the network ops team. That can and does work for some organizations, but organizationally this is complicated and somewhat fragile.

The other serious complication for this approach is cloud computing – including both private and public clouds. The cloud is key and everybody is jumping on the bandwagon, but unfortunately it largely removes visibility at the physical layer. If you don’t really know where specific instances are running, this approach becomes difficult or completely unworkable. We will discuss this in detail later in the series, when we discuss the cloud in general.

PUM Proxy

This approach routes all management traffic through a proxy server. Administrators authenticate to the PUM proxy, presumably using strong authentication. The authenticated administrator gets a view of the devices they can manage, and establishes a management session directly to the device. Another possible layer of security involves loading a lightweight agent on every managed device to handle the handshake and mutual authentication with the PUM proxy, and to block management connections from unauthorized sources. This approach is familiar to anyone who has managed cloud computing resources via vCenter (in VMware land) or a cloud console such as Amazon Web Services. You log in and see the devices/instances you can manage, and proceed accordingly. This fits our preference for providing visibility to only those devices that can legitimately be managed. It also provides significant control over granular administrative functions, as commands can be blocked in real time (it is a man in the middle, after all). Another side benefit is what we call the deterrent effect: administrators know all their activity is running through a central device and is typically heavily monitored – as we will discuss in depth later.

But any proxy presents issues, including a possible single point of failure, and additional latency for management sessions. Some additional design and architecture work is required to ensure high availability and reasonable efficiency. It’s a bad day for the security team if ops can’t do their jobs. And periodic latency testing is called for, to make sure the proxy doesn’t impair productivity. And finally: as with virtualization and cloud consoles, if you own the proxy server, you own everything in the environment. So the security of the proxy is paramount.

All these approaches are best in different environments, and each entails its own compromises. For those just starting to experiment with privileged user management, a PUM proxy is typically the path of least
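To make the PUM proxy approach concrete, here is a minimal sketch of the decision such a proxy makes before relaying an administrator's command: confirm the entitlement (default deny), check the command against the role's allowlist, and log every decision. This is an illustration only, not any vendor's implementation, and the administrators, devices, and commands are hypothetical.

# Illustrative sketch of a PUM proxy's policy check before relaying a command.
# Entitlements and command allowlists here are hypothetical examples.
ENTITLEMENTS = {
    # admin -> devices they are explicitly authorized to manage (default deny)
    "alice": {"db-prod-01", "db-prod-02"},
    "bob": {"web-dev-05"},
}

ALLOWED_COMMANDS = {
    # role -> management commands the proxy will relay; everything else is blocked
    "dba": {"backup", "restart-service", "show-status"},
    "webadmin": {"deploy", "show-status"},
}

ROLES = {"alice": "dba", "bob": "webadmin"}


def audit(admin, device, command, outcome):
    # Every decision is logged centrally, which is the monitoring/deterrence piece.
    print(f"{admin} -> {device}: {command} [{outcome}]")


def broker(admin: str, device: str, command: str) -> bool:
    """Return True if the proxy should relay the command, False to block it."""
    if device not in ENTITLEMENTS.get(admin, set()):
        audit(admin, device, command, "blocked: no entitlement")
        return False
    if command not in ALLOWED_COMMANDS.get(ROLES.get(admin, ""), set()):
        audit(admin, device, command, "blocked: command not permitted")
        return False
    audit(admin, device, command, "relayed")
    return True


if __name__ == "__main__":
    broker("alice", "db-prod-01", "backup")        # relayed
    broker("alice", "web-dev-05", "show-status")   # blocked: no entitlement
    broker("bob", "web-dev-05", "rm -rf /")        # blocked: command not permitted

Real products make this decision inside protocol-aware proxies (SSH, RDP, database consoles) rather than a simple lookup, but the default-deny logic is the same.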


Friday Summary: March 23, 2012

This should not matter: The Square Register. But it does. What do I mean by that? Check out the picture: There’s something catchy and slick about the set-up of an iPad cash register and the simple Square device. It looks like something Apple would produce. It seems right at home with – almost a natural extension of – the iPad. I run into small shop owners and independent business people who are using Square everywhere. It’s at Target, right next to the Apple products, and the salesperson said they have been flying off the shelves. People say “Wow, that’s cool.” And that’s how Square is going to win this part of the burgeoning personal payment space. The new competitor, PayPal’s Here, is marketing the superiority of their device, better service, and lower costs. Much of that ‘superiority’ is in the device’s security features – such as encrypting data inside the device – which the early Square devices currently deployed do not have. That’s a significant security advantage. But it won’t matter – next to its competitor, ‘Here’ looks about as modern and relevant as a Zip drive. Being in the field of security, and having designed mobile payment systems and digital wallets in the past, I care a great deal about the security of these systems. So I hate to admit that marketing the security of Here is doomed to fail. Simplicity, approachability, and ease of use are more important to winning the customers Square and PayPal are targeting. The tiny cost savings offered by PayPal do not matter to small merchants, and they’re not great enough to make a difference to many mid-sized merchants. A fast, friendly shopping experience is. I’m sure PayPal’s position in the market will help a lot to drag along sales, but they need to focus more on experience and less on technical features if they want to win in this space.

While I’m sharing my stream of consciousness, there’s something else I want to share with readers that’s not security related. As someone who writes for a living these days, I appreciate good writers more than ever. Not just skilled use of English, but styles of presentation and the ability to blend facts, quality analysis, and humor. When I ran across Bill Simmons’ post on How to Annoy Fans in 60 Easy Steps on the Grantland web site I was riveted to the story. I confess to being one of the long-suffering fans he discusses – in fact it was the Run TMC Warriors teams, circa 1992, that started my interest in sports. But even if you’re not a Warriors fan, this is a great read for anyone who likes basketball. If you’re a statistician you understand what a special kind of FAIL it is when you consistently snatch defeat from the jaws of victory – for 35 years. It’s a great piece – like a narration of a train wreck in slow motion – and highly recommended. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Rich quoted on the 2012 DBIR report. Rich quoted in IT Security News.

Favorite Securosis Posts

Adrian Lane: Incite 3/21/2012: Wheel Refresh. I’ve been there. Twice. My wife was so frustrated with my waffling that she bought me a car.
Mike Rothman: Last week’s Friday Summary. Rich shows he’s human, and not just a Tweetbot automaton. Kidding aside, anyone with kids will understand exactly where Rich is coming from.
Rich: Watching the Watchers: The Privileged User Lifecycle. Mike’s new series is on Privileged User Management – which is becoming a major issue with the increasing complexity of our environments. Not that it wasn’t a big issue before.
Other Securosis Posts

How to Read and Act on the 2012 Verizon Data Breach Investigations Report (DBIR). Understanding and Selecting DSP: Technical Architecture. iOS Data Security: Protecting Data on Unmanaged Devices. iOS Data Security: Secure File Apps for Unmanaged Devices. Talkin’ Tokenization.

Favorite Outside Posts

Dave Lewis: Too many passwords? Just one does the trick.
Adrian Lane: The NSA Is Building the Country’s Biggest Spy Center (Watch What You Say). There is so much interesting stuff in this article that I don’t know where to start. Great read.
Mike Rothman: Give it five minutes. This is great advice from 37Signals’ Jason Fried. People rarely remember you because of how smart you are. But they definitely remember you if you are a know-it-all, and not in a good way.
Rich: Verizon DBIR 2012: Automated large-scale attacks taking down SMBs. Mike Mimoso’s article on the DBIR. He provides a little more context, and the report is a must-read.

Project Quant Posts

Malware Analysis Quant: Metrics–Monitor for Reinfection. Malware Analysis Quant: Metrics–Remediate. Malware Analysis Quant: Metrics–Find Infected Devices. Malware Analysis Quant: Metrics–Define Rules and Search Queries. Malware Analysis Quant: Metrics–The Malware Profile. Malware Analysis Quant: Metrics–Dynamic Analysis. Malware Analysis Quant: Metrics–Static Analysis.

Research Reports and Presentations

Network-Based Malware Detection: Filling the Gaps of AV. Tokenization Guidance Analysis: Jan 2012. Applied Network Security Analysis: Moving from Data to Information. Tokenization Guidance. Security Management 2.0: Time to Replace Your SIEM? Fact-Based Network Security: Metrics and the Pursuit of Prioritization. Tokenization vs. Encryption: Options for Compliance.

Top News and Posts

Google Hands Out $4500 in Rewards for Chrome 17.0.963.83. Adam’s analysis of 1Password findings in the Secure Password Managers report. Report: Hacktivists Out-Stole Cybercriminals in 2011. Three times during my career I have heard “20XX was the year of the breach.” And for 2011 that again looks like a legitimate statement. Bredolab Botmaster ‘Birdie’ Still at Large via Krebs. Microsoft Donates Software To Protect Exploited Children. NSA Chief Denies Domestic Spying But Whistleblowers Say Otherwise. Confirm nothing, deny everything, and make counter-accusations. When you see this from a government, you know you hit the nail on the head. BBC attacked by Iran?

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Ryan, in response to iOS Data Security: Secure File Apps for Unmanaged Devices. Great post, Rich. Another thing to note about mobile EDRM is that the better solutions will allow you to


Watching the Watchers: The Privileged User Lifecycle

As we described in the Introduction to this series, organizations can’t afford to ignore the issue of privileged users (P-Users) any more. A compromised P-User (PUPwned) can cause all sorts of damage, and so needs to be actively managed. In the last post we presented the business drivers and threats – now let’s talk about solutions. Like most analysts we favor some kind of model to describe things, so we’ll call ours the Privileged User Lifecycle. In this post we will describe each aspect of the lifecycle at a high level. But before the colorful lifecycle diagram, let’s scope the effort. Our lifecycle starts when the privileged user receives escalated privileges, and ends when they are no longer privileged or leave the organization, whichever comes first. So here is the whole lifecycle:

Provisioning Entitlements

The Privileged User Management lifecycle starts when you determine someone gets escalated privileges. That means you need both control and an audit trail for granting these entitlements. Identity Management is a science all by itself, so this series won’t tackle it in any depth – we will just point out the connections between (de-)provisioning escalated privileges, and the beginning and end of the lifecycle. And keep in mind that these privileged users have the keys to the kingdom, so you need tight controls over their provisioning process, including separation of duties and a defined workflow which includes adequate authorization. Identity management is repository-centric, so any controls you implement throughout the lifecycle need native integration with the user repository. It doesn’t work well to store user credentials multiple times in multiple places.

Another aspect of this provisioning process involves defining the roles and entitlements for each administrator, or more likely for groups of administrators. We favor a default deny model, which denies all management capabilities to administrators by default, assigns capabilities only through explicit authorization to manage specific devices, and defines what they can do on each device. Although the technology to enforce entitlements can be complicated (we will get to that later in this series), defining the roles and assigning administrators to the proper groups can be even more challenging. This typically involves gaining significant consensus among the operations team (which is always fun), but is on the critical path for P-User management.

Now we get to the fun stuff: actively managing what specific administrators can do. In order to gain administrative rights to a device, an attacker (or rogue administrator) needs access, entitlements, and credentials. So the next aspects of our lifecycle address these issues.

Restrict Access

Let’s first tackle restricting access to devices. The key is to allow administrators access only to devices they are entitled to manage. Any other device should be blocked to that specific P-User. That’s what default deny means in this context. This is one of the oldest network defense tactics: segmentation. If a P-User can’t logically get to a device, they can’t manage it nefariously. There are quite a few ways to isolate devices, both physically and logically, including proxy gateways and device-based agents. We will discuss a number of these tactics later in the series. When restricting access, you also need to factor in authentication, as logging into a proxy gateway and/or managing particularly sensitive devices should require multiple factors.
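To illustrate the provisioning controls described above (default deny, separation of duties, and an audit trail), here is a minimal sketch. The in-memory store and field names are purely hypothetical; a real deployment would anchor these records in the existing identity management repository rather than a separate copy.

# Minimal sketch of default-deny provisioning with separation of duties:
# an escalated privilege is granted only if someone other than the requester
# and the admin approved it, and every grant is recorded for the audit trail.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class PrivilegeGrant:
    admin: str
    device_group: str
    requested_by: str
    approved_by: str
    granted_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


audit_trail: list = []   # hypothetical in-memory stand-in for the user repository


def grant_privilege(admin, device_group, requested_by, approved_by):
    # Separation of duties: the approver cannot be the requester or the admin
    # receiving the entitlement.
    if approved_by in (requested_by, admin):
        raise PermissionError("separation of duties violated: independent approval required")
    grant = PrivilegeGrant(admin, device_group, requested_by, approved_by)
    audit_trail.append(grant)   # who got escalated privileges, and when
    return grant


def can_manage(admin, device_group):
    # Default deny: no grant on record means no management access.
    return any(g.admin == admin and g.device_group == device_group for g in audit_trail)


if __name__ == "__main__":
    grant_privilege("alice", "prod-databases", requested_by="ops-manager", approved_by="security-lead")
    print(can_manage("alice", "prod-databases"))  # True
    print(can_manage("alice", "prod-web"))        # False: default deny

The same grant records later drive access restriction and entitlement enforcement, which is one reason to keep them in a single authoritative repository.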
Obviously integrating private and public cloud instances into the P-User management environment requires different tactics, as you don’t necessarily have physical access to the network to govern access. But the attractiveness of the cloud means you cannot simply avoid it. We will also delve into tactics to restrict access to cloud-specific and hybrid environments later.

Protect Credentials

Once a P-User has network access to a device, they still need credentials to manage it. Thus administrator credentials need appropriate protection. The next step in the lifecycle typically involves setting up a password vault to store administrator credentials and provide a system for one-time use. There are a number of architectural decisions involved in vaulting administrator passwords that impact the other controls in place: restricting access and enforcing entitlements.

Enforce Entitlements

If an administrator has access and the credentials, the final aspect of controls involves determining what they can do. Many organizations opt for a carte blanche policy, providing root access and allowing P-Users to do whatever they want. Others take a finer-grained approach, defining the specific commands the P-User can perform on any class of device. For instance, you may allow the administrator to update the device or load software, but not delete a logical volume or load an application. As we mentioned above, the granularity enforced here depends on the granularity you use to provision the entitlements. Technically, this approach requires some kind of agent capability on the managed device, or running sessions through a proxy gateway which can intercept and block commands as necessary. We will discuss architectures later in the series when we dig into this control.

Privileged User Monitoring

Finally, keep a close eye on what all the P-Users do when they access devices. That’s why we call this series “Watching the Watchers” – the lifecycle doesn’t end after implementing the controls. Privileged User Monitoring can mean a number of different things, from collecting detailed audit logs on every transaction to actually capturing video of each session. There are multiple benefits to detailed monitoring, including forensics and compliance. We should also mention the deterrent benefits of privileged user monitoring. Human nature dictates that people are more diligent when they know someone is watching. So Rich can be happy that human nature hasn’t changed. Yet. When administrators know they are being watched they are more likely to behave properly – not just from a security standpoint but also from an operational standpoint.

No Panacea

Of course this privileged user lifecycle is not a panacea. A determined attacker will find a path to compromise your systems, regardless of how tightly you manage privileged users. No control is foolproof, and there are ways to gain access to protected devices, and to defeat password vaults. So we will examine the weaknesses in each of these tactics later in this series. As with


How to Read and Act on the 2012 Verizon Data Breach Investigations Report (DBIR)

Verizon just published their excellent 2012 Data Breach Investigations Report, and as usual, it’s full of statistical goodness. (We will link to it once it’s formally released – we are writing this based on our preview copy). As we did last year, we will focus on how to read the DBIR, what it teaches us, and how it should change what you do – we’ll leave the headline fodder for others to rehash. If you happen to check back to our old post you might notice a bit of cut and paste, because once we reach the advice section, many things are unchanged since last year. I also decided to stick with the structure I used last year because it got a lot of positive feedback.

How to read the DBIR

Before jumping into the trends, there are five key points to keep in mind while reading the report (which covers 855 incidents):

• This is a breach report, not a generic cybercrime or attack report. The DBIR only includes data from incidents where data was stolen. If no data was exfiltrated it doesn’t count and was not included. All those LOIC attacks DDoSing your servers aren’t in here.
• Definitions matter. Throughout the DBIR the authors try to be extremely clear on how they define aspects of the data they analyze, such as direct vs. participatory factors. These are really important to understand.
• Know where the data comes from. The 2012 report includes data from 855 incidents investigated by Verizon, the US Secret Service, the Dutch National High Tech Crime Unit, the Australian Federal Police, the Irish Reporting & Information Security Service, and the Police Central e-Crime Unit of the London Metropolitan Police. In some places only Verizon data is used (and the authors are clear when they do this).
• There is definitely some sample bias, but that doesn’t reduce the value of this report in any way. For example, if we correlate these findings with the Mandiant M-Trends report (registration, unfortunately, required) we see consistency in trends. This is despite the differences in client base, focus, and investigative techniques.
• Verizon finally broke out large vs. small organizations. This was always my biggest wish, and for many of the numbers we can compare between organizations of more than 1,000 employees and smaller ones. (I actually consider 1,000 to be mid-sized, but it’s still a useful demarcation).

And now for my subjective interpretation of the top trends in the report:

• The industrialization of attacks continues: The majority of breaches targeted smaller organizations, used automated tools, and targeted credit cards. This doesn’t mean these were the most harmful breaches, but they certainly constituted the greatest volume.
• Hacktivism and mega breaches are back, and target larger organizations: Of the 174 million records lost, 100 million were the result of hacktivism against large organizations. This was only 21% of breaches against large organizations, but accounted for 61% of records lost.
• Larger organizations may be better at security, but still get breached: A variety of statistics through the report seem to show that large organizations are less prone to compromise by industrialized, automated attacks… but they are also more likely to be targeted by serious attackers.
• Remote services are the biggest vector for small organizations, and web applications for large ones: This is on page 32, and should set off alarm bells.
• Malware is everywhere: 61% of incidents involved malware + hacking, 69% of incidents included malware alone, but that accounted for 95% of lost records.
Here are some additional highlights and areas to pay special attention to, in no particular order: Ignore the massive increase in records lost. This is really hard to accurately quantify, and a few outliers always have a big impact. Besides, knowing how many records were lost doesn’t help you defend yourself in any way! Focus on the attack and defense trends, not the incident sizes. Besides, if anything, this trend is a regression to the mean (see page 45). Ignore the fact that 96% of breached organizations weren’t PCI compliant. Most of those were level 4 merchants. This shows a change in targets, not necessarily a change in the value (or lack thereof) of PCI. Outsourcers are a major contributing factor, especially for smaller organizations. There are endless low-end IT services companies, and very few of them appear to follow good security practices, even when PCI compliance is involved. Small businesses don’t run their own payment systems, and these are still being heavily compromised via poorly secured remote access software. I’m sure pcAnywhere being totally pwned had nothing to do with this 🙂 Page 25 provides a good sense of how large organizations face a more diverse range of attacks. This is likely due to both being more targeted, and having better perimeter defenses against automated attacks. It’s hard to have an unsecured remote access server facing the Internet when you are required to get quarterly vulnerability scans (even cheap ones). Attackers always use the minimum effort necessary! If they don’t need to take a lot of time and burn an 0day, why bother? They don’t become bad guys because of a strong work ethic. So the breach statistics naturally skew towards simpler attack techniques. This is particularly important because big data sets like this don’t necessarily reflect either the defenses or attack techniques in sophisticated situations. Larger organizations are better at managing default passwords, but experience higher levels of phishing and credential compromises. This, again, makes a lot of sense. Smaller companies, especially those relying on service providers, are less likely to look for or have processes in place to manage default credentials. Since larger organizations tend to knock off this low-hanging fruit, the bad guys move up a level and focus on attacking the larger employee population to compromise credentials. Small organizations are more likely to be the direct victims of phone-based social engineering (page 33). I have personally received some of these calls and can see how someone could fall for it. Servers are compromised more often than endpoints (user devices), and when endpoints are compromised it’s to jump off and attack servers. Take a look


Understanding and Selecting DSP: Technical Architecture

One of the key strengths of DSP is its ability to scan and monitor multiple databases running on multiple database management systems (DBMSs) across multiple platforms (Windows, Unix, etc.). The DSP tool aggregates information from multiple collectors to a secure central server. In some cases the central server/management console also collects information, while in other cases it serves merely as a repository for data from collectors. This creates three options for deployment, depending on organizational requirements:

• Single Server/Appliance: A single server, appliance, or software agent serves as both the sensor/collection point and management console. This mode is typically used for smaller deployments.
• Two-tier Architecture: This option consists of a central management server and remote collection points/sensors. The central server does no direct monitoring, but aggregates information from remote systems, manages policies, and generates alerts. It may also perform assessment functions directly. The remote collectors may use any of the collection techniques.
• Hierarchical Architecture: Collection points/sensors/scanners aggregate to business-level or geographically distributed management servers, which in turn report to an enterprise management server. Hierarchical deployments are best suited for large enterprises, which may have different business unit or geographic needs. They can also be configured to only pass certain kinds of data between the tiers to manage large volumes of information or maintain unit/geographic privacy, and to satisfy policy requirements.

This can be confusing because each server or appliance can manage multiple assessment scanners, network collectors, or agent-based collectors, and may also perform some monitoring directly. But a typical deployment includes a central management server (or cluster) handling all the management functions, with collectors spread out to handle activity monitoring on the databases.

Blocking architecture options

There are two different ways to block queries, depending on your deployment architecture and choice of collection agents:

• Agent-based Blocking: The software agent is able to directly block queries – the actual technique varies with the vendor’s agent implementation. Agents may block inbound queries, returned results, or both.
• Proxy-based Blocking: Instead of connecting directly to the database, all connections are to a local or network-based proxy (which can be a separate server/appliance or local software). The proxy analyzes queries before passing them to the database, and can block by policy.

We will go into more detail on blocking later in this series, but the important point is that if you want to block, you need to either deploy some sort of software agent or proxy the database connection. Next we will recap the core features of DAM and show the subtle additions to DSP.
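As a rough illustration of proxy-based blocking, here is a minimal sketch: each query passes a policy check before being forwarded, and anything matching a blocking policy never reaches the database. The patterns and the forwarding stub are simplistic, hypothetical examples; commercial DSP proxies and agents are protocol-aware rather than simple regex filters.

# Minimal sketch of the proxy-based blocking idea: policy check, then forward or block.
import re

# Example policies: block obvious mass-extraction and injection-style patterns.
BLOCK_PATTERNS = [
    re.compile(r"\bselect\s+\*\s+from\s+credit_cards\b", re.IGNORECASE),
    re.compile(r";\s*drop\s+table\b", re.IGNORECASE),
    re.compile(r"\bunion\s+select\b.*\bpassword\b", re.IGNORECASE),
]


def inspect(query: str) -> bool:
    """Return True if the query may be forwarded to the database."""
    return not any(p.search(query) for p in BLOCK_PATTERNS)


def handle(query: str) -> str:
    if not inspect(query):
        # Alert and block: the query never reaches the database.
        return "BLOCKED by policy"
    # forward_to_database(query) would go here in a real proxy.
    return "forwarded"


if __name__ == "__main__":
    print(handle("SELECT name FROM customers WHERE id = 7"))   # forwarded
    print(handle("SELECT * FROM credit_cards"))                # BLOCKED by policy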


Incite 3/21/2012: Wheel Refresh

It seems like a lifetime ago. June of 1999. Actually it was more than XX1’s lifetime ago. The Boss and I still lived in Northern Virginia. I was close to the top of the world. I started a software company, we raised a bunch of VC money, and the Internet Revolution was booming. The lease on my crappy 1996 Pathfinder was up, and I wanted some spiffy new wheels. Given my unadulterated arrogance at that time in my life, I’m surprised I didn’t go buy a 911, since that’s always been my dream car. But in a fit of logic, I figured there was plenty of time for fancy cars and planes once we took the company public. But I did want something a bit sportier than a truck, so I bought a 1999 Acura TL. It had 225 horses, lots of leather, and cool rims. In fact, I still feel pretty good about it almost 13 years later.

I’m still driving my trusty TL. Well, I guess the term driving is relative. I drive about 7,500 miles a year. Maybe. With three kids, we don’t take trips in the TL any more, so basically I use it to go to/from Starbucks and the airport. At almost 100,000 miles, it’s starting to show its age. It’s all dented up from some scrapes with my garage (thanks Grandma!) and countless nights spent in an airport parking lot. But I can’t complain – it’s been a great car. But the TL is at the end of the road and my spidey sense is tingling. That model is notorious for transmission failures. So far I’ve been lucky, but I fear my luck is about to run out. The car just doesn’t feel right, which means it’s probably time for a pre-emptive strike to refresh my wheels.

What to buy? I’m not a car guy, but my super-ego (the proverbial devil on my shoulder) looks longingly at a 911 Carrera Convertible. That’s sweet. Or maybe a BMW or Lexus gunship. A man of my stature, at least in my own mind, deserves some hot wheels like that. Then my practical side kicks in (the angel on my other shoulder) and notes that I frequently need to put the 3 kids in the car, and the kids aren’t getting smaller. No SmartCar for me. I also want something that gets decent gas mileage, since it’s clear that gas prices aren’t coming down anytime soon. But it’s so boring and lame to be practical, says the Devil on my shoulder. We know how that ended up for Pinto in Animal House, but what will happen with me? I can’t really pull off the sports car right now, so maybe I should get an ass-kicking truck. One of those huge trucks with the Yosemite Sam mud flaps and a gun rack. It will come in handy when I need to cart all that mulch from Home Depot back to my house. Oh right, I don’t cart mulch. My landscaper does that. Again, the practical side kicks in – reminding me that folks needing to make obvious statements about their badassitude usually have major self-esteem problems.

What happened to me? Years ago, this decision would have been easy. I’d get the sports car or the truck and not think twice. Until I got my gas bill or had to tie one of the kids to the roof to get anywhere. But that’s not the way I’m going. I’m (in all likelihood) going to get a Prius V. Really. A hybrid station wagon, and I’ll probably get the wood paneling stickers, just to make the full transformation into Clark Griswold. Though if I tied Grandma to the roof, I wouldn’t be too popular in my house. Even better, the Prius will make a great starter car when XX1 starts to drive 4-5 years from now.
That will work out great, as by then it’ll be time for my mid-life crisis and the 911 convertible… -Mike

Photo credits: “porsche 911 hot wheels” originally uploaded by Guillermo Vasquez

Heavy Research

We’re back at work on a variety of blog series. Here is the research currently underway. Remember you can get our Heavy Feed via RSS, where you can access all our content in its unabridged glory.

• Defending iOS Data: Introduction; iOS Security and Data Protection; Data Flow on iOS; Protecting Data on Unmanaged Devices; Secure File Apps for Unmanaged Devices
• Watching the Watchers (Privileged User Management): Access to the Keys (to the Kingdom)
• Understanding and Selecting DSP: Data and Event Collection

Incite 4 U

Assuming the worst is not new: It’s pretty funny that our pals at Dark Reading are now talking about Security’s New Reality: Assuming the Worst – meaning you need to assume compromise and act accordingly. Duh. Gosh, I’ve been talking about Reacting Faster since early 2007 (I actually checked and the term first appeared on Security Incite in December of 2006. Praise the Google.), and it’s not like I have been the only one, but it is pretty cool to see everyone else jumping on the you’re screwed bandwagon. I was talking to a freelance writer Monday, and she asked what kind of skills I thought people getting into security need to work on, and I said forensics. Obviously there are a lot of fundamentals that need to be in place to understand how to figure out something is wrong, but it’s clear that capable incident responders will be in high demand for a long time. And even incapable incident responders will be busy, as companies in the middle of coping with breaches can’t afford to be too picky. – MR

Password Manager Kinda-fail: Elcomsoft conducted a security review of 17 different personal password managers, examining their encryption and key management. The full report (PDF) contains most of the interesting information. The problem is that the report is not very well written. The attacks they discuss all depend on having physical access to the device, or being able to gain access to the device backups – a power-station hack on


iOS Data Security: Secure File Apps for Unmanaged Devices

To finish our discussion of securing data on unmanaged devices, let’s focus on three categories of apps designed for secure file access.

Sandboxed file browsers and mobile file gateways

While messaging apps generally do a good job of handling email, they don’t necessarily link into file servers or integrate into enterprise encryption. Secure file management apps skip messaging and focus on access to enterprise file repositories. They support the following core features:

• Use of either iOS Data Protection or their own embedded encryption.
• A secure connection to the file repository (which may require a VPN for remote access to internal sources).
• Support for the iOS document viewer to view supported document types (iWork, Microsoft Office, PDF, etc.).
• Authentication and authorization to enable or restrict access on a per-user, per-device basis.
• Ability to restrict or allow “Open In…” to control file movement to other apps.

There are a few different flavors. Most require server components or plugins to repositories like Microsoft SharePoint. If the tool doesn’t isolate documents by restricting the “Open In…” feature, it is not suitable for enterprise use.

• Sandboxed file browser: These allow connections to enterprise file shares using standard connections and store the downloaded documents in an encrypted container. Most use Data Protection rather than their own encryption scheme. They are usually read-only, although some support annotation of PDF files.
• Sandboxed cloud file browser: Instead of relying on direct network connections to enterprise file stores, these apps access cloud storage repositories and are specific to their cloud service.
• Mobile file management gateway: This is a more refined extension of the sandboxed file browser. Rather than allowing access directly to file repositories, mobile devices connect to the gateway using a sandboxed app and are then given access to files through the gateway. These support more granular policies, monitoring, and directory integration. They often also support multiple mobile platforms (yes, there is a world outside Apple).
• Document management system extensions: These are similar to a mobile file management gateway, but instead of a separate server they run as plugins to an existing document management system. Users connect directly to the document management system (such as SharePoint) via the extension/plugin, which might be centrally managed. Some of these tools support commenting and annotating files (usually restricted to PDFs) but we know expanded document editing is on the roadmap.

Sandboxed mobile file encryption apps

Mobile computing is one of the big drivers of cloud computing, and cloud storage is, in turn, expanding use of encryption. Encryption apps extend the sandboxed file browser by integrating with enterprise encryption. They expand on the file browser by:

• Maintaining file and document isolation in the sandbox.
• Transparently decrypting files accessed by the app (when integrated into an enterprise encryption scheme and key management server).
• Accepting files from other apps via “Open In…” and keeping them encrypted in private storage, then enabling protected access to such files.
• Supporting connections to common cloud storage platforms such as Box.net and Dropbox.

The big division in this category is between apps designed to open files passed to them by other applications, such as encrypted mail attachments, and those that integrate directly into cloud storage or other file browsers.
Some tools also support decryption of password-protected files, as opposed to files managed using centralized enterprise keys. When integrated with enterprise key management, the entire process of accessing encrypted files on iOS is completely transparent to the user. They go into the app, which connects to the file store, and files are stored within the app’s secure data store and decrypted as needed. The documents can then be restricted so they are only usable within the app, as with our other sandboxing examples. Some apps also support encryption of files from other apps. This actually provides more protection than normal desktop encryption because it’s far easier to isolate documents and keep them within the app.

Mobile Enterprise Digital Rights Management

The next option for handling files securely on unmanaged devices expands encryption into Enterprise Digital Rights Management (EDRM). EDRM provides more granular controls that travel with the documents, getting closer to information-centric security. The easiest way to distinguish between an encryption app and EDRM on iOS is:

• An encrypted document opened in a sandbox may be isolated in that app, but isn’t generally protected when accessed on other systems which also have access (such as a laptop or desktop). Protection is binary, like a lockbox – controlling only who can access the file. We rely on the sandbox app for additional controls, such as restricting movement into other apps – usually on an all-or-nothing basis.
• An EDRM-protected document stays encrypted, but can only be opened by applications that respect the more granular controls applied to the file (including compatible mobile apps). This allows a wide range of control – including who can open the file, who can edit it, who can forward it via email, which devices can access it, and even time limits for access.

Encryption is for trusted users and environments, while EDRM also supports untrusted environments. In the mobile space EDRM is better for protecting files you want to share externally and still protect – while encryption is generally only suitable for internal use or securely transmitting documents, and cannot restrict what recipients do with a document once they have it. EDRM is very oriented towards office documents, while encryption is better for arbitrary files.

Mobile EDRM requires a server or service to manage the keys. The rights themselves are embedded in the documents. There are a variety of potential deployment models, including:

• Mobile file gateway
• File server/SharePoint integration
• Email client integration
• Email server integration
• Microsoft Office integration

To simplify this a bit: documents can either be manually protected when you create them in Office or email them, when you upload them to an EDRM-enabled file gateway/storage platform, or automatically when you save them into a protected directory or email them to a certain destination. The documents can only be read using the vendor’s proprietary solution (app), which enforces all the
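For the enterprise key management integration described in the encryption-app section above, the underlying mechanics can be sketched as simple envelope encryption: a fresh per-file key protects each document, and an enterprise-held key (normally living in a key management server, not a local variable) wraps the per-file key. This sketch uses the third-party Python cryptography package and purely illustrative names; it is not how any particular product works.

# Hedged sketch of envelope encryption with an enterprise-held wrapping key.
from cryptography.fernet import Fernet

enterprise_key = Fernet.generate_key()   # stand-in for a key held by the KMS
kms = Fernet(enterprise_key)


def protect(document: bytes):
    """Encrypt a document with a fresh per-file key; return (wrapped_key, ciphertext)."""
    file_key = Fernet.generate_key()
    ciphertext = Fernet(file_key).encrypt(document)
    wrapped_key = kms.encrypt(file_key)          # only the KMS can unwrap this
    return wrapped_key, ciphertext


def open_in_sandbox(wrapped_key: bytes, ciphertext: bytes) -> bytes:
    """What the sandboxed app does transparently after authenticating to the KMS."""
    file_key = kms.decrypt(wrapped_key)
    return Fernet(file_key).decrypt(ciphertext)


if __name__ == "__main__":
    wrapped, blob = protect(b"Q3 acquisition plan")
    print(open_in_sandbox(wrapped, blob))        # b'Q3 acquisition plan'

Holding the wrapping key centrally is what lets the enterprise revoke access or rotate keys without touching every device.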


iOS Data Security: Protecting Data on Unmanaged Devices

There is a whole spectrum of options available for securing enterprise data on iOS, depending on how much you want to manage the device and the data. ‘Spectrum’ isn’t quite the right word, though, because these options aren’t on a linear continuum – instead they fall into three major buckets:

• Options for unmanaged devices
• Options for partially managed devices
• Options for fully managed devices

Here’s how we define these categories:

• Unmanaged devices are fully in the control of the end user. No enterprise policies are enforced, and the user can install anything and otherwise use the device as they please.
• Partially managed devices use a configuration profile or Exchange ActiveSync policies to manage certain settings, but the user is otherwise still in control of the device. The device is the user’s, but they agree to some level of corporate management. They can install arbitrary applications and change most settings. Typical policies require them to use a strong passcode and enable remote wipe by the enterprise. They may also need to use an on-demand VPN for at least some network traffic (e.g., to the enterprise mail server and intranet web services), but the user’s other traffic goes unmonitored through whatever network connection they are currently using.
• Fully managed devices also use a configuration profile, but are effectively enterprise-owned. The enterprise controls what apps can be installed, enforces an always-on VPN that the user can’t disable, and has the ability to monitor and manage all traffic to and from the device.

Some options fall into multiple categories, so we will start with the least protected and work our way up the hierarchy. We will indicate which options carry forward and will work in the higher (tighter) buckets. Note: This series is focused exclusively on data security. We will not discuss mobile device management in general, or the myriad of other device management options! With that reminder, let’s start with a brief discussion of your data protection options for the first bucket.

Unmanaged Devices

Unmanaged devices are completely under the user’s control, and the enterprise is unable to enforce any device policies. This means no configuration profiles and no Exchange ActiveSync policies to enforce device settings such as passcode requirements.

User managed security with written policies

Under this model you don’t restrict data or devices in any way, but institute written policies requiring users to protect data on the devices themselves. It isn’t the most secure option, but we are nothing if not comprehensive. Basic policies should include the following:

• Require Passcode: After n minutes
• Simple Passcode: OFF
• Erase Data: ON

Additionally we highly recommend you enable some form of remote wipe – either the free Find My iPhone, Exchange ActiveSync, or a third-party app. These settings enable Data Protection and offer the highest level of device security possible without additional tools, but they aren’t generally sufficient for an enterprise or anything other than the smallest businesses. We will discuss policies in more detail later, but make sure the user signs a mobile device policy saying they agree to these settings, then help them get the device configured. But, if you are reading this paper, this is not a good option for you.

No access to enterprise data

While it might seem obvious, your first choice is to completely exclude iOS devices. Depending on how your environment is set up, this might actually be difficult.
There are a few key areas you need to check, to ensure an iOS device won’t slip through:

• Email server: If you support IMAP/POP or even Microsoft Exchange mailboxes, and the user knows the right server settings and you haven’t implemented any preventative controls, they will be able to access email from their iPhone or iPad. There are numerous ways to prevent this (too many to cover in this post), but as a rule of thumb, if the device can access the server and you don’t have per-device restrictions, there is usually nothing to prevent them from getting email on the iDevice.
• File servers: Like email servers, if you allow the device to connect to the corporate network and have open file shares, the user can access the content. There are plenty of file access clients in the App Store capable of accessing most server types. If you rely on username and password protection (as opposed to network credentials) then the user can fetch content to their device.
• Remote access: iOS includes decent support for a variety of VPNs. Unless you use certificate or other device restrictions, and especially if your VPN is based on a standard like IPSec, there is nothing to prevent the end user from configuring the VPN on their device. Don’t assume users won’t figure out how to VPN in, even if you don’t provide direct support.

To put this in perspective, in the Securosis environment we allow extensive use of iOS. We didn’t have to configure anything special to support iOS devices – we simply had to not configure anything to block them.

Email access with server-side data loss prevention (DLP)

With this option you allow users access to their enterprise email, but you enforce content-based restrictions using DLP to filter messages and attachments before they reach the devices. Most DLP tools filter at the mail gateway (MTA) – not at the mail server (e.g., Exchange). Unless your DLP tool offers explicit support for filtering based on content and device, you won’t be able to use this option. If your DLP tool is sufficiently flexible, though, you can use it to prevent sensitive content from going to the device, while allowing normal communications. You can either build this off existing DLP policies or create completely new device-specific ones.

Sandboxed messaging app / walled garden

One of the more popular options today is to install a sandboxed app for messaging and file access, to isolate and control enterprise data. These apps do not use the iOS mail client, and handle all enterprise emails and attachments internally. They also typically manage calendars and contacts, and some include access to intranet web pages. The app may use iOS Data Protection, implement its own
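Returning to the server-side DLP option above, here is a minimal sketch of the idea with a deliberately crude content check. Real DLP gateways use far richer detection, and the device check and user agent string here are hypothetical examples.

# Minimal sketch: strip or quarantine sensitive attachments before they reach a device.
import re

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")   # crude primary account number detector


def is_mobile_device(user_agent: str) -> bool:
    # Hypothetical check; real DLP/ActiveSync integrations key off device
    # partnership records rather than the user agent string.
    return any(token in user_agent for token in ("iPhone", "iPad"))


def filter_attachments(attachments: dict, user_agent: str) -> dict:
    """Return only the attachments that policy allows to reach this device."""
    if not is_mobile_device(user_agent):
        return attachments                      # policy applies only to mobile devices
    allowed = {}
    for name, content in attachments.items():
        text = content.decode("utf-8", errors="ignore")
        if CARD_PATTERN.search(text):
            print(f"quarantined {name}: possible cardholder data")
        else:
            allowed[name] = content
    return allowed


if __name__ == "__main__":
    atts = {"report.csv": b"4111 1111 1111 1111, John Doe", "memo.txt": b"lunch at noon"}
    print(list(filter_attachments(atts, "Apple-iPhone/902.179")))   # ['memo.txt']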


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting it on white paper networks, or translating it into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.