Watching the Watchers: The Privileged User Lifecycle

As we described in the Introduction to this series, organizations can't afford to ignore the issue of privileged users (P-Users) any more. A compromised P-User (PUPwned) can cause all sorts of damage, so they need to be actively managed. In the last post we presented the business drivers and threats – now let's talk about solutions. Like most analysts, we favor a model to describe things, so we call ours the Privileged User Lifecycle. In this post we will describe each aspect of the lifecycle at a high level. But before the colorful lifecycle diagram, let's scope the effort. Our lifecycle starts when the privileged user receives escalated privileges, and ends when they are no longer privileged or leave the organization, whichever comes first. So here is the whole lifecycle:

Provisioning Entitlements

The Privileged User Management lifecycle starts when you determine someone gets escalated privileges. That means you need both control and an audit trail for granting these entitlements. Identity Management is a science all by itself, so this series won't tackle it in any depth – we will just point out the connections between (de-)provisioning escalated privileges and the beginning and end of the lifecycle. Keep in mind that these privileged users have the keys to the kingdom, so you need tight controls over their provisioning process, including separation of duties and a defined workflow with adequate authorization. Identity management is repository-centric, so any controls you implement throughout the lifecycle need native integration with the user repository. It doesn't work well to store user credentials multiple times in multiple places.

Another aspect of this provisioning process involves defining the roles and entitlements for each administrator, or more likely for groups of administrators. We favor a default deny model, which denies administrators all management capabilities by default, grants capabilities only through explicit authorization to manage specific devices, and defines what they can do on each device. Although the technology to enforce entitlements can be complicated (we will get to that later in this series), defining the roles and assigning administrators to the proper groups can be even more challenging. This typically involves building significant consensus among the operations team (which is always fun), but it is on the critical path for P-User management.

Now we get to the fun stuff: actively managing what specific administrators can do. To gain administrative rights to a device, an attacker (or rogue administrator) needs access, entitlements, and credentials. The next aspects of our lifecycle address these issues.

Restrict Access

Let's first tackle restricting access to devices. The key is to allow administrators access only to the devices they are entitled to manage. Any other device should be blocked to that specific P-User – that's what default deny means in this context. This is one of the oldest network defense tactics: segmentation. If a P-User can't logically get to a device, they can't manage it nefariously. There are quite a few ways to isolate devices, both physically and logically, including proxy gateways and device-based agents. We will discuss a number of these tactics later in the series. When restricting access, you also need to factor in authentication – logging into a proxy gateway and/or managing particularly sensitive devices should require multiple factors.
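To make the default deny model described above concrete, here is a minimal sketch of an entitlement check. It is our illustration, not any product's implementation – the roles, device names, and commands are all hypothetical:

```python
# A minimal, hypothetical sketch of a default deny entitlement model.
# Roles, device names, and commands are made up for illustration.

# Explicit grants: role -> device -> allowed commands.
# Anything not listed is denied -- there is no "allow" fallback.
ENTITLEMENTS = {
    "linux-admins": {
        "web-server-01": {"restart-service", "update-packages"},
        "web-server-02": {"restart-service", "update-packages"},
    },
    "storage-admins": {
        "san-array-01": {"expand-volume", "snapshot-volume"},
    },
}


def is_authorized(role: str, device: str, command: str) -> bool:
    """Return True only for an explicit grant; the default is deny."""
    return command in ENTITLEMENTS.get(role, {}).get(device, set())


# A Linux admin can patch a web server they are entitled to manage...
assert is_authorized("linux-admins", "web-server-01", "update-packages")
# ...but cannot touch the SAN at all (no grant, so default deny applies)...
assert not is_authorized("linux-admins", "san-array-01", "snapshot-volume")
# ...and even a storage admin cannot run a command nobody granted.
assert not is_authorized("storage-admins", "san-array-01", "delete-volume")
```

The important design choice is the direction of the default: the lookup only ever returns True for an explicit grant, so a new device or a new role starts out unmanageable by everyone until someone deliberately authorizes it.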
Obviously integrating private and public cloud instances into the P-User management environment requires different tactics, because you don't necessarily have physical access to the network to govern access. But the attractiveness of the cloud means you cannot simply avoid it. We will also delve into tactics to restrict access in cloud-specific and hybrid environments later.

Protect Credentials

Once a P-User has network access to a device, they still need credentials to manage it, so administrator credentials need appropriate protection. The next step in the lifecycle typically involves setting up a password vault to store administrator credentials and provide a system for one-time use. There are a number of architectural decisions involved in vaulting administrator passwords, and they impact the other controls in place: restricting access and enforcing entitlements.

Enforce Entitlements

If an administrator has access and the credentials, the final aspect of the controls involves determining what they can do. Many organizations opt for a carte blanche policy, providing root access and allowing P-Users to do whatever they want. Others take a finer-grained approach, defining the specific commands a P-User can perform on each class of device. For instance, you might allow an administrator to update the device or load software, but not delete a logical volume or load an application. As we mentioned above, the granularity enforced here depends on the granularity you use to provision the entitlements. Technically, this approach requires some kind of agent capability on the managed device, or running sessions through a proxy gateway which can intercept and block commands as necessary (see the sketch at the end of this post). We will discuss architectures later in the series when we dig into this control.

Privileged User Monitoring

Finally, keep a close eye on what all the P-Users do when they access devices. That's why we call this series "Watching the Watchers" – the lifecycle doesn't end once the controls are implemented. Privileged User Monitoring can mean a number of different things, from collecting detailed audit logs on every transaction to actually capturing video of each session. There are multiple benefits to detailed monitoring, including forensics and compliance. We should also mention the deterrent benefits of privileged user monitoring. Human nature dictates that people are more diligent when they know someone is watching. So Rich can be happy that human nature hasn't changed. Yet. When administrators know they are being watched, they are more likely to behave properly – not just from a security standpoint but also from an operational standpoint.

No Panacea

Of course this Privileged User Lifecycle is not a panacea. A determined attacker will find a path to compromise your systems, regardless of how tightly you manage privileged users. No control is foolproof – there are ways to gain access to protected devices and to defeat password vaults. So we will examine the weaknesses of each of these tactics later in this series.
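As promised above, here is a minimal sketch of how a proxy gateway could tie these pieces together: checking a one-time credential out of the vault, filtering commands against an explicit allowlist, and logging every decision for monitoring. This is a hypothetical illustration under our own assumptions – the PasswordVault and ProxySession classes, user, device, and command names are stand-ins, not any vendor's API:

```python
# A minimal, hypothetical sketch of a proxy gateway session combining a
# password vault (one-time credentials), entitlement enforcement (command
# allowlist), and privileged user monitoring (audit log). PasswordVault
# and ProxySession are illustrative stand-ins, not a real product's API.
import logging
import secrets

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
audit = logging.getLogger("puser-audit")


class PasswordVault:
    """Toy vault: issues a credential per session and rotates it afterward."""

    def __init__(self) -> None:
        self._passwords: dict[str, str] = {}  # device -> current password

    def checkout(self, device: str) -> str:
        self._passwords[device] = secrets.token_urlsafe(16)
        return self._passwords[device]

    def rotate(self, device: str) -> None:
        # Invalidate the checked-out credential so it is single-use.
        self._passwords[device] = secrets.token_urlsafe(16)


class ProxySession:
    """Mediates every command; anything not explicitly allowed is blocked."""

    def __init__(self, vault: PasswordVault, user: str, device: str,
                 allowed: set[str]) -> None:
        self.vault, self.user, self.device = vault, user, device
        self.allowed = allowed
        self._credential = vault.checkout(device)  # P-User never sees this
        audit.info("session start user=%s device=%s", user, device)

    def run(self, command: str) -> bool:
        permitted = command in self.allowed
        audit.info("user=%s device=%s command=%s allowed=%s",
                   self.user, self.device, command, permitted)
        # A real gateway would forward a permitted command to the device
        # using self._credential; here we only record the decision.
        return permitted

    def close(self) -> None:
        self.vault.rotate(self.device)  # credential dies with the session
        audit.info("session end user=%s device=%s", self.user, self.device)


vault = PasswordVault()
session = ProxySession(vault, "alice", "web-server-01",
                       allowed={"update-packages", "restart-service"})
session.run("update-packages")  # permitted, and logged
session.run("delete-volume")    # blocked, and logged
session.close()
```

A real gateway would forward permitted commands over a protocol such as SSH and store the audit trail centrally; the point here is simply that the P-User never touches the actual credential, and that every command – allowed or blocked – leaves a record.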


How to Read and Act on the 2012 Verizon Data Breach Investigations Report (DBIR)

Verizon just published their excellent 2012 Data Breach Investigations Report, and as usual it's full of statistical goodness. (We will link to it once it's formally released – we are writing this based on our preview copy.) As we did last year, we will focus on how to read the DBIR, what it teaches us, and how it should change what you do – we'll leave the headline fodder for others to rehash. If you happen to check back to our old post you might notice a bit of cut and paste, because once we reach the advice section, many things are unchanged since last year. I also decided to stick with the structure I used last year because it got a lot of positive feedback.

How to read the DBIR

Before jumping into the trends, there are several key points to keep in mind while reading the report (which covers 855 incidents):

  • This is a breach report, not a generic cybercrime or attack report. The DBIR only includes data from incidents where data was stolen. If no data was exfiltrated it doesn't count and was not included. All those LOIC attacks DDoSing your servers aren't in here.
  • Definitions matter. Throughout the DBIR the authors try to be extremely clear about how they define aspects of the data they analyze, such as direct vs. participatory factors. These are really important to understand.
  • Know where the data comes from. The 2012 report includes data from 855 incidents investigated by Verizon, the US Secret Service, the Dutch National High Tech Crime Unit, the Australian Federal Police, the Irish Reporting & Information Security Service, and the Police Central e-Crime Unit of the London Metropolitan Police. In some places only Verizon data is used (and the authors are clear when they do this). There is definitely some sample bias, but that doesn't reduce the value of this report in any way. For example, if we correlate these findings with the Mandiant M-Trends report (registration, unfortunately, required) we see consistency in trends, despite the differences in client base, focus, and investigative techniques.
  • Verizon finally broke out large vs. small organizations. This was always my biggest wish, and for many of the numbers we can compare organizations of more than 1,000 employees against smaller ones. (I actually consider 1,000 to be mid-sized, but it's still a useful demarcation.)

And now for my subjective interpretation of the top trends in the report:

  • The industrialization of attacks continues: The majority of breaches targeted smaller organizations, used automated tools, and targeted credit cards. These weren't necessarily the most harmful breaches, but they certainly constituted the greatest volume.
  • Hacktivism and mega breaches are back, and target larger organizations: Of the 174 million records lost, 100 million were the result of hacktivism against large organizations. This was only 21% of breaches against large organizations, but accounted for 61% of records lost.
  • Larger organizations may be better at security, but still get breached: A variety of statistics throughout the report seem to show that large organizations are less prone to compromise by industrialized, automated attacks... but they are also more likely to be targeted by serious attackers.
  • Remote services are the biggest vector for small organizations, and web applications for large ones: This is on page 32, and should set off alarm bells.
  • Malware is everywhere: 61% of incidents involved malware combined with hacking, and 69% of incidents included malware overall – which accounted for 95% of lost records.
Here are some additional highlights and areas that deserve special attention, in no particular order:

  • Ignore the massive increase in records lost. This is really hard to quantify accurately, and a few outliers always have a big impact. Besides, knowing how many records were lost doesn't help you defend yourself in any way! Focus on the attack and defense trends, not the incident sizes. If anything, this trend is a regression to the mean (see page 45).
  • Ignore the fact that 96% of breached organizations weren't PCI compliant. Most of those were Level 4 merchants. This shows a change in targets, not necessarily a change in the value (or lack thereof) of PCI.
  • Outsourcers are a major contributing factor, especially for smaller organizations. There are endless low-end IT services companies, and very few of them appear to follow good security practices, even when PCI compliance is involved.
  • Small businesses rarely run their own payment systems, and the outsourced systems they rely on are still being heavily compromised via poorly secured remote access software. I'm sure pcAnywhere being totally pwned had nothing to do with this 🙂
  • Page 25 provides a good sense of how large organizations face a more diverse range of attacks. This is likely due to both being more targeted and having better perimeter defenses against automated attacks. It's hard to have an unsecured remote access server facing the Internet when you are required to get quarterly vulnerability scans (even cheap ones).
  • Attackers always use the minimum effort necessary! If they don't need to take a lot of time and burn an 0-day, why bother? They don't become bad guys because of a strong work ethic. So the breach statistics naturally skew toward simpler attack techniques. This is particularly important because big data sets like this don't necessarily reflect either the defenses or the attack techniques in sophisticated situations.
  • Larger organizations are better at managing default passwords, but experience higher levels of phishing and credential compromise. This, again, makes a lot of sense. Smaller companies, especially those relying on service providers, are less likely to look for default credentials or have processes in place to manage them. Since larger organizations tend to knock off this low-hanging fruit, the bad guys move up a level and focus on attacking the larger employee population to compromise credentials.
  • Small organizations are more likely to be the direct victims of phone-based social engineering (page 33). I have personally received some of these calls and can see how someone could fall for them.
  • Servers are compromised more often than endpoints (user devices), and when endpoints are compromised it's to jump off and attack servers.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.