Securosis Research

Cloud Data Security: Share (Rough Cut)

In our last post in this series, we covered the cloud implications of the Use phase of our Data Security Cycle. In this post we will move on to the Share phase. Please remember that we are only covering technologies at a high level in this series on the cycle; we will run a second series on detailed technical implementations of data security in the cloud a little later.

Definition

Share includes the controls we use when exchanging data between users, customers, and partners. Where Use focuses on controls while a user interacts with the data as an individual, Share covers the controls once they start to exchange that data (or back-end data exchange). In cloud computing we see a major emphasis on application and logical controls, with encryption for secure data exchange, DLP/CMP to monitor communications and block policy violations, and activity monitoring to track back-end data exchanges.

Cloud computing introduces two new complexities in managing data sharing:

  • Many data exchanges occur within the cloud, and are invisible to external security controls. Traditional network and endpoint monitoring probably won’t be effective. For example, when you share a Google Docs document with another user, the only local interaction is a secure browser connection. Email filtering, a traditional way of tracking electronic document exchanges, won’t really help.
  • For leading-edge enterprises that build dynamic data security policies using tools like DLP/CMP, those tools may not work off a cloud-based data store. If you are building a filtering policy that matches account numbers from a customer database, and that database is hosted in the cloud as an application or platform, you may need to perform some kind of mass data extract and conversion to feed the data security tool (a rough sketch of this kind of policy appears at the end of this post).

Although the cloud adds some complexity, it can also improve data sharing security in a well-designed deployment. Especially in SaaS deployments, we gain new opportunities to employ logical controls that are often difficult or impossible to manage in our current environments. Although our focus is on cloud-specific tools and technologies, we still review some of the major user-side options that should be part of any data security strategy.

Steps and Controls

Controls in this phase, spanning structured/application and unstructured data: Activity Monitoring and Enforcement (Database Activity Monitoring, Cloud Activity Monitoring/Logs, Application Activity Monitoring, Network DLP/CMP, Endpoint DLP/CMP); Encryption (Network/Transport Encryption, Application-Level Encryption, Email Encryption, File Encryption/EDRM); Logical Controls (Application Logic, Row Level Security); Application Security (see the Application Security Domain section).

Activity Monitoring and Enforcement

We initially covered Activity Monitoring and Enforcement in the Use phase, and many of those controls are also used in the Share phase. Our focus now switches from watching how users interact with the data to when and where they exchange it with others. We include technologies that track data exchanges at four levels:

  • Individual users exchanging data with other internal users within the cloud or a managed environment.
  • Individual users exchanging data with outside users, either via connections made from the cloud directly, or data transferred locally and then sent out.
  • Back-end systems exchanging data to/from the cloud, or within multiple cloud-based systems.
  • Back-end systems exchanging data to external systems/servers; for example, a cloud-based employee human resources system that exchanges healthcare insurance data with a third-party provider.

Database Activity Monitoring (DAM): We initially covered DAM in the Use phase. In the Share phase we use DAM to track data exchanges to other back-end systems within or outside the cloud. Rather than tracking all activity in the database, the tool is tuned to focus on these exchanges and generate alerts on policy violations (such as a new query being run outside of expected behavior), or track the activity for auditing and forensics purposes. The challenge is deploying a DAM tool in a cloud environment, but an advantage is greater visibility into data leaving the DBMS than might otherwise be possible.

Application Activity Monitoring: Similar to DAM, we initially covered this in the Use phase. We again focus our efforts on tracking data sharing, both by users and back-end systems. While it’s tougher to monitor individual pieces of data, it’s not difficult to build in auditing and alerting for larger data exchanges, such as output from a cloud-based database to a spreadsheet.

Cloud Activity Monitoring and Logs: Depending on your cloud service, you may have access to some level of activity monitoring and logging in the control plane (as opposed to building it into your specific application). To be considered a Share control, this monitoring needs to specify both the user/system involved and the data being exchanged.

Network Data Loss Prevention/Content Monitoring and Protection: DLP/CMP uses advanced content analysis and deep packet inspection to monitor network communications traffic, alerting on (and sometimes enforcing) policy violations. DLP/CMP can play multiple roles in protecting cloud-based data. In managed environments, network DLP/CMP policies can track (and block) sensitive data exchanges to untrusted clouds. For example, policies might prevent users from attaching files with credit card numbers to a cloud email message, or block publishing of sensitive engineering plans to a cloud-based word processor. DLP can also work in the other direction: monitoring data pulled from a cloud deployment to the desktop or other non-cloud infrastructure. DLP/CMP tools aren’t limited to user activities, and can monitor, alert, and enforce policies on other types of TCP data exchange, such as FTP, which might be used to transfer data from the traditional infrastructure to the cloud. DLP/CMP also has the potential to be deployed within the cloud itself, but this is only possible in a subset of IaaS deployments, considering the deployment models of current tools. (Note that some email SaaS providers may also offer DLP/CMP as a service.)

Endpoint DLP/CMP: We initially covered Endpoint DLP/CMP in the Use phase, where we discussed monitoring and blocking local activity. Many endpoint DLP/CMP tools also track network activity, which is useful as a supplement when the endpoint is outside the corporate network’s DLP/CMP coverage.

Encryption

In the Store phase we covered encryption for protecting data at rest.
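To make the DLP/CMP filtering example above more concrete, here is a minimal Python sketch of how an exact-match policy might be fed from a periodic extract of a cloud-hosted customer database. The account number format, field names, and salt handling are illustrative assumptions, not a description of any particular product:

```python
import hashlib
import re

# Hypothetical example: exact-match DLP policy built from an extract of a
# cloud-hosted customer database. Account numbers are stored as salted hashes
# so the policy itself doesn't become another copy of the sensitive data.
SALT = b"example-salt"  # placeholder; a real deployment manages this secret properly

def fingerprint(value: str) -> str:
    """Return a salted hash of a normalized account number."""
    return hashlib.sha256(SALT + value.strip().encode()).hexdigest()

def build_policy(extracted_account_numbers):
    """Build the lookup set from a periodic extract of the cloud database."""
    return {fingerprint(acct) for acct in extracted_account_numbers}

ACCOUNT_PATTERN = re.compile(r"\b\d{10,16}\b")  # assumed account number format

def scan_message(text: str, policy: set) -> list:
    """Return candidate account numbers in an outbound message that match the policy."""
    return [m for m in ACCOUNT_PATTERN.findall(text) if fingerprint(m) in policy]

# Usage sketch
policy = build_policy(["1234567890123456"])  # fed from the mass data extract
hits = scan_message("Please bill card 1234567890123456", policy)
if hits:
    print(f"Policy violation: {len(hits)} account number(s) detected")
```

The point of hashing the extract is that the DLP policy itself doesn’t become yet another plaintext copy of the customer data you are trying to protect.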


Cloud Data Security: Use (Rough Cut)

In our last post in this series, we covered the cloud implications of the Store phase of the Data Security Cycle (our first post was on the Create phase). In this post we’ll move on to the Use phase. Please remember we are only covering technologies at a high level in this series – we will run a second series on detailed technical implementations of data security in the cloud a little later.

Definition

Use includes the controls that apply when the user is interacting with the data – either via a cloud-based application, or the endpoint accessing the cloud service (e.g., a client/cloud application, direct storage interaction, and so on). Although we primarily focus on cloud-specific controls, we also cover local data security controls that protect cloud data once it moves back into the enterprise. These are controls for the point of use – we will cover additional network-based controls in the next phase.

Users interact with cloud data in three ways:

  • Web-based applications, such as most SaaS applications.
  • Client applications, such as local backup tools that store data in the cloud.
  • Direct/abstracted access, such as a local folder synchronized with cloud storage (e.g., Dropbox), or VPN access to a cloud-based server.

Cloud data may also be accessed by other back-end servers and applications, but the usage model is essentially the same (web, dedicated application, direct access, or an abstracted service).

Steps and Controls

Controls in this phase, spanning structured/application and unstructured data: Activity Monitoring and Enforcement (Database Activity Monitoring, Application Activity Monitoring, Endpoint Activity Monitoring, File Activity Monitoring, Portable Device Control, Endpoint DLP/CMP, Cloud-Client Logs); Rights Management (Label Security, Enterprise DRM); Logical Controls (Application Logic, Row Level Security); Application Security (see the Application Security Domain section).

Activity Monitoring and Enforcement

Activity Monitoring and Enforcement includes advanced techniques for capturing all data access and usage activity in real or near-real time, often with preventative capabilities to stop policy violations. Although activity monitoring controls may use log files, they typically include their own collection methods or agents for deeper activity details and more rapid monitoring. Activity monitoring tools also include policy-based alerting and blocking/enforcement that log management tools lack. None of the controls in this category are cloud specific, but we have attempted to show how they can be adapted to the cloud. These first controls integrate directly with the cloud infrastructure:

Database Activity Monitoring (DAM): Monitoring all database activity, including all SQL activity. Can be performed through network sniffing of database traffic, agents installed on the server, or external monitoring, typically of transaction logs. Many tools combine monitoring techniques, and network-only monitoring is generally not recommended. DAM tools are managed externally to the database to provide separation of duties from database administrators (DBAs). All DBA activity can be monitored without interfering with their ability to perform job functions. Tools can alert on policy violations, and some tools can block certain activity. Current DAM tools are not cloud specific, and thus are only compatible with environments where the tool can either sniff all network database access (possible in some IaaS deployments, or if provided by the cloud service), or where a compatible monitoring agent can be installed in the database instance.
Application Activity Monitoring: Similar to Database Activity Monitoring, but at the application level. As with DAM, tools can use network monitoring or local agents, and can alert and sometimes block on policy violations. Web Application Firewalls are commonly used for monitoring web application activity, but cloud deployment options are limited. Some SaaS or PaaS providers may offer real-time activity monitoring, but log files or dashboards are more common. If you have direct access to your cloud-based logs, you can use a near real-time log analysis tool and build your own alerting policies (a small sketch of this approach appears at the end of this post).

File Activity Monitoring: Monitoring access and use of files in enterprise storage. Although there are no cloud-specific tools available, these tools may be deployable for cloud storage that uses (or presents an abstracted version of) standard file access protocols. Gives an enterprise the ability to audit all file access and generate reports (which may sometimes aid compliance reporting). Capable of independently monitoring even administrator access, and can alert on policy violations.

The next three tools are endpoint data security tools that are not cloud specific, but may still be useful in organizations that manage endpoints:

Endpoint Activity Monitoring: Primarily a traditional data security tool, although it can be used to track user interactions with cloud services. Watches all user activity on a workstation or server. Includes monitoring of application activity; network activity; storage/file system activity; and system interactions such as cut and paste, mouse clicks, application launches, etc. Provides deeper monitoring than endpoint DLP/CMF tools that focus only on content that matches policies. Capable of blocking activities such as pasting content from a cloud storage repository into an instant message. Extremely useful for auditing administrator activity on servers, assuming you can install the agent. An example of cloud usage would be deploying activity monitoring agents on all endpoints in a customer call center that accesses a SaaS application for user support.

Portable Device Control: Another traditional data security tool with limited cloud applicability, used to restrict access to, or file transfers to, portable storage such as USB drives and DVD burners. For cloud security purposes, we only include tools that either track and enforce policies based on data originating from a cloud application or storage, or are capable of enforcing policies based on data labels provided by that cloud storage or application. Portable device control is also capable of allowing access but auditing file transfers and sending that information to a central management server. Some tools integrate with encryption to provide dynamic encryption of content passed to portable storage. This capability will eventually be integrated into endpoint DLP/CMF tools that can make more granular decisions based on the content, rather than blanket policies that apply to all data. Some DLP/CMF tools already include this capability.

Endpoint DLP: Endpoint Data Loss Prevention/Content Monitoring and Filtering tools that monitor and restrict usage of data through content
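To illustrate the do-it-yourself log alerting mentioned under Application Activity Monitoring, here is a rough Python sketch. It assumes your provider exposes activity logs as newline-delimited JSON that you can pull down locally; the field names, threshold, and log location are placeholders, not any specific provider's format:

```python
import json
import time

# Hypothetical sketch: watch a local copy of a cloud application's activity log
# and alert when a single user exports an unusually large number of records.
EXPORT_THRESHOLD = 1000  # assumed policy: flag exports above this size

def follow(path):
    """Yield new lines appended to a local copy of the cloud activity log."""
    with open(path) as f:
        f.seek(0, 2)  # start at the end of the file
        while True:
            line = f.readline()
            if not line:
                time.sleep(1.0)
                continue
            yield line

def monitor(path):
    for line in follow(path):
        try:
            event = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip malformed entries
        if event.get("action") == "export" and event.get("record_count", 0) > EXPORT_THRESHOLD:
            print(f"ALERT: {event.get('user')} exported {event['record_count']} records")

# monitor("saas_activity.log")  # field names and log location are assumptions
```

In practice you would feed this from whatever export or API your provider offers, and route alerts into your existing monitoring rather than printing them.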


Cloud Data Security: Store (Rough Cut)

In our last post in this series, we covered the cloud implications of the Create phase of the Data Security Cycle. In this post we’re going to move on to the Store phase. Please remember that we are only covering technologies at a high level in this series on the cycle; we will run a second series on detailed technical implementations of data security in the cloud a little later.

Definition

Store is defined as the act of committing digital data to structured or unstructured storage (database vs. files). Here we map the classification and rights to security controls, including access controls, encryption, and rights management. I include certain database and application controls, such as labeling, in rights management – not just DRM. Controls at this stage also apply to managing content in storage repositories (cloud or traditional), such as using content discovery to ensure that data is in approved/appropriate repositories.

Steps and Controls

Controls in this phase, spanning structured/application and unstructured data: Access Controls (DBMS Access Controls, Administrator Separation of Duties, File System Access Controls, Application/Document Management System Access Controls); Encryption (Field Level Encryption, Application Level Encryption, Transparent Database Encryption, Media Encryption, File/Folder Encryption, Virtual Private Storage, Distributed Encryption); Rights Management (Application Logic, Tagging/Labeling, Enterprise DRM); Content Discovery (Cloud-Provided Database Discovery Tool, Database Discovery/DAM, DLP/CMP Discovery, Cloud-Provided Content Discovery, DLP/CMP Content Discovery).

Access Controls

One of the most fundamental data security technologies, built into every file and management system, and one of the most poorly used. In cloud computing environments there are two layers of access controls to manage – those presented by the cloud service, and the underlying access controls used by the cloud provider for their infrastructure. It’s important to understand the relationship between the two when evaluating overall security – in some cases the underlying infrastructure may be more secure (no direct back-end access), whereas in others the controls may be weaker (a database with multi-tenant connection pooling).

DBMS Access Controls: Access controls within a database management system (cloud or traditional), including proper use of views vs. direct table access. Use of these controls is often complicated by connection pooling, which tends to anonymize the user between the application and the database. A database/DBMS hosted in the cloud will likely use the normal access controls of the DBMS (e.g., hosted Oracle or MySQL). A cloud-based database such as Amazon’s SimpleDB or Google’s BigTable comes with its own access controls. Depending on your security requirements, it may be important to understand how the cloud-based DB stores information, so you can evaluate potential back-end security issues.

Administrator Separation of Duties: Newer technologies implemented in databases to limit database administrator access. On Oracle this is called Database Vault, and on IBM DB2 I believe you use the Security Administrator role and Label Based Access Controls. When evaluating the security of a cloud offering, understand the capabilities to limit both front- and back-end administrator access. Many cloud services support various administrator roles for clients, allowing you to define administrative roles for your own staff.
Some providers also implement technology controls to restrict their own back-end administrators, such as isolating their database access. You should ask your cloud provider for documentation on what controls they place on their own administrators (and super-admins), and what data they can potentially access.

File System Access Controls: Normal file access controls, applied at the file or repository level. Again, it’s important to understand the differences between the file access controls presented to you by the cloud service and their access control implementation on the back end. There is an incredible variety of options across cloud providers, even within a single SPI tier – many of them completely proprietary to a specific provider. For the purposes of this model, we only include access controls for cloud-based file storage (IaaS), and the back-end access controls used by the cloud provider. Due to the increased abstraction, everything else falls into the Application and Document Management System category.

Application and Document Management System Access Controls: This category includes any access control restrictions implemented above the file or DBMS storage layers. In non-cloud environments this includes access controls in tools like SharePoint or Documentum. In the cloud, this category includes any content restrictions managed through the cloud application or service, abstracted from the back-end content storage. These are the access controls for any services that allow you to manage files, documents, and other ‘unstructured’ content. The back-end storage can consist of anything from a relational database to flat files to traditional storage, and should be evaluated separately. When designing or evaluating access controls you are concerned first with what’s available to control your own user/staff access, and then with the back end, to understand who at your cloud provider can see what information. Don’t assume that the back end is necessarily less secure – some providers use techniques like bit splitting (combined with encryption) to ensure no single administrator can see your content at the file level, with strong separation of duties to protect data at the application layer.

Encryption

The most overhyped technology for protecting data, but still one of the most important. Encryption is far from a panacea for all your cloud data security issues, but when used properly and in combination with other controls, it provides effective security. In cloud implementations, encryption may help compensate for issues related to multi-tenancy, public clouds, and remote/external hosting.

Application-Level Encryption: Collected data is encrypted by the application, before being sent into a database or file system for storage. For cloud-based applications (e.g., public or private SaaS) this is usually the recommended option because it protects the data from the user all the way down to storage. For added security, the encryption functions and keys can be separated from the application itself, which also limits the access of application administrators to sensitive data.

Field-Level Encryption: The database management system encrypts fields within a database, normally at the column level. In cloud implementations you will generally want to encrypt data at the application
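Here is a minimal sketch of the application-level encryption pattern described above, assuming the Python "cryptography" package and a key service that lives outside the application. The get_data_key() function is a stand-in for whatever external key management you actually use:

```python
# A minimal sketch of application-level encryption, assuming the "cryptography"
# package and key material fetched from a service kept separate from the
# application and its administrators.
from cryptography.fernet import Fernet

def get_data_key() -> bytes:
    """Placeholder for a call to an external key management service.
    Returned keys should never be stored alongside the encrypted data."""
    return Fernet.generate_key()  # stand-in only; a real app would fetch, not generate

key = get_data_key()
f = Fernet(key)

# Encrypt before the value ever reaches the database or file store
ciphertext = f.encrypt(b"4111-1111-1111-1111")
store_me = ciphertext  # this is what gets written to cloud storage

# Decrypt only inside the application, after the user is authorized
plaintext = f.decrypt(store_me)
```

The design point is simply that the application sees keys only transiently, and neither the storage layer nor its administrators ever see plaintext or key material.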


There Are No Trusted Sites: New York Times Edition

Continuing our seemingly endless series on “trusted” sites that are compromised and then used to attack visitors, this week’s parasitic host is the venerable New York Times. It seems the Times was compromised via their advertising system (a common theme in these attacks) and was serving up scareware over the weekend (for more on scareware, and how to clean it, see Dancho Danchev’s recent article at the Zero Day blog). I recently had to clean up some scareware myself on my in-laws’ computer, but fortunately they didn’t actually pay for anything. Here are some of our previous entries in this series: BusinessWeek, AMEX, Paris Hilton, and McAfee. Don’t worry, there are plenty more out there – these are just a few that struck our fancy.


Say Hello to the New (Old) Guys

A little over a month ago we decided to try opening up an intern and Contributing Analyst program. Somewhat to our surprise, we ended up with a bunch of competitive submissions, and we’ve been spending the past few weeks performing interviews and running candidates through the wringer. We got all mean and even made them present some research on a nebulous topic, just to see what they’d come up with. It was a really tough decision, but we decided to go with one intern and one Contributing Analyst.

David Meier, better known to most of you as Windexh8r, starts today as the very first Securosis intern. Dave was a very early commenter on the blog, has an excellent IT background, and helped us create the ipfw firewall rule set that’s been somewhat popular. He blogs over at Security Stallions, and we’re pretty darn excited he decided to join us. He’s definitely a no-BS kind of guy who loves poking holes in things and looking for unique angles of analysis. We’re going to start hazing him as soon as he sends the last paperwork over (with that liability waiver). We’re hoping he’s not really as good as we think, or we’ll have to promote him and find another intern to beat.

David Mortman, the CSO-in-Residence of Echelon One, and a past contributor to this blog, is joining us as our first Contributing Analyst. David’s been a friend for years now, and we even split a room at DefCon. Since I owed David a serious favor after he covered the blog for me while I was out last year for my shoulder surgery, he was sort of a shoo-in for the position. He has an impressive track record in the industry, and we are extremely lucky to have him. You might also know David as the man behind the DefCon Security Jam, and he’s a heck of a bread baker (and cooker of other things, but I’ve only ever tried his bread).

Dave and David (yeah, we know) can be reached at dmeier@securosis.com and dmortman@securosis.com (and all their other email/Twitter/etc. addresses). You’ll start seeing them blogging and participating in research over the next few weeks. We’ve gone ahead and updated their bios on our About page, and listed any conflicts of interest there. (Interns and Contributing Analysts are included under our existing NDAs and confidentiality agreements, but will be restricted from activities, materials, and coverage of areas where they have conflicts of interest.)


Cloud Data Security Cycle: Create (Rough Cut)

Last week I started talking about data security in the cloud, and I referred back to our Data Security Lifecycle from back in 2007. Over the next couple of weeks I’m going to walk through the cycle and adapt the controls for cloud computing. After that, I will dig in deep on implementation options for each of the potential controls. I’m hoping this will give you a combination of practical advice you can implement today, along with a taste of potential options that may develop down the road. We do face a bit of a chicken-and-egg problem with this series, since some of the technical details of controls implementation won’t make sense without the cycle, but the cycle won’t make sense without the details of the controls. I decided to start with the cycle, and will pepper in specific examples where I can to help it make sense. Hopefully it will all come together at the end. In this post we’re going to cover the Create phase.

Definition

Create is defined as generation of new digital content, either structured or unstructured, or significant modification of existing content. In this phase we classify the information and determine appropriate rights. This phase consists of two steps – Classify and Assign Rights.

Steps and Controls

Classify – Application Logic and Tag/Labeling for structured/application data; Tag/Labeling for unstructured data. Assign Rights – Label Security for structured/application data; Enterprise DRM for unstructured data.

Classify

Classification at the time of creation is currently either a manual process (most unstructured data), or handled through application logic. Although the potential exists for automated tools to assist with classification, most cloud and non-cloud environments today classify manually for unstructured or directly-entered database data, while application data is automatically classified by business logic. Bear in mind that these are controls applied at the time of creation; additional controls such as access control and encryption are managed in the Store phase. There are two potential controls:

Application Logic: Data is classified based on business logic in the application. For example, credit card numbers are classified as such based on field definitions and program logic. Generally this logic is based on where data is entered, or via automated analysis (keyword or content analysis).

Tagging/Labeling: The user manually applies tags or labels at the time of creation, e.g., manually tagging via drop-down lists or open fields, manual keyword entry, suggestion-assisted tagging, and so on.

Assign Rights

This is the process of converting the classification into rights applied to the data. Not all data necessarily has rights applied, in which case security is provided through additional controls during later phases of the cycle. (Technically rights are always applied, but in many cases they are so broad as to be effectively non-existent.) These are rights that follow the data, as opposed to access controls or encryption which, although they protect the data, are decoupled from its creation. There are two potential technical controls here:

Label Security: A feature of some database management systems and applications that adds a label to a data element – such as a database row, column, or table, or file metadata – classifying the content in that object. The DBMS or application can then implement access and logical controls based on the data label. Labels may be applied at the application layer, but only count as assigning rights if they also follow the data into storage.
Enterprise Digital Rights Management (EDRM): Content is encrypted, and access and use rights are controlled by metadata embedded with the content. The EDRM market has been somewhat self-limiting due to the complexity of enterprise integration and assigning and managing rights.

Cloud SPI Tier Implications

Software as a Service (SaaS): Classification and rights assignment are completely controlled by the application logic implemented by your SaaS provider. Typically we see Application Logic, since that’s a fundamental feature of any application – SaaS or otherwise. When evaluating your SaaS provider you should ask how they classify sensitive information and then later apply security controls, or if all data is lumped together into a single monolithic database (or flat files) without additional labels or security controls to prevent leakage to administrators, attackers, or other SaaS customers. In some cases, various labeling technologies may be available. You will, again, need to work with your potential SaaS provider to determine if these labels are used only for searching/sorting data, or if they also assist in the application of security controls.

Platform as a Service (PaaS): Implementation in a PaaS environment depends completely on the available APIs and development environment. As with internal applications, you will maintain responsibility for how classification and rights assignment are managed. When designing your PaaS-based application, identify potential labeling/classification APIs you can integrate into program logic. You will need to work with your PaaS provider to understand how they can implement security controls at both the application and storage layers – for example, it’s important to know if and how data is labeled in storage, and if this can be used to restrict access or usage (business logic).

Infrastructure as a Service (IaaS): Classification and rights assignments depend completely on what is available from your IaaS provider. Here are some specific examples:

  • Cloud-based database: Work with your provider to determine if data labels are available, and with what granularity. If they aren’t provided, you can still implement them as a manual addition (e.g., a row field or segregated tables), but understand that the DBMS will not be enforcing the rights automatically, and you will need to program management into your application.
  • Cloud-based storage: Determine what metadata is available. Many cloud storage providers don’t modify files, so anything you define in an internal storage environment should work in the cloud (see the sketch below). The limitation is that the cloud provider won’t be able to tie access or other security controls to the label, which is sometimes an option with document management systems. Enterprise DRM, for example, should work fine with any cloud storage provider.

This should give you a good idea of how to manage classification and
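As a purely illustrative example of attaching a classification label at creation time, here is a short Python sketch that stores the label as object metadata in cloud storage. It assumes boto3 and Amazon S3; other providers expose similar metadata or tagging features, and nothing here reflects any specific product's labeling scheme:

```python
# Illustrative sketch only: apply a classification label at creation time by
# storing it as object metadata alongside the content in cloud storage.
import boto3

def create_document(bucket: str, key: str, content: bytes, classification: str):
    """Write new content with its classification label attached as metadata.
    The label travels with the object, but the storage service itself won't
    enforce access decisions based on it -- that logic stays in your application."""
    s3 = boto3.client("s3")
    s3.put_object(
        Bucket=bucket,
        Key=key,
        Body=content,
        Metadata={"classification": classification},  # e.g., "confidential"
    )

# create_document("example-bucket", "plans/q3.txt", b"...", "confidential")
```

The label can then feed later phases of the cycle (content discovery, DLP/CMP policies), but enforcement based on it has to live in your application logic or supporting tools, not in the raw storage service.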


Musings on Data Security in the Cloud

So I’ve written about data security, and I’ve written about cloud security, so it’s probably about time I wrote something about data security in the cloud. To get started, I’m going to skip over defining the cloud. I recommend you take a look at the work of the Cloud Security Alliance, or skip on over to Hoff’s cloud architecture post, which was the foundation of the architectural section of the CSA work. Today’s post is going to be a bit scattershot, as I throw out some of the ideas rolling around my head from thinking about building a data security cycle/framework for the cloud.

We’ve previously published two different data/information-centric security cycles. The first, the Data Security Lifecycle (second on the Research Library page), is designed to be a comprehensive, forward-looking model. The second, the Pragmatic Data Security Cycle, is designed to be more useful in limited-scope data security projects. Together they are designed to give you the big picture, as well as a pragmatic approach for securing data in today’s resource-constrained environments. These are different from your typical Information Lifecycle Management cycles, to reflect the different needs of the security audience.

When evaluating data security in the context of the cloud, the issue isn’t that we’ve suddenly blasted these cycles into oblivion, but that when and where you can implement controls is shifted, sometimes dramatically. Keep in mind that moving to the cloud is every bit as much an opportunity as a risk. I’m serious – when’s the last time you had the chance to completely re-architect your data security from the ground up? For example, one of the most common risks cited when considering cloud deployment is lack of control over your data: any remote admin can potentially see all your sensitive secrets. Then again, so can any local admin (with access to the system). What’s the difference? In one case you have an employment agreement and their name; in the other you have a Service Level Agreement and contracts… which should include a way to get the admin’s name. The problems are far more similar than they are different. I’m not one of those people saying the cloud isn’t anything new – it is, and some of these subtle differences can have a big impact – but we can definitely scope and manage the data security issues. And when we can’t achieve our desired level of security… well, that’s time to figure out what our risk tolerance is.

Let’s take two specific examples:

Protecting Data on Amazon S3: Amazon S3 is one of the leading IaaS services for stored data, but it includes only minimal security controls compared to an internal storage repository. Access controls (which may not integrate with your internal access controls) and transit encryption (SSL) are available, but data is not encrypted in storage and may be accessible to Amazon staff or anyone who compromises your Amazon credentials. One option, which we’ve talked about here before, is Virtual Private Storage. You encrypt your data before sending it off to Amazon S3, giving you absolute control over keys and ACLs. You maintain complete control while still retaining the benefits of cloud-based storage. Many cloud backup solutions use this method (a rough sketch appears at the end of this post).

Protecting Data at a SaaS Provider: I’d be more specific and list a SaaS provider, but I can’t remember which ones follow this architecture. With SaaS we have less control and are basically limited to the security controls built into the SaaS offering.
That isn’t necessarily bad – the SaaS provider might be far more secure than you are – but not all SaaS offerings are created equal. To secure SaaS data you need to rely more on your contracts and an understanding of how your provider manages your data.

One architectural option for your SaaS provider is to protect your data with individual client keys managed outside the application (this is actually a useful internal data security architectural choice as well). It’s application-level encryption with external key management. All sensitive client data is encrypted in the SaaS provider’s database. Keys are managed in a dedicated appliance/service, and provided temporarily to the application based on user credentials. Ideally the SaaS provider’s admins are properly segregated, so that no single admin has database, key management, and application credentials. Since this potentially complicates support, it might be restricted to only the most sensitive data. (All your information might still be encrypted, but for support purposes could be accessible to the approved administrators/support staff.) The SaaS provider then also logs all access by internal and external users. This is only one option, but your SaaS provider should be able to document their internal data security, and even provide you with external audit reports.

As you can see, just because you are in the cloud doesn’t mean you completely give up any chance of data security. It’s all about understanding security boundaries, control options, technology, and process controls. In future posts we’ll start walking through the Data Security Lifecycle and matching specific issues and control options in each phase against the SPI (SaaS, PaaS, IaaS) cloud models.
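To make the Virtual Private Storage example concrete, here is a rough Python sketch of encrypting data locally before it ever reaches Amazon S3. It assumes the boto3 and cryptography packages; the bucket and object names are placeholders, and real key management (storage, rotation, and access control for the keys themselves) is deliberately out of scope:

```python
# A rough sketch of the Virtual Private Storage idea: encrypt locally, keep the
# keys yourself, and send only ciphertext to Amazon S3.
import boto3
from cryptography.fernet import Fernet

def upload_encrypted(bucket: str, object_key: str, data: bytes, data_key: bytes):
    """Encrypt data with a locally managed key before it leaves your environment."""
    ciphertext = Fernet(data_key).encrypt(data)
    boto3.client("s3").put_object(Bucket=bucket, Key=object_key, Body=ciphertext)

def download_decrypted(bucket: str, object_key: str, data_key: bytes) -> bytes:
    """Fetch the ciphertext and decrypt it locally; S3 only ever sees ciphertext."""
    obj = boto3.client("s3").get_object(Bucket=bucket, Key=object_key)
    return Fernet(data_key).decrypt(obj["Body"].read())

# key = Fernet.generate_key()   # stored and managed on your side, never sent to S3
# upload_encrypted("example-bucket", "backups/db.dump", b"sensitive bytes", key)
```

Because Amazon only ever stores ciphertext, a compromise of your S3 credentials or a curious provider administrator exposes nothing useful without the keys you hold.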


Some Follow-Up Questions for Bob Russo, General Manager of the PCI Council

I just finished reading a TechTarget editorial by Bob Russo, the General Manager of the PCI Council, where he responded to an article by Eric Ogren. Believe it or not, I don’t intend this to be some sort of snarky anti-PCI post. I’m happy to see Mr. Russo responding directly to open criticism, and I’m hoping he will see this post and maybe we can also get a response. I admit I’ve been highly critical of PCI in the past, but I now take the position that it is an overall positive development for the state of security. That said, I still consider it to be deeply flawed, and when it comes to payments it can never materially improve the security of a highly insecure transaction system (plain text data and magnetic stripe cards). In other words, as much as PCI is painful, flawed, and ineffective, it has also done more to improve security than any other regulation or industry initiative in the past 10 years. Yes, it’s sometimes a distraction, and the checklist mentality reduces security in some environments, but overall I see it as a net positive.

Mr. Russo states:

It has always been the PCI Security Standards Council’s assertion that everyone in the payment chain, from (point-of-sale) POS manufacturers to e-shopping cart vendors, merchants to financial institutions, should play a role to keep payment information secure. There are many links in this chain – and each link must do their part to remain strong.

and

However, we will only be able to improve the security of the overall payment environment if we work together, globally. It is only by working together that we can combat data compromise and escape the blame game that is perpetuated post breach.

I agree completely with those statements, which leads to my questions:

1. In your list of the payment chain you do not include the card companies. Don’t they also have responsibility for securing payment information, and don’t they technically have the power to implement the most effective changes by improving the technical foundation of transactions?
2. You have said in the past that no PCI compliant company has ever been breached. Since many of those organizations were certified as compliant, that appears to be either a false statement, or an indicator of a very flawed certification process. Do you feel the PCI process itself needs to be improved?
3. Following up on question 2, if so, how does the PCI Council plan on improving the process to prevent compliant companies from being breached?
4. Following up (again) on question 2, does this mean you feel that a PCI compliant company should be immune from security breaches? Is this really an achievable goal?
5. One of the criticisms of PCI is that there seems to be a lack of accountability in the certification process. Do you plan on taking more effective actions to discipline or drop QSAs and ASVs that were negligent in their certification of non-compliant companies?
6. Is the PCI Council considering controls to prevent “QSA shopping”, where companies bounce around to find a QSA that is more lenient?
7. QSAs can currently offer security services to clients that directly affect compliance. This is seen as a conflict of interest in all other major audit processes, such as financial audits. Will the PCI Council consider placing restrictions on these conflict of interest situations?
8. Do you believe we will ever reach a state where a company that was certified as compliant is later breached, and the PCI Council will be willing to publicly back that company and uphold their certification? (I realize this relates again to question 2.)

I know you may not be able to answer all of these, but I’ve tried to keep the questions fair and relevant to the PCI process without devolving into the blame game.

Thank you,


We Know How Breaches Happen

I first started tracking data breaches back in December of 2000, when I received my very first breach notification email, from Egghead Software. When Egghead went bankrupt in 2001 and was acquired by Amazon, rather than assuming the breach caused the bankruptcy, I did some additional research and learned they were on a downward spiral long before their little security incident. This broke with the conventional wisdom floating around the security rubber-chicken circuit at the time, and was a fine example of the difference between correlation and causation.

Since then I’ve kept trying to translate what little breach material we’ve been able to get our collective hands on into as accurate a picture as possible of the real state of security. We don’t really have a lot to work with, despite the heroic efforts of the Open Security Foundation Data Loss Database (for a long time the only source of breach statistics). As with the rest of us, the Data Loss DB is completely reliant on public breach disclosures. Thanks to California S.B. 1386 and the mishmash of breach notification laws that have developed since 2005, we have a lot more information than we used to, but anyone in the security industry knows only a portion of breaches are reported (despite notification laws), and we often don’t get any details of how the intrusions occurred. The problem with the Data Loss DB is that it’s based on incomplete information. They do their best, but more often than not we lack the real meat needed to make appropriate security and risk decisions. For example, we’ve seen plenty of vendor press releases on how lost laptops, backup tapes, and other media are the biggest source of data breaches. In reality, lost laptops and media are merely the greatest source of reported potential exposures. As I’ve talked about before, there is little or no correlation between these lost devices and any actual fraud. All those stats mean is that a physical thing was lost or stolen… no more, no less, unless we find a case where we can correlate a loss with actual fraud.

On the research side I try to compensate for the statistics problem by taking more of a case study approach, as best I can using public resources. Even with the limited information released, as time passes we tend to dig up more and more details about breaches, especially once cases make it into court. That’s how we know, for example, that both CardSystems and Heartland Payment Systems were breached (5 years apart) using SQL injection against a web application (the xp_cmdshell command in a poorly configured version of SQL Server, to be specific).

In the past year or two we’ve gained some additional data sources, most notably the Verizon Data Breach Investigations Report, which provides real, anonymized data regarding breaches. It’s limited in that it only reflects those incidents where Verizon participated in the investigation, and by the standardized information they collected, but it starts to give us better insight beyond public breach reports. Yet we still only have a fraction of the information we need to make appropriate risk management decisions. Even after 20 years in the security world (if you count my physical security work), I’m still astounded that the bad guys share more real information on means and methods than we do. We are thus extremely limited in assessing macro trends in security breaches. We’re forced to use far more anecdotal information than a skeptic like myself is comfortable with.
We don’t even have a standard for assessing breach costs (as I’ve proposed), never mind more accurate crime and investigative statistics that could help craft our prioritization of security defenses. Seriously – decades into the practice of security, we don’t have any fracking idea if forcing users to change passwords every 90 days provides more benefit than burden.

All that said, we can’t sit on our asses and wait for the data. As unscientific as it may be, we still need to decide which security controls to apply where and when. In the past couple weeks we’ve seen enough information emerging that I believe we now have a good idea of two major methods of attack:

  • As we discussed here on the blog, SQL injection via web applications is one of the top attack vectors identified in recent breaches. These attacks are not only against transaction processing systems, but are also used to gain a toehold on internal networks to execute more invasive attacks (a minimal illustration follows at the end of this post).
  • Brian Krebs has identified another major attack vector, where malware is installed on insecure consumer and business PCs, then used to gather information to facilitate illicit account transfers. I’ve seen additional reports that suggest this is also a major form of attack.

I’d love to back these with better statistics, but until those are available we have to rely on a mix of public disclosure and anecdotal information. We hear rumors of other vectors, such as customized malware (to avoid AV filters) and the ever-present-and-all-powerful insider threat, but there isn’t enough to validate those as a major trend quite yet.

If we look across all our sources, we see a consistent picture emerging. The vast majority of cybercrime still seems to take advantage of known vulnerabilities that can be addressed using common practices. The Verizon report certainly calls out unpatched systems, configuration errors, and default passwords as the most common breach sources. While we can’t state with complete certainty that patching systems, blocking SQL injection, removing default passwords, and enforcing secure configurations will prevent most breaches, the information we have does indicate that’s a reasonable direction. Combine that with following the Data Breach Triangle by reducing use of sensitive data (and using something like DLP to find it), and tightening up egress filtering on transaction processing networks and other sensitive data locations, and you are probably in pretty good shape. For financial institutions struggling with their clients being breached, they can add out-of-band transaction verification (phone calls or even automated text messages),
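Since SQL injection keeps coming up in these cases, here is a minimal, self-contained Python illustration of why parameterized queries matter. It uses sqlite3 purely to keep the example runnable; the table, values, and injected string are all made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, card TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '4111111111111111')")

user_input = "alice' OR '1'='1"  # a typical injection attempt

# Vulnerable pattern: attacker-controlled input is concatenated into the SQL text
rows = conn.execute(
    "SELECT card FROM users WHERE name = '" + user_input + "'"
).fetchall()
print("concatenated query returned", len(rows), "rows")  # returns data it shouldn't

# Safe pattern: the driver binds the value, so the input is never parsed as SQL
rows = conn.execute(
    "SELECT card FROM users WHERE name = ?", (user_input,)
).fetchall()
print("parameterized query returned", len(rows), "rows")  # returns 0 rows
```

The fix is old, boring, and well understood, which is rather the point about known vulnerabilities and common practices.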


The Ranting Roundtable, PCI Edition

Sometimes you just need to let it all out. With all the recent events around breaches and PCI, I thought it might be cathartic to pull together a few of our favorite loudmouths and spend a little time in a no-rules roundtable. There’s a little bad language, a bit of ranting, and a little more productive discussion than I intended. Joining me were Mike Rothman, Alex Hutton, Nick Selby, and Josh Corman. It runs about 50 minutes, and we mostly focus on PCI. The Ranting Roundtable, PCI. Odds are we’ll do more of these in the future. Even if you don’t like them, they’re fun for us. No goats were harmed in the making of this podcast.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.