Understanding and Selecting Database Security Platforms

We love the Totally Transparent Research process. Times like this – when we hit upon new trends, discover unexpected customer use cases, or uncover something going on behind the scenes – are when our open model really shows its value. We started a Database Activity Monitoring 2.0 series last October and suddenly halted it because our research showed that platform evolution has shifted from convergence to independent visions of database security, with customer requirements splintering. These changes are significant enough that we need to discuss them publicly, so you can understand why we are making such a departure from the way we have described a solution we have covered for the past 6+ years – especially since Rich, back in his Gartner days, coined the term “Database Activity Monitoring” in the first place. Understanding what’s going on behind the scenes should help you see how these fundamental changes alter the technical makeup of products and require new vocabulary to describe what we see.

With that, welcome to the reboot of DAM 2.0. We renamed this series Understanding and Selecting Database Security Platforms to reflect massive changes in products and the market. We will fully explain why as we progress through the series, but for now suffice it to say that the market has simply expanded beyond the bounds of the Database Activity Monitoring definition. DAM is now only a subset of the Database Security Platform market. For once this isn’t some analyst firm making up a new term to snag headlines – as we go through the functions and features you will see that real products on the market today go far beyond mere monitoring. The technology trends, bundles of security products, and use cases we will present are best captured by the term “Database Security Platform”, which most accurately reflects the state of the market today.

This series will consist of 6 distinct parts, some of which appeared in our original Database Activity Monitoring paper:

  • Defining DSP: Our longstanding definition for DAM is broad enough to include many of the changes, but will be slightly updated to incorporate new data collection and analysis options. The core definition does not change much – we took two anticipated trends into account when we initially created it – but a couple of subtle changes encompass a lot more real estate in the data center.
  • Available Features: Different products enter the DSP market from different angles, so we think it best to list all the possible major features. We will break these out into core components vs. additional features to help focus on the important ones.
  • Data Collection: For several years the minimum feature set for DAM included database queries, database events, configuration data, audit trails, and permission management. The continuing progression of new data and event sources, from both relational and non-relational data sources, extends the reach of the security platform to many new application types. We will discuss the implications in detail.
  • Policy Enforcement: The addition of hybrid data and database security protection bundled into a single product. Masking, redaction, dynamically altered query results, and even tokenization build on existing blocking and connection reset options to offer finer-grained security controls. We will discuss the technologies and how they are bundled to solve different problems. (A short sketch of dynamic masking appears at the end of this post.)
  • Platforms: The platform bundles – these different combinations of capabilities – best demonstrate the change from DAM to DSP. There are bundles that focus on data security, compliance policy administration, application security, and database operations. We will spend time discussing these different visions and how they are positioned for customers.
  • Use Cases & Market Drivers: What companies are looking to secure mirrors their adoption of new platforms, such as collaboration platforms (SharePoint), cloud resources, and unstructured data repositories. Compliance, operations management, performance monitoring, and data security requirements follow the adoption of these new platforms, and that demand has driven the adaptation and evolution of DAM into DSP. We will examine these use cases and how DSP products are positioned to address them.

A huge proportion of the original paper was influenced by the user and vendor communities (I can confirm this – I commented on every post during development, a year before I joined Securosis – Adrian). As with that first version, we strongly encourage user and vendor participation during this series. It changes the resulting paper – for the better – and really helps the community understand what’s great and what needs improvement. All pertinent comments will be open for public review, including any discussion on Twitter, which we will reflect here. We think you will enjoy this series, and we look forward to your participation! Next up: Defining DSP!
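As a postscript to the Policy Enforcement item above: here is a minimal sketch, in Python, of what “dynamically altered query results” can mean in practice – masking sensitive columns based on who is asking. The roles, column names, and masking rules are hypothetical, invented purely for illustration; a real product enforces this in line with the database connection, not in application code.

    import re

    SENSITIVE_COLUMNS = {"ssn", "credit_card", "email"}

    # Hypothetical per-role policy: which sensitive columns get masked.
    MASKING_POLICY = {
        "dba": set(),                       # trusted role: nothing masked
        "support": {"ssn", "credit_card"},  # support sees email, not SSNs/cards
    }

    def mask_value(column, value):
        """Redact a value while preserving enough shape to stay useful."""
        if column == "credit_card":
            return re.sub(r"\d(?=\d{4})", "*", value)  # keep the last 4 digits
        return "***"  # default: full redaction

    def apply_masking(role, rows):
        """Rewrite query results according to the caller's role."""
        masked = MASKING_POLICY.get(role, SENSITIVE_COLUMNS)  # unknown role: fail closed
        return [
            {col: mask_value(col, val) if col in masked else val
             for col, val in row.items()}
            for row in rows
        ]

    rows = [{"name": "Pat", "ssn": "123-45-6789", "credit_card": "4111111111111111"}]
    print(apply_masking("support", rows))
    # [{'name': 'Pat', 'ssn': '***', 'credit_card': '************1111'}]

The point is the granularity: blocking and connection resets are all-or-nothing, while masking lets the same query succeed with different answers depending on context.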


Bridging the Mobile Security Gap: The Need for Context

As we discussed in the first post of this series, consumerization and mobility will remain macro drivers of security for the foreseeable future, and force us to stare down network anarchy. We can certainly go back to the security playbook and deal with an onslaught of unwieldy devices by implementing some kind of agentry on them to provide a measure of control. But the results of this device-centric approach have been mixed. And that’s being kind.

On the other hand, from a network security standpoint a device is a device is a device. Whether it’s a desktop sitting in a call center, a laptop in an airline club, or a smartphone traipsing around town, the goal of a network security professional is the same. Our network security charter is always to make sure those devices access the right stuff at the right time, and don’t have access to anything else. So we enforce segmented networks to restrict devices to certain trusted network zones. Remember: segmentation is your friend – and that model holds, up to a point.

But here’s the rub in dealing with those pesky smartphones: the folks using these devices actually want to do stuff. You know, productive stuff – which requires access to data. The nerve of those folks. So just parking these devices in a proverbial Siberia on your network falls apart. Instead we have to figure out how to recognize these devices, make sure each device is properly configured, and then restrict it to only what the user legitimately needs to access.

But controlling the devices is only the first layer of the onion, and as you peel back layers your eyes start to tear. Are you crying? We won’t tell. The next layer is the user. Who has this device? Do they need access to sensitive stuff? Is it a guest who wants Internet access? Is it a contractor whose access should expire after a certain number of days? Is it a finance team member who needs to use a tablet app on a warehouse floor? Is it the CEO, who basically does whatever he or she wants? Depending on the answer you would enforce a very different network security policy. For lack of a better term, let’s call this context, and be clear that a generic network security policy no longer provides adequate granularity of protection as we move to this concept of any computing. It’s not enough to know which device the user carries – it comes down to who the user is and what they are entitled to access.

Unfortunately that’s not something you can enforce exclusively on the device, because the device doesn’t: 1) know about the access policies within your enterprise, 2) have visibility into the network to figure out what the device is accessing, or 3) have the ability to interoperate with network security devices to enforce policies. The good news is that we have seen this before, and as good security historians we can draw parallels with how we initially embraced VPNs. But there is a big difference from the past, when we could just install a VPN agent that downloaded a VPN access policy which worked with the perimeter VPN device. With smartphones we get extremely limited access to the mobile operating systems. These new operating systems were built with security much more strongly in mind – including security from us – so mobile security agents don’t get nearly as deep access into what other apps are doing; that is largely blocked by the sandbox model embraced by mobile operating systems. Simply put, the device doesn’t see enough to enforce access policies without some deep, non-public access to the operating system.
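To make this notion of context concrete, here is a minimal sketch of the kind of decision a context-aware network security device makes. The roles, device types, and segment names are hypothetical, invented purely for illustration – real enforcement happens on network gear, not in application code.

    from dataclasses import dataclass

    @dataclass
    class Context:
        role: str        # e.g. "guest", "contractor", "finance", "ceo"
        device: str      # e.g. "managed_laptop", "smartphone", "tablet"
        compliant: bool  # does the device meet configuration policy?

    def assign_segment(ctx: Context) -> str:
        """Map a user + device combination to a network segment (hypothetical names)."""
        if not ctx.compliant:
            return "quarantine"              # misconfigured devices get fixed first
        if ctx.role == "guest":
            return "internet_only"           # guests never touch internal segments
        if ctx.role == "contractor":
            return "contractor_zone"         # access expiration enforced elsewhere
        if ctx.role == "finance" and ctx.device == "tablet":
            return "warehouse_apps"          # narrow access for the warehouse app
        if ctx.device == "managed_laptop":
            return "corporate"               # full access from managed endpoints
        return "restricted"                  # default deny for everything else

    print(assign_segment(Context("finance", "tablet", True)))  # warehouse_apps
    print(assign_segment(Context("ceo", "smartphone", True)))  # restricted

Note the last case: even an executive’s smartphone lands in a restricted segment, because the policy keys on the user + device combination rather than the user alone.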
But even this limited OS access is generally not the stickiest issue with supporting these devices. You cannot count on being able to install mobile security agents at all, particularly because many organizations support a BYOD (bring your own device) policy, and users may not accept security agents on their personal devices. Of course you can declare that those devices cannot access the network, which quickly becomes a stand-off. Isn’t there another way – one that doesn’t require agents – to implement at least basic control over which mobile devices gain access and what they can reach? In fact there is. You should be looking for a network security device that can:

  • Identify a mobile device and enforce device configuration policies.
  • Develop some idea of who the user is, and understand the access rights of the user + device combination. For example, the CFO may be able to get to everything from their protected laptop, but be restricted when using an app on their smartphone.
  • Support the segmentation approach of the enterprise network – identifying users and devices is neat, but academic until it enables you to restrict them to specific network segments.

And we cannot forget: it must be able to do most of this without an agent on the smartphone. Those are the criteria we need to satisfy to bridge this mobile security gap. In the next post we will wrap up this series by dealing with some of the additional risk and operational issues of having multiple enforcement points to provide this kind of access control.


Implementing DLP: Final Deployment Preparations

Map Your Environment

No matter which DLP process you select, before you can begin the actual implementation you need to map out your network, storage infrastructure, and/or endpoints. You will use this map to determine where to push out the DLP components.

  • Network: You don’t need a complete, detailed topographical map of your network, but you do need to identify a few key components:
      • All egress points. These are where you will connect DLP monitors to a SPAN or mirror port, or install DLP inline.
      • Email servers and MTAs (Mail Transfer Agents). Most DLP tools include their own MTA, which you simply add as a hop in your mail chain, so you need to understand that chain.
      • Web proxies/gateways. If you plan on sniffing at the web gateway you need to know where these are and how they are configured; DLP typically integrates using the ICAP protocol. Also, if your web proxy doesn’t intercept SSL… buy a different proxy. Monitoring web traffic without SSL interception is nearly worthless these days.
      • Any other proxies you might integrate with, such as instant messaging gateways.
  • Storage: Put together a list of all storage repositories you want to scan. The list should include the operating system type, file shares / connection types, owners, and login credentials for remote scanning. If you plan to install agents, test compatibility on test/development systems first.
  • Endpoints: This one can be more time consuming. You need to compile a list of endpoint architectures and configurations – preferably from whatever endpoint management tool you already use for configuration and software updates. Mapping machine groups to user and business groups makes it easier to deploy endpoint DLP by business unit. You also need system configuration information for compatibility testing. For example, as of this writing no DLP tool supports Macs, so you might have to rely on network DLP, or expose local file shares for monitoring and scanning.

You don’t need to map out every piece of every component unless you are doing your entire DLP deployment at once. Focus on the locations and infrastructure needed to support the project priorities you established earlier.
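Here is a minimal sketch of what that environment map might look like once captured. Every hostname, address, and repository below is hypothetical – the point is the kind of detail worth recording for each component before deployment.

    # Hypothetical environment map for a DLP rollout; all names are invented.
    ENVIRONMENT_MAP = {
        "network": {
            "egress_points": [
                {"site": "hq", "device": "core-switch-1", "capture": "SPAN"},
                {"site": "branch-1", "device": "edge-fw-1", "capture": "inline"},
            ],
            "mail_chain": ["smtp-gw.example.com", "exchange-1.example.com"],
            "web_proxies": [
                {"host": "proxy-1.example.com", "integration": "ICAP",
                 "ssl_interception": True},
            ],
        },
        "storage": [
            {"repository": r"\\filer-1\finance", "os": "Windows",
             "protocol": "CIFS", "owner": "finance-it",
             "credentials": "see-vault"},  # reference a vault, never the real secret
        ],
        "endpoints": {
            "windows_7_x64": {"count": 1200, "mgmt_tool": "SCCM", "agent_ok": True},
            "mac_os_x": {"count": 150, "mgmt_tool": None, "agent_ok": False},
        },
    }

Keeping this in a structured, versionable form makes it easy to check deployment coverage against the project priorities later.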
Test and Proof of Concept

Many of you performed extensive testing or a full proof of concept during the selection process, but even if you did, it is still important to push down a layer deeper now that you have more detailed deployment requirements and priorities. Include the following in your testing:

  • For all architectures: Test a variety of policies that resemble the kinds you expect to deploy, even if you start with dummy data. This is very important for performance testing – there are massive differences between using something like a regular expression to look for credit card numbers and database matching against hashes of 10 million real credit card numbers (see the sketch at the end of this post). Also test mixes of policies, to see how your tool handles multiple policies simultaneously and to verify which policies each component supports – endpoint DLP, for example, is generally far more limited in the types and sizes of policies it supports. If you have completed directory server integration, test it to ensure policy violations tie back to real users. Finally, practice with the user interface and workflow before you start investigating live incidents.
  • Network: Integrate out-of-band and confirm your DLP tool is watching the right ports and protocols, and can keep up with your traffic. Test integrations, including email, web gateways, and any other proxies. Even if you plan to deploy inline (common in SMB), start by testing out-of-band.
  • Storage: If you plan to use agents on servers, or integration with NAS or a document management system, test them in a lab environment first for performance impact. If you will use network scanning, test for performance and network impact.
  • Endpoint: Endpoints often require the most testing, due to the diversity of configurations in most organizations, the more limited resources available to the DLP engine, and all the normal complexities of mucking with users’ workstations. The focus here is on performance and compatibility, along with confirming which content analysis techniques actually work on endpoints (the typical sales exec is often a bit… obtuse… about this). If you will use policies that change based on which network the endpoint is on, test that as well.

Finally, if you are deploying multiple DLP components – such as multiple network monitors and endpoint agents – it is wise to verify they can all communicate. We have talked with organizations that discovered limitations here and had to adjust their architectures.
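To illustrate the performance point made above under “For all architectures”, here is a minimal sketch contrasting the two content analysis approaches: a regular expression that flags anything shaped like a card number, versus exact matching against hashes of known card numbers. The card numbers and text are toy data; real DLP engines are considerably more sophisticated about validation and scale.

    import hashlib
    import re

    # Approach 1: pattern matching -- cheap, but flags anything card-shaped.
    CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){15}\d\b")

    def regex_scan(text):
        return CARD_PATTERN.findall(text)

    # Approach 2: exact matching against hashes of real card numbers -- a toy
    # stand-in for database matching against 10 million hashed values, where
    # set size dominates memory and lookup cost.
    KNOWN_CARD_HASHES = {
        hashlib.sha256(num.encode()).hexdigest()
        for num in ["4111111111111111", "5500005555555559"]  # tiny toy data set
    }

    def exact_match_scan(text):
        hits = []
        for candidate in CARD_PATTERN.findall(text):
            normalized = re.sub(r"[ -]", "", candidate)
            if hashlib.sha256(normalized.encode()).hexdigest() in KNOWN_CARD_HASHES:
                hits.append(normalized)
        return hits

    sample = "card 4111-1111-1111-1111 on file, order ref 1234567890123456"
    print(regex_scan(sample))        # both numbers look like cards
    print(exact_match_scan(sample))  # only the known card matches

The regex runs fast and matches anything plausible; exact matching cuts false positives but carries the cost of building, distributing, and probing a very large hash set – exactly the kind of difference that only shows up when you test with realistic policies.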


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments – just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
  • Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, at the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.