Implementing DLP: Starting Your Integration

With priorities fully defined, it is now time to start the actual integration. The first stop is deploying the DLP tool itself. This tends to come in one of a few flavors – and keep in mind that you often need to license major features separately, even if they all deploy on the same box. This is the heart of your DLP deployment and needs to be in place before you do any additional integration.

DLP Server Software: This is the most common option and consists of software installed on a dedicated server. Depending on your product this could actually run across multiple physical servers for different internal components (such as a back-end database) or to spread out functions. In a few cases products require different software components running concurrently to manage different functions (such as network vs. endpoint monitoring). This is frequently a legacy of mergers and acquisitions – most products are converging on a single software base with, at most, additional licenses or plugins to provide additional functions. Management server overhead is usually pretty low, especially in anything smaller than a large enterprise, so this server often handles some amount of network monitoring, functions as the email MTA, scans at least some file servers, and manages endpoint agents. A small to medium sized organization generally only needs to deploy additional servers for load balancing, as a hot standby, or to cover remote network or storage monitoring with multiple egress points or data centers. Integration is easy – install the software and position the physical server wherever needed, based on deployment priorities and network configuration. We are still in the integration phase of deployment and will handle the rest of the configuration later.

DLP Appliance: In this scenario the DLP software comes preinstalled on dedicated hardware. Sometimes it’s merely a branded server, while in other cases the appliance includes specialized hardware. There is no software to install, so initial integration is usually a matter of connecting it to the network and setting a few basic options – we will cover the full configuration later. As with a standard server, the appliance usually includes all DLP functions (which you might still need licenses to unlock), and can generally run in an alternative remote monitor mode for distributed deployments.

DLP Virtual Appliance: The DLP software comes preinstalled in a virtual machine for deployment as a virtual server. This is similar to an appliance but requires a bit more work: you need to get it up and running on your virtualization platform of choice, configure the network, and then set the initial configuration options as if it were a physical server or appliance.

For now just get the tool up and running so you can integrate the other components. Do not deploy any policies or turn on monitoring yet.

Directory Server Integration

The most important deployment integration is with your directory servers and (probably) the DHCP server. This is the only way to tie activity back to actual users, rather than to IP addresses. This typically involves two components:

  • An agent or connection to the directory server itself to identify users.
  • An agent on the DHCP server to track IP address allocation.

So when a user logs onto the network, their IP address is correlated against their user name, and this mapping is passed on to the DLP server. The DLP server can now track which network activity is tied to which user, and the directory server enables it to understand groups and roles.
This same integration is also required for storage and endpoint deployments. For storage, the DLP tool knows which users have access to which files based on file permissions – not that those are always accurate. On an endpoint, the agent knows which policies to run based on who is logged in.
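To make the user correlation concrete, here is a minimal sketch of how a DLP server might resolve an IP address back to a user from directory/DHCP logon events. The agent interfaces and field names are illustrative assumptions, not any product’s actual API.

```python
# Hypothetical sketch: mapping DHCP-assigned IPs to directory users so DLP
# incidents reference people, not addresses. Field names are illustrative.
from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional

@dataclass
class Lease:
    ip: str
    username: str                    # reported by the directory/DHCP agents at logon
    start: datetime
    end: Optional[datetime] = None   # None means the lease is still active

class UserResolver:
    def __init__(self) -> None:
        self.leases: List[Lease] = []

    def record_logon(self, ip: str, username: str, when: datetime) -> None:
        """Called when the DHCP agent reports an address assignment."""
        self.leases.append(Lease(ip=ip, username=username, start=when))

    def resolve(self, ip: str, when: datetime) -> Optional[str]:
        """Return the user who held this IP at the time of a DLP event."""
        for lease in reversed(self.leases):   # newest leases first
            if lease.ip == ip and lease.start <= when and \
               (lease.end is None or when <= lease.end):
                return lease.username
        return None

resolver = UserResolver()
resolver.record_logon("10.1.2.3", "jsmith", datetime(2012, 2, 1, 9, 0))
print(resolver.resolve("10.1.2.3", datetime(2012, 2, 1, 10, 30)))  # -> jsmith
```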


Understanding and Selecting a Database Security Platform: Defining DSP

As I stated in the intro, Database Security Platform (DSP, to save us writing time and piss off the anti-acronym crowd) differs from DAM in a couple of ways. Let’s jump right in with a definition of DSP, and then highlight the critical differences between DAM and DSP.

Defining DSP

Our old definition for Database Activity Monitoring has been modified as follows:

Database Security Platforms, at a minimum, assess database security, capture and record all database activity in real time or near real time (including administrator activity) across multiple database types and platforms, and alert and block on policy violations.

This distinguishes Database Security Platforms from Database Activity Monitoring in four key ways:

  • Database Security Platforms support both relational and non-relational databases.
  • All Database Security Platforms include security assessment capabilities.
  • Database Security Platforms must have blocking capabilities, although they aren’t always used.
  • Database Security Platforms often include additional protection features, such as masking or application security, which aren’t necessarily included in Database Activity Monitors.

We are building a new definition due to the dramatic changes in the market. Almost no tools are limited to mere activity monitoring any more, and we see an incredible array of different major features being added to these products. They are truly becoming a platform for multiple database security functions, just as antivirus morphed into Endpoint Protection Platforms by adding everything from whitelisting to intrusion prevention and data loss prevention. Here is some additional detail:

  • The ability to remotely audit all user permissions and configuration settings: connecting to a remote database with user-level credentials, scanning the configuration settings, then comparing the captured data against an established baseline. This includes all external initialization files as well as all internal configuration settings, and may include additional vulnerability tests.
  • The ability to independently monitor and audit all database activity, including administrator activity, transactions, and data (SELECT) requests. For relational platforms this includes DML, DDL, DCL, and sometimes TCL activity. For non-relational systems this includes ownership, indexing, permission, and content changes. In all cases read access is recorded, along with the metadata associated with the action (user identity, time, source IP, application, etc.).
  • The ability to store this activity securely outside the database.
  • The ability to aggregate and correlate activity from multiple, heterogeneous Database Management Systems (DBMS). These tools work with multiple relational (e.g., Oracle, Microsoft, and IBM) and quasi-relational (ISAM, Teradata, and document management) platforms.
  • The ability to enforce separation of duties on database administrators. Activity auditing must include monitoring of DBA activity, and prevent database administrators from tampering with logs and activity records – or at least make tampering nearly impossible.
  • The ability to protect data and databases – both alerting on policy violations and taking preventative measures against database attacks. These tools don’t just record activity – they provide real-time monitoring, analysis, and rule-based response. For example, you can create a rule that masks query results when a remote SELECT command on a credit card column returns more than one row (sketched below).
  • The ability to collect activity and data from multiple sources.
DSP collects events from the network, the OS layer, internal database structures, memory scanning, and native audit layers. Users can tailor deployments to their performance and compliance requirements, collecting data from whichever sources fit best. DAM tools have traditionally offered event aggregation, but DSP requires correlation capabilities as well. DSP is, in essence, a superset of DAM applied to a broader range of database types and platforms. Let’s cover the highlights in more detail:

Databases: It’s no longer only about big relational platforms with highly structured data – coverage now extends to non-relational platforms. Unstructured data repositories, document management systems, quasi-relational storage structures, and tagged-index files are being covered, so the number of query languages being analyzed continues to grow.

Assessment: “Database Vulnerability Assessment” is offered by nearly every Database Activity Monitoring vendor, but it is seldom sold separately. These assessment scans are similar to general platform assessment scanners but focus on databases – leveraging database credentials to scan internal structures and metadata. The tools have evolved to scan not only for known vulnerabilities and security best practices, but also to perform a full scan of user accounts and permissions. Assessment is the most basic preventative security measure and a core database protection feature.

Blocking: Every Database Security Platform provider can alert on suspicious activity, and the majority can block suspect activity. Blocking is a common customer requirement – it is only applied to a very small fraction of databases, but has nonetheless become a must-have feature. Blocking requires the agent or security platform to be deployed ‘inline’ in order to intercept and block incoming requests before they execute.

Protection: Over and above blocking, we see traditional monitoring products evolving protection capabilities focused more on data and less on database containers. While Web Application Firewalls that protect against SQL injection attacks have been bundled with DAM for some time, we now also see several types of query result filtering.

One of the most interesting aspects of this evolution is how few architectural changes are needed to provide these new capabilities. DSP still looks a lot like DAM, but functions quite differently. We will get into architecture later in this series. Next we will go into detail on the features that define DSP and illustrate how they all work together.
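As a rough illustration of the masking rule mentioned above (masking query results when a remote SELECT on a credit card column returns more than one row), here is a sketch of the logic a policy engine might apply. The rule structure, the internal-network test, and the masking regex are all hypothetical – real products express these policies in their own syntax.

```python
# Illustrative sketch of a DSP result-masking rule. Column names, the
# internal-address check, and the masking pattern are hypothetical examples.
import re

CARD_COLUMNS = {"cc_number", "card_no", "pan"}   # assumed sensitive columns

def apply_masking_rule(query: str, source_ip: str, rows: list) -> list:
    touches_card_column = any(col in query.lower() for col in CARD_COLUMNS)
    is_remote = not source_ip.startswith("10.")   # assume 10/8 is internal
    if touches_card_column and is_remote and len(rows) > 1:
        # Mask all but the last four digits of anything shaped like a 16-digit PAN
        return [
            tuple(re.sub(r"\b\d{12}(\d{4})\b", r"************\1", str(v)) for v in row)
            for row in rows
        ]
    return rows   # single-row lookups and internal queries pass through

rows = [("4111111111111111",), ("5500005555555559",)]
print(apply_masking_rule("SELECT cc_number FROM payments", "203.0.113.7", rows))
# -> [('************1111',), ('************5559',)]
```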


Implementing DLP: Integration Priorities and Components

It might be obvious by now, but the following charts show which DLP components, integrated with which existing infrastructure, you need based on your priorities. I have broken this out into three different images to make them more readable. Why images? Because I have to dump all this into a white paper later, and building them in a spreadsheet and taking screenshots is a lot easier than mucking with HTML-formatted charts. Between this and our priorities post and chart, you should have an excellent idea of where to start, and how to organize, your DLP deployment.
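The images themselves don’t reproduce here, but to give a sense of the kind of mapping the charts express, here is a hypothetical sketch. The priorities and component assignments below are illustrative examples based on the component descriptions elsewhere in this series – not a transcription of the actual charts.

```python
# Hypothetical priority-to-component mapping, illustrating the kind of
# relationships the charts capture. Entries are examples, not the real charts.
PRIORITY_COMPONENTS = {
    "email leakage":       ["DLP mail transport agent (MTA)"],
    "web/webmail leakage": ["network monitor", "web gateway (ICAP) integration"],
    "file share exposure": ["remote/network file scanner", "storage server agent"],
    "endpoint USB misuse": ["endpoint agent"],
    "all deployments":     ["directory server integration", "DHCP agent"],
}

def components_for(priorities):
    """Collect the DLP components implied by a set of deployment priorities."""
    needed = []
    for p in priorities:
        for c in PRIORITY_COMPONENTS.get(p, []):
            if c not in needed:          # preserve order, avoid duplicates
                needed.append(c)
    return needed

print(components_for(["email leakage", "file share exposure", "all deployments"]))
```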


Friday Summary: February 3, 2012

Since Rich is vacationing working hard at a security conference in Mexico, I figured I would write this week’s Friday Summary. I am pretty jazzed about some upcoming white papers I’ll be writing on securing data and applications at scale, understanding and selecting masking technologies, and why log management is not dead! And I am having a good time researching and writing the DAM 2.0 DSP series as well. I originally intended to write about our research agenda but changed my mind. Frankly, I have spring fever.

Spring fever, you ask, in the first week of February? Yep. It’s 74 degrees here and sunny. WTF? Punxsutawney Phil weighed in with his opinion, and after burning his retinas, it looks like we are going to have another six weeks of winter. I sure hope so! Another six weeks of this type of weather would be awesome. I have been on the phone with dozens of people around the country, from Boston to San Diego, and they are all experiencing fantastic weather. Even Gunnar reports highs of 48 degrees in Minnesota. I guess the cold air jet stream has been staying north of the border. For me this means my peach trees are blooming. Blooming! On freakin’ January 30th! See for yourself:

And I know some of you may not care, but the warm weather means my backyard garden is almost complete. Following up on my post last October, in just a couple short months the Vegetable Fortress is built! Overbuilt? Beauty is in the eye of the beholder. I may put some solar powered laser turrets on it. You never know when Al-Qaeda might train gophers with tig welders to attack my squash. And if the DHS threat level spikes I will have a detachment of Araucana commando chickens to beat back the attack. The price of vegetables is eternal vigilance – and $3.95 for GMO-free seeds. Now call in sick and go outside to enjoy the nice weather! You’ll be glad you did. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Our Research Page, with every freakin’ white paper we’ve done in the last three years.
  • Rich, Adrian, and Shimmy discuss NoSQL security with the Couchbase and Mongo founders.

Other Securosis Posts

  • Bridging the Mobile Security Gap: Operational Consistency.
  • Malware Analysis Quant: Take the Survey (and win fancy prizes!)
  • Incite 2/1/2012: Bored to Tears.
  • Implementing DLP: Integration, Part 1.
  • Understanding and Selecting Database Security Platforms.
  • Bridging the Mobile Security Gap: The Need for Context.
  • Implementing and Managing a Data Loss Prevention (DLP) Solution: Index of Posts.
  • Implementing DLP: Final Deployment Preparations.
  • Malware Analysis Quant: Phase 1 – The Process [Check out the paper!]

Favorite Outside Posts

  • Mike Rothman: Mr. Waledac: The Peter North of Spamming. Krebs could have written this post in Swahili and it would still be my favorite outside link. Anyone who can pull off a Peter North mention in the title of a post gets my weekly vote. And it’s even a good post! Krebs digs into the intrigue of the Russian spam mafia.
  • David Mortman: BSides/RSA Conference Dust Up. And the resolution. Beneficial discussion.
  • Rich: Firewalls and SSL: More Profitable than Facebook. Gunnar’s got a great point: firewalls, AV, and SSL sell – and very little money gets spent on innovative products.
  • Adrian Lane: Fascinating look at Netflix’s Ephemeral Volatile Caching in the cloud. Not security related, but a good presentation of what’s possible with cloud content distribution.

Project Quant Posts

  • Malware Analysis Quant: Monitoring for Reinfection.
  • Malware Analysis Quant: Remediate.
  • Malware Analysis Quant: Find Infected Devices.
  • Malware Analysis Quant: Defining Rules.
  • Malware Analysis Quant: The Malware Profile.
  • Malware Analysis Quant: Dynamic Analysis.
  • Malware Analysis Quant: Static Analysis.
  • Malware Analysis Quant: Build Testbed.

Research Reports and Presentations

  • Tokenization Guidance Analysis: Jan 2012.
  • Applied Network Security Analysis: Moving from Data to Information.
  • Tokenization Guidance.
  • Security Management 2.0: Time to Replace Your SIEM?
  • Fact-Based Network Security: Metrics and the Pursuit of Prioritization.
  • Tokenization vs. Encryption: Options for Compliance.
  • Security Benchmarking: Going Beyond Metrics.

Top News and Posts

  • Paget Demonstrates Wireless Credit Card Theft.
  • Carrier IQ Concerns.
  • WPA2 Vulnerability Analysis.
  • Symantec patches pcAnywhere, says it’s safe.
  • Secure Virtual Storage – the AWS way. Missed this in last week’s summary.
  • Low Orbit Ion Cannon DDoS Analysis. Not new, but newsworthy.
  • Android Malware Infection. Android can be a more powerful platform because you can run more powerful apps on it. This is made possible by a lax security model. That’s the tradeoff.
  • Google to Censor Blogger Blogs on a ‘Per Country Basis’. The tradeoff is either Google blogs get banned on a ‘per country basis’ or Google bans select blogs. Revenue trumps ethics every time.

Blog Comment of the Week

None this week.


Bridging the Mobile Security Gap: Operational Consistency

We started the Bridging the Mobile Security Gap series by accepting that we can’t control the devices that show up on our networks any more. We followed up with a diatribe on the need for context to build and enforce policies which ensure that (only) the right users get to the right stuff at the right times. To wrap up the series we need to dig deeper into enforcement, because as we all know the chain is only as strong as its weakest link.

There are various places where mobile device security policies can be enforced – including on the devices themselves (via mobile device management) and on the network (firewall/VPN, IPS, network access control, etc.). There is no one right or wrong place to enforce policies. In fact the best answer is often “all of the above”. The more places you can enforce policy, the more likely your defenses will succeed at blocking attacks. Of course complexity is the obvious downside to multiple enforcement points, and complexity has a strong negative correlation with operational consistency. You need to make sure your enforcement points work together. Why? Let’s run through a few scenarios where policies are not aligned. Yeah, they do not end well.

You can implement a policy forcing devices to connect through the corporate VPN to receive the protection of the enterprise network – but that only works if the VPN recognizes the device and puts it in the right trust zone, with access to what the user needs. When that doesn’t happen correctly, the user is out of business – or a risk. Likewise, preventing misconfigured smartphones from accessing the network reflects good security enforcement, right? Sure, unless the phone belongs to the CEO, who is trying to access a letter of understanding about an acquisition – even worse if you have no way to override the control. Exceptions are part of the game of managing security, so you need the ability to adapt as needed. Both those scenarios result in users being unable to access what they need, which means a bad day for you. This is why neither MDM nor any kind of network-based control can operate in a vacuum. You can take a number of steps to attain operational consistency.

Coexistence

The first stop on our path to policy consistency is just making the enforcement points coexist: do enough to make sure no tool works contrary to the others. Unfortunately this is largely a manual process. Whenever changes are made or new policies implemented, your administrators need to run through the impact of these changes. All of them. Well, all the practical ones anyway. It’s a lot of work, but necessary, given how important mobile devices have become to business productivity.

Remember the good old days, when you did a similar dance whenever you changed firewall rules? Some folks waited for the help desk to light up, and then they knew something was broken. We don’t recommend that approach. To avoid that problem vendors started offering built-in policy checkers, and third-party firewall management tools emerged to perform these functions at higher scale and across multiple firewalls. Unfortunately those tools don’t support mobile devices (or the relevant network controls) today, so for now you are on your own. That can be problematic, since you know (even if you don’t want to admit it) that it’s difficult to maintain operational discipline – particularly given the number of changes made, exceptions managed, and other fires to fight. It’s not where you want to be, but coexistence is the start.
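To illustrate the kind of cross-checking coexistence demands, here is a minimal sketch that evaluates a device against both an MDM policy and a VPN policy, surfacing conflicts before a user hits them. All policy fields and values are hypothetical.

```python
# Hedged sketch of a manual coexistence check: compare what the MDM tool
# requires with what the network control enforces, and flag devices that
# pass one layer but will be blocked by the other. Fields are hypothetical.
mdm_policy = {"require_passcode": True, "require_encryption": True}
vpn_policy = {"allowed_os": {"ios", "android"}, "min_os_version": 5}

def check_device(device: dict) -> list:
    """Return the conflicts a device would hit across both enforcement points."""
    problems = []
    if mdm_policy["require_passcode"] and not device.get("passcode"):
        problems.append("MDM: passcode missing")
    if mdm_policy["require_encryption"] and not device.get("encrypted"):
        problems.append("MDM: storage not encrypted")
    if device.get("os") not in vpn_policy["allowed_os"]:
        problems.append("VPN: OS not permitted")
    elif device.get("os_version", 0) < vpn_policy["min_os_version"]:
        problems.append("VPN: OS version below minimum")
    return problems

# A device that satisfies the MDM policy but would still be rejected at the VPN:
ceo_phone = {"passcode": True, "encrypted": True, "os": "blackberry", "os_version": 7}
print(check_device(ceo_phone))   # -> ['VPN: OS not permitted']
```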
Integration at the console

The next step is console integration. In this scenario alerts funnel from one management console to the other. Integration at least gives administrators a coordinated view of what’s happening. It may even be possible to click on one console and have it link to a specific event or device in the other. Very fancy, and downright useful from an operational standpoint. Every bit of correlation your admins no longer need to perform in their own heads improves productivity. Of course this requires cooperation between vendors, and these kinds of relationships are not commonplace. But they will be – enterprise customers will demand them.

Another benefit of this initial integration is more effective compliance reporting. Vendors map from a data source to the compliance report and pump the data in. That’s pretty helpful too – you know how painful getting ready for an audit remains, especially when you need to manage 5-10 different data sources to show the auditor that you know what you’re doing. Of course this is less than full integration – you still need to deal with multiple consoles to make policy changes, and the logic to ensure a policy in one tool doesn’t adversely impact another tool is missing. But it’s progress.

True integration

What you really want is the ability to manage a single policy, implemented across different devices and network controls. How cool would that be? But don’t hold your breath waiting. Like most other non-standards-based integration, we will see integration initially forced by huge customers. Some Fortune 50 company using a device-centric management product will want to implement network controls. They will call everyone together, write down on a whiteboard how much they spend with each company, and make it very clear that integration will happen, and soon. It’s the proverbial offer they can’t refuse, and they usually don’t.

Over time integration gives way to consolidation, and we expect MDM to be integrated into the larger IT device management stack and eventually work with network controls that way. Obviously that’s a few years down the road, but it’s the way these things work out. It’s not a matter of if but a matter of when. But without a crystal ball there isn’t much to do about that, so the best bet is to make decisions based on available integration today, and be ready to adapt for tomorrow.

Losing device specificity

We used to think of mobile devices as only laptops, but the pendulum has swung back the other way, to focus


Malware Analysis Quant: Take the Survey (and win fancy prizes!)

One of the coolest things about how we work at Securosis is our Totally Transparent Research approach. We always post our work to the blog first and let you folks have at it. In many cases it gets poked and prodded, ridiculed, and broken down. It’s certainly tough on the ego, but in the end it makes the work better. So we are now asking for more help as we enter Phase 2 of our Malware Analysis Quant research. As we described over the weekend, Phase 1 resulted in a nice (not so) little paper breaking down the process map for studying malware infections. Now we have to match theory against reality. And thus the MAQ survey.

As with all our surveys, we have set it up so you can take it anonymously, and all the raw results (anonymized, in spreadsheet format) will be released after our analysis. By the way, unlike other folks posting surveys, we don’t know the answers before we post the survey. Click here to take the survey, and please spread the word. We know from our last few surveys that we need to respect the time you are taking to help, so we kept this one pretty short. We would be surprised if it takes you more than 10-15 minutes.

We understand filling out surveys is a pain in the behind, so we are providing an incentive: we will give three $100 Amazon gift cards to lucky participants. You don’t need to provide an email address to take the survey, but you do to be entered into the drawing. We are also tracking where we get our responses from, so if you take the survey in response to this post, please use Securosis as your source code. If you repost the link you can make up your own code and email it to us. We’ll let you know how many people responded to your referral. If you generate sufficient response we will be happy to send you your keycode’s slice of the data.

Thanks again for your help. We’ll keep the survey open at least 2 weeks and then begin analysis. Again, here is the link: http://www.surveymonkey.com/s/MalwareAnalysisQuant-Survey

Photo credit: “Survey says…” originally uploaded by hfabulous


Incite 2/1/2012: Bored to Tears

It’s unbelievable how different growing up today is. When I was in elementary school in the late 70s, Pong was state of the art and a handheld Coleco football game would keep a little kid occupied for hours. When they came up with the Head to Head innovation, two kids would be occupied for hours. That was definitely a different type of Occupy movement. We also didn’t have 300 channels on the boob tube. We had 5 channels, and the highlight of the year was Monster Week. At least for me. Most days I jumped on my bike to go play with my friends. Sometimes we played football. Okay – a lot of days we’d play football. It was easy – you didn’t need much equipment or a special field or anything. Just an even number of kids. I’m not sure what little girls did back in the day, since it was just me and my younger brother, but I’m sure it was similarly unsophisticated. We just played.

Why am I getting nostalgic? Basically because I’m frustrated. Today kids don’t play. They need to be entertained. The thing that makes me cringe most is when one of my kids tells me they are bored. Bored? This usually happens after I tell them 5 hours on the iPod touch is enough over the weekend. Or that the 3 hours they watched crappy TV in the morning is more than enough. I tell them to get a book and read. I tell them to play a game. Maybe use some of the thousands of dollars of toys in the basement. Perhaps even build something with Lincoln Logs. Or break out one of the 25 different Lego contraptions we have. Mostly I tell them to get out of my hair, since I’m doing important stuff. Like reading about the Super Bowl on my iPad. But I digress.

Whatever happened to the 5 Best Toys of All Time? I’d add a football to that list and be good. That was my childhood in a nutshell. No more. Our kids’ minds are numbed with constant stimulation, which isn’t surprising considering that many of us are similarly numb, and it’s not helping us find happiness. Rich sent around this article over the weekend, and it’s right. We seem to have forgotten what it’s like to interact with folks, unless it’s via Words with Friends. Sometimes you need to slow down to speed up in the long run. I know you can’t stop ‘progress’. But you don’t need to just accept it either.

After XX1 realized I wasn’t going to cave and let her play on the computer, she spent a few hours writing letters to her camp friends. She painstakingly colored the envelopes, and I think she even wrote English. But what she wrote isn’t the point. It’s that there was no battery, power cord, or other electronics involved. No ads were flying at her head either. Amazingly enough, she overcame her boredom and was even a little disappointed when everyone had to get ready for bed. It was a small victory, but I’ll take it. They don’t come along too often, since my kids are always right. Just ask them.

-Mike

Photo credits: “Mattel & Coleco H2H classics” originally uploaded by Vic DeLeon

Heavy Research

After a bit of a blogging hiatus we are back at it. The Heavy Research feed is hopping, so here are a couple links to our latest stuff. Please check them out and (as always) let us know what you think via comments.

  • Implementing and Managing a Data Loss Prevention (DLP) Solution: Index of Posts: Rich will be updating this post with the latest in his ongoing series on DLP.
  • Understanding and Selecting Database Security Platforms: Rich and Adrian are updating their landmark DAM research from a few years ago.
As with many things, what used to be a single-purpose capability (DAM) is now a database security platform. Follow along as they explore exactly what that means.
  • Bridging the Mobile Security Gap: The Need for Context: Got rid of those smartphones yet? No? Then you should be checking out this series on how to provision layered controls to maintain order, in light of the onslaught of all sorts of new devices.
  • Malware Analysis Quant: Phase 1 – The Process: We have finished up Phase 1 of Malware Analysis Quant, and packaged up the process map and descriptions into a paper. Check it out, but please understand the process will continue to evolve as we keep digging into the research. We will launch the survey this week, so keep an eye out.

You can get our Heavy Feed via RSS, where you can access all our content in its unabridged glory.

Incite 4 U

Privacy and Google: Google’s new privacy policy has been making waves the last few days. For me it’s not so much about the policy – I’m nonplussed about that. Sure, I don’t like Google’s non-anonymity posture. On the other hand it’s much easier to understand Google’s consolidated policy on privacy and the intentions behind it – for that they should be commended. Essentially it comes down to “use our stuff and we’ll use your data”, which is clear enough and completely unsurprising in light of their business model. Understand that an encrypted search provides only an illusion of privacy: nobody on the network you traverse should be able to see the query, but that does not mean your activity is not logged and indexed by Google (or your other search provider). Good or bad – you be the judge. The real question is what are you going to do about it? For me this is an important “rubber meets the road” milestone. And that’s too bad, because I like using Google’s search engine – it is clearly the best. Gmail is free and it works – but I don’t have an easy way to encrypt email running through my Gmail account so Google can’t read it. Which means I have to get off my lazy butt and stop using these tools, or accept that Google owns an online identity


Implementing DLP: Integration, Part 1

At this point all planning should be complete. You have determined your incident handling process, started (or finished) cleaning up directory servers, defined your initial data protection priorities, figured out which high-level implementation process to start with, mapped out the environment so you know where to integrate, and performed initial testing and perhaps a proof of concept. Now it’s time to integrate the DLP tool into your environment. You won’t be turning on any policies yet – the initial focus is on integrating the technical components and preparing to flip the switch.

Define a Deployment Architecture

Earlier you determined your deployment priorities and mapped out your environment. Now you will use them to define your deployment architecture.

DLP Component Overview

We have covered the DLP components a bit as we went along, but it’s important to know all the technical pieces you can integrate, depending on your deployment priorities. This is just a high-level overview – we go into much more detail in our Understanding and Selecting a Data Loss Prevention Solution paper. The list includes many different possible components, but that doesn’t mean you need to buy a lot of different boxes. Small and mid-sized organizations might be able to get everything except the endpoint agents on a single appliance or server.

Network DLP consists of three major components and a few smaller optional ones:

  • Network monitor or bridge/proxy – typically an appliance or dedicated server placed inline or passively off a SPAN or mirror port. This is the core component for network monitoring.
  • Mail Transport Agent – few DLP tools integrate directly into a mail server; instead they insert their own MTA as a hop in the email chain.
  • Web gateway integration – many web gateways support the ICAP protocol, which DLP tools use to integrate and analyze proxy traffic. This enables more effective blocking, and supports monitoring of SSL encrypted traffic if the gateway includes SSL intercept capabilities.
  • Other proxy integration – the only other proxies we see with any regularity are for instant messaging portals, which can also be integrated with your DLP tool to support monitoring of encrypted communications and blocking before data leaves the organization.
  • Email server integration – the email server is often separate from the MTA, and internal communications may never pass through the MTA, which only sees mail going to or coming from the Internet. Integrating directly into the mail server (message store) allows monitoring of internal communications. This feature is surprisingly uncommon.

Storage DLP includes four possible components:

  • Remote/network file scanner – the easiest way to scan storage is to connect to a file share over the network and scan remotely (a minimal sketch appears at the end of this post). This component can be positioned close to the file repository to increase performance and reduce network saturation.
  • Storage server agent – depending on the storage server, local monitoring software may be available. This reduces network overhead, runs faster, and often provides additional metadata, but may affect local performance because it uses CPU cycles on the storage server.
  • Document management system integration or agent – document management systems combine file storage with an application layer, and may support direct integration or the addition of a software agent on the server/device. This provides better performance and additional context, because the DLP tool gains access to management system metadata.
  • Database connection – a few DLP tools support ODBC connections to scan inside databases for sensitive content.

Endpoint DLP primarily relies on software agents, although you can also scan endpoint storage using administrative file shares and the same remote scanning techniques used for file repositories. There is huge variation in the types of policies and activities endpoint agents can monitor, so it’s critical to understand what your tool offers.

There are a few other components which aren’t directly involved with monitoring or blocking but affect integration planning:

  • Directory server agent/connection – required to correlate user activity with user accounts.
  • DHCP server agent/connection – to associate an assigned IP address with a user, which is required for accurate identification of users when observing network traffic. This must work together with your directory server integration, because the DHCP servers themselves are generally blind to user accounts.
  • SIEM connection – while DLP tools include their own alerting and workflow engines, some organizations want to push incidents to their Security Information and Event Management tools.

In our next post I will provide a chart that maps priorities directly to technical components.
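To make the remote/network file scanner component concrete, here is a minimal sketch that walks a mounted share and flags files containing something shaped like a payment card number. Real DLP content analysis is far more sophisticated (document fingerprinting, checksum validation, contextual analysis); the mount point and pattern here are purely illustrative.

```python
# Minimal sketch of a remote file scanner: walk a mounted network share and
# flag files whose first megabyte contains a 16-digit card-like number.
# The mount point is a hypothetical example; the regex is deliberately crude.
import os
import re

PAN_PATTERN = re.compile(rb"\b(?:\d[ -]?){15}\d\b")   # 16 digits, optional separators

def scan_share(mount_point: str):
    """Yield paths of files that appear to contain cardholder data."""
    for dirpath, _dirnames, filenames in os.walk(mount_point):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    if PAN_PATTERN.search(f.read(1024 * 1024)):  # first 1 MB only
                        yield path
            except OSError:
                continue   # unreadable file; a real tool would log this

for hit in scan_share("/mnt/finance_share"):    # hypothetical mount point
    print("possible cardholder data:", hit)
```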


Understanding and Selecting Database Security Platforms

We love the Totally Transparent Research process. Times like this – when we hit upon new trends, uncover unexpected customer use cases, or find something going on behind the scenes – are when our open model really shows its value. We started a Database Activity Monitoring 2.0 series last October and suddenly halted, because our research showed that platform evolution had changed from convergence to independent visions of database security, with customer requirements splintering. These changes are so significant that we need to discuss them publicly, so you can understand why we are suddenly making a significant departure from the way we have described a solution we have been talking about for the past 6+ years. Especially since Rich, back in his Gartner days, coined the term “Database Activity Monitoring” in the first place. Understanding what’s going on behind the scenes should help you see how these fundamental changes alter the technical makeup of products and require new vocabulary to describe what we see. With that, welcome to the reboot of DAM 2.0.

We renamed this series Understanding and Selecting Database Security Platforms to reflect massive changes in products and the market. We will fully define why this is the case as we progress through this series, but for now suffice it to say that the market has simply expanded beyond the bounds of the Database Activity Monitoring definition. DAM is now only a subset of the Database Security Platform market. For once this isn’t some analyst firm making up a new term to snag some headlines – as we go through the functions and features you’ll see that real products on the market today go far beyond mere monitoring. The technology trends, different bundles of security products, and use cases we will present are best reflected by the term “Database Security Platform”, which most accurately reflects the state of the market today.

This series will consist of 6 distinct parts, some of which appeared in our original Database Activity Monitoring paper:

  • Defining DSP: Our longstanding definition for DAM is broad enough to include many of the changes, but will be slightly updated to incorporate new data collection and analysis options. Ultimately the core definition does not change much, as we took two anticipated trends into account when we initially created it, but a couple of subtle changes encompass a lot more real estate in the data center.
  • Available Features: Different products enter the DSP market from different angles, so we think it best to list all the possible major features. We will break these out into core components vs. additional features to help focus on the important ones.
  • Data Collection: For several years the minimum feature set for DAM included database queries, database events, configuration data, audit trails, and permission management. The continuing progression of new data and event sources, from both relational and non-relational data sources, extends the reach of the security platform to many new application types. We will discuss the implications in detail.
  • Policy Enforcement: The addition of hybrid data and database security protection bundled into a single product. Masking, redaction, dynamically altered query results, and even tokenization build on existing blocking and connection reset options to offer finer-grained security controls. We will discuss the technologies and how they are bundled to solve different problems.
  • Platforms: The platform bundles, and these different combinations of capabilities, best demonstrate the change from DAM to DSP. There are bundles that focus on data security, compliance policy administration, application security, and database operations. We will spend time discussing these different visions and how they are being positioned for customers.
  • Use Cases & Market Drivers: What companies are looking to secure mirrors their adoption of new platforms, such as collaboration platforms (SharePoint), cloud resources, and unstructured data repositories. Compliance, operations management, performance monitoring, and data security requirements follow the adoption of these new platforms, which has driven the adaptation and evolution of DAM into DSP. We will examine these use cases and how DSP platforms are positioned to address demand.

A huge proportion of the original paper was influenced by the user and vendor communities (I can confirm this – I commented on every post during development, a year before I joined Securosis – Adrian). As with that first version, we strongly encourage user and vendor participation during this series. It changes the resulting paper for the better, and really helps the community understand what’s great and what needs improvement. All pertinent comments will be open for public review, including any discussion on Twitter, which we will reflect here. We think you will enjoy this series, so we look forward to your participation! Next up: Defining DSP!


Bridging the Mobile Security Gap: The Need for Context

As we discussed in the first post of this series, consumerization and mobility will remain macro drivers of security for the foreseeable future, and force us to stare down network anarchy. We can certainly go back into the security playbook and deal with an onslaught of unwieldy devices by implementing some kind of agentry on the devices to provide a measure of control. But the results of this device-centric approach have been mixed. And that’s being kind.

On the other hand, from a network security standpoint a device is a device is a device. Whether it’s a desktop sitting in a call center, a laptop in an airline club, or a smartphone traipsing around town, the goal of a network security professional is the same. Our network security charter is always to make sure those devices access the right stuff at the right time, and don’t have access to anything else. So we enforce segmented networks to restrict devices to certain trusted network zones. Remember: segmentation is your friend – and that model holds, up to a point. But here’s the rub in dealing with those pesky smartphones: the folks using these devices actually want to do stuff. You know, productive stuff – which requires access to data. The nerve of those folks. So just parking these devices in a proverbial Siberia on your network falls apart. Instead we have to figure out how to recognize these devices, make sure each device is properly configured, and then restrict it to only what the user legitimately needs to access.

But controlling the devices is only the first layer of the onion, and as you peel back layers your eyes start to tear. Are you crying? We won’t tell. The next layer is the user. Who has this device? Do they need access to sensitive stuff? Is it a guest who wants Internet access? Is it a contractor whose access should expire after a certain number of days? Is it a finance team member who needs to use a tablet app on a warehouse floor? Is it the CEO, who basically does whatever he or she wants? Depending on the answer you would enforce a very different network security policy. For lack of a better term, let’s call this context, and be clear that a generic network security policy no longer provides adequate granularity of protection as we move toward this concept of any computing. It’s not enough to know which device the user uses – it comes down to who the user is and what they are entitled to access.

Unfortunately that’s not something you can enforce exclusively on the device, because the device doesn’t: 1) know about the access policies within your enterprise, 2) have visibility into the network to figure out what the device is accessing, or 3) have the ability to interoperate with network security devices to enforce policies. The good news is that we have seen this before, and as good security historians we can draw parallels with how we initially embraced VPNs. But there is a big difference from the past, when we could just install a VPN agent that downloaded a VPN access policy which worked with the perimeter VPN device. With smartphones we get extremely limited access to the mobile operating systems. These new operating systems were built with security much more strongly in mind – including protection from us – so mobile security agents don’t get nearly as deep access into what other apps are doing; that’s largely blocked by the sandbox model embraced by mobile operating systems. Simply put, the device doesn’t see enough to be able to enforce access policies without deep, non-public access to the operating system.
But even that is generally not the stickiest issue with supporting these devices. You cannot count on being able to install mobile security agents on mobile devices, particularly because many organizations support a BYOD (bring your own device) policy, and users may not accept security agents on their own devices. Of course you can declare that those devices can’t access the network, which quickly becomes a Mexican stand-off. Isn’t there another way, one which doesn’t require agents, to implement at least basic control over which mobile devices gain access and what they can reach? In fact there is. You should be looking for a network security device that can:

  • Identify a mobile device and enforce device configuration policies.
  • Have some idea of who the user is, and understand the access rights of the user + device combination. For example, the CFO may be able to get to everything from their protected laptop, but be restricted when they use an app on their smartphone.
  • Support the segmentation approach of the enterprise network – identifying users and devices is neat but academic until it enables you to restrict them to specific network segments.

And we cannot forget: we must be able to do most of this without an agent on the smartphone. To bridge this mobile security gap, those are the criteria we need to satisfy (a rough sketch follows below). In the next post we will wrap up this series by dealing with some of the additional risk and operational issues of having multiple enforcement points to provide this kind of access control.
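As a rough sketch of the criteria above, the following shows how an agentless enforcement point might combine user role and device posture to pick a network segment. The roles, device attributes, and segment names are all hypothetical.

```python
# Sketch of a context-driven, agentless access decision: combine user role
# and device posture to choose a network segment. Names are hypothetical;
# a real implementation would pull identity from the directory and posture
# from device fingerprinting at the network enforcement point.
def assign_segment(user_role: str, device_type: str, managed: bool) -> str:
    if user_role == "guest":
        return "internet_only"
    if device_type == "smartphone" and not managed:
        return "byod_restricted"        # email/web only, no sensitive apps
    if user_role == "finance" and device_type == "tablet":
        return "warehouse_apps"         # the warehouse-floor scenario above
    if managed:
        return "corporate"
    return "quarantine"                 # unknown combination: fail closed

print(assign_segment("finance", "tablet", managed=True))        # -> warehouse_apps
print(assign_segment("employee", "smartphone", managed=False))  # -> byod_restricted
```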

