
Watching the Watchers: Enforce Entitlements

So far we have covered the Restrict Access and Protect Credentials aspects of the Privileged User Lifecycle, so any administrator managing a device is authorized to be there and uses strong credentials. But what happens once they get there? Do they get free rein? Should you just give them root or full Administrator rights and be done with it? What could possibly go wrong with that? Clearly you should make sure administrators only perform authorized functions on managed devices. This protects against a few scenarios you probably need to worry about:

Insider Threat: A privileged user is the ultimate insider, with the skills and knowledge to compromise a system, take what they want, and cover their tracks. So it makes sense to provide a bit more specificity about what admins and groups can do, and block them from doing everything else.

Separation of Duties: Related to the insider threat, ideally no one person should have the ability to take down your environment. So you can logically separate duties, where one group can manage the servers but not the storage, or one admin can provision a new server but can't move data onto it.

Compromised Endpoints: You also can't assume any endpoint isn't compromised, so even an authenticated and authorized user may not be who you think they are. You can protect yourself from this scenario by restricting what the administrator can do, so even in the worst case – an intruder in your system as an admin – they can't wreck everything.

Smaller organizations may lack the resources to define administrator roles with real granularity. But the more a larger enterprise can restrict administrators to particular functions, the harder it becomes for a bad apple to take everything down.

Policy Granularity

You need to define roles and responsibilities – what administrators can and can't do – with sufficient granularity. We won't go into detail on the process of setting policies, but you will either adopt a whitelist approach, defining legitimate commands and blocking everything else, or block specific commands (a blacklist), such as restricting folks in the network admin group from deleting or snapshotting volumes in the data center. Depending on your needs, you could also define far more granular policies, similar to the policy options available for controlling access to the password vault. For example, you might specify that a sysadmin can only add user accounts to devices during business hours, but can add and remove volumes at any time. Or you could define the specific types of commands authorized to flow from an application to the back-end database, to prevent unauthorized data dumps.

But granularity brings complexity. In a rapidly changing environment it can be hard to truly nail down a legitimate set of allowable actions for specific administrators. So getting too granular is a problem too – similar to the issues with application whitelisting. And the higher up the application stack you go, the more integration is required, as homegrown and highly customized applications need to be manually integrated into the privileged user management system.

Location, Location, Location

As much fun as it is to sit around and set up policies, the reality is that nothing is protected until the entitlements are enforced. There are two main approaches to actually enforcing entitlements.
The first implements a proxy between the admin and the system, which acts as a man in the middle to interpret each command and then either allow or block it. Alternatively, entitlements can be enforced on the end devices via agents that intercept commands and enforce policy locally. We aren't religious about either approach, and each has pros and cons. The proxy implementation is simpler – you don't need to install agents on every device, so you don't have to worry about OS compatibility (as long as the command syntax remains consistent) or deal with incompatibilities every time an underlying OS is updated. Another advantage is that unauthorized commands are blocked before they ever reach the managed device, so even if an attacker has elevated privileges, management commands can only come through the proxy. On the other hand, the proxy is a choke point, which may introduce a single point of failure. Conversely, an agent-based approach prevents attackers from back-dooring devices by defeating the proxy or gaining physical access to them – the agent runs on each device, so even being at the keyboard doesn't kill it. But agents require management, and consume processing resources on the managed systems. Pick the approach that makes the most sense for your environment, culture, and operational capabilities.

At this point in the lifecycle privileged users should be pretty well locked down. But as a card-carrying security professional you don't trust anything. Keep an eye on exactly what the admins are doing – we will cover privileged user monitoring next.
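To make the whitelist approach concrete, here is a minimal sketch in Python of how an enforcement point (a proxy in this case) might check an administrative command against a per-group whitelist and time window before passing it along. The groups, commands, and hours are hypothetical examples, not features of any particular product.

```python
from datetime import datetime

# Hypothetical policy: which command verbs each admin group may run, and when.
POLICY = {
    "network_admins": {
        "allowed": {"show", "ping", "traceroute", "configure"},
        "hours": (8, 18),                      # business hours only
    },
    "storage_admins": {
        "allowed": {"lsvol", "addvol"},        # note: no volume delete or snapshot commands
        "hours": (0, 24),
    },
}

def authorize(group, command, when=None):
    """Allow the command only if its verb is explicitly whitelisted for the group
    and the request falls inside the permitted time window. Everything else is denied."""
    when = when or datetime.now()
    policy = POLICY.get(group)
    if policy is None:
        return False                           # unknown group: deny by default
    verb = command.split()[0].lower()
    start, end = policy["hours"]
    return verb in policy["allowed"] and start <= when.hour < end

print(authorize("storage_admins", "rmvol vol7"))            # False - not whitelisted
print(authorize("network_admins", "show running-config",
                datetime(2012, 4, 6, 10, 0)))               # True - allowed verb, business hours
```

The same check could just as easily run in an agent on the managed device – the policy logic doesn't change, only where it executes.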


Vulnerability Management Evolution: Scanning the Infrastructure

As we discussed in the Vulnerability Management Evolution introduction, traditional vulnerability scanners, focused purely on infrastructure devices, do not provide enough context to help organizations prioritize their efforts. Those traditional scanners are the plumbing of threat management: you don't appreciate the scanner until your proverbial toilet is overflowing with attackers and you have no idea what they are targeting. We will spend most of this series on the case for transcending device scanning, but infrastructure scanning remains a core component of any evolved threat management platform. So let's look at some key aspects of a traditional scanner.

Core Features

As a mature technology, pretty much all the commercial scanners have a core set of functions that work well. Different scanners have different strengths and weaknesses, but for the most part they all do the following:

Discovery: You can't protect something (or know it's vulnerable) if you don't know it exists, so the first key feature is discovery. The enemy of a security professional is surprise, so you want to learn about new devices as quickly as possible, including rogue wireless access points and other mobile devices. Given the need to perform discovery continuously, passive scanning and/or network flow analysis can be a useful complement to active device discovery.

Device/Protocol Support: Once you have found a device, you need to figure out its security posture. Compliance demands that we scan all devices with access to private/sensitive/protected data, so any scanner should assess the variety of network and security devices running in your environment, as well as servers on all relevant operating systems. Of course databases and applications are important too, but we'll discuss those later in this series. And be careful scanning brittle systems like SCADA – knocking down production devices doesn't make any friends in the Ops group.

Inside/Out and Outside/In: You can't assume adversaries are only external or internal, so you need the ability to assess your devices from both inside and outside your network. Some kind of scanner appliance (which could be virtualized) is needed to scan the innards of your environment. You'll also want to monitor your IP space from the outside to identify new Internet-facing devices, find open ports, etc.

Accuracy: Unless you enjoy wild goose chases, you'll come to appreciate a scanner that minimizes false positives by focusing on accuracy.

Accessible Vulnerability Information: For every vulnerability found, decisions must be made about the severity of the issue, so it's very helpful to have information on the vulnerability – from the vendor's research team or other third parties – directly within the scanning console.

Appropriate Scale: Adding capabilities to the evolved platform makes scale a much more serious issue. But first things first: the scanner must be able to scan your environment quickly and effectively, whether that means 200 or 200,000 devices. The point is to ensure the scanner is extensible to what you'll need as you add devices, databases, apps, virtual instances, etc. over time. We will discuss platform technical architectures later in this series, but for now suffice it to say there will be a lot more data in the vulnerability management platform, and the underlying architecture needs to keep up.

New & Updated Tests: Organizations face new and evolving attacks constantly.
So your scanner needs to stay current to test for the latest attacks. Exploit code based on patches and public vulnerability disclosures typically appears within a day, so time is of the essence. Expect your platform provider to make significant investments in research to track new vulnerabilities, attacks, and exploits. Scanners need to be updated almost daily, so you will need the ability to update them transparently with new tests – whether they run on premises or in the cloud.

Additional Capabilities

But that's not all. Today's infrastructure scanners also offer value-added functions that have become increasingly critical. These include:

Configuration Assessment: There really shouldn't be a distinction between scanning for a vulnerability and checking for a bad configuration – either provides an opportunity for device compromise. For example, a patched firewall with an any-to-any policy doesn't protect much, completely aside from any vulnerability defects. But unfortunately the industry's focus on vulnerabilities means this capability is usually considered a scanner add-on. Over time these distinctions will fade away, as we expect both vulnerability scanning and configuration assessment to emerge as critical components of the platform. Further evolution will add the ability to monitor system files for changes and integrity – it is the same underlying technology.

Patch Validation: As we described in Patch Management Quant, validating patches is an integral part of the process. With some strategic integration between patch and configuration management, the threat management platform can (and should) verify installed patches to confirm that the vulnerability has been remediated. Further integration involves sending information to and from IT Ops systems to close the loop between security and Operations.

Cloud/Virtualization Support: With the increasing adoption of virtualization in data centers, you need to factor in the rapid addition and removal of virtual machines. This means not only assessing hypervisors as part of your attack surface, but also integrating information from the virtualization management console (vCenter, etc.) to discover which devices are in use and which are not. You'll also want to verify the information coming from the virtualization console – you learned not to trust anything in security pre-school, didn't you?

Leveraging Collection

So how do all these capabilities differ from what you already have? It's all about making 1 + 1 = 3 by integrating data to derive information and drive priorities. We have seen some value-added capabilities (configuration assessment, patch validation, etc.) further integrated into infrastructure scanners to good effect. This positions the vulnerability/threat management platform as another source of intelligence for security professionals. And we are only getting started – there are plenty of other data types to incorporate into this discussion. Next we will climb the proverbial stack and evaluate how database and application scanning play into the evolved platform story.
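To illustrate the discovery step, the sketch below (Python, with an illustrative subnet and port list) performs a naive TCP connect sweep to find responsive hosts and open service ports. Real scanners use far more sophisticated and less intrusive techniques – ICMP and SYN probes, passive monitoring, flow analysis – so treat this strictly as a conceptual example.

```python
import socket
from ipaddress import ip_network

# Illustrative values only; substitute your own address space and ports of interest.
SUBNET = "192.168.1.0/28"
PORTS = [22, 80, 443, 1433, 3306]

def sweep(subnet, ports, timeout=0.3):
    """Naive discovery: attempt a TCP connect to each port on each host.
    Anything that answers is a device you need to know about (and assess)."""
    found = {}
    for host in ip_network(subnet).hosts():
        open_ports = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                if s.connect_ex((str(host), port)) == 0:   # 0 means the connection succeeded
                    open_ports.append(port)
        if open_ports:
            found[str(host)] = open_ports
    return found

if __name__ == "__main__":
    for host, open_ports in sweep(SUBNET, PORTS).items():
        print(host, open_ports)
```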


Friday Summary: April 6, 2012

Rich here… Normally I like to open the Summary with a bit of something from my personal life – some sort of anecdote with a message. In other words, I blatantly ripped off Mike's format for the Security Incite… long before he took over half the company. (With Mike, even a partnership can probably be defined as a hostile takeover, based solely on his gruff voice and honesty of opinion.) Heck, I can't even remember any good anecdotes from the CCSK cloud security class Adrian and I taught last week in San Jose. Even when we hooked up with Richard Baker and our own James Arlen for dinner, I think half the conversation was about my and Jamie's recent family trips to dinner. And that stripmall Thai place is probably better than the fanciest one here in Phoenix.

I don't even have any good workout anecdotes. I'm back on the triathlon wagon and chugging along, although I did get a really cool new heart rate monitor/GPS that I'm totally in love with (the Garmin 910XT, which is friggin' amazing). I probably need to pick a race to prep for, but am otherwise enjoying being healthy and relatively uninjured, and not getting run over by cars on my bike rides. The kids are still cute and the older one is finally getting addicted to the iPad (which I encourage, although it is making normal computers really frustrating for her to use). They talk a lot, are growing too fast, and are far more interesting than anything else in my life. But nope, no major life lessons in the past few weeks that I can remember. Although there are some clear analogies between having kids and advanced persistent threats. Especially if you have daughters.

And work? The only lesson there is to be careful what you wish for, as I fail, on a daily basis, to keep up with my inbox. Never mind my actual projects. But business is good, some very cool research is on the way, and it's nice to have a paycheck. And I swear the Nexus isn't vaporware. It's actually all torn apart as we hammer in a ton of updates based on the initial beta feedback. In other words… life doesn't suck. I actually enjoy it, and am amazed I get to write this on my iPad while sitting outside in perfect weather at a local restaurant. Besides, this is a security blog – if you're reading it for life messages you need to get out more. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Rich quoted by Ars Technica on iCloud privacy and security.
Rich, again over at Ars, but this time on iPhone forensics.

Favorite Securosis Posts

Adrian Lane: iOS Data Security: Managed Devices. Both the post and the banter are quality.
Mike Rothman: Defining Your iOS Data Security Strategy. Really liked this series by Rich. Great work and very timely. BYOD and other mobile security issues are the #1 concern of the folks I'm talking to during my travels.
Rich: Vulnerability Management Evolution: Scanning the Infrastructure. Yes, we still have to deal with this stuff in 2012.

Other Securosis Posts

Incite 4/4/2012: Travel the Barbarian.
Watching the Watchers: Protect Credentials.
Vulnerability Management Evolution: Introduction.
iOS Data Security: Securing Data on Partially-Managed Devices.
Understanding and Selecting DSP: Core Features.
Understanding and Selecting DSP: Extended Features.

Favorite Outside Posts

Adrian Lane: Hash Length Extension Attacks. Injection attack on MAC check. Interesting.
Mike Rothman: Choosing Between Making Money and Doing What You Love. The answer? Both.
Even if you can't make your passion a full-time gig, working at it a little every day seems to make folks happy. Good to know.
Dave Lewis: Too many passwords? Just one does the trick.
Rich: DNS Changer. Possibly the most important thing you'll read this year.

Research Reports and Presentations

Network-Based Malware Detection: Filling the Gaps of AV.
Tokenization Guidance Analysis: Jan 2012.
Applied Network Security Analysis: Moving from Data to Information.
Tokenization Guidance.
Security Management 2.0: Time to Replace Your SIEM?
Fact-Based Network Security: Metrics and the Pursuit of Prioritization.
Tokenization vs. Encryption: Options for Compliance.

Top News and Posts

VMware High-Bandwidth Backdoor ROM Overwrite Privilege Elevation.
Wig Wam Bam.
Citrix and CloudStack: Citrix intends to join and contribute to the Apache Software Foundation. This isn't security specific, but it is big.
Global Payments: Rumor and Innuendo. GPN is saying there was no POS or merchant account hacking, so this was a breach of their systems.
Flashback Trojan Compromises Macs.
Dear FBI, Who Lost $1 Billion? Oh my goodness, does Adam nail it with this one.
Major VMware vulnerability. Incredible research here.
An only semi-blatant advertisement for our friend Mr. Mortman at EnStratus.
ZeuS botnet targets US Airways passengers. (No, not while they're on the plane… yet.)

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week's best comment goes to Ryan, in response to iOS Data Security: Managed Devices.

Is it nicer to say "captive network" or "traffic backhauling"? That said, nice post, and definitely part of a strategy I've seen work, although the example that leaps to mind is actually a security products company


Understanding and Selecting DSP: Extended Features

In the original Understanding and Selecting a Database Activity Monitoring Solution paper we discussed a number of Advanced Features for analysis and enforcement that have since largely become part of the standard feature set of DSP products. We covered monitoring, vulnerability assessment, and blocking as the minimum feature set required for a Data Security Platform, and we find these in just about every product on the market. Today's post covers extensions of those core features, focusing on new methods of data analysis and protection, along with several operational capabilities needed for enterprise deployments. A key area where DSP extends DAM is in novel security features to protect databases and extend protection across other applications and data storage repositories. In other words, these are some of the big differentiating features that affect which products you look at if you want anything beyond the basics, but they are not all in wide use.

Analysis and Protection

Query Whitelisting: Query 'whitelisting' is where the DSP platform, working as an in-line reverse proxy for the database, only permits known SQL queries to pass through to the database. This is a form of blocking, as we discussed in the base architecture section, but traditional blocking techniques rely on query parameter and attribute analysis. This technique has two significant advantages. First, detection is based on the structure of the query – matching the format of the FROM and WHERE clauses – to determine whether the query matches the approved list. Second is how the list of approved queries is generated. In most cases the DSP maps out the entire SQL grammar – in essence a list of every possible supported query – into a binary search tree for very fast comparison. Alternatively, by monitoring application activity in baselining mode, the DSP platform can automatically mark which queries are permitted – and of course the user can edit this list as needed. Any query not on the whitelist is logged and discarded, and never reaches the database. With this method of blocking, false positives are very low and the majority of SQL injection attacks are blocked automatically. The downside is that the list of acceptable queries must be updated with each application change – otherwise legitimate requests are blocked.

Dynamic Data Masking: Masking is a method of altering data so that the original data is obfuscated but the aggregate value is maintained. Essentially we substitute individual bits of sensitive data with random values that look like the originals. For example, we can substitute a list of customer names in a database with a random selection of names from a phone book. Several DSP platforms provide on-the-fly masking of sensitive data; others detect and substitute sensitive information prior to insertion. There are several variations, each offering different security and performance benefits. This is different from the dedicated static data masking tools used to derive test and development databases from production systems.
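As a simple illustration of the masking concept, here is a toy sketch in Python that substitutes real names and card numbers with random but plausible-looking values, preserving the format (and the last four digits of the card). The substitution lists and functions are made up for illustration; real dynamic masking engines operate in the query/result path and preserve referential integrity, which this sketch does not attempt.

```python
import random
import string

# Made-up substitution data -- the "phone book" from the example above.
FIRST_NAMES = ["Alice", "Robert", "Maria", "James", "Wei"]
LAST_NAMES = ["Smith", "Garcia", "Chen", "Patel", "Brown"]

def mask_name(_original):
    """Replace a real customer name with a random plausible one."""
    return f"{random.choice(FIRST_NAMES)} {random.choice(LAST_NAMES)}"

def mask_card(original):
    """Randomize the card number but keep its format and last four digits."""
    digits = [c for c in original if c.isdigit()]
    masked = [random.choice(string.digits) for _ in digits[:-4]] + digits[-4:]
    replacement = iter(masked)
    return "".join(next(replacement) if c.isdigit() else c for c in original)

row = {"name": "Jane Doe", "card": "4111-1111-1111-1234"}
print({"name": mask_name(row["name"]), "card": mask_card(row["card"])})
# e.g. {'name': 'Maria Brown', 'card': '7305-9982-4417-1234'}
```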
Application Activity Monitoring: Databases rarely exist in isolation – more often they are extensions of applications, but we tend to look at them as isolated components. Application Activity Monitoring adds the ability to watch application activity – not only the database queries that result from it. This information can be correlated between the application and the database to gain a clear picture of just how data is used at both levels, and to identify anomalies which indicate a security or compliance failure. There are two variations currently available on the market. The first is Web Application Firewalls, which protect applications from SQL injection, scripting, and other attacks on the application and/or database. WAFs are commonly used to monitor application traffic, but can be deployed in-line or out-of-band to block or reset connections, respectively. Some WAFs can integrate with DSPs to correlate activity between the two. The other form is monitoring of application-specific events, such as SAP transaction codes. Some of these commands are evaluated by the application, using application logic in the database. In either case inspection of these events is performed in a single location, with alerts on odd behavior.

File Activity Monitoring: Like DAM, FAM monitors and records all activity within designated file repositories at the user level and alerts on policy violations. Rather than SELECT, INSERT, UPDATE, and DELETE queries, FAM records file opens, saves, deletions, and copies. For both security and compliance, this means you no longer care whether data is structured or unstructured – you can define a consistent set of policies around data usage, not just database usage. You can read more about FAM in Understanding and Selecting a File Activity Monitoring Solution.

Query Rewrites: Another useful technique for protecting data and databases from malicious queries is query rewriting. Deployed through a reverse database proxy, incoming queries are evaluated for common attributes and query structure. If a query looks suspicious, or violates security policy, it is substituted with a similar authorized query. For example, a query that requests a column of Social Security numbers may have that portion removed, so the column is omitted from the results. Queries that include the highly suspect "1=1" WHERE clause may simply return the value 1. Rewriting queries protects application continuity: because the queries are not simply discarded, they return a subset of the requested data, so false positives don't cause the application to hang or crash.

Connection-Pooled User Identification: One of the problems with connection pooling – whereby an application uses a single shared database connection for all users – is the loss of the ability to track which actions are taken by which users at the database level. Connection pooling is common and essential for application development, but if all queries originate from the same account, granular security monitoring becomes difficult. This feature uses a variety of techniques to correlate every query back to an application user, for better auditing at the database level.

Discovery

Database Discovery: Databases have a habit of popping up all over the place without administrators being aware – everything from virtual copies of production databases showing up in test environments, to Microsoft Access databases embedded in applications. These databases are commonly not secured to any standard, often have default configurations, and provide targets of opportunity for attackers. Database discovery works by scanning networks looking for databases
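Returning to the query whitelisting technique described above, here is a minimal Python sketch of the underlying idea: reduce each incoming query to a structural fingerprint that ignores literal values, then allow only fingerprints learned during baselining. The normalization is deliberately crude and the queries are hypothetical – commercial products parse the full SQL grammar rather than using regular expressions.

```python
import re

def fingerprint(query):
    """Reduce a query to its structure: lowercase, replace literals with placeholders,
    and collapse whitespace. Only the shape of the query matters, not the values."""
    q = query.lower()
    q = re.sub(r"'[^']*'", "?", q)      # string literals -> ?
    q = re.sub(r"\b\d+\b", "?", q)      # numeric literals -> ?
    return re.sub(r"\s+", " ", q).strip()

# Structural fingerprints captured while baselining the application (illustrative).
WHITELIST = {
    fingerprint("SELECT name, email FROM customers WHERE id = 42"),
    fingerprint("UPDATE orders SET status = 'shipped' WHERE id = 7"),
}

def allow(query):
    """Pass the query to the database only if its structure was seen during baselining."""
    return fingerprint(query) in WHITELIST

print(allow("SELECT name, email FROM customers WHERE id = 1375"))      # True - same structure
print(allow("SELECT name, email FROM customers WHERE id = 1 OR 1=1"))  # False - injected clause changes the structure
```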


Incite 4/4/2012: Travel the Barbarian

Flying into Milan to teach the CCSK class on Sunday morning, it really struck me how much we take this technology stuff for granted. The flight was uneventful (though that coach seat on a 9+ hour flight is the suxxor), except that the in-seat entertainment system didn't work in our section. Wait. What? You mean you can't see the movies and TV shows you want, or play the trivia game to pass the time? How barbaric! Glad I brought my iPad, so I enjoyed half the first season of Game of Thrones.

Then when I arrive I jump in the cab. The class is being held in a suburb of Milan, a bit off the beaten path. I'm staying in a local hotel, but it's not an issue because I have the address and the cabbie has GPS. What did we do before GPS was pervasive? Yeah, I remember. We used maps. How barbaric.

Then I get to the hotel and ask for the WiFi code. The front desk guy proceeds to explain that you can buy 1, 4, or 12 hour blocks for an obscene number of Euros. Wait. What? You don't have a daily rate? So I've got to connect and disconnect? And I have to manage connections between all of my devices. Man, it feels like 5 years ago, when you had to pay for WiFi in hotels in the US. No longer, though, because I carry around my MiFi and it provides great bandwidth for all my devices. They do offer MiFi devices in Italy, but not for rent. Yeah, totally barbaric – making me constrain my Internet usage. And don't even get me started on cellular roaming charges, which is why hourly WiFi is such a problem. I forwarded my cell phone to a Skype number, and the plan was to have Skype running in the background so I could take calls. Ah, the best laid plans…

But one thing about Italy is far from barbaric, and that's gelato. So what if they don't take AmEx at most of the places I'll go this week. They do have gelato, so I'll deal with the inconveniences, and get back in the gym when I return to the States. Gelato FTW.

-Mike

Photo credits: "Conan the Barbarian #1" originally uploaded by Philipp Lenssen

Heavy Research

We're back at work on a variety of our blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can access all our content in its unabridged glory.

Vulnerability Management Evolution: Introduction
Defending iOS Data: Managed Devices
Defending iOS Data: Defining Your iOS Data Security Strategy
Watching the Watchers (Privileged User Management): Protect Credentials
Understanding and Selecting DSP: Core Features
Malware Analysis Quant: Index of Posts

Incite 4 U

PCI CYA: We've said it here so many times that I can't even figure out what to link to. The PCI Council claims that no PCI compliant organization has ever been breached, and as Alan Shimel points out, the Global Payments breach is no exception. The house wins once again. Or does it? Brian Krebs also reports that timelines don't match up, or perhaps there is another breach involved with a different payment processor? I'm sure if that's true they'll be dropped from PCI like a hot turd. Never forget that PCI is about protecting the card brands first, and anyone else 27th. – RM

White noise: You've probably heard about the Global Payments breach. That means that as I write this the marketing department of every security vendor is crafting a story about how their products would have stopped the breach. And that's all BS. Visa and Brian Krebs are reporting the attackers accessed Track 2 data – that tells us a lot.
It's clearly stated in the PCI-DSS specification that mag stripe data is not to be stored anywhere by payment processors or merchant banks. It's unlikely that attackers compromised the point-of-sale devices or the network feeds into Global Payments to collect 1.5M records from the merchant account of Joe's Parking Garage in a month. As Global Payments is saying the data was 'exported', it's more likely that their back office systems were breached, exposing unencrypted track data. Any security vendor's ability to detect and stop the 'export' is irrelevant; it's more secure not to collect the data at all. And even if the records were 'temporary', they should have been encrypted to avoid exactly this exposure to people poking around systems and databases. So just sit back and learn (once again) from the screw-ups that continue to occur. I'm sure we'll hear a lot more about this in the coming weeks. – AL

I'll take "nothing" for $200, Alex: Everybody batten down the hatches – it may be Spring (in the Northern Hemisphere, anyway), but when Shack becomes optimistic you can be sure that winter is coming. Though I do like to see a happier Shack talking about what is right with Infosec. Things like acceptance of breach inevitability and less acceptance of bureaucracy (though that cycles up and down). There are some good points here, but the most optimistic thing Dave says is that we have smart new blood coming into the field – and that the responsibility is ours, as the grizzled cynical old veterans, not to tarnish the new guys before their time. – MR

Security is the broker: Managing enterprise adoption of cloud computing is a tough problem. There is little to prevent dev and ops from running out and spinning up their own systems on various cloud services, assuming you are silly enough to give them credit cards. Gartner thinks that enterprises will use cloud service brokerages (which will be internal) to facilitate cloud use. I agree, although if you are smart, security will play this key role (or a big part of it). Security can broker identity and access management, secure cloud APIs, handle encryption, and define compliance policies (the biggest obstacle to cloud adoption). We have the tools, mandate, and responsibility. But if you don't get ahead of things you will be


Defining Your iOS Data Security Strategy

Now that we've covered the different data security options for iOS, it's time to focus on building a strategy. In many ways figuring out the technology is the easy part of the problem – the problems start when you need to apply that technology in a dynamic business environment, with users who have already made technology choices.

Factors

Most organizations we talk with – of all sizes and in all verticals – are under intense pressure to support iOS, to expand support of iOS, or to wrangle control over data security on iDevices already deployed and in active use. So developing your strategy depends on where you are starting from as much as on your overall goals. Here are the major factors to consider:

Device ownership

Device ownership is no longer a simple "ours or theirs". Although some companies are able to maintain strict management of everything that connects to their networks and accesses data, this is becoming the exception rather than the rule. Nearly all organizations are being forced to accept at least some level of employee-owned device access to enterprise assets, whether that means remote access for a home PC or access to corporate email on an iPad. The first question you need to ask yourself is whether you can maintain strict ownership of all devices you support – or whether you even want to. The gut instinct of most security professionals is to allow only organization-owned devices, but this is rarely a viable long-term strategy. On the other hand, allowing employee-owned devices doesn't require you to give up on enterprise ownership completely. Many of the data security options we have discussed work in a variety of scenarios. Here's how to piece together your options:

Employee-owned devices: Your options are either partially managed or unmanaged. With unmanaged devices you have few viable security options, and should focus on sandboxed messaging, encryption, and DRM apps. Even if you use one of these options, it will be more secure if you use at least minimal partial management to enable data protection (by enforcing a passcode), enable remote wipe, and install an enterprise digital certificate. The key is to sell this option to users, as we detail below.

Organization-owned devices: These fall into two categories – general and limited use. Limited use devices are highly restricted and serve a single purpose, such as flight manuals for pilots, mobility apps for health care, or sales/sales engineering support. They are locked down with only necessary apps running. General use devices are issued to employees for a variety of job duties and support a wider range of applications. For data security, focus on the techniques that manage data moving on and off devices – typically managed email and networking, with good app support for what employees need to get their jobs done.

If the employee owns the device you need to get their permission for any management of it. Define simple, clear policies that include the following points:

  • It is the employee's device, but in exchange for access to work resources the employee allows the organization to install a work profile on the device.
  • The work profile requires a strong passcode to protect the device and the data stored on it.
  • In the event the device is lost or stolen, the employee must report it within [time period].
  • If there is reasonable belief the device is at risk, [employer] will remotely wipe the device. This protects both personal and company data. If you use a sandboxed app that only wipes itself, specify that here.
  • If you use a backhaul network, detail when it is used.
  • Devices cannot be shared with others, including family.
  • How the user is allowed to back up the device (or a recommended backup option).

Emphasize that these restrictions protect both personal and organizational data. The user must understand and accept that they are giving up some control of their device in order to gain access to work resources. They must sign the policy, because you are installing something on their personal device, and you need clear evidence they know what that means.

Culture

Financial services companies, defense contractors, healthcare organizations, and tech startups all have very different cultures. Some expect and accept much more tightly restricted access to employer resources, while others assume unrestricted access to consumer technology. Don't underestimate culture when defining your strategy – we have presented a variety of options on the data security spectrum, and some may not work with your particular culture. If more freedom is expected, look to sandboxed apps. If management is expected, you can support a wider range of work activities with tighter device control.

Sensitivity of the data

Not every organization has the same data security needs. There are industries with information that simply shouldn't be allowed onto a mobile device with any chance of loss, but most organizations have more flexibility. The more sensitive the data, the more it needs to be isolated (or restricted from being on the device at all). This ties into both network security options (including DLP to prevent sensitive data from going to the device) and messaging/file access options (such as Exchange ActiveSync and sandboxed apps of all flavors). Not all data is equal. Assess your risk and then tie it back to an appropriate technology strategy.

Business needs and workflow

If you need to exchange documents with partners, you will use different tools than if you only want to allow access to employee email. If you use cloud storage or care about document-level security, you may need a different tool. Determine what the business wants to do with devices, then figure out which components you need to support that. And don't forget to look at what they are already doing, which might surprise you.

Existing infrastructure

If you have backhaul networks or existing encryption tools, that may incline you in a particular direction. Document storage and sharing technologies (both internal and cloud) are also likely to influence your decision. The trick is to follow the workflow. As we mentioned previously, you should map out existing


Understanding and Selecting DSP: Core Features

So far this series has introduced Database Security Platforms, provided a full definition of DSP, discussed the origins and evolution of DAM into DSP, and described the technical platform architecture. Now we will cover the basics of a Database Security Platform. It might seem like a short list compared to all the extended features we will cover later, but these are the most important areas, and the primary reasons to buy these tools.

Activity Monitoring

The single defining feature of Database Security Platforms is their ability to collect and monitor all database activity. This includes all administrator and system activity that touches data (short of things like indexing and other autonomous internal functions). We have already covered the various event sources and collection techniques used to power this monitoring, but let's briefly review what kinds of activity these products can monitor:

All SQL – DML, DDL, DCL, and TCL: Activity monitoring needs to include all interactions with the data in the database, which for most databases (even non-relational ones) involves some form of SQL (Structured Query Language). SQL breaks down into the Data Manipulation Language (DML, for select/update queries), the Data Definition Language (DDL, for creating and changing table structure), the Data Control Language (DCL, for managing permissions and such), and the Transaction Control Language (TCL, for things like rollbacks and commits). As you likely gathered from our discussion of event sources, depending on a product's collection techniques, it may or may not cover all this activity.

SELECT queries: Although a SELECT query is merely one of the DML activities, due to the potential for data leakage SELECT statements are monitored particularly closely for misuse. Common controls examine the type of data being requested and the size of the result set, and check for SQL injection.

Administrator activity: Most administrator activity is handled via queries, but administrators have a wider range of ways to connect to the database than regular users, and more ability to hide or erase traces of their activity. This is one of the biggest reasons to consider a DSP tool rather than relying on native auditing.

Stored procedures, scripts, and code: Stored procedures and other forms of database scripting may be used in attacks to circumvent user-based monitoring controls. DSP tools should also track this internal activity (if necessary).

File activity, if necessary: While a traditional relational database relies on query activity to view and modify data, many newer systems (and a few old ones) work by manipulating files directly. If you can modify the data by skipping the database management system and editing files directly on disk (without breaking everything, as would happen with most relational systems), some level of file monitoring is probably called for.

Even with a DSP tool it isn't always viable to collect everything, so the product should support custom monitoring policies to select which types of activities and/or user accounts to monitor. For example, many customers deploy a tool only to monitor administrator activity, or to monitor all administrators' SELECT queries and all updates by everyone.

Policy Enforcement

One of the distinguishing characteristics of DSP tools is that they don't just collect and log activity – they analyze it in real or near-real time for policy violations.
While still technically a detective control (we will discuss preventative deployments later), the ability to alert and respond in or close to real time offers security capabilities far beyond simple log analysis. Successful database attacks are rarely the result of a single malicious query – they involve a sequence of events (such as exploits, alterations, and probing) leading to eventual damage. Ideally, policies are established to detect such activity early enough to prevent the final loss-bearing act. Even when an alert is triggered after the fact, it facilitates immediate incident response, and investigation can begin immediately rather than after days or weeks of analysis. Monitoring policies fall into two basic categories:

Rule-based: Specific rules are established and monitored for violations. They can include specific queries, result counts, administrative functions (such as new user creation and rights changes), signature-based SQL injection detection, UPDATE or other transactions by users of a certain level on certain tables/fields, or any other activity that can be specifically described. Advanced rules can correlate across different parts of a database or even different databases, accounting for data sensitivity based on DBMS labels or through registration in the DAM tool.

Heuristic: Monitoring database activity builds a profile of 'normal' activity (we also call this "behavioral profiling"). Deviations then generate policy alerts. Heuristics are complicated and require tuning to work effectively. They are a good way to build a base policy set, especially for complex systems where creating deterministic rules by hand isn't realistic. Policies are then tuned over time to reduce false positives. For well-defined systems where activity is consistent, such as an application talking to a database using a limited set of queries, they are very useful. Of course heuristics fail when malicious activity is mis-profiled as good activity.

Aggregation and Correlation

One characteristic which Database Security Platforms share with Security Information and Event Management (SIEM) tools is their ability to collect disparate activity logs from a variety of database management systems – and then to aggregate, correlate, and enrich the event data. Combining multiple data sources across heterogeneous database types enables more complete analysis of activity, rather than working on one isolated query at a time. And by understanding the Structured Query Language (SQL) syntax of each database platform, DSP can interpret queries and parse their meaning. While a simple SELECT statement might mean the same thing across different database platforms, each database management system (DBMS) is chock full of its own particular syntax. A DSP solution should understand the SQL for each covered platform and be able to normalize events so the analyst doesn't need to know the ins and outs of each DBMS. For example, if you want to review all privilege escalations on all covered systems, a DSP tool will recognize those events across platforms and present you with a complete report, without you having to understand the SQL particulars of each one.

Assessment

We typically see three types of assessment
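To show what the rule-based side of policy enforcement might look like, the sketch below (Python, with hypothetical normalized events and thresholds) flags two of the patterns discussed above: a privilege-granting statement from a non-DBA account, and a SELECT returning an unusually large result set. A real DSP evaluates far richer rules, in-line and across heterogeneous databases.

```python
# Hypothetical activity events, as a DSP collector might normalize them.
EVENTS = [
    {"user": "app_svc", "sql": "SELECT * FROM cards", "rows": 250000},
    {"user": "jsmith", "sql": "GRANT DBA TO jsmith", "rows": 0},
    {"user": "dba_team", "sql": "SELECT count(*) FROM audit_log", "rows": 1},
]

MAX_ROWS = 10000              # illustrative threshold for result-set size
DBA_ACCOUNTS = {"dba_team"}   # accounts allowed to run privilege-changing DCL

def check(event):
    """Return the policy violations triggered by a single normalized event."""
    alerts = []
    sql = event["sql"].upper()
    if sql.startswith(("GRANT", "REVOKE", "ALTER USER")) and event["user"] not in DBA_ACCOUNTS:
        alerts.append("privilege change by non-DBA account")
    if sql.startswith("SELECT") and event["rows"] > MAX_ROWS:
        alerts.append("unusually large result set (possible data dump)")
    return alerts

for event in EVENTS:
    for alert in check(event):
        print(f"ALERT [{event['user']}]: {alert} -- {event['sql']}")
```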


iOS Data Security: Managed Devices

In our last post, on data security for partially-managed devices, I missed one option we need to cover before moving on to fully-managed devices:

User-owned device with managed/backhaul network (cloud or enterprise): This option is an adjunct to our other data security tools, and isn't sufficient for protecting data on its own. The users own their devices, but agree to route all traffic through an enterprise-managed network. This might be via a VPN back to the corporate network or through a VPN service. On the data security side, this enables you to monitor all network traffic – possibly including SSL traffic (by installing a special certificate on the device). This is more about malware protection and reducing the likelihood of malicious apps on the devices, but it also supports more complete DLP.

Managed Devices

When it comes to data security on managed devices, life for the security administrator gets a bit easier. With full control of the device we can enforce any policies we want, although users might not be thrilled. Remember that full control doesn't necessarily mean the device is in a highly-restricted kiosk mode – you can still allow a range of activities while maintaining security. All our previous data security options are available here, as well as:

MDM managed device with Data Protection: Using a Mobile Device Management tool, the iOS device is completely managed and restricted. The user is unable to install unapproved applications, email is limited to the approved enterprise account, and all security settings are enabled for Data Protection. Restricting the applications allowed on the device and enforcing security policies makes it much more difficult for users to leak data through unapproved services. Plus you gain full Data Protection, strong passcodes, and remote wiping. Some MDM tools even detect jailbroken devices. To gain the full benefit of Data Protection, you need to block unapproved apps which could leak data (such as Dropbox and iCloud apps). This isn't always viable, which is why this option is often combined with a captive network to give users a bit more flexibility.

Managed/backhaul network with DLP, etc.: The device uses an on-demand VPN to route all network traffic, at all times, through an enterprise or cloud portal. We call it an "on-demand" VPN because the device automatically shuts it down when there is no network traffic and brings it up before sending traffic – the VPN 'coverage' is comprehensive. "On-demand" here definitely does not mean users can bring the VPN up and down as they want. Combined with full device management, the captive network affords complete control over all data moving onto and off the devices. This is primarily used with DLP to manage sensitive data, but it may also be used for application control, or even to allow use of non-enterprise email accounts which are still monitored. On the DLP front, while we can manage enterprise email without a full captive network, this option also lets us manage data in web traffic.

Full control of the device and network doesn't obviate the need for certain other security options. For example, you might still need encryption or DRM, as these allow use of otherwise insecure cloud and sharing services. Now that we have covered our security options, our next post will look at picking a strategy.
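As a rough illustration of the DLP inspection a managed/backhaul network makes possible, the sketch below shows the kind of pattern matching a proxy might apply to outbound content before allowing it off the network. The patterns and the allow/block decision are simplified assumptions; real DLP engines use much richer detection (document fingerprints, dictionaries, match validation) and policy actions.

```python
import re

# Illustrative patterns for data that should not leave via an unmanaged service.
PATTERNS = {
    "credit card": re.compile(r"\b(?:\d[ -]?){15,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def inspect(payload):
    """Return the sensitive-data types found in an outbound payload."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(payload)]

def allow_outbound(payload):
    """Block the request if any sensitive pattern matches; otherwise let it through."""
    hits = inspect(payload)
    if hits:
        print("blocked outbound request:", ", ".join(hits))
        return False
    return True

allow_outbound("Quarterly roadmap attached, see you Tuesday.")      # allowed
allow_outbound("Customer SSN 123-45-6789 for the support ticket")   # blocked
```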


Watching the Watchers: Protect Credentials

As we continue our march through the Privileged User Lifecycle, we have provisioned the privileged users and restricted access to only the devices they are authorized to manage. The next risk to address is the keys or credentials of these privileged users (P-Users) falling into the wrong hands. The best access and entitlement controls fail if someone can impersonate a P-User. But the worst risk isn't even compromised credentials – it's not having unique credentials in the first place.

You must have seen the old admin password sharing scheme, right? It was used, mostly out of necessity, many moons ago. Administrators needed access to the devices they managed, but at times they needed help, so they asked a buddy to take care of something, and just gave him/her the credentials. What could possibly go wrong? We covered a lot of that in the Keys to the Kingdom. Shared administrative credentials open Pandora's box. Once the credentials are in circulation you can't get them back – which is a problem when an admin leaves the company or no longer has those particular privileges. You can't deprovision shared credentials, so you need to change them. PCI, as the low bar for security (just ask Global Payments), recognizes the issues with sharing IDs, so Requirement 8 is all about making sure anyone with access to protected data uses a unique ID, and that their use is audited – so you can attribute every action to a particular user.

But that's not all! (in my best infomercial voice). What about the fact that some endpoints could be compromised? Even administrative endpoints. So sending admin credentials to that endpoint might not be safe. And what happens when developers hard-code credentials into an application? Why go through the hassle of secure coding – just embed the password right into the application! That password never changes anyway, so what's the risk? So we need to protect credentials as carefully as whatever they control.

Credential Lockdown

How can we protect these credentials? Locking the credentials away in a vault meets many of the requirements described above. First, if the credentials are stored in a vault, it is harder for admins to share them. Let's not put the cart before the horse, but this also makes it pretty easy (and transparent) to change the password after every access, eliminating the sticky-note-under-keyboard risk. Going through the vault for every administrative credential access means you have an audit trail of who used which credentials (and presumably which specific devices they were managing) and when. That kind of thing makes auditors happy. Depending on the deployment of the vault, the administrator may never even see the credentials, as they can be entered automatically on the server if you use a proxy approach to restricting access. This also provides single sign-on to all managed devices, as the administrator authenticates (presumably using multiple factors) to the proxy, which interfaces directly with the vault – again, transparently to the user. So even an administrator's device teeming with malware cannot expose critical credentials. Similarly, an application can make a call to the vault, rather than hard-coding credentials into the app. Yes, the credentials still end up on the application server, but that's still much better than hard-coding the password. So are you sold yet? If you worry about credentials being accessed and misused, a password vault provides a good mechanism for protecting them.
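To show what 'call the vault instead of hard-coding the password' might look like from the application side, here is a hedged sketch in Python. The vault URL, API shape, and token handling are hypothetical – a real deployment would use the vendor's SDK or API, mutual authentication, and short-lived, automatically rotated credentials.

```python
import json
import urllib.request

VAULT_URL = "https://vault.example.internal/api/v1/credentials"  # hypothetical endpoint
APP_TOKEN = "application-identity-token"                          # issued to the app, not a shared password

def get_db_credentials(target):
    """Check out a credential for the named target from the vault.
    The password never lives in source code or a config file, and the vault
    records who checked it out and when -- that is the audit trail."""
    request = urllib.request.Request(
        f"{VAULT_URL}/{target}",
        headers={"Authorization": f"Bearer {APP_TOKEN}"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)  # e.g. {"username": "...", "password": "...", "lease_seconds": 300}

# Instead of: connect(user="app", password="s3cr3t")   <- hard-coded and never rotated
# creds = get_db_credentials("orders-db")
# connect(user=creds["username"], password=creds["password"])
```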
Define Policies

As with most things in security, using a vault involves both technology and process. We will tackle the process first, because without a good process even the best technology has no chance. So before you implement anything you need to define the rules of (credential) engagement. You need to answer some questions:

Which systems and devices need to be involved in the password management system? This may involve servers (physical and/or virtual), network and security devices, infrastructure services (DNS, directory, mail, etc.), databases, and/or applications. Ideally your vault will natively support most of your targets, but broad protection is likely to require some integration work on your end. So make sure any solution you look at has some kind of API to facilitate this integration.

How does each target use the vault? You need to decide who (likely by group) can access each target, how long they are allowed to use the credentials (and manage the device), and whether they need to present additional authentication factors to access the device. You'll also define whether multiple administrators can access managed devices simultaneously, and whether to change the password after each check-in/check-out cycle. Finally, you may need to support external administrators (for third-party management or business partner integration), so keep that in mind as you work through these decisions.

What kind of administrator experience makes sense? Figure out the P-User's interaction with the system. Will it be via a proxy login, where the user never sees the credentials, or will a secure agent on the device receive and protect the credential? Figure out how the vault supports application-to-database and application-to-application interaction, as those are different from supporting human admins. You'll also want to specify which activities are audited and how long audit logs are kept.

Securing the Vault

If you are putting the keys to the kingdom in this vault, make sure it's secure. You probably will not bring a product in and set your application pen-test ninjas loose on it, so you are more likely to rely on what we call the sniff test. Ask questions to see whether the vendor has done their homework to protect the vault. You should understand the security architecture of the vault. Yes, you may have to sign a non-disclosure agreement to see the details, but it's worth it – you need to know how they protect things. Discuss the threat model(s) the vendor used to build that security architecture, and make sure they didn't miss any obvious attack vectors. You also need to poke around their development process a bit, and make sure they have a proper SDLC and actually test for security defects before


Vulnerability Management Evolution: Introduction

Back when The Pragmatic CSO was published in 2007, I put together a set of tips for being a better CISO. In fact you can still get the tips (sent one per day for five days) if you register on the Pragmatic CSO site. Not to steal any thunder, but Tip #2 is Prioritize Fiercely. Let's take a look at what I wrote back then:

Tip #2 is all about the need to prioritize. The fact is you can't get everything done. Not by a long shot. So you have a choice. You can just not get to things and hope you don't end up overly exposed. Or you can think about what's important to your business and act to protect those systems first. Which do you think is the better approach? The fact is that any exposure can create problems. But you dramatically reduce the odds of a career-limiting incident if you focus most of your time on the highest profile systems. Maybe it's not good old Pareto's 80/20 rule, but you should be spending the bulk of your time focused on the systems that are most important to your business. Or hope the bad guys don't know which is which.

Five years later that tip still makes perfect sense. No organization, including the biggest of the big, has enough resources, which means you must make tough choices. Things won't be done when they need to be. Some things won't get done at all. So how do you choose? Unfortunately most organizations don't choose at all. They do whatever is next on the list, without much rhyme or reason determining where things land on it. It's the path of least resistance for a tactically oriented environment. Oil the squeakiest wheel. Keep your job. It's all very understandable, but not very effective. Optimally, resources are allocated and priorities set based on value to the business. In a security context, that means the next thing done should reduce the most risk to your organization. Of course calculating that risk is where things get sticky. Regardless of your specific risk quantification religion, we can all agree that you need data to accurately evaluate these risks and answer the prioritization question.

Last year we did a project called Fact-Based Network Security: Metrics and the Pursuit of Prioritization, which dealt with one aspect of this problem: how to make decisions based on network metrics. But the issue is bigger than that. Network exposure is only one factor in the decision-making process. You need to factor in a lot of other data – including vulnerability scans, device configurations, attack paths, application and database posture, security intelligence, benchmarks, and lots of other stuff – to get a full view of the environment, evaluate the risk, and make appropriate prioritization decisions. Historically, vulnerability scanners have provided a piece of that data, telling you which devices are vulnerable to which attacks. The scanners didn't tell you whether the devices were really at risk – only whether they were vulnerable.

From Tactical to Strategic

Organizations have traditionally viewed vulnerability scanners as a tactical product, largely commoditized, and only providing value around audit time. How useful is a 100-page vulnerability report to an operations person trying to figure out what to fix next? Though the 100-page report did make the auditor smile, as it provided a nice listing of all the audit deficiencies to address in the findings of fact. At the recent RSA Conference 2012, we definitely saw a shift from largely compliance-driven messaging to a more security-centric view.
It's widely acknowledged that compliance provides a low (okay – very low) bar for security, and that it just isn't high enough. So more strategic security organizations need better optics. They need the ability to pull in a lot of threat-related data, cross-reference it with an understanding of what is vulnerable, and figure out what is actually at risk. Yesterday's vulnerability scanners are evolving to meet this need, and are emerging as a much more strategic component of an organization's control set than in the past. So we are starting a new series to tackle this evolution – we call it Vulnerability Management Evolution. As with last year's SIEM Replacement research, we believe it is now time to revisit your threat management/vulnerability scanning strategy. Not necessarily to swap out products, services, or vendors, but to ensure your capabilities map to what you need now and in the future. We will start by covering the traditional scanning technologies, and then quickly move on to the advanced capabilities you will need in order to leverage these platforms for decision support. Yes, decision support is the fancy term for helping you prioritize.

Platform Emergence

As we've discussed, you need more than a set of tactical scans that generate a huge list of things you'll never get to. You need information that helps you decide how to allocate resources and prioritize efforts. We believe what used to be called a "vulnerability scanner" is evolving into a threat management platform. Sounds spiffy, eh? When someone says platform, that usually indicates a common data model as the foundation, with a number of different applications riding on top to deliver value to customers. You don't buy a platform per se – you buy applications that leverage a platform to solve the problems you have. That's exactly what we are talking about here. But traditional scanning technology isn't a platform in any sense of the word, so this vulnerability management evolution requires a definite technology evolution: growth from a single-purpose product into a multi-function platform. This evolved platform encompasses a number of different capabilities, starting with the tried and true device scanner and extending to database and application scanning and risk scoring. But we don't want to spoil the fun today – we will describe not just the core technology that enables the platform, but also the critical enterprise integration points and bundled value-added technologies (such as attack path analysis, automated pen testing, benchmarking, et al.) that differentiate a tactical product decision from a strategic platform deployment. We will also talk about the enterprise features you need from a platform, including

Share:

Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
  • Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting it on white paper networks, or translating it into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.