
Understanding and Selecting DSP: Extended Features

In the original Understanding and Selecting a Database Activity Monitoring Solution paper we discussed a number of Advanced Features for analysis and enforcement that have since largely become part of the standard feature set for DSP products. We covered monitoring, vulnerability assessment, and blocking as the minimum feature set required for a Data Security Platform, and we find these in just about every product on the market. Today's post will cover extensions of those core features, focusing on new methods of data analysis and protection, along with several operational capabilities needed for enterprise deployments. A key area where DSP extends DAM is in novel security features that protect databases and extend protection across other applications and data storage repositories. In other words, these are some of the big differentiating features that affect which products you look at if you want anything beyond the basics, but they aren't all in wide use.

Analysis and Protection

Query Whitelisting: Query 'whitelisting' is where the DSP platform, working as an in-line reverse proxy for the database, only permits known SQL queries to pass through to the database. This is a form of blocking, as we discussed in the base architecture section, but traditional blocking techniques rely on query parameter and attribute analysis. This technique has two significant advantages. First, detection is based on the structure of the query – matching the format of the FROM and WHERE clauses – to determine whether the query matches the approved list. Second is how the list of approved queries is generated. In most cases the DSP maps out the entire SQL grammar – in essence a list of every possible supported query – into a binary search tree for very fast comparison. Alternatively, by monitoring application activity in a baselining mode, the DSP platform can automatically mark which queries are permitted – and of course the user can edit this list as needed. Any query not on the whitelist is logged and discarded – it never reaches the database. With this method of blocking, false positives are very low and the majority of SQL injection attacks are automatically blocked. The downside is that the list of acceptable queries must be updated with each application change – otherwise legitimate requests are blocked.

Dynamic Data Masking: Masking is a method of altering data so that the original data is obfuscated but the aggregate value is maintained. Essentially we substitute individual bits of sensitive data with random values that look like the originals. For example, we can substitute a list of customer names in a database with a random selection of names from a phone book. Several DSP platforms provide on-the-fly masking for sensitive data. Others detect and substitute sensitive information prior to insertion. There are several variations, each offering different security and performance benefits. This is different from the dedicated static data masking tools used to build test and development databases from production systems.
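To make the structural matching behind whitelisting concrete, here is a minimal sketch (our own illustration, not any vendor's implementation): incoming SQL is reduced to a structural fingerprint by stripping literal values, and that fingerprint is checked against a baseline of approved query shapes built during a learning period.

```python
import re

def fingerprint(sql: str) -> str:
    """Reduce a query to its structural skeleton by normalizing
    whitespace and replacing literal values with placeholders."""
    s = sql.strip().lower()
    s = re.sub(r"'[^']*'", "?", s)          # string literals -> ?
    s = re.sub(r"\b\d+(\.\d+)?\b", "?", s)  # numeric literals -> ?
    s = re.sub(r"\s+", " ", s)              # collapse whitespace
    return s

# Baseline built by observing the application during a learning period
approved = {
    fingerprint("SELECT name, balance FROM accounts WHERE id = 42"),
}

def allow(sql: str) -> bool:
    """Permit only queries whose structure matches the approved list."""
    return fingerprint(sql) in approved

# A parameter change matches the same structure and passes...
print(allow("SELECT name, balance FROM accounts WHERE id = 99"))         # True
# ...but an injected clause changes the structure and is blocked.
print(allow("SELECT name, balance FROM accounts WHERE id = 99 OR 1=1"))  # False
```

The point of the example is that comparison happens on query shape, not parameter values, which is why legitimate parameter changes pass while injected clauses do not.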
Application Activity Monitoring: Databases rarely exist in isolation – more often they are extensions of applications – but we tend to look at them as isolated components. Application Activity Monitoring adds the ability to watch application activity, not only the database queries that result from it. This information can be correlated between the application and the database to gain a clear picture of just how data is used at both levels, and to identify anomalies which indicate a security or compliance failure. There are two variations currently available on the market. The first is Web Application Firewalls, which protect applications from SQL injection, scripting, and other attacks on the application and/or database. WAFs are commonly used to monitor application traffic, but can be deployed in-line or out-of-band to block or reset connections, respectively. Some WAFs can integrate with DSPs to correlate activity between the two. The other form is monitoring of application-specific events, such as SAP transaction codes. Some of these commands are evaluated by the application, using application logic in the database. In either case inspection of these events is performed in a single location, with alerts on odd behavior.

File Activity Monitoring: Like DAM, FAM monitors and records all activity within designated file repositories at the user level and alerts on policy violations. Rather than SELECT, INSERT, UPDATE, and DELETE queries, FAM records file opens, saves, deletions, and copies. For both security and compliance, this means you no longer care whether data is structured or unstructured – you can define a consistent set of policies around data, not just database, usage. You can read more about FAM in Understanding and Selecting a File Activity Monitoring Solution.

Query Rewrites: Another useful technique for protecting data and databases from malicious queries is query rewriting. Deployed through a reverse database proxy, incoming queries are evaluated for common attributes and query structure. If a query looks suspicious, or violates security policy, it is substituted with a similar authorized query. For example, a column of Social Security numbers may be omitted from the results by removing that portion of the query. Queries that include the highly suspect "1=1" WHERE clause may simply return the value 1. Rewriting queries protects application continuity, because the queries are not simply discarded – they return a subset of the requested data, so false positives don't cause the application to hang or crash.

Connection-Pooled User Identification: One of the problems with connection pooling – whereby an application uses a single shared database connection for all users – is loss of the ability to track which actions are taken by which users at the database level. Connection pooling is common and essential for application development, but if all queries originate from the same account, granular security monitoring becomes difficult. This feature uses a variety of techniques to correlate every query back to an application user, for better auditing at the database level.

Discovery

Database Discovery: Databases have a habit of popping up all over the place without administrators being aware – everything from virtual copies of production databases showing up in test environments, to Microsoft Access databases embedded in applications. These databases are commonly not secured to any standard, often have default configurations, and provide targets of opportunity for attackers. Database discovery works by scanning networks looking for databases
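As a rough illustration of that discovery technique (a simplified sketch under our own assumptions, not how any particular product implements it), a scanner can sweep address ranges for the default listener ports of common database engines and flag anything that answers:

```python
import socket

# Default listener ports for common database engines (illustrative list)
DB_PORTS = {1433: "SQL Server", 1521: "Oracle", 3306: "MySQL", 5432: "PostgreSQL"}

def discover(hosts, timeout=0.5):
    """Return (host, engine) pairs for anything listening on a known database port."""
    found = []
    for host in hosts:
        for port, engine in DB_PORTS.items():
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    found.append((host, engine))
            except OSError:
                pass  # closed, filtered, or unreachable
    return found

if __name__ == "__main__":
    # Hypothetical internal range to sweep
    targets = [f"10.0.1.{i}" for i in range(1, 10)]
    for host, engine in discover(targets):
        print(f"{host}: possible {engine} instance")
```

Real products also fingerprint the service banner and check non-default ports, but the core idea is the same: find database listeners the administrators didn't know they had.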


Incite 4/4/2012: Travel the Barbarian

Flying into Milan to teach the CCSK class on Sunday morning, it really struck me how much we take this technology stuff for granted. The flight was uneventful (though that coach seat on a 9+ hour flight is the suxxor), except for the fact that the in-seat entertainment system didn't work in our section. Wait. What? You mean you can't see the movies and TV shows you want, or play the trivia game to pass the time? How barbaric! Glad I brought my iPad, so I enjoyed half the first season of Game of Thrones.

Then when I arrive I jump in the cab. The class is being held in a suburb of Milan, a bit off the beaten path. I'm staying in a local hotel, but it's not an issue because I have the address and the cabbie has GPS. What did we do before GPS was pervasive? Yeah, I remember. We used maps. How barbaric.

Then I get to the hotel and ask for the WiFi code. The front desk guy proceeds to explain that you can buy 1, 4, or 12 hour blocks for an obscene number of Euros. Wait. What? You don't have a daily rate? So I've got to connect and disconnect? And I have to manage connections between all of my devices. Man, it feels like 5 years ago, when you had to pay for WiFi in hotels in the US. No longer, though, because I carry around my MiFi and it provides great bandwidth for all my devices. They do offer MiFi devices in Italy, but not for rent. Yeah, totally barbaric – making me constrain my Internet usage. And don't even get me started on cellular roaming charges, which is why hourly WiFi is such a problem. I forwarded my cell phone to a Skype number, and the plan was to have Skype running in the background so I could take calls. Ah, the best laid plans…

But one thing about Italy is far from barbaric, and that's gelato. So what if they don't take AmEx at most of the places I'll go this week. They do have gelato, so I'll deal with the inconveniences, and get back in the gym when I return to the States. Gelato FTW.

-Mike

Photo credits: "Conan the Barbarian #1" originally uploaded by Philipp Lenssen

Heavy Research

We're back at work on a variety of our blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can access all our content in its unabridged glory.

  • Vulnerability Management Evolution: Introduction
  • Defending iOS Data: Managed Devices
  • Defining Your iOS Data Security Strategy
  • Watching the Watchers (Privileged User Management): Protect Credentials
  • Understanding and Selecting DSP: Core Features
  • Malware Analysis Quant: Index of Posts

Incite 4 U

PCI CYA: We've said it here so many times that I can't even figure out what to link to. The PCI Council claims that no PCI compliant organization has ever been breached. And as Alan Shimel points out, the Global Payments breach is no exception. The house wins once again. Or does it? Brian Krebs also reports that timelines don't match up, or perhaps there is another breach involved with a different payment processor? I'm sure if that's true they'll be dropped from PCI like a hot turd. Never forget that PCI is about protecting the card brands first, and anyone else 27th. – RM

White noise: You've probably heard about the Global Payments breach. That means that as I write this, the marketing department of every security vendor is crafting a story about how their products would have stopped the breach. And that's all BS. Visa and Brian Krebs are reporting that the attackers accessed Track 2 data – that tells us a lot. It's clearly stated in the PCI-DSS specification that mag stripe data is not to be stored anywhere by payment processors or merchant banks. It's unlikely that attackers compromised the point-of-sale devices or the network feeds into Global Payments to collect 1.5M records from the merchant account of Joe's Parking Garage in a month. As Global Payments is saying the data was 'exported', it's more likely that their back office systems were breached, exposing unencrypted track data. Any security vendor's ability to detect and stop the 'export' is irrelevant; it's more secure to not collect the data at all. And even if the records were 'temporary', they should have been encrypted to avoid just this exposure to people poking around systems and databases at any time. So just sit back and learn (once again) from the screw-ups that continue to occur. I'm sure we'll hear a lot more about this in the coming weeks. – AL

I'll take "nothing" for $200, Alex: Everybody batten down the hatches – it may be Spring (in the Northern Hemisphere, anyway), but when Shack becomes optimistic you can be sure that winter is coming. Though I do like to see a happier Shack talking about what is right with infosec. Things like acceptance of breach inevitability and less acceptance of bureaucracy (though that cycles up and down). There are some good points here, but the most optimistic thing Dave says is that we have smart new blood coming into the field. And that the responsibility is ours, as the grizzled cynical old veterans, not to tarnish the new guys before their time. – MR

Security is the broker: Managing enterprise adoption of cloud computing is a tough problem. There is little to prevent dev and ops from running out and spinning up their own systems on various cloud services, assuming you are silly enough to give them credit cards. Gartner thinks that enterprises will use cloud service brokerages (which will be internal) to facilitate cloud use. I agree, although if you are smart, security will play this key role (or a big part of it). Security can broker identity and access management, secure cloud APIs, handle encryption, and define compliance policies (the biggest obstacle to cloud adoption). We have the tools, mandate, and responsibility. But if you don't get ahead of things you will be


Defining Your iOS Data Security Strategy

Now that we've covered the different data security options for iOS, it's time to focus on building a strategy. In many ways figuring out the technology is the easy part of the problem – the problems start when you need to apply that technology in a dynamic business environment, with users who have already made technology choices.

Factors

Most organizations we talk with – of all sizes and in all verticals – are under intense pressure to support iOS, to expand support of iOS, or to wrangle control over data security on iDevices already deployed and in active use. So developing your strategy depends on where you are starting from as much as on your overall goals. Here are the major factors to consider:

Device ownership

Device ownership is no longer a simple "ours or theirs". Although some companies are able to maintain strict management of everything that connects to their networks and accesses data, this is becoming the exception rather than the rule. Nearly all organizations are being forced to accept at least some level of employee-owned device access to enterprise assets, whether that means remote access for a home PC or access to corporate email on an iPad. The first question you need to ask yourself is whether you can maintain strict ownership of all devices you support – or whether you even want to. The gut instinct of most security professionals is to allow only organization-owned devices, but this is rarely a viable long-term strategy. On the other hand, allowing employee-owned devices doesn't require you to give up on enterprise ownership completely. Many of the data security options we have discussed work in a variety of scenarios. Here's how to piece together your options:

Employee-owned devices: Your options are either partially managed or unmanaged. With unmanaged devices you have few viable security options, and should focus on sandboxed messaging, encryption, and DRM apps. Even if you use one of these options, it will be more secure if you use at least minimal partial management to enable data protection (by enforcing a passcode), enable remote wipe, and install an enterprise digital certificate. The key is to sell this option to users, as we will detail below.

Organization-owned devices: These fall into two categories – general and limited use. Limited use devices are highly restricted and serve a single purpose, such as flight manuals for pilots, mobility apps for health care, or sales/sales engineering support. They are locked down, with only the necessary apps running. General use devices are issued to employees for a variety of job duties and support a wider range of applications. For data security, focus on the techniques that manage data moving on and off devices – typically managed email and networking, with good app support for what employees need to get their jobs done.

If the employee owns the device you need to get their permission for any management of it. Define simple, clear policies that include the following points:

  • It is the employee's device, but in exchange for access to work resources the employee allows the organization to install a work profile on the device.
  • The work profile requires a strong passcode to protect the device and the data stored on it.
  • In the event the device is lost or stolen, the employee must report it within [time period].
  • If there is reasonable belief the device is at risk, [employer] will remotely wipe the device. This protects both personal and company data. If you use a sandboxed app that only wipes itself, specify that here.
  • If you use a backhaul network, detail when it is used.
  • Devices cannot be shared with others, including family.
  • How the user is allowed to back up the device (or a recommended backup option).

Emphasize that these restrictions protect both personal and organizational data. The user must understand and accept that they are giving up some control of their device in order to gain access to work resources. They must sign the policy, because you are installing something on their personal device, and you need clear evidence they know what that means.

Culture

Financial services companies, defense contractors, healthcare organizations, and tech startups all have very different cultures. Some expect and accept much more tightly restricted access to employer resources, while others assume unrestricted access to consumer technology. Don't underestimate culture when defining your strategy – we have presented a variety of options on the data security spectrum, and some may not work with your particular culture. If more freedom is expected, look to sandboxed apps. If management is expected, you can support a wider range of work activities with tighter device control.

Sensitivity of the data

Not every organization has the same data security needs. There are industries with information that simply shouldn't be allowed onto a mobile device with any chance of loss, but most organizations have more flexibility. The more sensitive the data, the more it needs to be isolated (or restricted from being on the device at all). This ties into both network security options (including DLP to prevent sensitive data from going to the device) and messaging/file access options (such as Exchange ActiveSync and sandboxed apps of all flavors). Not all data is equal. Assess your risk and then tie it back to an appropriate technology strategy.

Business needs and workflow

If you need to exchange documents with partners, you will use different tools than if you only want to allow access to employee email. If you use cloud storage or care about document-level security, you may need a different tool. Determine what the business wants to do with devices, then figure out which components you need to support that. And don't forget to look at what they are already doing, which might surprise you.

Existing infrastructure

If you have backhaul networks or existing encryption tools, that may incline you in a particular direction. Document storage and sharing technologies (both internal and cloud) are also likely to influence your decision. The trick is to follow the workflow. As we mentioned previously, you should map out existing


Understanding and Selecting DSP: Core Features

So far this series has introduced Database Security Platforms, provided a full definition of DSP, discussed the origins and evolution of DAM to DSP, and described the technical platform architecture. We have covered the basics of a Database Security Platform. It might seem like a short list compared to all the other extended features we will cover later, but these are the most important areas, and the primary reasons to buy these tools.

Activity Monitoring

The single defining feature of Database Security Platforms is their ability to collect and monitor all database activity. This includes all administrator and system activity that touches data (short of things like indexing and other autonomous internal functions). We have already covered the various event sources and collection techniques used to power this monitoring, but let's briefly review what kinds of activity these products can monitor:

All SQL – DML, DDL, DCL, and TCL: Activity monitoring needs to include all interactions with the data in the database, which for most databases (even non-relational) involves some form of SQL (Structured Query Language). SQL breaks down into the Data Manipulation Language (DML, for select/update queries), the Data Definition Language (DDL, for creating and changing table structure), the Data Control Language (DCL, for managing permissions and such), and the Transaction Control Language (TCL, for things like rollbacks and commits). As you likely gathered from our discussion of event sources, depending on a product's collection techniques, it may or may not cover all this activity.

SELECT queries: Although a SELECT query is merely one of the DML activities, due to the potential for data leakage SELECT statements are monitored particularly closely for misuse. Common controls examine the type of data being requested and the size of the result set, and check for SQL injection.

Administrator activity: Most administrator activity is handled via queries, but administrators have a wider range of ways to connect to the database than regular users, and more ability to hide or erase traces of their activity. This is one of the biggest reasons to consider a DSP tool rather than relying on native auditing.

Stored procedures, scripts, and code: Stored procedures and other forms of database scripting may be used in attacks to circumvent user-based monitoring controls. DSP tools should also track this internal activity (if necessary).

File activity, if necessary: While a traditional relational database relies on query activity to view and modify data, many newer systems (and a few old ones) work by manipulating files directly. If you can modify the data by skipping the Database Management System and editing files directly on disk (without breaking everything, as would happen with most relational systems), some level of file monitoring is probably called for.

Even with a DSP tool it isn't always viable to collect everything, so the product should support custom monitoring policies to select which types of activities and/or user accounts to monitor. For example, many customers deploy a tool only to monitor administrator activity, or to monitor all administrators' SELECT queries and all updates by everyone.
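To show what such a custom monitoring policy boils down to, here is a minimal sketch using made-up user names and a simplified keyword classifier (our own illustration, not any product's policy language): each statement is classified into its SQL family, and the policy decides whether the user/statement combination is in scope for collection.

```python
import re

# Map the leading SQL keyword to its language family
SQL_CLASSES = {
    "select": "DML", "insert": "DML", "update": "DML", "delete": "DML",
    "create": "DDL", "alter": "DDL", "drop": "DDL",
    "grant": "DCL", "revoke": "DCL",
    "commit": "TCL", "rollback": "TCL", "savepoint": "TCL",
}

def classify(sql: str) -> str:
    """Classify a statement as DML, DDL, DCL, TCL, or OTHER."""
    keyword = re.split(r"\s+", sql.strip().lower(), maxsplit=1)[0]
    return SQL_CLASSES.get(keyword, "OTHER")

# Example policy: all administrators' SELECT queries, plus all updates by everyone
ADMINS = {"dba_alice", "dba_bob"}

def in_scope(user: str, sql: str) -> bool:
    keyword = re.split(r"\s+", sql.strip().lower(), maxsplit=1)[0]
    if user in ADMINS and keyword == "select":
        return True
    return keyword in ("insert", "update", "delete")

print(classify("GRANT ALL ON payroll TO app_user"))             # DCL
print(in_scope("dba_alice", "SELECT * FROM payroll"))           # True  (admin SELECT)
print(in_scope("app_user", "UPDATE accounts SET balance = 0"))  # True  (update by anyone)
print(in_scope("app_user", "SELECT name FROM customers"))       # False (not in scope)
```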
Policy Enforcement

One of the distinguishing characteristics of DSP tools is that they don't just collect and log activity – they analyze it in real or near-real time for policy violations. While still technically a detective control (we will discuss preventative deployments later), the ability to alert and respond in or close to real time offers security capabilities far beyond simple log analysis. Successful database attacks are rarely the result of a single malicious query – they involve a sequence of events (such as exploits, alterations, and probing) leading to eventual damage. Ideally, policies are established to detect such activity early enough to prevent the final loss-bearing act. Even when an alert is triggered after the fact, it facilitates immediate incident response, and investigation can begin immediately rather than after days or weeks of analysis. Monitoring policies fall into two basic categories:

Rule-based: Specific rules are established and monitored for violation. They can include specific queries, result counts, administrative functions (such as new user creation and rights changes), signature-based SQL injection detection, UPDATE or other transactions by users of a certain level on certain tables/fields, or any other activity that can be specifically described. Advanced rules can correlate across different parts of a database or even different databases, accounting for data sensitivity based on DBMS labels or through registration in the DAM tool.

Heuristic: Monitoring database activity builds a profile of 'normal' activity (we also call this "behavioral profiling"). Deviations then generate policy alerts. Heuristics are complicated and require tuning to work effectively. They are a good way to build a base policy set, especially for complex systems where creating deterministic rules by hand isn't realistic. Policies are then tuned over time to reduce false positives. For well-defined systems where activity is consistent, such as an application talking to a database using a limited set of queries, they are very useful. Of course heuristics fail when malicious activity is mis-profiled as good activity.

Aggregation and Correlation

One characteristic Database Security Platforms share with Security Information and Event Management (SIEM) tools is their ability to collect disparate activity logs from a variety of database management systems – and then to aggregate, correlate, and enrich event data. Combining multiple data sources across heterogeneous database types enables more complete analysis of activity, rather than working on only one isolated query at a time. And by understanding the Structured Query Language (SQL) syntax of each database platform, DSP can interpret queries and parse their meaning. While a simple SELECT statement might mean the same thing across different database platforms, each database management system (DBMS) is chock full of its own particular syntax. A DSP solution should understand the SQL for each covered platform and be able to normalize events so the analyst doesn't need to know the ins and outs of each DBMS. For example, if you want to review all privilege escalations on all covered systems, a DSP tool will recognize those events across platforms and present you with a complete report, without you having to understand the SQL particulars of each one.

Assessment

We typically see three types of assessment


iOS Data Security: Managed Devices

In our last post, on data security for partially-managed devices, I missed one option we need to cover before moving on to fully-managed devices:

User-owned device with managed/backhaul network (cloud or enterprise)

This option is an adjunct to our other data security tools, and isn't sufficient for protecting data on its own. The users own their devices, but agree to route all traffic through an enterprise-managed network. This might be via a VPN back to the corporate network or through a VPN service. On the data security side, this enables you to monitor all network traffic – possibly including SSL traffic (by installing a special certificate on the device). This is more about malware protection and reducing the likelihood of malicious apps on the devices, but it also supports more complete DLP.

Managed Devices

When it comes to data security on managed devices, life for the security administrator gets a bit easier. With full control of the device we can enforce any policies we want, although users might not be thrilled. Remember that full control doesn't necessarily mean the device is in a highly-restricted kiosk mode – you can still allow a range of activities while maintaining security. All our previous data security options are available here, as well as:

MDM managed device with Data Protection

Using a Mobile Device Management tool, the iOS device is completely managed and restricted. The user is unable to install unapproved applications, email is limited to the approved enterprise account, and all security settings are enabled for Data Protection. Restricting the applications allowed on the device and enforcing security policies makes it much more difficult for users to leak data through unapproved services. Plus you gain full Data Protection, strong passcodes, and remote wiping. Some MDM tools even detect jailbroken devices. To gain the full benefit of Data Protection, you need to block unapproved apps which could leak data (such as Dropbox and iCloud apps). This isn't always viable, which is why this option is often combined with a captive network to give users a bit more flexibility.

Managed/backhaul network with DLP, etc.

The device uses an on-demand VPN to route all network traffic, at all times, through an enterprise or cloud portal. We call it an "on-demand" VPN because the device automatically shuts it down when there is no network traffic and brings it up before sending traffic – the VPN 'coverage' is comprehensive. "On-demand" here definitely does *not* mean users can bring the VPN up and down as they want. Combined with full device management, the captive network affords complete control over all data moving onto and off the devices. This is primarily used with DLP to manage sensitive data, but it may also be used for application control, or even to allow use of non-enterprise email accounts, which are still monitored. On the DLP front, while we can manage enterprise email without needing a full captive network, this option enables us to also manage data in web traffic.

Full control of the device and network doesn't obviate the need for certain other security options. For example, you might still need encryption or DRM, as these allow use of otherwise insecure cloud and sharing services.

Now that we have covered our security options, our next post will look at picking a strategy.


Watching the Watchers: Protect Credentials

As we continue our march through the Privileged User Lifecycle, we have provisioned the privileged users and restricted access to only the devices they are authorized to manage. The next risk to address is the keys or credentials of these privileged users (P-Users) falling into the wrong hands. The best access and entitlements security controls fail if someone can impersonate a P-User. But the worst risk isn't even compromised credentials. It's not having unique credentials in the first place. You must have seen the old admin password sharing scheme, right? It was used, mostly out of necessity, many moons ago. Administrators needed access to the devices they managed. But at times they needed help, so they asked a buddy to take care of something, and just gave him/her the credentials. What could possibly go wrong? We covered a lot of that in the Keys to the Kingdom. Shared administrative credentials open Pandora's box. Once the credentials are in circulation you can't get them back – which is a problem when an admin leaves the company or no longer has those particular privileges. You can't deprovision shared credentials, so you need to change them. PCI, as the low bar for security (just ask Global Payments), recognizes the issues with sharing IDs, so Requirement 8 is all about making sure anyone with access to protected data uses a unique ID, and that their use is audited – so you can attribute every action to a particular user.

But that's not all! (in my best infomercial voice). What about the fact that some endpoints could be compromised? Even administrative endpoints. So sending admin credentials to that endpoint might not be safe. And what happens when developers hard-code credentials into an application? Why go through the hassle of secure coding – just embed the password right into the application! That password never changes anyway, so what's the risk? So we need to protect credentials as much as whatever they control.

Credential Lockdown

How can we protect these credentials? Locking the credentials away in a vault meets many of the requirements described above. First, if the credentials are stored in a vault, it is harder for admins to share them. Let's not put the cart before the horse, but this makes it pretty easy (and transparent) to change the password after every access, eliminating the sticky-note-under-the-keyboard risk. Going through the vault for every administrative credential access means you have an audit trail of who used which credentials (and presumably which specific devices they were managing) and when. That kind of stuff makes auditors happy. Depending on the deployment of the vault, the administrator may never even see the credentials, as they can be automatically entered on the server if you use a proxy approach to restricting access. This also provides single sign-on to all managed devices, as the administrator authenticates (presumably using multiple factors) to the proxy, which interfaces directly with the vault – again, transparently to the user. So even an administrator's device teeming with malware cannot expose critical credentials. Similarly, an application can make a call to the vault, rather than hard-coding credentials into the app. Yes, the credentials still end up on the application server, but that's still much better than hard-coding the password. So are you sold yet? If you worry about credentials being accessed and misused, a password vault provides a good mechanism for protecting them.
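To make the check-out/check-in flow concrete, here is a minimal sketch of the core vault behavior (our own toy illustration with hypothetical names, not any vendor's product): credentials are released only to entitled users, every access lands in an audit trail, and the password is rotated when the credential is checked back in.

```python
import secrets
from datetime import datetime, timezone

class CredentialVault:
    """Toy password vault: check-out, audit, and rotate-on-check-in."""

    def __init__(self):
        self._secrets = {}        # target -> current password
        self._entitlements = {}   # target -> set of authorized admins
        self.audit_log = []       # who did what, to which target, and when

    def register(self, target, password, admins):
        self._secrets[target] = password
        self._entitlements[target] = set(admins)

    def check_out(self, admin, target):
        if admin not in self._entitlements.get(target, set()):
            self._audit(admin, target, "DENIED")
            raise PermissionError(f"{admin} not entitled to manage {target}")
        self._audit(admin, target, "CHECK_OUT")
        return self._secrets[target]

    def check_in(self, admin, target):
        # Rotate the password so the released credential is effectively single-use
        self._secrets[target] = secrets.token_urlsafe(24)
        self._audit(admin, target, "CHECK_IN_ROTATED")

    def _audit(self, admin, target, action):
        self.audit_log.append((datetime.now(timezone.utc).isoformat(), admin, target, action))

vault = CredentialVault()
vault.register("db-prod-01", "initial-password", admins={"alice"})
pw = vault.check_out("alice", "db-prod-01")   # alice gets the current credential
vault.check_in("alice", "db-prod-01")         # credential rotated after use
```

In a proxy deployment the check-out step happens behind the scenes, so the administrator never handles the password at all; the sketch just shows why sharing, deprovisioning, and attribution problems go away once every access runs through the vault.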
Define Policies

As with most things in security, using a vault involves both technology and process. We will tackle the process first, because without a good process even the best technology has no chance. So before you implement anything you need to define the rules of (credential) engagement. You need to answer some questions.

Which systems and devices need to be involved in the password management system? This may involve servers (physical and/or virtual), network & security devices, infrastructure services (DNS, directory, mail, etc.), databases, and/or applications. Ideally your vault will natively support most of your targets, but broad protection is likely to require some integration work on your end. So make sure any solution you look at has some kind of API to facilitate this integration.

How does each target use the vault? Then you need to decide who (likely by group) can access each target, how long they are allowed to use the credentials (and manage the device), and whether they need to present additional authentication factors to access the device. You'll also define whether multiple administrators can access managed devices simultaneously, and whether to change the password after each check-in/check-out cycle. Finally, you may need to support external administrators (for third party management or business partner integration), so keep that in mind as you work through these decisions.

What kind of administrator experience makes sense? Then you need to figure out the P-User interaction with the system. Will it be via a proxy login, where the user never sees the credentials, or will there be a secure agent on the device to receive and protect the credential? Figure out how the vault supports application-to-database and application-to-application interaction, as those are different from supporting human admins. You'll also want to specify which activities are audited and how long audit logs are kept.

Securing the Vault

If you are putting the keys to the kingdom in this vault, make sure it's secure. You probably will not bring a product in and set your application pen-test ninjas loose on it, so you are more likely to rely on what we call the sniff test. Ask questions to see whether the vendor has done their homework to protect the vault. You should understand the security architecture of the vault. Yes, you may have to sign a non-disclosure agreement to see the details, but it's worth it. You need to know how they protect things. Discuss the threat model(s) the vendor uses to implement that security architecture. Make sure they didn't miss any obvious attack vectors. You also need to poke around their development process a bit and make sure they have a proper SDLC and actually test for security defects before


Vulnerability Management Evolution: Introduction

Back when The Pragmatic CSO was published in 2007, I put together a set of tips for being a better CISO. In fact you can still get the tips (sent one per day for five days) if you register on the Pragmatic CSO site. Not to steal any thunder, but Tip #2 is Prioritize Fiercely. Let's take a look at what I wrote back then.

Tip #2 is all about the need to prioritize. The fact is you can't get everything done. Not by a long shot. So you have a choice. You can just not get to things and hope you don't end up overly exposed. Or you can think about what's important to your business and act to protect those systems first. Which do you think is the better approach? The fact is that any exposure can create problems. But you dramatically reduce the odds of a career-limiting incident if you focus most of your time on the highest profile systems. Maybe it's not good old Pareto's 80/20 rule, but you should be spending the bulk of your time focused on the systems that are most important to your business. Or hope the bad guys don't know which is which.

5 years later that tip still makes perfect sense. No organization, including the biggest of the big, has enough resources. Which means you must make tough choices. Things won't be done when they need to be. Some things won't get done at all. So how do you choose? Unfortunately most organizations don't choose at all. They do whatever is next on the list, without much rhyme or reason determining where things land on it. It's the path of least resistance for a tactically oriented environment. Oil the squeakiest wheel. Keep your job. It's all very understandable, but not very effective.

Optimally, resources are allocated and priorities set based upon value to the business. In a security context, that means the next thing done should reduce the most risk to your organization. Of course calculating that risk is where things get sticky. Regardless of your specific risk quantification religion, we can all agree that you need data to accurately evaluate these risks and answer the prioritization question. Last year we did a project called Fact-Based Network Security: Metrics and the Pursuit of Prioritization, which dealt with one aspect of this problem: how to make decisions based on network metrics. But the issue is bigger than that. Network exposure is only one factor in the decision-making process. You need to factor in a lot of other data – including vulnerability scans, device configurations, attack paths, application and database posture, security intelligence, benchmarks, and lots of other stuff – to get a full view of the environment, evaluate the risk, and make appropriate prioritization decisions. Historically, vulnerability scanners have provided a piece of that data, telling you which devices were vulnerable to what attacks. The scanners didn't tell you whether the devices were really at risk – only whether they were vulnerable.
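As a toy illustration of that kind of decision support (our own simplified sketch with made-up weights and fields, not any product's scoring model), prioritization blends raw scanner severity with business and threat context such as asset value, exposure along an attack path, and available exploits:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    host: str
    cvss: float              # raw severity from the scanner (0-10)
    asset_value: float       # business importance of the system (0-1)
    internet_exposed: bool   # reachable along an attack path from outside?
    exploit_available: bool  # threat intelligence: public exploit exists?

def risk_score(f: Finding) -> float:
    """Blend severity with business and threat context (illustrative weights)."""
    score = f.cvss * f.asset_value
    if f.internet_exposed:
        score *= 1.5
    if f.exploit_available:
        score *= 1.3
    return round(score, 2)

findings = [
    Finding("crm-db", cvss=7.5, asset_value=0.9, internet_exposed=False, exploit_available=True),
    Finding("test-box", cvss=9.8, asset_value=0.2, internet_exposed=False, exploit_available=False),
    Finding("web-portal", cvss=6.1, asset_value=0.8, internet_exposed=True, exploit_available=True),
]

# Fix the highest-risk items first, not simply the highest raw CVSS scores
for f in sorted(findings, key=risk_score, reverse=True):
    print(f.host, risk_score(f))
```

The ordering that falls out of the context-weighted score is the point: the internet-exposed, exploitable system on a valuable asset rises above the "critical" finding sitting on a throwaway test box.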
From Tactical to Strategic

Organizations have traditionally viewed vulnerability scanners as a tactical product, largely commoditized, and only providing value around audit time. How useful is a 100-page vulnerability report to an operations person trying to figure out what to fix next? Though the 100-page report did make the auditor smile, as it provides a nice listing of all the audit deficiencies to address in the findings of fact. At the recent RSA Conference 2012, we definitely saw a shift from largely compliance-driven messaging to a more security-centric view. It's widely acknowledged that compliance provides a low (okay – very low) bar for security, and that just isn't high enough. So more strategic security organizations need better optics. They need the ability to pull in a lot of threat-related data, reference it against an understanding of what is vulnerable, and figure out what is at risk. Yesterday's vulnerability scanners are evolving to meet this need, and are emerging as a much more strategic component of an organization's control set than in the past. So we are starting a new series to tackle this evolution – we call it Vulnerability Management Evolution. As with last year's SIEM Replacement research, we believe it is now time to revisit your threat management/vulnerability scanning strategy. Not necessarily to swap out products, services, or vendors, but to ensure your capabilities map to what you need now and in the future. We will start by covering the traditional scanning technologies and then quickly move on to the advanced capabilities you will need to start leveraging these platforms for decision support. Yes, decision support is the fancy term for helping you prioritize.

Platform Emergence

As we've discussed, you need more than just a set of tactical scans generating a huge list of things you'll never get to. You need information that helps you decide how to allocate resources and prioritize efforts. We believe what used to be called a "vulnerability scanner" is evolving into a threat management platform. Sounds spiffy, eh? When someone says platform, that usually indicates use of a common data model as the foundation, with a number of different applications riding on top to deliver value to customers. You don't buy a platform per se. You buy applications that leverage a platform to solve the problems you have. That's exactly what we are talking about here. But traditional scanning technology isn't a platform in any sense of the word, so this vulnerability management evolution requires a definite technology evolution. We are talking about growth from a single-purpose product into a multi-function platform. This evolved platform encompasses a number of different capabilities, starting with the tried and true device scanner and extending to database and application scanning and risk scoring. But we don't want to spoil the fun today – we will describe not just the core technology that enables the platform, but also the critical enterprise integration points and bundled value-added technologies (such as attack path analysis, automated pen testing, benchmarking, et al) that differentiate a tactical product decision from a strategic platform deployment. We will also talk about the enterprise features you need from a platform, including


Incite 3/28/2012: Gone Tomorrow

A recent Tweet from Shack was pretty jarring.

Old friend from college died today. Got some insane rare lung disease out of nowhere, destroyed them. Terrifying. 37 years old. :/

Here today. Gone tomorrow. It's been a while since I have ranted about the importance of enjoying (most) every day. About spending time with the people who matter to you. People who make you better, not break you down. Working at something you like, not something you tolerate. Basically making the most of each day, which most of us don't do very well. Myself included. This requires a change in perspective. Enjoying not just the good days but also the bad ones. I know the idea of enjoying a bad day sounds weird. It's kind of like sales. Great sales folks have convinced themselves that every no is one step closer to a yes. Are they right? Inevitably, at some point they will sell something to someone, so they are in fact closer to a 'yes' with every 'no'. So a bad day means you are closer to a good day. That little change in perspective can have a huge impact on your morale.

The challenge is that you have to live through bad days to appreciate good days. It takes a few cycles through the ebbs and flows to realize that this too shall pass. Whatever it is. It's hard to have that patience when you are young. Everything is magnified. The highs are really high. And the lows, well, you know. You tend to remember the lows a lot longer than the highs. So a decade passes and you wonder what happened? You question all the time you wasted. The decisions you made. The decisions you didn't. How did you turn 30? Where did the time go? The time is gone. And it gets worse. My 30s were a blur. 3 kids. Multiple jobs. A relocation. I was so busy chasing things I didn't have, I forgot to enjoy the things I did. I'm only now starting to appreciate the path I'm on. To realize I needed the hard times. And to enjoy the small victories and have a short memory about the minor defeats.

I was a guest speaker at Kennesaw State yesterday, talking to a bunch of students studying security. There were some older folks there. You know, like 30. But mostly I saw kids, just starting out. I didn't spend a lot of time talking about perspective because kids don't appreciate experience. They still think they know it all. Most kids, anyway. These kids need to screw up a lot of things. And soon. They need to get on with bungling anything and everything. I didn't say that, but I should have. Because all these kids actually have is time. Time to gain the experience they'll need to realize they don't know everything. Dave's college friend doesn't have any more time. He's gone. If you are reading this, you are not. Enjoy today, even if it's a crappy day. Because the crappy days make you appreciate the good days to come.

–Mike

Photo credits: "Free Beer Tomorrow Neon Sign" originally uploaded by Lore SR

Heavy Research

We're back at work on a variety of our blog series. So here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can access all of our content in its unabridged glory.

  • Defending iOS Data: Securing Data on Partially-Managed Devices
  • Watching the Watchers (Privileged User Management): The Privileged User Lifecycle
  • Watching the Watchers (Privileged User Management): Restrict Access
  • Understanding and Selecting DSP: Technical Architecture

Incite 4 U

This sounds strangely familiar… It seems our friend Richard Bejtlich spent some time on Capitol Hill recently, and had a Groundhog Day experience. You know, the new regime asking him questions he answered back in 2007. Like politicians are going to remember anything from 2007. Ha! They can't even remember their campaign promises from two years ago (yup, I'll be here all week). So he went back into the archives to remind everyone what he's been saying for years. You know, reduce attack surface by identifying all egress points and figuring out which ones need to be protected. And monitor both those egress paths and allegedly friendly networks. Though I think over the past 5 years we have learned that no networks are friendly. Not for long, anyway. Finally, Richard also recommended a Federal I/R team be established. All novel ideas. None really implemented. But on the good news front, the US Government spends a lot of money each year on security products. – MR

Perverse economics: I'm going to go out on a limb and make a statement about vulnerability disclosure. After years of watching, and sometimes participating in, the debate, I finally think I have the answer. There is only one kind of responsible disclosure, and the economics are so screwed up that it might as well be a cruddy plot device in a bad science fiction novel. Researchers should disclose vulnerabilities privately to vendors. Vendors are then responsible for creating timely patches. Users are then responsible for patching their systems within a reasonable period. Pretty much anything else screws at minimum users, and likely plenty of other folks. (And this doesn't apply if something is already in the wild.) But as Dennis Fisher highlights, the real world never works that way. Today it's more economically viable for researchers to sell their exploits to governments, which will use them against some other country, if not their own citizens. It's more economically viable for vendors to keep vulnerabilities quiet so they don't have to patch. And users? Well, no one seems to care much about them, but scrambling to patch sure isn't in their economic interest. It seems 'responsible' means 'altruistic', and we all know where human nature takes us from there. – RM

Scoring credit: Hackers have been stealing credit reports and financial data from – where else? – credit scoring agencies and selling the data to the highest bidder. Shocking, I know. Seems they are abusing the sooper-secure credit score user validation system; asking "which bank holds


iOS Data Security: Securing Data on Partially-Managed Devices

Our last two posts covered iOS data security options on unmanaged devices; now it's time to discuss partially managed devices. Our definition is:

Devices that use a configuration profile or Exchange ActiveSync policies to manage certain settings, but the user is otherwise still in control of the device.

The device is the user's, but they agree to some level of corporate management. The following policies are typically deployed onto partially-managed devices via Exchange ActiveSync:

  • Enforce passcode lock.
  • Disable simple passcode.
  • Enable remote wipe.

This, in turn, enables Data Protection on supporting hardware (including all models currently for sale). In addition, you can also add the following using iOS configuration profiles – which can also enforce all the previous policies except remote wiping, unless you also use a remote wipe server tool:

  • On-demand VPN for specific domains (not all traffic, but all enterprise traffic).
  • Manual VPN for access to corporate resources.
  • Digital certificates for access to corporate resources (VPN or SSL).
  • Installation of custom enterprise applications.
  • Automatic wipe on failed passcode attempts (the number of attempts can be specified, unlike the user-facing setting in the Settings app, which is simply ON/OFF for wipe after 10 failures).

The key differences between partially and fully managed devices are a) the user can still install arbitrary applications and make settings changes, and b) not all traffic is routed through a mandatory full-time VPN. One key point of administering managed policies on a user-owned device is to ensure that you obtain the user's consent and notify them of what will happen. The user should sign a document saying they understand that although they own the device, by accessing corporate resources they are allowing management, which may include remote wiping of a lost or stolen device – and that the user is responsible for their own backups of personal data.

Enhanced security for existing options

Most of the previous options we have discussed are significantly enhanced when digital certificate, passcode, and Data Protection policies are enforced. This is especially true of all the sandboxed app options – in fact, many vendors in those categories don't support use of their tools without a configuration profile requiring at least a passcode.

Managed Exchange ActiveSync (or equivalent)

Microsoft's ActiveSync protocol, despite its name, is separate from the Exchange mail server and included with alternate products, including some that compete with Exchange. iOS natively supports it, so it is the backbone for managed email on iDevices when a sandboxed messaging app isn't used. By setting the policies listed above, all email is encrypted under the user's passcode using Data Protection. Other content is not protected, but remote wipe is supported.

Custom enterprise sandboxed application

Now that you can install an enterprise digital certificate onto the device and guarantee Data Protection is active, you can also deploy custom enterprise applications that leverage this built-in encryption. This option allows you to use the built-in iOS document viewer within your application's sandbox, which makes it fairly easy to deploy a custom application that provides fully sandboxed and encrypted access to enterprise documents. Combine it with an on-demand VPN tied to the domain name of the server, or a manual VPN, and you have data encrypted both in transit and in storage.
Today a few vendors provide toolkits to build this sort of application. Some are adding document annotation for PDF files, and based on recent announcements we expect to see full editing capabilities added for MS Office document formats as well.
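For readers who haven't seen one, here is a rough sketch of what a configuration profile payload of the kind described above looks like, generated with Python's standard plistlib. The identifiers are hypothetical and the payload keys are illustrative examples modeled on Apple's passcode payload rather than a complete, verified profile; in practice an MDM tool or Apple's own configuration utilities would generate this for you.

```python
import plistlib, uuid

# Illustrative passcode-policy payload; treat the keys as examples, not a verified profile.
passcode_payload = {
    "PayloadType": "com.apple.mobiledevice.passwordpolicy",
    "PayloadIdentifier": "com.example.mdm.passcode",   # hypothetical identifier
    "PayloadUUID": str(uuid.uuid4()),
    "PayloadVersion": 1,
    "forcePIN": True,          # enforce a passcode (which enables Data Protection)
    "allowSimple": False,      # disable simple passcodes
    "maxFailedAttempts": 10,   # wipe after repeated failures
}

profile = {
    "PayloadType": "Configuration",
    "PayloadIdentifier": "com.example.mdm.baseline",    # hypothetical identifier
    "PayloadUUID": str(uuid.uuid4()),
    "PayloadVersion": 1,
    "PayloadDisplayName": "Example partially-managed baseline",
    "PayloadContent": [passcode_payload],
}

with open("baseline.mobileconfig", "wb") as f:
    plistlib.dump(profile, f)
```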


Watching the Watchers: Restrict Access

As we discussed in the Privileged User Lifecycle post, there are a number of aspects to Watching the Watchers. Our first today is Restricting Access. This comes first mostly because it reduces your attack surface. We want controls to ensure administrators only access the devices they are authorized to manage. There are a few ways to handle this restriction:

  • Device-centric (Status Quo): Far too many organizations rely on their existing controls, which include authentication and other server-based access control mechanisms.
  • Network-based Isolation: Tried and true network segmentation approaches enable you to isolate devices (typically by group) and only allow authorized administrators access to the networks on which they live.
  • PUM Proxy: This entails routing all management communications through a privileged user management proxy server or service which enforces access policies. The devices only accept management connections from the proxy server, and do not allow direct management access.

There are benefits and issues to each approach, so ultimately you'll be making some kind of compromise. So let's dig into each approach and highlight what's good and what's not so good.

Device-centricity (Status Quo)

There are really two levels of status quo. The first is common authentication, which we understand in this context is not really "restricting access" effectively. Obviously you could do a bit to make authentication harder to defeat, including strong passwords and/or multi-factor authentication. You would also integrate with an existing identity management (IDM) platform to keep entitlements current. But ultimately you are relying on credentials to keep unauthorized folks from managing your critical devices. And basic credentials can be defeated. Many other organizations use server access control capabilities, which are fairly mature. This involves loading an agent onto each managed device and enforcing the access policy on the device. The agent-based approach offers rather solid security – the risk becomes compromise of the (security) agent. Of course there is management overhead to distribute and manage the agents, as well as the additional computational load imposed by the agent. But any device-based approach is in opposition to one of our core philosophies: "If you can't see it, it's much harder to compromise." Device-centric access approaches don't affect visibility at all. This is suboptimal because in the real world new vulnerabilities appear every month on all operating systems – and many of them can be exploited via zero-day attacks. And those attacks provide a "back door" into servers, giving attackers control without requiring legitimate credentials – regardless of agentry on the device. So any device-based method fails if the device is rooted somehow.

Network Segmentation

This entails using network-layer technologies such as virtual LANs (VLANs) and network access control (NAC) to isolate devices and restrict access based on who can connect to specific protected networks. The good news is that many organizations (especially those subject to PCI) have already implemented some level of segmentation. It's just a matter of building another enclave, or trust zone, for each group of servers to protect. As mentioned, it's much harder to break something you can't see. Segmentation requires the attacker to know exactly what they are looking for and where it resides, and to have a mechanism for gaining access to the protected segment.
Of course this is possible – there have been ways to defeat VLANs for years – but vendors have closed most of the very easy loopholes. More problematic to us is that this approach relies on the network operations team. Managing entitlements and keeping devices on the proper segment in a dynamic environment, such as your data center, can be challenging. It is definitely possible, but it's also difficult, and it puts direct responsibility for access restriction in the hands of the network ops team. That can and does work for some organizations, but organizationally this is complicated and somewhat fragile. The other serious complication for this approach is cloud computing – including both private and public clouds. The cloud is key and everybody is jumping on the bandwagon, but unfortunately it largely removes visibility at the physical layer. If you don't really know where specific instances are running, this approach becomes difficult or completely unworkable. We will discuss this in detail later in the series, when we discuss the cloud in general.

PUM Proxy

This approach routes all management traffic through a proxy server. Administrators authenticate to the PUM proxy, presumably using strong authentication. The authenticated administrator gets a view of the devices they can manage, and establishes a management session directly to the device. Another possible layer of security involves loading a lightweight agent on every managed device to handle the handshake and mutual authentication with the PUM proxy, and to block management connections from unauthorized sources. This approach is familiar to anyone who has managed cloud computing resources via vCenter (in VMware land) or a cloud console such as Amazon Web Services. You log in and see the devices/instances you can manage, and proceed accordingly. This fits our preference for providing visibility only into devices that can legitimately be managed. It also provides significant control over granular administrative functions, as commands can be blocked in real time (it is a man in the middle, after all). Another side benefit is what we call the deterrent effect: administrators know all their activity runs through a central device and is typically heavily monitored – as we will discuss in depth later. But any proxy presents issues, including a possible single point of failure and additional latency for management sessions. Some additional design and architecture work is required to ensure high availability and reasonable efficiency. It's a bad day for the security team if ops can't do their jobs. And periodic latency testing is called for, to make sure the proxy doesn't impair productivity. And finally: as with virtualization and cloud consoles, if you own the proxy server, you own everything in the environment. So the security of the proxy is paramount.

All these approaches are best in different environments, and each entails its own compromises. For those just starting to experiment with privileged user management, a PUM proxy is typically the path of least resistance.
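As a minimal sketch of the proxy's core decision (our own illustration with hypothetical admins and device names, not how any particular PUM product is built), the proxy authenticates the administrator, shows only the devices they are entitled to manage, and refuses to broker sessions to anything else:

```python
# Hypothetical entitlement table: which admins may manage which devices
ENTITLEMENTS = {
    "alice": {"web-01", "web-02"},
    "bob": {"db-prod-01"},
}

def visible_devices(admin: str) -> set[str]:
    """The proxy only shows devices the authenticated admin can manage."""
    return ENTITLEMENTS.get(admin, set())

def broker_session(admin: str, device: str) -> str:
    """Broker a management session only if the admin is entitled to the device."""
    if device not in visible_devices(admin):
        return f"DENY: {admin} is not authorized to manage {device}"
    # In a real deployment the proxy would open the connection itself and
    # inject the credential from the vault, so the admin never sees it,
    # and every command in the session would be logged for audit.
    return f"ALLOW: session to {device} opened for {admin}"

print(visible_devices("alice"))               # {'web-01', 'web-02'}
print(broker_session("alice", "db-prod-01"))  # DENY ...
print(broker_session("bob", "db-prod-01"))    # ALLOW ...
```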


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting it on white paper networks, or translating it into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.