Securosis

Research

Understanding and Selecting DSP: Extended Features

In the original Understanding and Selecting a Database Activity Monitoring Solution paper we discussed a number of advanced features for analysis and enforcement that have since largely become part of the standard feature set for DSP products. We covered monitoring, vulnerability assessment, and blocking as the minimum feature set required for a Data Security Platform, and we find these in just about every product on the market. Today’s post will cover extensions of those core features, focusing on new methods of data analysis and protection, along with several operational capabilities needed for enterprise deployments. A key area where DSP extends DAM is in novel security features to protect databases and extend protection across other applications and data storage repositories. In other words, these are some of the big differentiating features that affect which products you look at if you want anything beyond the basics, but they aren’t all in wide use.

Analysis and Protection

Query Whitelisting: Query ‘whitelisting’ is where the DSP platform, working as an in-line reverse proxy for the database, only permits known SQL queries to pass through to the database. This is a form of blocking, as we discussed in the base architecture section, but traditional blocking techniques rely on query parameter and attribute analysis. This technique has two significant advantages. First, detection is based on the structure of the query – matching the format of the FROM and WHERE clauses – to determine whether the query matches the approved list. Second is how the list of approved queries is generated. In most cases the DSP maps out the entire SQL grammar – in essence a list of every possible supported query – into a binary search tree for fast comparison. Alternatively, by monitoring application activity in baselining mode, the DSP platform can automatically mark which queries are permitted – and of course the user can edit this list as needed.
Any query not on the whitelist is logged and discarded – it never reaches the database. With this method of blocking, false positives are very low and the majority of SQL injection attacks are blocked automatically. The downside is that the list of acceptable queries must be updated with each application change – otherwise legitimate requests are blocked.

Dynamic Data Masking: Masking is a method of altering data so that the original data is obfuscated but the aggregate value is maintained. Essentially we substitute individual bits of sensitive data with random values that look like the originals. For example, we can substitute a list of customer names in a database with a random selection of names from a phone book. Several DSP platforms provide on-the-fly masking for sensitive data. Others detect and substitute sensitive information prior to insertion. There are several variations, each offering different security and performance benefits. This is different from the dedicated static data masking tools used to build test and development databases from production systems.

Application Activity Monitoring: Databases rarely exist in isolation – more often they are extensions of applications, but we tend to look at them as isolated components. Application Activity Monitoring adds the ability to watch application activity – not only the database queries that result from it. This information can be correlated between the application and the database to gain a clear picture of just how data is used at both levels, and to identify anomalies which indicate a security or compliance failure. There are two variations currently available on the market. The first is Web Application Firewalls, which protect applications from SQL injection, scripting, and other attacks on the application and/or database. WAFs are commonly used to monitor application traffic, but can be deployed in-line or out-of-band to block or reset connections, respectively.
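To make the structure-based matching concrete, here is a minimal sketch (not any vendor’s implementation) of whitelisting on query skeletons. It uses a simple set rather than the binary search tree described above, and `query_skeleton` and `QueryWhitelist` are hypothetical names:

```python
import re

def query_skeleton(sql):
    """Reduce a SQL statement to its structural skeleton by stripping
    literals, so queries are compared on structure rather than values."""
    s = sql.strip().lower()
    s = re.sub(r"'[^']*'", "?", s)   # string literals -> placeholder
    s = re.sub(r"\b\d+\b", "?", s)   # numeric literals -> placeholder
    s = re.sub(r"\s+", " ", s)       # normalize whitespace
    return s

class QueryWhitelist:
    def __init__(self):
        self.approved = set()
        self.baselining = True

    def observe(self, sql):
        skel = query_skeleton(sql)
        if self.baselining:
            self.approved.add(skel)    # learn permitted query structures
            return True
        return skel in self.approved   # enforce: only known shapes pass

wl = QueryWhitelist()
wl.observe("SELECT name FROM customers WHERE id = 42")
wl.baselining = False
print(wl.observe("SELECT name FROM customers WHERE id = 99"))        # True
print(wl.observe("SELECT name FROM customers WHERE id = 99 OR 1=1")) # False
```

Note how the injected tautology changes the query’s structure, so it fails the whitelist check even though the literal values are ignored.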
Some WAFs can integrate with DSPs to correlate activity between the two. The other form is monitoring of application-specific events, such as SAP transaction codes. Some of these commands are evaluated by the application, using application logic in the database. In either case inspection of these events is performed in a single location, with alerts on odd behavior.

File Activity Monitoring: Like DAM, FAM monitors and records all activity within designated file repositories at the user level and alerts on policy violations. Rather than SELECT, INSERT, UPDATE, and DELETE queries, FAM records file opens, saves, deletions, and copies. For both security and compliance, this means you no longer care whether data is structured or unstructured – you can define a consistent set of policies around data, not just database, usage. You can read more about FAM in Understanding and Selecting a File Activity Monitoring Solution.

Query Rewrites: Another useful technique for protecting data and databases from malicious queries is query rewriting. Deployed through a reverse database proxy, incoming queries are evaluated for common attributes and query structure. If a query looks suspicious, or violates security policy, it is substituted with a similar authorized query. For example, a column of Social Security numbers may be omitted from the results by removing those columns from the query. Queries that include the highly suspect “1=1” WHERE clause may simply return the value 1. Rewriting queries protects application continuity: the queries are not simply discarded – they return a subset of the requested data, so false positives don’t cause the application to hang or crash.

Connection-Pooled User Identification: One of the problems with connection pooling – whereby an application uses a single shared database connection for all users – is loss of the ability to track which actions are taken by which users at the database level.
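The query rewriting idea can be sketched as follows. This is a toy example, not product logic: `rewrite_query` is a hypothetical name and `SENSITIVE_COLUMNS` a made-up policy. It strips a disallowed column from the select list and collapses the classic “1=1” tautology to a harmless constant:

```python
import re

SENSITIVE_COLUMNS = {"ssn"}   # hypothetical policy: never return SSNs

def rewrite_query(sql):
    """Substitute a suspicious or policy-violating query with a safer
    variant instead of dropping it, so the application still gets a result."""
    # Tautology check: the classic "1=1" injection pattern
    if re.search(r"\b1\s*=\s*1\b", sql):
        return "SELECT 1"                  # return a harmless constant
    # Strip sensitive columns from the select list
    cols = re.match(r"(?i)select\s+(.*?)\s+from\s", sql)
    if cols:
        kept = [c.strip() for c in cols.group(1).split(",")
                if c.strip().lower() not in SENSITIVE_COLUMNS]
        sql = sql[:cols.start(1)] + ", ".join(kept) + sql[cols.end(1):]
    return sql

print(rewrite_query("SELECT name, ssn FROM users WHERE dept = 'hr'"))
# -> SELECT name FROM users WHERE dept = 'hr'
print(rewrite_query("SELECT * FROM users WHERE name = '' OR 1=1"))
# -> SELECT 1
```

Either way the application receives a well-formed result set, which is the point of rewriting rather than blocking.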
Connection pooling is common and essential for application development, but if all queries originate from the same account, granular security monitoring is difficult. This feature uses a variety of techniques to correlate every query back to an application user for better auditing at the database level.

Discovery

Database Discovery: Databases have a habit of popping up all over the place without administrators being aware – everything from virtual copies of production databases showing up in test environments, to Microsoft Access databases embedded in applications. These databases are commonly not secured to any standard, often run default configurations, and provide targets of opportunity for attackers. Database discovery works by scanning networks looking for databases
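A bare-bones discovery scan might look like this sketch, which simply probes well-known default DBMS listener ports. Real discovery tools also fingerprint service banners and scan far more thoroughly; the port list here is an illustrative subset:

```python
import socket

# Default listener ports for common DBMS platforms (an illustrative subset)
DB_PORTS = {1433: "SQL Server", 1521: "Oracle", 3306: "MySQL",
            5432: "PostgreSQL", 50000: "DB2"}

def discover_databases(hosts, timeout=0.5):
    """Probe each host for well-known database ports via a TCP connect.
    Returns (host, port, suspected DBMS) tuples for every open port."""
    found = []
    for host in hosts:
        for port, dbms in DB_PORTS.items():
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.settimeout(timeout)
            try:
                if s.connect_ex((host, port)) == 0:   # 0 means the port is open
                    found.append((host, port, dbms))
            finally:
                s.close()
    return found
```

An open port is only a hint, of course – production tools confirm by speaking the database’s wire protocol before flagging a host.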


Understanding and Selecting DSP: Core Features

So far this series has introduced Database Security Platforms, provided a full definition of DSP, discussed the origins and evolution of DAM to DSP, and described the technical platform architecture. With that we have covered the basics of a Database Security Platform. It might seem like a short list compared to all the extended features we will cover later, but these are the most important areas, and the primary reasons to buy these tools.

Activity Monitoring

The single defining feature of Database Security Platforms is their ability to collect and monitor all database activity. This includes all administrator and system activity that touches data (short of things like indexing and other autonomous internal functions). We have already covered the various event sources and collection techniques used to power this monitoring, but let’s briefly review what kinds of activity these products can monitor:

All SQL – DML, DDL, DCL, and TCL: Activity monitoring needs to include all interactions with the data in the database, which for most databases (even non-relational) involves some form of SQL (Structured Query Language). SQL breaks down into the Data Manipulation Language (DML, for SELECT/INSERT/UPDATE/DELETE queries), the Data Definition Language (DDL, for creating and changing table structure), the Data Control Language (DCL, for managing permissions and such), and the Transaction Control Language (TCL, for things like rollbacks and commits). As you likely gathered from our discussion of event sources, depending on a product’s collection techniques, it may or may not cover all this activity.

SELECT queries: Although a SELECT query is merely one of the DML activities, due to the potential for data leakage SELECT statements are monitored particularly closely for misuse. Common controls examine the type of data being requested and the size of the result set, and check for SQL injection.
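As a quick illustration of the sublanguages above, a monitor can classify a statement by its leading keyword. This keyword table is an illustrative subset, not a complete SQL grammar:

```python
# Map leading SQL keywords to the sublanguage they belong to; a DSP
# monitoring policy is often scoped to one or more of these classes.
SQL_CLASSES = {
    "select": "DML", "insert": "DML", "update": "DML", "delete": "DML",
    "create": "DDL", "alter": "DDL", "drop": "DDL", "truncate": "DDL",
    "grant": "DCL", "revoke": "DCL",
    "commit": "TCL", "rollback": "TCL", "savepoint": "TCL",
}

def classify(sql):
    """Return the SQL sublanguage of a statement by its first keyword."""
    keyword = sql.strip().split(None, 1)[0].lower()
    return SQL_CLASSES.get(keyword, "UNKNOWN")

print(classify("GRANT dba TO eve"))       # DCL
print(classify("SELECT * FROM payroll"))  # DML
```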
Administrator activity: Most administrator activity is handled via queries, but administrators have a wider range of ways to connect to the database than regular users, and more ability to hide or erase traces of their activity. This is one of the biggest reasons to consider a DSP tool rather than relying on native auditing.

Stored procedures, scripts, and code: Stored procedures and other forms of database scripting may be used in attacks to circumvent user-based monitoring controls. DSP tools should also track this internal activity, if necessary.

File activity, if necessary: While a traditional relational database relies on query activity to view and modify data, many newer systems (and a few old ones) work by manipulating files directly. If you can modify the data by skipping the Database Management System and editing files directly on disk (without breaking everything, as would happen with most relational systems), some level of file monitoring is probably called for.

Even with a DSP tool it isn’t always viable to collect everything, so the product should support custom monitoring policies to select which types of activities and/or user accounts to monitor. For example, many customers deploy a tool only to monitor administrator activity, or to monitor all administrators’ SELECT queries and all updates by everyone.

Policy Enforcement

One of the distinguishing characteristics of DSP tools is that they don’t just collect and log activity – they analyze it in real or near-real time for policy violations. While still technically a detective control (we will discuss preventative deployments later), the ability to alert and respond in or close to real time offers security capabilities far beyond simple log analysis. Successful database attacks are rarely the result of a single malicious query – they involve a sequence of events (such as exploits, alterations, and probing) leading to eventual damage.
Ideally, policies are established to detect such activity early enough to prevent the final loss-bearing act. Even when an alert is triggered after the fact, it facilitates immediate incident response – investigation can begin immediately rather than after days or weeks of analysis. Monitoring policies fall into two basic categories:

Rule-based: Specific rules are established and monitored for violation. They can include specific queries, result counts, administrative functions (such as new user creation and rights changes), signature-based SQL injection detection, UPDATE or other transactions by users of a certain level on certain tables/fields, or any other activity that can be specifically described. Advanced rules can correlate across different parts of a database or even different databases, accounting for data sensitivity based on DBMS labels or through registration in the DAM tool.

Heuristic: Monitoring database activity builds a profile of ‘normal’ activity (we also call this “behavioral profiling”). Deviations then generate policy alerts. Heuristics are complicated and require tuning to work effectively. They are a good way to build a base policy set, especially for complex systems where creating deterministic rules by hand isn’t realistic. Policies are then tuned over time to reduce false positives. For well-defined systems where activity is consistent, such as an application talking to a database using a limited set of queries, they are very useful. Of course heuristics fail when malicious activity is mis-profiled as good activity.

Aggregation and Correlation

One characteristic which Database Security Platforms share with Security Information and Event Management (SIEM) tools is their ability to collect disparate activity logs from a variety of database management systems – and then to aggregate, correlate, and enrich event data.
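A toy version of the behavioral profiling described above, assuming queries have already been reduced to structural skeletons (`BehavioralProfile` is a hypothetical name, not a product API):

```python
from collections import Counter

class BehavioralProfile:
    """Toy behavioral profiler: baseline the frequency of each query
    skeleton per user, then flag activity never seen during learning."""
    def __init__(self):
        self.baseline = Counter()
        self.learning = True

    def observe(self, user, skeleton):
        key = (user, skeleton)
        if self.learning:
            self.baseline[key] += 1   # build the profile of 'normal'
            return None
        if self.baseline[key] == 0:   # deviation from the baseline
            return f"ALERT: {user} issued unseen query shape: {skeleton}"
        return None

profile = BehavioralProfile()
profile.observe("app", "select * from orders where id = ?")
profile.learning = False
print(profile.observe("app", "select * from users; drop table users"))
```

This also makes the failure mode obvious: anything malicious that occurs during the learning phase becomes part of the baseline.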
The combination of multiple data sources across heterogeneous database types enables more complete analysis of activity, rather than working on only one isolated query at a time. And by understanding the Structured Query Language (SQL) syntax of each database platform, DSP can interpret queries and parse their meaning. While a simple SELECT statement might mean the same thing across different database platforms, each database management system (DBMS) is chock full of its own particular syntax. A DSP solution should understand the SQL for each covered platform and be able to normalize events so the analyst doesn’t need to know the ins and outs of each DBMS. For example, if you want to review all privilege escalations on all covered systems, a DSP tool will recognize those events across platforms and present you with a complete report, without you having to understand the SQL particulars of each one.

Assessment

We typically see three types of assessment
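Event normalization of the kind described here can be sketched as a mapping from platform-specific events onto a common type. The platform names and native event strings below are illustrative only:

```python
# Hypothetical mapping of platform-specific privilege-escalation events
# onto one normalized event type, so an analyst can run a single report.
NORMALIZE = {
    ("oracle",    "GRANT DBA"):         "privilege_escalation",
    ("sqlserver", "ALTER SERVER ROLE"): "privilege_escalation",
    ("mysql",     "GRANT ALL"):         "privilege_escalation",
}

def normalize(platform, native_event):
    """Wrap a raw event in a normalized record with a common 'type' field."""
    kind = NORMALIZE.get((platform, native_event), "other")
    return {"platform": platform, "native": native_event, "type": kind}

events = [("oracle", "GRANT DBA"), ("sqlserver", "ALTER SERVER ROLE"),
          ("mysql", "SELECT")]
report = [e for e in (normalize(p, n) for p, n in events)
          if e["type"] == "privilege_escalation"]
print(len(report))   # 2: one cross-platform report, no per-DBMS SQL knowledge
```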


Friday Summary: March 23, 2012

This should not matter: the Square Register. But it does. What do I mean by that? Check out the picture: there’s something catchy and slick about the setup of an iPad cash register and the simple Square device. It looks like something Apple would produce. It seems right at home with – almost a natural extension of – the iPad. I run into small shop owners and independent business people who are using Square everywhere. It’s at Target, right next to the Apple products, and the salesperson said they have been flying off the shelves. People say “Wow, that’s cool.” And that’s how Square is going to win this part of the burgeoning personal payment space. The new competitor, PayPal’s Here, is marketing the superiority of its device, better service, and lower costs. Much of that ‘superiority’ is in the device’s security features – such as encrypting data inside the device – which the early Square devices currently deployed lack. That’s a significant security advantage. But it won’t matter – next to its competitor, ‘Here’ looks about as modern and relevant as a Zip drive. Being in the field of security, and having designed mobile payment systems and digital wallets in the past, I care a great deal about the security of these systems. So I hate to admit that marketing the security of Here is doomed to fail. Simplicity, approachability, and ease of use are more important to winning the customers Square and PayPal are targeting. The tiny cost savings offered by PayPal do not matter to small merchants, and they’re not great enough to make a difference to many mid-sized merchants. A fast, friendly shopping experience is. I’m sure PayPal’s position in the market will help drag along sales, but they need to focus more on experience and less on technical features if they want to win in this space. While I’m sharing my stream of consciousness, there’s something else I want to share with readers that’s not security related.
As someone who writes for a living these days, I appreciate good writers more than ever. Not just skilled use of English, but styles of presentation and the ability to blend facts, quality analysis, and humor. When I ran across Bill Simmons’ post on How to Annoy Fans in 60 Easy Steps on the Grantland web site I was riveted to the story. I confess to being one of the long-suffering fans he discusses – in fact it was the Run TMC Warriors teams, circa 1992, that started my interest in sports. But even if you’re not a Warriors fan, this is a great read for anyone who likes basketball. If you’re a statistician you understand what a special kind of FAIL it is when you consistently snatch defeat from the jaws of victory – for 35 years. It’s a great piece – like a narration of a train wreck in slow motion – and highly recommended. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
Rich quoted on the 2012 DBIR report.
Rich quoted in IT Security News.

Favorite Securosis Posts
Adrian Lane: Incite 3/21/2012: Wheel Refresh. I’ve been there. Twice. My wife was so frustrated with my waffling that she bought me a car.
Mike Rothman: Last week’s Friday Summary. Rich shows he’s human, and not just a Tweetbot automaton. Kidding aside, anyone with kids will understand exactly where Rich is coming from.
Rich: Watching the Watchers: The Privileged User Lifecycle. Mike’s new series is on Privileged User Management – which is becoming a major issue with the increasing complexity of our environments. Not that it wasn’t a big issue before.

Other Securosis Posts
How to Read and Act on the 2012 Verizon Data Breach Investigations Report (DBIR).
Understanding and Selecting DSP: Technical Architecture.
iOS Data Security: Protecting Data on Unmanaged Devices.
iOS Data Security: Secure File Apps for Unmanaged Devices.
Talkin’ Tokenization.

Favorite Outside Posts
Dave Lewis: Too many passwords? Just one does the trick.
Adrian Lane: The NSA Is Building the Country’s Biggest Spy Center (Watch What You Say). There is so much interesting stuff in this article that I don’t know where to start. Great read.
Mike Rothman: Give it five minutes. This is great advice from 37Signals’ Jason Fried. People rarely remember you because of how smart you are. But they definitely remember you if you are a know-it-all, and not in a good way.
Rich: Verizon DBIR 2012: Automated large-scale attacks taking down SMBs. Mike Mimoso’s article on the DBIR. He provides a little more context, and the report is a must-read.

Project Quant Posts
Malware Analysis Quant: Metrics–Monitor for Reinfection.
Malware Analysis Quant: Metrics–Remediate.
Malware Analysis Quant: Metrics–Find Infected Devices.
Malware Analysis Quant: Metrics–Define Rules and Search Queries.
Malware Analysis Quant: Metrics–The Malware Profile.
Malware Analysis Quant: Metrics–Dynamic Analysis.
Malware Analysis Quant: Metrics–Static Analysis.

Research Reports and Presentations
Network-Based Malware Detection: Filling the Gaps of AV.
Tokenization Guidance Analysis: Jan 2012.
Applied Network Security Analysis: Moving from Data to Information.
Tokenization Guidance.
Security Management 2.0: Time to Replace Your SIEM?
Fact-Based Network Security: Metrics and the Pursuit of Prioritization.
Tokenization vs. Encryption: Options for Compliance.

Top News and Posts
Google Hands Out $4500 in Rewards for Chrome 17.0.963.83.
Adam’s analysis of 1Password findings in Secure Password Managers report.
Report: Hacktivists Out-Stole Cybercriminals in 2011. Three times during my career I have heard “20XX was the year of the breach.” And for 2011 that again looks like a legitimate statement.
Bredolab Botmaster ‘Birdie’ Still at Large via Krebs.
Microsoft Donates Software To Protect Exploited Children.
NSA Chief Denies Domestic Spying But Whistleblowers Say Otherwise. Confirm nothing, deny everything, and make counter-accusations.
When you see this from a government, you know you hit the nail on the head.
BBC attacked by Iran?

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Ryan, in response to iOS Data Security: Secure File Apps for Unmanaged Devices.

Great post, Rich. Another thing to note about mobile EDRM is that the better solutions will allow you to


Understanding and Selecting DSP: Technical Architecture

One of the key strengths of DSP is its ability to scan and monitor multiple databases running on multiple database management systems (DBMSs) across multiple platforms (Windows, Unix, etc.). The DSP tool aggregates information from multiple collectors to a secure central server. In some cases the central server/management console also collects information, while in other cases it serves merely as a repository for data from collectors. This creates three options for deployment, depending on organizational requirements:

Single Server/Appliance: A single server, appliance, or software agent serves as both the sensor/collection point and the management console. This mode is typically used for smaller deployments.

Two-tier Architecture: This option consists of a central management server and remote collection points/sensors. The central server does no direct monitoring, but aggregates information from remote systems, manages policies, and generates alerts. It may also perform assessment functions directly. The remote collectors may use any of the collection techniques.

Hierarchical Architecture: Collection points/sensors/scanners aggregate to business-level or geographically distributed management servers, which in turn report to an enterprise management server. Hierarchical deployments are best suited for large enterprises, which may have different business unit or geographic needs. They can also be configured to pass only certain kinds of data between the tiers, to manage large volumes of information, maintain unit/geographic privacy, and satisfy policy requirements.

This can be confusing because each server or appliance can manage multiple assessment scanners, network collectors, and agent-based collectors, and may also perform some monitoring directly. But a typical deployment includes a central management server (or cluster) handling all the management functions, with collectors spread out to handle activity monitoring on the databases.
Blocking architecture options

There are two different ways to block queries, depending on your deployment architecture and choice of collection agents.

Agent-based Blocking: The software agent is able to directly block queries – the actual technique varies with the vendor’s agent implementation. Agents may block inbound queries, returned results, or both.

Proxy-based Blocking: Instead of connecting directly to the database, all connections go to a local or network-based proxy (which can be a separate server/appliance or local software). The proxy analyzes queries before passing them to the database, and can block by policy.

We will go into more detail on blocking later in this series, but the important point is that if you want to block, you need to either deploy some sort of software agent or proxy the database connection. Next we will recap the core features of DAM and show the subtle additions to DSP.
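Proxy-based blocking reduces to a simple pattern: evaluate policy first, and only forward queries that pass. A minimal sketch, where the policy check is a stand-in rather than real product logic:

```python
# Minimal sketch of proxy-based blocking: the proxy sits between the
# application and the database, applying policy before forwarding.
def violates_policy(sql):
    return "drop table" in sql.lower()   # stand-in for real policy logic

def proxy_execute(sql, forward):
    """forward() stands in for the real database connection. A blocked
    query never reaches the database at all."""
    if violates_policy(sql):
        return {"blocked": True, "rows": []}
    return {"blocked": False, "rows": forward(sql)}

result = proxy_execute("DROP TABLE users", lambda q: [])
print(result["blocked"])   # True
```

Agent-based blocking follows the same shape, except the check runs inside the database host’s protocol stack instead of in a separate proxy process.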


Talkin’ Tokenization

I want to announce a couple webcasts I’ll be on this week regarding tokenization: one will focus on the grey areas of compliance with tokenization, and the other will offer buyers a list of key evaluation criteria. The first, on Tuesday March 20th, will be Selecting a Tokenization Solution – a Tokenization Buyer’s Guide. This is the last of a three-part series, and I will focus on all the questions you need to ask vendors. As with most security technologies, there are plenty of little ‘gotchas’ to look out for, and pricing options that are not always apparent. There will be a ton of content – some covered in the research paper, some completely new. On Thursday March 22nd it will be What the Task Force Did Not Say, focusing on critical compliance issues which the PCI Council’s Tokenization Guidelines skirted. I will highlight key issues left dangling by the council, as well as specific areas merchants need to consider when using tokenization for PCI scope reduction. I will include advice on how to comply, and address the most common questions I get from merchants considering tokenization. And bring your questions – we’ll leave time for your specific inquiries about the official Guidance or the white paper.


Friday Summary: March 9, 2012

By Adrian Lane: I learned something from the e10+ session during RSA. Usually it’s my least favorite event, but this year was different – it was my favorite, and not just because Rich and Mike were instrumental in putting it together. The consumerization presentation was really informative – the audience responses surprised me – but the breach victim “fireside chat” was awesome. The only way we could mimic the human stress angle in a preparedness drill is to set part of your office on fire during a press conference, or taze IT personnel as they rummage through logs. Don’t discount the stress factor in breach planning. Around the time of the RSA conference I had a few discussions with VCs about technology acquisition. We discussed market trends, total market revenue estimates, sales opportunities, how products should be sold, and what changes in the ‘go-to-market’ strategy were needed to turn the company around. At the end of the day, the investment was a non-starter – not because of the market, or the value of the technology, but because of the level of trust in the management team. They simply got “a bad feeling” they could not overcome. People not trusting people. I know several other bloggers have mentioned the exotic cars in vendor booths on the conference floor this year. What’s the connection with security? Nothing. Absolutely nothing. But they sure pulled in the crowds. Cars and booth babes with matching attire. I admit the first time I swung by Fortinet’s booth it was to see the Ferrari. Sure, it was an unapologetic lure. And it worked. I even took a photo, I was so impressed with the beauty of its engineering. Nice, huh? It’s too easy to be dispassionate about security, especially when talking about cryptography or key management. Heck, I have seen presentations on social engineering with the sex appeal of paint brushes.
How many of you have seen the “blinky light phenomenon”, where buyers prefer hardware over software because there is a very cool-looking (read: tangible) representation of their investment? But security users – or should I say security buyers – are motivated by human factors like everyone else. Too many CTOs I speak with talk about what we should be doing in security, or the right way to solve security problems. They fail to empathize with IT guys who are trying to get multiple jobs done without much fanfare. And many of them don’t want to talk about it – they want to get out of their cubicles for a day, walk around some shiny cars, have someone listen to their security issues, and bring some tchotchkes back to their desks. Human behavior is not just an exploit vector – it’s also part of the solution space. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
Mike quoted on Bank Info Security.
Adrian’s DR article on Deleting Databases.
Rich quoted on Daring Fireball about OS X Gatekeeper.
Adrian talks Tokenization at RSA Conference.
Adrian quoted in CSO on Big Data.

Favorite Securosis Posts
Mike Rothman: Burnout. This is a great post by Rich. Read. it. now.
Adrian Lane: Bringing Sexy back (to Security): Mike’s RSAC 2012 Wrap-up.
Rich: Lazy Deal Analysis in this week’s Incite.

Other Securosis Posts
Incite 3/7/2012: Perspective.
Upcoming Cloud Security Training Courses.
Objectivity Matters.
Implementing DLP: Ongoing Management.
Implementing DLP: Deploy.
The Securosis Guide to RSA 2012.
The Last Friday before the 2012 RSA Conference.
RSA Conference 2012 Guide: Cloud Security.
RSA Conference 2012 Guide: Data Security.
RSA Conference 2012 Guide: Security Management and Compliance.

Favorite Outside Posts
Rich: How’s that secrecy working out? The bad guys talk. We don’t. Guess who has the advantage?
Dave Lewis: Researchers find MYSTERY programming language in Duqu Trojan. It shows both skill and dedication to create your own language to write malware.
But why? Anti-reverse engineering? Sounds spook-y!
Mike Rothman: Heartland 2011. Gunnar revisits the impact of the breach on Heartland’s business operations. Some folks will use this as proof that a high-profile breach is nothing but a brief event. Heartland clearly responded effectively and got their business back on a nice growth path. But don’t make the mistake of assuming correlation = causation. It’s a data point, nothing more, nothing less.
Adrian Lane: The Ruby/GitHub hack: translated. The only lucid discussion of the GitHub incident I’ve seen, and a nice breakdown of the issue.

Project Quant Posts
Malware Analysis Quant: Metrics–Monitor for Reinfection.
Malware Analysis Quant: Metrics–Remediate.
Malware Analysis Quant: Metrics–Find Infected Devices.
Malware Analysis Quant: Metrics–Define Rules and Search Queries.

Research Reports and Presentations
Network-Based Malware Detection: Filling the Gaps of AV.
Tokenization Guidance Analysis: Jan 2012.
Applied Network Security Analysis: Moving from Data to Information.
Tokenization Guidance.
Security Management 2.0: Time to Replace Your SIEM?
Fact-Based Network Security: Metrics and the Pursuit of Prioritization.
Tokenization vs. Encryption: Options for Compliance.

Top News and Posts
We Need to Talk About Android. The practical side of app security.
East Villager is ID’d as leading hacker with Anonymous group.
Consumerization is not BYOD, and what that means for security.
Head of Lulzsec Arrested.
TSA Pooh-Poohs Video Purporting to Defeat Airport Body Scanners. The video is a ‘must see’.
Adobe patches critical flaws. Feels like I write that sentence every week.
Tips for NOT getting Hacked on the Web. I would have said “for the common man”, and I would have included a recommendation not to click email links, but solid advice for basic user protection!
Fake AV attack targets WordPress users. Apparently some people we know have experienced this.
Apple releases Configurator app for Mac.
With all the hoopla surrounding the new iPad and Apple TV releases, you might have missed this important iDevice management tool.
Still Life With Anonymous. In case you missed it, just before the RSA Conference, Imperva released a report containing their findings on Anonymous.
Mitt Romney or Mr. Burns? OK, it’s political, but it’s really funny.
Hackers grab Michael Jackson songs from Sony.
Google Pays Out $47,500 in bug bounties. This program is working, people! Take note.

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a


Understanding and Selecting DSP: Data and Event Collection

In our previous post on DSP components we outlined the evolution of Database Activity Monitoring into Database Security Platforms. A central aspect of that evolution is the progression of event collection mechanisms from native audit, to monitoring network activity, to agent-based activity monitoring. These are all database-specific information sources. The evolution of DAM has been framed by these different methods of data collection. That’s important, because what you can do is highly dependent on the data you can collect. For example, the big reason agents are the dominant collection model is that you need them to monitor administrators – network monitoring can’t do that (and is quite difficult in distributed environments). The development of DAM into DSP also entails examination of a broader set of application-related events. By augmenting the data collection agents we can examine other applications in addition to databases – even including file activity. This means it has become possible to monitor SAP and Oracle application events – in real time. It’s possible to monitor user activity in a Microsoft SharePoint environment, regardless of how data is stored. We can even monitor file-based non-relational databases, and we can perform OS, application, and database assessments through the same system. A slight increase in the scope of data collection yields much broader application-layer support. Not that you necessarily need it – sometimes you want a narrow database focus, while other times you will need to cast a wider net. We will describe all the options to help you decide which best meets your needs. Let’s take a look at the core data collection methods used by customers today:

Event Sources

Local OS/Protocol Stack Agents: A software ‘agent’ is installed on the database server to capture SQL statements as they are sent to the database. The events captured are returned to the remote Database Security Platform.
Events may optionally be inspected locally by the agent for real-time analysis and response. The agents are either deployed into the host's network protocol stack or embedded into the operating system, to capture communications to and from the database. They see all external SQL queries sent to the database, including their parameters, as well as query results. Most critically, they should capture administrative activity from the console that does not come through normal network connections. Some agents provide an option to block malicious activity – either by dropping the query rather than transmitting it to the database, or by resetting the suspect user's database connection. Most agents embed into the OS in order to gain full session visibility, and so require a system reboot during installation. Early implementations struggled with reliability and platform support problems, occasionally causing system hangs, but fortunately these issues are now rare. Current implementations tend to be reliable, with low overhead and good visibility into database activity. Agents are a basic requirement for any DSP solution, as they are a relatively low-impact way of capturing all SQL statements – including those originating from the console and arriving via encrypted network connections. Performance impact is now minimal, but you will still want to test before deploying into production. Network Monitoring: An exceptionally low-impact method of monitoring SQL statements sent to the database. By monitoring the subnet (via network mirror ports or taps), statements intended for a database platform are 'sniffed' directly from the network. This method captures the original statement, the parameters, the returned status code, and any data returned as part of the query operation. All collected events are returned to a server for analysis.
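To make the capture step concrete, here is a minimal sketch, in Python, of the kind of wire-protocol parsing a monitor performs on sniffed traffic. It assumes the simplest possible case – an unencrypted MySQL client packet carrying a single COM_QUERY command – and ignores TLS, packet fragmentation, result sets, and every other database dialect, all of which real products must handle:

```python
def extract_mysql_query(payload: bytes):
    """Parse one raw MySQL client packet and return the SQL text if it is a
    COM_QUERY, else None. MySQL packet layout: 3-byte little-endian payload
    length, 1-byte sequence id, then a command byte and its arguments."""
    if len(payload) < 5:
        return None
    length = int.from_bytes(payload[0:3], "little")
    command = payload[4]
    if command != 0x03:          # 0x03 = COM_QUERY
        return None
    # The query text is everything after the command byte, up to the declared length.
    return payload[5:4 + length].decode("utf-8", errors="replace")

# Build a COM_QUERY packet the way a MySQL client would send it.
sql = b"SELECT id, name FROM users WHERE id = 42"
body = b"\x03" + sql
packet = len(body).to_bytes(3, "little") + b"\x00" + body

print(extract_mysql_query(packet))   # SELECT id, name FROM users WHERE id = 42
```

A production monitor would feed packets into logic like this from a mirror port, tap, or local protocol-stack agent, then pass the extracted statement and its response code to the analysis engine rather than printing it.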
Network monitoring has the least impact on the database platform and remains popular for monitoring less critical databases, where capturing console activity is not required. Lately the line between network monitoring capabilities and local agents has blurred. Network monitoring is now commonly deployed via a local agent monitoring network traffic on the database server itself, thereby enabling monitoring of encrypted traffic. Some of these ‘network’ monitors still miss console activity – specifically privileged user activity. On a positive note, installation as a user process does not require a system reboot or cause adverse system-wide side effects if the monitor crashes unexpectedly. Users still need to verify that the monitor is collecting database response codes, and should determine exactly which local events are captured, during the evaluation process. Memory Scanning: Memory scanners read the active memory structures of a database engine, monitoring new queries as they are processed. Deployed as an agent on the database platform, the memory scanning agent activates at pre-determined intervals to scan for SQL statements. Most memory scanners immediately analyze queries for policy violations – even blocking malicious queries – before returning results to a central management server. There are numerous advantages to memory scanning, as these tools see every database operation, including all stored procedure execution. Additionally, they do not interfere with database operations. You’ll need to be careful when selecting a memory scanning product – the quality of the various products varies. Most vendors only support memory scanning on select Oracle platforms – and do not support IBM, Microsoft, or Sybase. Some vendors don’t capture query variables – only the query structure – limiting the usefulness of their data. And some vendors still struggle with performance, occasionally missing queries. 
But other memory scanners are excellent enterprise-ready options for monitoring events and enforcing policy. Database Audit Logs: Audit logs are still commonly used to collect database events. Most databases have native auditing features built in; they can be configured to generate an audit trail that includes system events, transactional events, user events, and other details not available from any other source. The stream of data is typically sent to one or more locations assigned by the database platform, either in a file or within the database itself. Logging can be implemented through an agent, or logs can be queried remotely from the DSP platform using SQL. Audit logs are preferred by some organizations because they provide a series of database events from the perspective of the database. The audit trail reconciles database rollbacks, errors, and uncommitted statements – producing an accurate representation of changes to the database.
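As a sketch of the reconciliation just described, the following Python fragment replays a hypothetical audit trail and discards statements from rolled-back transactions. The record format here is invented for illustration – every native audit facility (Oracle, SQL Server, DB2, and so on) has its own layout – but the replay logic is the essence of deriving committed changes from a raw event stream:

```python
import csv
import io

# Hypothetical comma-delimited audit trail: timestamp, session, action, statement.
SAMPLE_AUDIT = """\
2012-02-20T10:00:01,sess1,EXECUTE,UPDATE accounts SET balance = 0 WHERE id = 7
2012-02-20T10:00:02,sess1,ROLLBACK,
2012-02-20T10:00:05,sess2,EXECUTE,INSERT INTO audit_flags VALUES (7)
2012-02-20T10:00:06,sess2,COMMIT,
"""

def committed_changes(audit_text):
    """Replay an audit trail and keep only statements whose session later
    committed, discarding work that was rolled back."""
    pending = {}      # session -> statements awaiting COMMIT/ROLLBACK
    committed = []
    for ts, session, action, stmt in csv.reader(io.StringIO(audit_text)):
        if action == "EXECUTE":
            pending.setdefault(session, []).append(stmt)
        elif action == "COMMIT":
            committed.extend(pending.pop(session, []))
        elif action == "ROLLBACK":
            pending.pop(session, None)
    return committed

print(committed_changes(SAMPLE_AUDIT))
# ['INSERT INTO audit_flags VALUES (7)']
```

In a DSP deployment this normalization happens in the agent or the central server, so audit-derived events line up with statements captured from the network or from memory.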


Understanding and Selecting DSP: Core Components

Those of you familiar with DAM already know that over the last four years DAM solutions have been bundled with assessment and auditing capabilities. Over the last two years we have seen near universal inclusion of discovery and rights management capabilities. DAM is the centerpiece of a database security strategy, but as a technology it is just one of a growing number of important database security tools. We have already defined Database Security Platform, so now let's spend a moment looking at the key components, how we got here, and where the technology and market are headed. We feel this will fully illustrate the need for the name change. Database Security Platform Origins The situation is a bit complicated, so we include a diagram that maps out the evolution. Database Activity Monitoring originated from leveraging core database auditing features, but quickly evolved to include supporting event collection capabilities: Database Auditing using native audit capabilities. Database Activity Monitoring using network sniffing to capture activity. Database Activity Monitoring with server agents to capture activity. So you either used native auditing, a network sniffer, or a local agent to track database activity. Native auditing had significant limitations – particularly performance – so we considered the DAM market distinct from native capabilities. Due to customer needs, most products combined network monitoring and agents into single products – perhaps with additional collection capabilities, such as memory scanning. The majority of deployments were to satisfy compliance or audit requirements, followed by security. There was also a range of distinct database security tools, generally sold standalone: Data Masking to generate test data from production data, and to protect sensitive information while retaining important data size and structural characteristics.
Database Assessment (sometimes called Database Vulnerability Assessment) to assess database configurations for security vulnerabilities and general configuration policy compliance. User Rights Management to evaluate user and group entitlements, identify conflicts and policy violations, and otherwise help manage user rights. File Activity Monitoring to monitor (and sometimes filter) non-database file activity. Other technologies have started appearing as additional features in some DAM products: Content Discovery and Filtering to identify sensitive data within databases and even filter query results. Database Firewalls, which are essentially DAM products placed inline and set to filter attack traffic, not merely monitor activity. The following diagram shows where we are today: As the diagram shows, many of these products and features have converged onto single platforms. There are now products on the market which contain all these features, plus additional capabilities. Clearly the term "Database Activity Monitoring" only covers a subset of what these tools offer. So we needed a new name to better reflect the capabilities of these technologies. As we looked deeper we realized how unusual standalone DAM products were (and still are). It gradually became clear that we were watching the creation of a platform, rather than the development of a single-purpose product. We believe the majority of database security capabilities will be delivered either as a feature of a database management system, or in these security products. We have decided to call them Database Security Platforms, as that best reflects the current state of the market and how we see it evolving. Some of these products include non-database features designed for data center security – particularly File Activity Monitoring and combined DAM/Web Application Firewalls.
We wouldn’t be surprised to see this evolve into a more generic data center security play, but it’s far too early to see that as a market of its own. Market and Product Evolution We already see products differentiating based on user requirements. Even when feature parity is almost complete between products, we sometimes see vendors shifting them between different market sectors. We see primary use cases, and we expect products to differentiate along these lines over time: Application and Database Security: These products focus more on integrating with Web Application Firewalls and other application security tools. They place a higher priority on vulnerability and exploit detection and blocking; and sell more directly to security, application, and database teams. Data and Data Center Security: These products take a more data-centric view of security. Their capabilities will expand more into File Activity Monitoring, and they will focus more on detecting and blocking security incidents. They sell to security, database, and data center teams. Audit and Compliance: Products that focus more on meeting audit requirements – and so emphasize monitoring capabilities, user rights management, and data masking. While there is considerable feature overlap today, we expect differentiation to increase as vendors pursue these different market segments and buying centers. Even today we see some products evolving primarily in one of these directions, which is often reflected in their sales teams and strategies. This should give you a good idea of how we got here from the humble days of DAM, and why this is more than just a rebranding exercise. We don’t know of any DAM-only tools left on the market, so that name clearly no longer fits. As a user and/or buyer we also think it’s important to know which combination of features to look at, and how they can indicate the future of your product. 
Without revisiting the lessons learned from other security platforms, suffice it to say that you will want a sense of which paths the vendor is heading down before locking yourself into a product that might not meet your needs in 3-5 years.


Webcast Wednesday 22nd: Tokenization Scope Reduction

Just a quick announcement that this Wednesday I will be doing a webcast on how to reduce PCI-DSS scope and audit costs with tokenization. This will cover the meaty part of our Tokenization Guidance paper from last year. In the past I have talked about issues with the PCI Council's Tokenization supplement; now I will dig into how tokenization affects credit card processing systems, and how supplementary systems can fall out of scope. The webcast will start at 11am PST and run for an hour. You can sign up at the sponsor's web site.


Friday Summary: February 17, 2012

I managed to take a couple days off last week, and got out of town. I went camping with a group of friends, all from very different backgrounds, with totally unrelated day jobs – but we all love camping in the desert. Whenever we're BSing by the campfire, they ask me about current events in security. There's almost always a current data breach, 'Anonymous' attack, or whatever. This group is decidedly non-technical and does not closely follow the events I do. This trip the question on their minds was "What's the big deal with SOPA?" Staying away from the hyperbole and accusations on both sides, I explained that the bill would have given content creators the ability to shut down web sites without due process if they suspected those sites of hosting or distributing pirated content. I went into some of the background around issues of content piracy; sharing of intellectual property; and how digital media, rights management, and parody make the entire discussion even more cloudy. I was surprised that this group – on average a decade older than me – reacted more negatively to SOPA than I did. One of them had heard about the campaign contributions and was pissed. "Politicians on the take, acting on behalf of greedy corporations!" was the general sentiment. "My sons share music with me all the time – and I am always both happy and surprised when they take an interest in my music, and buy songs from iTunes after hearing it at my place." And, "Who the hell pirates movies when you can stream them from Netflix for a couple bucks a month?" I love getting non-security people's reactions to security events. It was a very striking reaction from a group I would not have expected to get all that riled up about it. The response to SOPA has been interesting because it crosses political and generational lines. And I find it incredibly ironic that the first thing both sides state is that they are against piracy – but they cannot agree on what constitutes piracy vs. fair use.
One of my favorite slogans from the whole SOPA debate was It's No Longer OK To Not Know How The Internet Works, accusing the backers of the legislation of being completely ignorant of a pervasive technology that has already changed the lives of most people. Even people I do not consider technically sophisticated seem to "get it", as we saw with the groundswell of support. I am willing to bet that continuing advances in technology will make it harder and harder for organizations like the RIAA to harass their customers. Maybe invest some of that money in a new business model? I know, that's crazy talk! On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
  • Adrian's OWASP presentation is live.
  • Adrian's Dark Reading post on The Financial Industry's Effect On Database Security.
  • Rich's TidBITS posts: Mac OS X 10.8 Mountain Lion Stalks iOS & Gatekeeper Slams the Door on Mac Malware Epidemics.

Favorite Securosis Posts
  • Mike Rothman: RSAG 2012: Application Security. Love Adrian's summary of what you'll see at the RSA Conference around AppSec. Especially since we get to see SECaaS in print.
  • Adrian Lane: OS X 10.8 Gatekeeper in Depth. Real. Practical. Security.

Other Securosis Posts
  • RSA Conference 2012 Guide: Key Themes.
  • RSA Conference 2012 Guide: Network Security.
  • Incite 2/15/2012: Brushfire.
  • Friday Summary: February 10, 2012.
  • [New White Paper] Network-Based Malware Detection: Filling the Gaps of AV.
  • Implementing and Managing a Data Loss Prevention (DLP) Solution: Index of Posts.
  • Implementing DLP: Starting Your Integration.
  • Implementing DLP: Deploying Network DLP.
  • Implementing DLP: Deploying Storage and Endpoint.

Favorite Outside Posts
  • Mike Rothman: The Sad and Ironic Competition Within the Draft "Expert" Community. Whether you are a football fan or not, read this post and tell me there aren't similarities in every industry. There are experts, more who think they are experts, and then lots of other jackasses who think breaking folks down is the best way to make themselves look good. They are wrong…
  • Adrian Lane: Printing Drones. I can think of several good uses – and a couple dozen evil ones – for something like this. Control and power will be a bit tricky, but the potential for amusement is staggering!

Project Quant Posts
  • Malware Analysis Quant: Metrics – Build Testbed.
  • Malware Analysis Quant: Metrics – Confirm Infection.
  • Malware Analysis Quant: Monitoring for Reinfection.
  • Malware Analysis Quant: Remediate.
  • Malware Analysis Quant: Find Infected Devices.
  • Malware Analysis Quant: Defining Rules.
  • Malware Analysis Quant: The Malware Profile.

Research Reports and Presentations
  • Network-Based Malware Detection: Filling the Gaps of AV.
  • Tokenization Guidance Analysis: Jan 2012.
  • Applied Network Security Analysis: Moving from Data to Information.
  • Tokenization Guidance.
  • Security Management 2.0: Time to Replace Your SIEM?
  • Fact-Based Network Security: Metrics and the Pursuit of Prioritization.
  • Tokenization vs. Encryption: Options for Compliance.

Top News and Posts
  • Flash Player Security Update via Krebs, and a Java Security Update.
  • Gatekeeper for Mountain Lion.
  • Vote for Web Hacking Top Ten.
  • Not so random numbers lead to bad keys? Who knew?
  • Paget Demos Wireless Credit Card Theft.
  • Carrier IQ Concerns.

Blog Comment of the Week
No comments this week. Starting to think our comments feature is broken. Oh, wait, it is!


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.