
Database Security

Wednesday, March 07, 2012

Understanding and Selecting DSP: Data and Event Collection

By Adrian Lane

In our previous post on DSP components we outlined the evolution of Database Activity Monitoring into Database Security Platforms. A central aspect of that evolution is how event collection mechanisms progressed from native audit, to network activity monitoring, to agent-based activity monitoring. These are all database-specific information sources, and these different collection methods have framed the development of DAM. That matters, because what you can do is highly dependent on the data you can collect. For example, the big reason agents are the dominant collection model is that you need them to monitor administrators – network monitoring can’t do that (and is quite difficult in distributed environments).

The development of DAM into DSP also entails examination of a broader set of application-related events. By augmenting the data collection agents we can examine other applications in addition to databases – even including file activity. This means that it has become possible to monitor SAP and Oracle application events – in real time. It’s possible to monitor user activity in a Microsoft SharePoint environment, regardless of how data is stored. We can even monitor file-based non-relational databases. We can perform OS, application, and database assessments through the same system.

A slight increase in the scope of data collection means much broader application-layer support. Not that you necessarily need it – sometimes you want a narrow database focus, while other times you will need to cast a wider net. We will describe all the options to help you decide which best meets your needs.

Let’s take a look at some of the core data collection methods used by customers today:

Event Sources

Local OS/Protocol Stack Agents: A software ‘agent’ is installed on the database server to capture SQL statements as they are sent to the databases. The events captured are returned to the remote Database Security Platform. Events may optionally be inspected locally by the agent for real-time analysis and response. The agents are either deployed into the host’s network protocol stack or embedded into the operating system, to capture communications to and from the database. They see all external SQL queries sent to the database, including their parameters, as well as query results. Most critically, they should capture administrative activity from the console that does not come through normal network connections. Some agents provide an option to block malicious activity – either by dropping the query rather than transmitting it to the database, or by resetting the suspect user’s database connection.

Most agents embed into the OS in order to gain full session visibility, and so require a system reboot during installation. Early implementations struggled with reliability and platform support problems, causing system hangs, but these issues are now fortunately rare. Current implementations tend to be reliable, with low overhead and good visibility into database activity. Agents are a basic requirement for any DSP solution, as they are a relatively low-impact way of capturing all SQL statements – including those originating from the console and arriving via encrypted network connections.

Performance impact these days is very limited, but you will still want to test before deploying into production.

Network Monitoring: An exceptionally low-impact method of monitoring SQL statements sent to the database. By monitoring the subnet (via network mirror ports or taps) statements intended for a database platform are ‘sniffed’ directly from the network. This method captures the original statement, the parameters, the returned status code, and any data returned as part of the query operation. All collected events are returned to a server for analysis. Network monitoring has the least impact on the database platform and remains popular for monitoring less critical databases, where capturing console activity is not required.

Lately the line between network monitoring capabilities and local agents has blurred. Network monitoring is now commonly deployed via a local agent monitoring network traffic on the database server itself, thereby enabling monitoring of encrypted traffic. Some of these ‘network’ monitors still miss console activity – specifically privileged user activity. On a positive note, installation as a user process does not require a system reboot or cause adverse system-wide side effects if the monitor crashes unexpectedly. Users still need to verify that the monitor is collecting database response codes, and should determine exactly which local events are captured, during the evaluation process.

Memory Scanning: Memory scanners read the active memory structures of a database engine, monitoring new queries as they are processed. Deployed as an agent on the database platform, the memory scanning agent activates at pre-determined intervals to scan for SQL statements. Most memory scanners immediately analyze queries for policy violations – even blocking malicious queries – before returning results to a central management server. There are numerous advantages to memory scanning, as these tools see every database operation, including all stored procedure execution. Additionally, they do not interfere with database operations.

You’ll need to be careful when selecting a memory scanning product – the quality of the various products varies. Most vendors only support memory scanning on select Oracle platforms – and do not support IBM, Microsoft, or Sybase. Some vendors don’t capture query variables – only the query structure – limiting the usefulness of their data. And some vendors still struggle with performance, occasionally missing queries. But other memory scanners are excellent enterprise-ready options for monitoring events and enforcing policy.

Database Audit Logs: Database Audit Logs are still commonly used to collect database events. Most databases have native auditing features built in; they can be configured to generate an audit trail that includes system events, transactional events, user events, and other data definitions not available from any other sources. The stream of data is typically sent to one or more locations assigned by the database platform, either in a file or within the database itself. Logging can be implemented through an agent, or logs can be queried remotely from the DSP platform using SQL.
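
To make the remote-query option concrete, below is a minimal Python sketch of how a collector might poll Oracle’s traditional audit trail (the DBA_AUDIT_TRAIL view) over a standard connection. The monitoring account, connection string, and polling window are hypothetical; the sketch assumes the python-oracledb driver and that native auditing is already enabled.

    # Hedged sketch: poll the native audit trail from a remote collector.
    import datetime
    import oracledb  # assumes the python-oracledb driver is installed

    AUDIT_QUERY = """
        SELECT os_username, username, action_name, owner, obj_name, returncode, timestamp
          FROM dba_audit_trail
         WHERE timestamp > :since
         ORDER BY timestamp
    """

    def collect_audit_events(conn, since):
        # Pull audit rows newer than the last collection point.
        with conn.cursor() as cur:
            cur.execute(AUDIT_QUERY, since=since)
            columns = [d[0].lower() for d in cur.description]
            for row in cur:
                yield dict(zip(columns, row))

    if __name__ == "__main__":
        # Hypothetical read-only monitoring account and connection string.
        conn = oracledb.connect(user="dsp_monitor", password="example", dsn="dbhost/ORCLPDB1")
        since = datetime.datetime.now() - datetime.timedelta(minutes=5)
        for event in collect_audit_events(conn, since):
            if event["returncode"] != 0:  # failed action: a candidate for an alert
                print("AUDIT ALERT:", event)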

Audit logs are preferred by some organizations because they provide a series of database events from the perspective of the database. The audit trail reconciles database rollbacks, errors, and uncommitted statements – producing an accurate representation of changes made. But the downsides are equally serious. Historically, audit performance was horrible. While the database vendors have improved audit performance and capabilities, and DSP vendors provide great advice for tuning audit trails, bias against native auditing persists. And frankly, it’s easy to mess up audit configurations. Additionally, the audit trail is not really intended to collect SELECT statements – viewing data – but is focused on changes to data or the database system. Finally, as the audit trail is stored and managed on the database platform, it competes heavily for database resources – much more than other data collection methods. But given the accuracy of this data, and its ability to collect internal database events not available to network and OS agent options, audit remains a viable – if not essential – event collection option.

One advantage of using a DSP tool in conjunction with native logs is that it is easier to securely monitor administrator activity. Admins can normally disable or modify audit logs, but a DSP tool may mitigate this risk.

Discovery and Assessment Sources

Network Scans: Most DSP platforms offer database discovery capabilities, either through passive network monitoring for SQL activity or through active TCP scans of open database ports. Additionally, most customers use remote credentialed scanning of internal database structures for data discovery, user entitlement reporting, and configuration assessment. None of these capabilities are new, but remote scanning with read-only user credentials is the standard data collection method for preventative security controls.

There are many more methods of gathering data and events, but we’re focusing on the most commonly used. If you are interested in more depth on the available options, our blog post on Database Activity Monitoring & Event Collection Options provides much greater detail. For those of you who follow our stuff on a regular basis, there’s not a lot of new information there.

Expanded Collection Sources

A couple new features broaden the focus of DAM. Here’s what’s new:

File Activity Monitoring: One of the most intriguing recent changes in event monitoring has been the collection of file activity. File Activity Monitoring (FAM) collects all file activity (read, create, edit, delete, etc.) from local file systems and network file shares, analyzes the activity, and – just like DAM – alerts on policy violations. FAM is deployed through a local agent, collecting user actions as they are sent to the operating system. File monitors cross reference requests against Identity and Access Management (e.g., LDAP and Active Directory) to look up user identities. Policies for security and compliance can then be implemented on a group or per-user basis.
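
As a rough illustration of the FAM flow described above (collect a file event, resolve the user against the directory, check group policy), here is a hedged Python sketch. The directory contents, group names, and policy rules are invented for the example; a real agent would pull them from LDAP/Active Directory and a central policy store.

    # Hedged sketch: evaluate a file event against group-based policy.
    from dataclasses import dataclass
    from datetime import datetime

    # Stand-in for an LDAP/Active Directory lookup.
    DIRECTORY_GROUPS = {
        "alice": {"Finance"},
        "bob": {"Engineering"},
    }

    # Per-group policy: which actions are allowed on which path prefixes (illustrative only).
    POLICY = {
        "Finance": {("/shares/finance", "read"), ("/shares/finance", "edit")},
        "Engineering": {("/shares/source", "read"), ("/shares/source", "edit"), ("/shares/source", "create")},
    }

    @dataclass
    class FileEvent:
        user: str
        action: str   # read, create, edit, delete, ...
        path: str
        when: datetime

    def violates_policy(event: FileEvent) -> bool:
        for group in DIRECTORY_GROUPS.get(event.user, set()):
            for prefix, action in POLICY.get(group, set()):
                if event.path.startswith(prefix) and event.action == action:
                    return False  # explicitly permitted
        return True  # nothing allows it: alert

    # Example: a Finance user deleting a source file trips an alert.
    evt = FileEvent("alice", "delete", "/shares/source/build.py", datetime.now())
    if violates_policy(evt):
        print(f"FAM alert: {evt.user} attempted {evt.action} on {evt.path}")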

This evolution is important for two reasons. The first is that document and data management systems are moving away from strictly relational databases as the storage engine of choice. Microsoft SharePoint, mentioned above, is a hybrid of file management and relational data storage. FAM provides a means to monitor document usage and alert on policy violations. Some customers need to address compliance and security issues consistently, and don’t want to differentiate based on the idiosyncrasies of underlying storage engines, so FAM event collection offers consistent data usage monitoring.

Another interesting aspect of FAM is that most of the databases used for Big Data are non-relational file-based data stores. Data elements are self-describing and self-indexing files. FAM provides the basic capabilities of file event collection and analysis, and we anticipate the extension of these capabilities to cover non-relational databases. While no DSP vendor offers true NoSQL monitoring today, the necessary capabilities are available in FAM solutions.

Application Monitoring: Databases are used to store application data and persist application state. It’s almost impossible to find a database not serving an application, and equally difficult to find an application that does not use a database. As a result, monitoring the database is often considered sufficient to understand application activity. However, most of you in IT know database monitoring is actually inadequate for this purpose. Applications use hundreds of database queries to support generic forms, connect to databases with generic service accounts, or use native application code to call embedded stored procedures rather than direct SQL queries. Their activity may be too generic, or inaccessible to traditional Database Activity Monitoring solutions. We now see agents designed and deployed specifically to collect application events, rather than database events. For example, SAP transaction codes can be decoded, associated with a specific application user, and then analyzed for policy violations. As with FAM, much of the value comes from better linking of user identity to activities. But extending scope to embrace the application layer directly provides better visibility into application usage and enables more granular policy enforcement.
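
To illustrate the application-event idea, here is a small, hypothetical Python sketch that flags sensitive SAP transaction codes executed by users whose role should not need them. The code-to-role mappings are invented for the example and are not an actual SAP authorization model.

    # Hedged sketch: flag sensitive application transactions by role (mappings are illustrative).
    SENSITIVE_TCODES = {
        "SU01": "user administration",
        "SE16": "raw table browsing",
        "F110": "payment run",
    }
    ROLE_ALLOWED_TCODES = {
        "basis_admin": {"SU01"},
        "ap_clerk": {"F110"},
    }

    def check_app_event(user: str, role: str, tcode: str):
        if tcode in SENSITIVE_TCODES and tcode not in ROLE_ALLOWED_TCODES.get(role, set()):
            return f"policy violation: {user} ({role}) executed {tcode} ({SENSITIVE_TCODES[tcode]})"
        return None

    print(check_app_event("jsmith", "ap_clerk", "SE16"))  # flags raw table browsing by an AP clerk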

This post has focused on event collection for monitoring activity. In our next section we will delve into greater detail on how these advancements are put to use: Policy Enforcement.

–Adrian Lane

Monday, July 11, 2011

Friction and Security

By Adrian Lane

Every company I have worked for has had some degree of friction between sales and marketing teams. While their organizational charters are to support one another, sales always has some disagreement about how products are positioned, the quality of competitive intelligence, the quality of leads, and the lack of <insert object here> to grease the customer skids. Marketing complains that sales does not follow the product sales scripts, doesn’t call leads in a timely fashion, and doesn’t do a good job of collecting customer intelligence. Friction is a natural part of the relationship between the two organizations, so careful balancing is necessary.

I was reading George Hulme’s interview with David Litchfield on securing the data castle this morning, which provides basic security steps every organization should take. There’s also a list of intermediate Oracle security controls (PDF). But the real challenge is not performing Litchfield’s steps – it’s managing the resulting friction. The issue is the friction that arises between database administrators and everybody else. Litchfield says:

Beyond patch updates and good password management, what else can organizations be doing that they’re not? Use the principle of least privilege within their applications. This is a very important one. People are pressured into getting their applications running as quickly as they can. However, when they try to manage permissions properly, that good practice can delay deployment slightly. So they say, “Oh look, let’s just give users all the permissions. The application seems to work with these settings. Let’s shove that into production.” Not a great approach. If you don’t want a breach, it’s really worth spending the extra time to design an application that operates on least privilege.

Which is all true, but only one side of the coin. For example, setting permissions is easy. Managing and maintaining good permissions over time is more work and creates friction between organizations. Most DBAs face user calls on a daily basis, asking for added permissions to complete some task. Users look at permissions – or their lack – as impediments to getting their jobs done. Worse, should the DBA decline the request, the DBA takes the blame for lost time. DBAs need to add the permissions and then – at some prearranged time – revoke them. But most DBAs, looking to avoid future calls to add privileges, never revoke them. It’s easier and less hassle, and users are happier. Face it – a few minutes of wasted time for both parties, especially with hundreds or even thousands of users, adds up to a lot of time. Who’s going to notice?
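
One low-friction compromise is to make revocation automatic: record an expiry when the privilege is granted, and let a scheduled job sweep anything that has expired. Below is a minimal Python sketch of that bookkeeping, using SQLite only as a stand-in ledger; the actual GRANT/REVOKE syntax and account names depend on your platform and are shown only as comments.

    # Hedged sketch: temporary grants with automatic revocation.
    import datetime
    import sqlite3  # stand-in ledger only; the real grants run on the target database

    book = sqlite3.connect(":memory:")
    book.execute("CREATE TABLE temp_grants (username TEXT, privilege TEXT, expires TEXT)")

    def grant_temporarily(username, privilege, hours=8):
        # In production this would also run e.g. "GRANT <privilege> TO <username>" on the database.
        expires = (datetime.datetime.now() + datetime.timedelta(hours=hours)).isoformat()
        book.execute("INSERT INTO temp_grants VALUES (?, ?, ?)", (username, privilege, expires))
        print(f"granted {privilege} to {username} until {expires}")

    def revoke_expired():
        # Run from a scheduler (cron or similar) so revocation is never forgotten.
        now = datetime.datetime.now().isoformat()
        expired = book.execute(
            "SELECT username, privilege FROM temp_grants WHERE expires < ?", (now,)).fetchall()
        for username, privilege in expired:
            # In production: "REVOKE <privilege> FROM <username>"
            print(f"revoking {privilege} from {username}")
        book.execute("DELETE FROM temp_grants WHERE expires < ?", (now,))

    grant_temporarily("jdoe", "UPDATE ON finance.invoices", hours=4)
    revoke_expired()  # nothing has expired yet; the scheduled run a few hours later does the cleanup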

Patching is the same – upgrade an application or database revision and stuff breaks. Or just as bad, the application works differently than before. New features and functions create complaints like “What happened to X?” and “It used to do Y, but now it doesn’t!”, so for several weeks the help desk is swamped with calls. And password rotation and long password requirements both generate help desk calls by the dozen.

So what’s the result? Users complain that systems are unreliable, and the poor DBA gets a poor ‘performance’ rating. Which is sad, because the friction between user demands for everything and DBAs holding the line for security is a sign that DBAs are doing their jobs. But doing their jobs gets them dinged on performance, so they don’t get raises, so they leave for other jobs.

Any good DBA understands that there is a correct degree of friction in their role for security. It’s not just planning for the security measures you want to put in place, but understanding how to mitigate their impact on the organization. Plan ahead and don’t let security be “your fault”.

–Adrian Lane

Tuesday, April 12, 2011

New Release: Our Insanely Comprehensive Database Security Framework and Metrics

By Rich

Some projects take us a few days. Others? More like 18 months.

Back before Mike even joined us, Adrian and I started a ‘quick’ project to develop a basic set of metrics for database security programs. As with most of our Project Quant efforts, we quickly realized there wasn’t even a starting framework out there, never mind any metrics. We needed to create a process for every database security task before we could define where people spent their time and money. Over the next year and a half we posted, reposted, designed, redesigned, and finally produced a framework we are pretty darn proud of.

To our knowledge this is the most comprehensive database security program framework out there. From developing policies, to patch management, to security assessments, to activity monitoring, we cover all the major database security activities. We have structured this with a modular set of processes and subprocesses, with metrics to measure key costs at each step.

The combination of process framework and metrics should give you some good ideas for structuring, improving, and optimizing your own program.

Here’s the permanent home for the report, where you can post feedback and which will include update notices: Measuring and Optimizing Database Security Operations (DBQuant).

We broke this into an Executive Summary that focuses on the process, and the full report with everything:

Executive Summary. (PDF)

The Full Report. (PDF)

Special thanks to Application Security Inc. for sponsoring the report, and sticking with us as we pretended to be PhD candidates and dragged this puppy out.

–Rich

Wednesday, February 23, 2011

What You *Really* Need to Know about Oracle Database Firewall

By Rich

Nothing amuses me more than some nice vendor-on-vendor smackdown action. Well, plenty of things amuse me more, especially Big Bang Theory and cats on YouTube, but the vendor thing is still moderately high on my list.

So I quite enjoyed this Dark Reading article on the release of the Oracle Database Firewall. But perhaps a little outside perspective will help. Here are the important bits:

  1. As mentioned in the article, this is the first Secerno product release since their acquisition.
  2. Despite what Oracle calls it, this is a Database Activity Monitoring product at its core. Just one with more of a security focus than audit/compliance, and based on network monitoring (it lacks local activity monitoring, which is why it’s weaker for compliance). Many other DAM products can block, and Secerno can monitor. I always thought it was an interesting product.
  3. Most DAM products include network monitoring as an option. The real difference with Secerno is that they focused far more on the security side of the market, even though historically that segment is much smaller than the audit/monitoring/compliance side. So Oracle has more focus on blocking, and less on capturing and storing all activity.
  4. It is not a substitute for Database Activity Monitoring products, nor is it “better” as Oracle claims. It is a form of DAM, but – as mentioned by competitors in the article – you still need multiple local monitoring techniques to handle direct access. Network monitoring alone isn’t enough. I’m sure Oracle Services will be more than happy to connect Secerno and Oracle Audit Vault to do this for you.
  5. Secerno basically whitelists queries (automatically) and can block unexpected activity (see the sketch after this list for the general idea). This appears to be pretty effective for database attacks, although I haven’t talked to any pen testers who have gone up against it. (They do also blacklist, but the whitelist is the main secret sauce).
  6. Secerno had the F5 partnership before the Oracle acquisition. It allowed you to set WAF rules based on something detected in the database (e.g., block a signature or host IP). I’m not sure if they have expanded this post-acquisition. Imperva is the only other vendor that I know of to integrate DAM/WAF.
  7. Oracle generally believes that if you don’t use their products you are either a certified idiot or criminally negligent. Neither is true, and while this is a good product I still recommend you look at all the major competitors to see what fits you best. Ignore the marketing claims.
  8. Odds are your DBA will buy this when you aren’t looking, as part of some bundle deal. If you think you need DAM for security, compliance, or both… start an assessment process or talk to them before you get a call one day to start handling incidents.
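
To give a feel for the whitelisting approach in item 5, here is a simplified Python sketch: strip the literals out of each SQL statement, fingerprint the remaining structure, and allow only fingerprints observed during a learning period. Real products are far more sophisticated, so treat this purely as an illustration of the concept.

    # Hedged sketch: structural query whitelisting.
    import re

    def fingerprint(sql: str) -> str:
        # Strip literals so structurally identical queries share one fingerprint.
        s = sql.strip().lower()
        s = re.sub(r"'[^']*'", "?", s)   # string literals
        s = re.sub(r"\b\d+\b", "?", s)   # numeric literals
        s = re.sub(r"\s+", " ", s)       # normalize whitespace
        return s

    # "Learning" phase: fingerprints observed from the application during a baseline period.
    whitelist = {
        fingerprint("SELECT name, price FROM products WHERE id = 42"),
        fingerprint("SELECT * FROM orders WHERE customer_id = 7 AND status = 'open'"),
    }

    def allow(sql: str) -> bool:
        return fingerprint(sql) in whitelist

    print(allow("SELECT name, price FROM products WHERE id = 99"))          # True: same structure
    print(allow("SELECT name, price FROM products WHERE id = 42 OR 1=1"))   # False: block or alert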

In other words: a good product with advantages and disadvantages, just like anything else. More security than compliance, but like many DAM tools it offers some of both. Ignore the hype, figure out your needs, and evaluate to figure out which tool fits best. You aren’t a bad person if you don’t buy Oracle, no matter what your sales rep tells your CIO.

And seriously – watch out for the deal bundling. If you haven’t learned anything from us about database security by now, hopefully you at least realize that DBAs and security don’t always talk as much as they should (the same goes for Guardium/IBM). If you need to be involved in any database security, start talking to the DBAs now, before it’s too late.

BTW, not to toot our own horns, but we sorta nailed it in our original take on the acquisition. Next we will see their WAF messaging. And we have some details of how Secerno works.

–Rich

Tuesday, February 23, 2010

RSAC 2010 Guide: Data Security

By Rich

Over the next 3 days, we’ll be posting the content from the Securosis Guide to the RSA Conference 2010. We broke the market into 8 different topics: Network Security, Data Security, Application Security, Endpoint Security, Content (Web & Email) Security, Cloud and Virtualization Security, Security Management, and Compliance. For each section, we provide a little history and what we expect to see at the show. Next up is Data Security.

Data Security

Although technically nearly all of Information Security is directed at protecting corporate data and content, in practice our industry has historically focused on network and endpoint security. At Securosis we divide up the data security world into two major domains based on how users access data – the data center and the desktop. This reflects how data is managed far more practically than “structured” and “unstructured”. The data center includes access through enterprise applications, databases, and document management systems. The desktop includes productivity applications (the Office suite), email, and other desktop applications and communications.

What We Expect to See

There are four areas of interest at the show relative to data security:

  • Content Analysis: This is the ability of security tools to dig inside files and packets to understand the content inside, not just the headers or other metadata. The most basic versions are generally derived from pattern matching (regular expressions), while advanced options include partial document matching and database fingerprinting. Content analysis techniques were pioneered by Data Loss Prevention (DLP) tools; and are starting to pop up in everything from firewalls, to portable device control agents, to SIEM systems.

The most important questions to ask are those that identify the kind of content analysis being performed. Regular expressions alone can work, but result in more false positives and negatives than other options. Also find out if the feature can peer inside different file types, or only analyze plain text. Depending on your requirements, you may not need advanced techniques, but you do need to understand exactly what you’re getting and determine if it will really help you protect your data, or just generate thousands of alerts every time someone buys a collectable shot glass from Amazon.
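
As an example of the difference between raw pattern matching and slightly smarter analysis, here is a short Python sketch that pairs a credit card regex with a Luhn checksum, which is roughly the minimum needed to avoid alerting on every 16-digit order number. The pattern and thresholds are illustrative only.

    # Hedged sketch: regex candidate detection plus a Luhn check to cut false positives.
    import re

    CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

    def luhn_valid(digits: str) -> bool:
        total, double = 0, False
        for d in reversed(digits):
            n = int(d)
            if double:
                n *= 2
                if n > 9:
                    n -= 9
            total += n
            double = not double
        return total % 10 == 0

    def find_card_numbers(text: str):
        hits = []
        for match in CARD_PATTERN.finditer(text):
            digits = re.sub(r"[ -]", "", match.group())
            if 13 <= len(digits) <= 16 and luhn_valid(digits):
                hits.append(digits)
        return hits

    print(find_card_numbers("Order 1234-5678-9012-3456 has shipped"))    # [] - fails the Luhn check
    print(find_card_numbers("card: 4111 1111 1111 1111 exp 10/25"))      # ['4111111111111111']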

  • DLP Everywhere: Here at Securosis we use a narrow definition for DLP that includes solutions designed to protect data with advanced content analysis capabilities and dedicated workflow, but not every vendor marketing department agrees with our approach. Given the customer interest around DLP, we expect you’ll see a wide variety of security tools with DLP or “data protection” features, most of which are either basic content analysis or some form of context-based file or access blocking. These DLP features can be useful, especially in smaller organizations and those with only limited data protection needs, but they are a pale substitute if you need a dedicated data protection solution.

When talking with these vendors, start by digging into their content analysis capabilities and how they really work from a technical standpoint. If you get a technobabble response, just move on. Also ask to see a demo of the management interface – if you expect a lot of data-related violations, you will likely need a dedicated workflow to manage incidents, so user experience is key. Finally, ask them about directory integration – when it comes to data security, different rules apply to different users and groups.

  • Encryption and Tokenization: Thanks to a combination of PCI requirements and recent data breaches, we are seeing a ton of interest in application and database encryption and tokenization. Tokenization replaces credit card numbers or other sensitive strings with random token values (which may match the credit card format) matched to real numbers only in a central highly secure database. Format Preserving Encryption encrypts the numbers so you can recover them in place, but the encrypted values share the credit card number format. Finally, newer application and database encryption options focus on improved ease of use and deployment compared to their predecessors.

You don’t really need to worry about encryption algorithms, but it’s important to understand platform support, management user experience (play around with the user interface), and deployment requirements. No matter what anyone tells you, there are always requirements for application and database changes, but some of these approaches can minimize the pain. Ask how long an average deployment takes for an organization of your size, and make sure they can provide real examples or references in your business, since data security is very industry specific.
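
For readers unfamiliar with the mechanics, here is a toy Python sketch of tokenization: the token preserves the card number’s length and last four digits, and only a (here grossly simplified) vault can map it back to the real value. Production systems add collision handling, access controls, and a hardened vault database.

    # Hedged sketch: format-preserving tokenization with a toy vault.
    import secrets

    vault = {}   # stand-in for a hardened, access-controlled token vault

    def tokenize(pan: str) -> str:
        # Random digits, same length, real last four preserved; collision handling omitted.
        token = "".join(secrets.choice("0123456789") for _ in range(len(pan) - 4)) + pan[-4:]
        vault[token] = pan
        return token

    def detokenize(token: str) -> str:
        # Only a tightly restricted vault service should be able to do this in a real deployment.
        return vault[token]

    token = tokenize("4111111111111111")
    print(token)                                       # e.g. 5029381746501111 - same length and format
    print(detokenize(token) == "4111111111111111")     # True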

  • Database Security: Due partially to acquisitions and partially to customer demand, we are seeing a variety of tools add features to tie into database security. Latest in the hit parade are SIEM tools capable of monitoring database transactions and vulnerability assessment tools with database support. These parallel the dedicated Database Activity Monitoring and Database Assessment markets. As with any area of overlap and consolidation, you’ll need to figure out if you need a dedicated tool, or if features in another type of product are good enough. We also expect to see a lot more talk about data masking, which is the conversion of production data into a pseudo-random but still usable format for development.

–Rich

Thursday, October 01, 2009

SQL Injection Prevention

By Adrian Lane

The team over at Dark Reading was kind enough to invite me to blog on their Database Security portal. This week I started a mini-series on threat detection and prevention by leveraging native database features. This week’s post is on using stored procedures to combat SQL injection attacks. But those posts are fairly short and written for a different audience. Here, I will be cross-posting additional points and advanced content I left out of those articles.

My goal was to demystify how stored procedures can help combat SQL injection. There are other options to detect and block SQL injection attacks, many of which have been in use with limited success for some time now.

What can you do about SQL injection? You can patch your database to block known threats. You can buy firewalls to try to intercept these rogue statements, but the application and general network firewalls have shown only limited effectiveness. You need to have a very clear signature for the threat, as well as a written policy that does not break your application. Many Database Activity Monitoring vendors can block queries before they arrive. Early DAM versions detected SQL injection based on exact pattern matching that was easy for attackers to avoid, back when DAM policy management could not accommodate business policy issues; this resulted in too many false negatives, too many false positives, and deadlocked applications. These platforms are now much better at policy management and enforcement. There are memory scanners to examine statement execution and parameters, as well as lexical and content analyzers to detect and block (with fair success). Some employ a hybrid approach, with assessment to detect known vulnerabilities, and database/application monitoring to provide ‘virtual patching’ as a complement.

I have witnessed many presentations at conferences during the last two years demonstrating how a SQL injection attack works. Many vendors have also posted examples on their web sites and show how easy it is to compromise an unsecured database with SQL injection. At the end of the session, “how to fix” is left dangling. “Buy our product and we will fix this problem for you” is often their implication. That may be true or false, but you do not necessarily need a product to do this, and a bolt-on product is not always the best way. Most are reactive and not 100% effective.

As an application developer and database designer, I always took SQL injection attacks personally. The only reason the SQL injection attack succeeded was a flaw in my code, and probably a bad one. The applications I produced in the late 90s and early 2000s were immune to this form of attack (unless someone snuck an ad-hoc query into the code somewhere without validating the inputs) because of stored procedures. Some of you might note this was really before SQL injection was fashionable, but as part of my testing efforts, I adopted early forms of fuzzing scripts to do range testing and try everything possible to get the stored procedures to crash. Binary inputs and obtuse ‘where’ clauses were two such variations. I used to write a lot of code in stored procedures and packages. And I used to curse and swear a lot, as packages (Oracle’s version, anyway) are syntactically challenging. Demanding. Downright rigorous in enforcing data type requirements, making it very difficult to transition data to and from Java applications. But it was worth it. Stored procedures are incredibly effective at stopping SQL injection, but they can be a pain in the ass for more complex objects. From the programmer and DBA perspectives, they are incredibly effective for controlling the behavior of queries in your database. And if you have ever had a junior programmer put a three-table cartesian product select statement into a production database, you understand why having only certified queries stored in your database as part of quality control is a very good thing (you don’t need a botnet to DDoS a database, just an exuberant young programmer writing the query to end all queries). And don’t get me started on the performance gains stored procedures offer, or this would be a five-page post …
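
Since the argument above is made in prose, here is a tiny Python/SQLite sketch of the underlying principle: keep the SQL code and the user-supplied data separate. A stored procedure with typed parameters (Oracle packages, T-SQL procedures) enforces the same separation, plus the quality-control benefits described above; bound parameters are simply the most portable way to show it.

    # Hedged sketch: string concatenation vs. bound parameters.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT, is_admin INTEGER)")
    conn.execute("INSERT INTO users VALUES (1, 'alice', 0), (2, 'bob', 1)")

    user_input = "alice' OR '1'='1"   # classic injection attempt

    # Vulnerable: the input is concatenated into the statement, so the attacker rewrites the query.
    vulnerable = f"SELECT id, name FROM users WHERE name = '{user_input}'"
    print(conn.execute(vulnerable).fetchall())            # returns every row

    # Safe: the input is passed as a bound parameter and can only ever be treated as data.
    safe = "SELECT id, name FROM users WHERE name = ?"
    print(conn.execute(safe, (user_input,)).fetchall())   # returns nothing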

If you like waiting around for your next SQL injection 0-day patch, keep doing what you have been doing.

–Adrian Lane

Thursday, September 03, 2009

Understanding and Choosing a Database Assessment Solution, Part 6: Administration

By Adrian Lane

Reporting for compliance and security, job scheduling, and integration with other business systems are the topics this post will focus on. These are the features outside the core scanning function that make managing a database vulnerability assessment product easier. Most database assessment vendors have listed these features for years, but they were implemented in a marketing “check the box” way, not really to provide ease of use and not particularly intended to help customers. Actually, that comment applies to the products in general. In the 2003-2005 time frame, database assessment products pretty much sucked. There really is no other way to capture the essence of the situation. They had basic checks for vulnerabilities, but most lacked security best practices and operational policies, and were insecure in their own right. Reliability, separation of duties, customization, result set management, trend analysis, workflow, integration with reporting or trouble-ticketing – for any of these, you typically had to look elsewhere. Application Security’s product was the best of a bad lot, which included crappy offerings from IPLocks, NGS, ISS, nTier, and a couple others.

I was asked the other day “Why are you writing about database assessment? Why now? Don’t most people know what assessment is?” There are a lot of reasons for this. Unlike DAM or DLP, we’re not defining and demystifying a market. Database security and compliance requirements have been at issue for many years now, but only recently have platforms matured sufficiently to realize their promise. These are not funky little homegrown tools any longer, but are maturing into enterprise-ready products. There are new vendors in the space, and (given some of the vendor calls we get) several more will join the mix. They are bringing considerable resources to the table beyond what the startups of 5 years ago were capable of, integrating the assessment feature into a broader security portfolio of preventative and detective controls. Even the database vendors are starting to take notice and invest in their products. If you reviewed database assessment products more than two years ago and were dissatisfied, it’s time for another look.

On to some of the management features that warrant closer review:

Reporting

As with nearly any security tool, you’ll want flexible reporting options, but pay particular attention to the compliance and audit reports. What is suitable for the security staffer or administrator may be entirely unsuitable for a different internal audience, both in content and level of detail. Further, some products generate one or more reports from scan results while others tie scan results to a single report.

Reports should fall into at least three broad categories: compliance and non-technical reports, security reports (incidents), and general technical reports. Built-in report templates can save valuable time by grouping together related policies and providing the level of granularity you want. Some vendors have worked with auditors from the major firms to help design reports for specific regulations, like SOX & PCI, and automatically generate reports during an audit.

If your organization needs flexibility in report creation, you may exceed the capability of the assessment product and need to export the data to a third party tool. Plan on taking some time to analyze built-in reports, report templates, and report customization capabilities.

Alerts

Some vendors offer single policy alerts for issues deemed critical. These issues can be highlighted and escalated independent of other reporting tools, providing flexibility in how to handle high priority issues. Assessment products are considered a preventative security measure, and unlike monitoring, alerting is not a typical use case. Policies are grouped by job function, and rather than provide single policy scanning or escalation internally, critical policy failures are addressed through trouble-ticketing systems, as part of normal maintenance. If your organization is moving to a “patch and shield” model, prioritized policy alerts are a long-term feature to consider.

Scheduling

You will want to schedule policies to run on a periodic basis, and all of the platforms provide schedulers to launch scans. Job control may be provided internally, or handled via external software or even as “cron jobs”. Most customers we speak with run security scans on a weekly basis, but compliance scans vary widely. Frequency depends upon type and category of the policy. For example, change management / work order reconciliation is a weekly cycle for some companies, and a quarterly job at others. Vendors should be able to schedule scans to match your cycles.

Remediation & Integration

Once policy violations are identified, you need to get the information into the right hands so that corrective action can be taken. Since incident handlers may come from either a database or a security background, look for a tool that appeals to both audiences and supplies each with the information they need to understand incidents and investigate appropriately. This can be done through reports or workflow systems, such as Remedy from BMC. As we discussed in the policy section, each policy should have a thorough description, remediation instructions, and references to additional information. Addressing all of the audiences may be a policy and report customization effort for your team. Some vendors provide hooks for escalation procedures and delivery to different audiences. Others use relational databases to store scan results and can be directly integrated into third-party systems.

Result Set Management

All the assessment products store scan results, but differ on where and how. Some store the raw data retrieved from the database, some store the result of a comparison of raw data against the policy, and still others store the result within a report structure. Both for trend analysis, and pursuant to certain regulatory requirements, you might need to store scan results for a period of a year or more. Depending upon how these results are stored, the results and the reports may change with time! Examine how the product stores and retrieves prior scan results and reports, as they may keep raw result data, or the reports, or both. Regenerated reports might be different if the policies they were mapped to change. Trend analysis is an important aspect of understanding how security is affected by normal administration and patch management. Consider how historic data is presented to ensure it is suitable for your requirements.

Platform and Deployment

Assessment scanners are offered both as appliances and as software. Remote credentialed assessments as SaaS are not available as of this writing. Your vendor should provide a web management interface over a secure connection. Proper account management is needed to enforce roles for policy creation, database credential management, and scan results, and many offer integration with external access control systems. The scanner will require maintenance like any other platform. If the vendor is using a relational database to store data within their application stack, this will impact security and operations (positively and negatively), and should be included as one of your regularly scanned databases.

As with any product, it’s sometimes difficult to cut through the marketing materials and figure out if a product really meets your needs. This breakdown of the functional elements is intended to give you an idea of what is possible with state of the art products, and a basic checklist of functions to review for a proof of concept. While the cost of the assessment features is much less than monitoring or auditing solutions, don’t skimp on the evaluation and make sure you test the products as thoroughly as possible. The results need to satisfy a large audience and be integrated with more systems than DAM or other auditing products.

–Adrian Lane

Thursday, August 27, 2009

Database Assessment Solutions, Part 5: Operations and Compliance policies

By Adrian Lane

    Technically speaking, the market segment we are talking about is “Database Vulnerability Assessment”. You might have noticed that we titled this series “Database Assessment”. No, it was not just because the titles of these posts are too long (they are). The primary motivation for this name was to stress that this is not just about vulnerabilities and security. While the genesis of this market is security, compliance with regulatory mandates and operations policies are what drives the buying decisions, as noted in part 2. (For easy reference, here are Part 1, Part 3, and Part 4). In many ways, compliance and operational consistency are harder problems to solve because they require more work and tuning on your part, and that need for customization is our focus in this post.

    In 4GL programming we talk about objects and instantiation. The concept of instantiation is to take a generic object and give it life; make it a real instance of the generic thing, with unique attributes and possibly behavior. You need to think about databases the same way: once started up, no two are alike. There may be two installations of DB2 that serve the same application, but they are run by different companies, store different data, are managed by different DBAs, have altered the base functions in various ways, run on different hardware, and have different configurations. This is why configuration tuning can be difficult: unlike vulnerability policies that detect specific buffer overflows or SQL injection attacks, operational policies are company specific and are derived from best practices.

    We have already listed a number of the common vulnerability and security policies. The following is a list of policies that apply to IT operations on the database environment or system:

    Operations Policies

    • Password requirements (lifespan, composition)
    • Data files (number, location, permissions)
    • Audit log files (presence, permissions, currency)
    • Product version (version control, patches)
    • Itemize (unneeded) functions
    • Database consistency checks (e.g., DBCC on SQL Server)
    • Statistics (statspack, auto-statistics)
    • Backup report (last, frequency, destination)
    • Error log generation and access
    • Segregation of admin role
    • Simultaneous admin logins
    • Ad hoc query usage
    • Discovery (databases, data)
    • Remediation instructions & approved patches
    • Orphaned databases
    • Stored procedures (list, last modified)
    • Changes (files, patches, procedures, schema, supporting functions)

    There are a lot more, but these should give you an idea of the basics a vendor should have in place, and allow you to contrast with the general security and vulnerability policies we listed in section 4.
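
    As a rough illustration, a couple of the operational policies above can be expressed as data dictionary queries. The sketch below uses Oracle view names, the python-oracledb driver, and a hypothetical read-only assessment account; every organization will need its own variants per platform.

        # Hedged sketch: operational policy checks as catalog queries (Oracle names assumed).
        import oracledb  # assumes the python-oracledb driver

        POLICIES = [
            ("Password lifetime is unlimited",
             "SELECT profile FROM dba_profiles "
             "WHERE resource_name = 'PASSWORD_LIFE_TIME' AND limit = 'UNLIMITED'"),
            ("Sample schemas are still installed",
             "SELECT username FROM dba_users WHERE username IN ('SCOTT', 'HR', 'OE')"),
        ]

        def run_policies(conn):
            findings = []
            with conn.cursor() as cur:
                for description, sql in POLICIES:
                    cur.execute(sql)
                    rows = cur.fetchall()
                    if rows:   # any result means the policy failed
                        findings.append((description, rows))
            return findings

        conn = oracledb.connect(user="assess_ro", password="example", dsn="dbhost/ORCLPDB1")
        for description, rows in run_policies(conn):
            print(f"FAILED: {description} -> {rows}")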

    Compliance Policies

    Most regulatory requirements, from industry or government, are fulfilled by access control and system change policies we have already introduced. PCI adds a few extra requirements in the verification of security settings, access rights and patch levels, but compliance policies are generally a subset of security rules and operational policies. As the list varies by regulation, and the requirements change over time, we are not going to list them separately here. Since compliance is likely what is motivating your purchase of database assessment, you must dig into vendor claims to verify they offer what you need. It gets tricky because some vendors tout compliance, for example “configuration compliance”, which only means you will be compliant with their list of accepted settings. These policies may not be endorsed by anyone other than the vendor, and only have coincidental relevance to PCI or SOX. In their defense, most commercially available database assessment platforms are sufficiently evolved to offer packaged sets of relevant policies for regulatory compliance, industry best practices, and detection of security vulnerabilities across all database platforms. They offer sufficient breadth and depth for what you need to get up and running very quickly, but you will need to verify your needs are met, and if not, what the deviation is.

    What most of the platforms do not do very well is allow for easy policy customization, multiple policy groupings, policy revisions, and creating copies of the “out of the box” policies provided by the vendor. You need all of these features for day-to-day management, so let’s delve into each of these areas a little more. This leads into our next section on policy customization.

    Policy Customization

    Remember how I said in Part 3 that “you are going to be most interested in evaluating assessment tools on how well they cover the policies you need”? That is true, but probably not for the reasons that you thought. What I deliberately omitted is that the policies you are interested in prior to product evaluation will not be the same policy set you are interested in afterwards. This is especially true for regulatory policies, which grow in number and change over time. Most DBAs will tell you that the steps a database vendor advises to remediate a problem may break your applications, so you will need a customized set of steps appropriate to your environment. Further, most enterprises have evolved database usage polices far beyond “best practices”, and greatly augment what the assessment vendor provides. This means both the set of policies, and the contents of the policies themselves, will need to change. And I am not just talking about criticality, but description, remediation, the underlying query, and the result set demanded to demonstrate adherence. As you learn more about what is possible, as you refine your internal requirements, or as auditor expectations evolve, you will experience continual drift in your policy set. Sure, you will have static vulnerability and security policies, but as the platform, process, and requirements change, your operations and compliance policy sets will be fluid. How easy it is to customize policies and manage policy sets is extremely important, as it directly affects the time and complexity required to manage the platform. Is it a minute to change a policy, or an hour? Can the auditor do it, or does it require a DBA? Don’t learn this after you have made your investment. On a day-to-day basis, this will be the single biggest management challenge you face, on par with remediation costs.

    Policy Groupings & Separation of Duties

    For any given rule, you have several different potential audiences who may be interested in the results. IT, internal audit, external audit, security, or the DBAs may need the results from the rule in their reports. Conversely, each of these audiences might not be interested in certain rules, or might be affected by them and thus disallowed from seeing their results. For example, your SQL Server database group does not need Oracle results, internal audit reports need not contain all security settings, your European database staff may not be interested in US database reports, and separation of duties may require some information be blocked from some users. Managing and grouping policies into logical sets is very important, as the reports derived from the policy set must be specific to certain audiences. You need the ability to group according to function, location, regulatory requirements, security clearance, and so on. The ability to import, update, save different versions, and schedule one or more policy sets is mandatory for modern database assessment tools.

    If you take one thing away from this post it should be that you need to compare what policies are available from the vendor, what will you need to create, and how difficult that will be to accomplish. In the next post we will cover what you actually do with all the data you collect from the vulnerability, security, and operational policies. We will discuss reporting, scheduling, and integration with workflow and trouble ticket systems. We will also cover some of the more advanced topics having to do with platform management, scheduling, data storage, separation of assessment roles, and security of the assessment system itself.

    –Adrian Lane

    Tuesday, August 25, 2009

    Database Assessment Solutions, Part 4: Vulnerability and Security Policies

    By Adrian Lane


    I was always fascinated by the Sapphire/Slammer worm. The simplicity of the attack and how quickly it spread were astounding. Sure, it didn’t have a malicious payload, but the simple fact that it could have is what created quite a bit of panic. This event is what I consider the dawn of database vulnerability assessment tools. From that point on it seemed like every couple of weeks we were learning of new database vulnerabilities on every platform. Compliance may drive today’s assessment purchase, but vulnerabilities are always what grabs the media’s attention, and vulnerability detection remains a key feature for any database security product.

    Prior to writing this post I went back and looked at all the buffer overflow and SQL injection attacks on DB2, Oracle, and SQL Server. Looking at them – especially those on SQL Server – it struck me why half of the administrative functions had vulnerabilities: whoever wrote them assumed that the functions were inaccessible to anyone who was not a DBA. The functions were conceptually supposed to be gated by access control and therefore safe. It was not so much that the programmers were not thinking about security, but they made incorrect assumptions about how the database internals like the parser and preprocessor worked. I have always said that SQL injection is an attack on the database through an application. It’s true, but technically the attacks are also getting through internal database processing layers prior to the exploit, as well as an external application layer. Looking back at the details it just seemed reasonable we would have these vulnerabilities, given the complexity of the database platforms and the lack of security training among software developers. Anyway, enough rambling about database security history.

    Understanding database vulnerabilities and knowing how to remediate – whether through patches, workarounds, or third party detection tools – requires significant skill and training. Policy research is expensive, and so is writing and testing these policies. In my experience over the four years that I helped define and build database assessment policies, it would take an average of 3 days to construct a policy after a vulnerability was understood: A day to write and optimize the SQL test case, a day to create the description and put together remediation information, and another day to test on supported platforms. Multiply by 10 policies across 6 different platforms and you get an idea of the cost involved. Policy development requires a full-time team of skilled practitioners to manage and update vulnerability and security policies across the half dozen platforms commonly supported by the vendors. This is not a reasonable burden for non-security vendors to take on, so if database security is an issue, don’t try to do this in-house! Buying an aftermarket product excuses your organization from developing these checks, protecting you from specific threats hackers are likely to deploy, as well as more generic security threats.

    What specific vulnerability checks should be present in your database assessment product? In a practical sense, it does not matter. Specific vulnerabilities come and go too fast for any list to be relevant. What I am going to do is provide a list of general security checks that should be present, and list the classes of vulnerabilities any product you evaluate should have policies for. Then I will cover other relevant buying criteria to consider.

    General Database Security Policies

    • List database administrator accounts and how they map to domain users.
    • Product version (security patch level)
    • List users with admin/special privileges
    • List users with access to sensitive columns or data (credit cards, passwords)
    • List users with access to system tables
    • Database access audit (failed logins)
    • Authentication method (domain, database, mixed)
    • List locked accounts
    • Listener / SQL Agent / UDP, network configuration (passwords in clear text, ports, use of named pipes)
    • Systems tables (subset) not updatable
    • Ownership chains
    • Database links
    • Sample Databases (Northwind, pubs, scott/tiger)
    • Remote systems and data sources (remote trust relationships)

    Vulnerability Classes

    • Default Passwords
    • Weak/blank/same as login passwords
    • Public roles or guest accounts to anything
    • External procedures (CmdExec, xp_cmdshell, active scripting, exproc, or any programmatic access to OS level code)
    • Buffer overflow conditions (XP, admin functions, Slammer/Sapphire, HEAP, etc. – too numerous to list)
    • SQL Injection (1=1, most admin functions, temporary stored procedures, database name as code – too numerous to list)
    • Network (Connection reuse, man in the middle, named pipe hijacking)
    • Authentication escalation (XStatus / XP / SP, exploiting batch jobs, DTS leakage, remote access trust)
    • Task injection (Webtasks, sp_xxx, MSDE service, reconfiguration)
    • Registry access (SQL Server)
    • DoS (named pipes, malformed requests, IN clause, memory leaks, page locks creating deadlocks)

    There are many more. It is really important to understand that the total number of policies in any given product is irrelevant. As an example, let’s assume that your database has two modules with buffer overflow vulnerabilities, and each has eight different ways to exploit it. Comparing two assessment products, one might have 16 policies checking for each exploit, and the other could have two policies checking for the two vulnerabilities. These products are functionally equivalent, but one vendor touts an order of magnitude more policies, which have no actual benefit. Do NOT let the number of policies influence your buying decision and don’t get bogged down in what I call a “policy escalation war”. You need to compare functional equivalence and realize that if one product can check for more vulnerabilities in fewer queries, it runs faster! It may take a little work on your part to comb through the policies to make sure what you need is present, but you need to perform that inspection regardless.

    You will want to carefully confirm that the assessment platform covers the database versions you have. And just because your company supposedly migrated to Oracle 11 some time back does not mean you get to discount Oracle 9 database support, because odds are better than even that you have at least one still hanging around. Or you don’t officially support SQL Server, but it just so happens that some of the applications you run have it embedded. Furthermore, mergers and acquisitions bring unexpected benefits, such as database platforms you did not previously have in house. Or plans to migrate off a current platform have a way of changing suddenly, with the database sticking around in your organization many years longer than anticipated. Broad database coverage should weigh heavily in your buying decision.

    The currency of policies (at least for coverage of the latest vulnerabilities) is very important. Check to make sure the vendor has a solid track record of delivering policy updates no less than once a quarter. The database vendors typically release a security patch each quarter, so your assessment vendor should as well. A ‘plan’ to do so is insufficient and should be a warning signal. Press vendors for proof of delivery, such as release documentation or policy maintenance update announcements, to demonstrate consistent delivery of updated vulnerability policies.

    Cross reference policies with the database vendor information. One of the interesting friction points between the database vendors and the vulnerability scanning vendors is the production of complete and detailed information on vulnerabilities. You need to have a clear and detailed explanation of each vulnerability to understand how a vulnerability affects your organization and what workarounds may be at your disposal. While assessment vendors are motivated to provide detailed information on the vulnerability itself, for whatever reason the database vendors tend to offer terse descriptions of the threats and corresponding patch data. Press the assessment vendor for detailed information, but keep in mind the database vendor must be considered the primary source of complete and accurate remediation information. It is wise, either during an evaluation or in production, to cross reference the information provided, and weigh how well each policy documents each threat.

    Finally, database security advice comes from many different sources. The database vendors usually supply best practice checklists for free. The assessment products usually list the policies that they have developed over time from what they have learned in the field. There are other independent blogs, such as Pete Finnigan’s, that offer solid advice. Finally, most database vendors have regional user groups that share information on how to approach database security, which I have always found useful. Check to see if your assessment vendor has what you need, and they probably will given that the major data breaches as of this writing are leveraging the basic vulnerabilities. If you find something is missing, find out if your vendor can provide it for you. We will get into policy customization in more detail in the next post, as well as cover integration and policy set management topics.

    This was supposed to be a short post on vulnerability and security best practices. Short, because these two topics are not good indicators of how well a particular database assessment product will meet your needs. They may be interesting to a researcher like me, but I realize this might be more information than you need or want. The data collection options discussed in part 3, along with the operations and compliance policies we will discuss in part 5, have a greater bearing on how useful the product will be.

    –Adrian Lane

    Friday, July 03, 2009

    Database Security: The Other First Steps

    By Adrian Lane

    I was going through my feed reader this morning when I ran across this post on Dark Reading about Your First Three Steps for database security. As these are supposed to be your first steps with database security,
    the suggestions not only struck me as places I would not start, but the article also offered a method I would not employ. I believe there is a better way to proceed, so I offer you my alternative set of recommendations.

    The biggest issue I had with the article was not that these steps fail to improve security, or that the tools are not right for the job, but that the path these steps take you down is the wrong one. Theoretically it’s a good idea to understand the scope of the database security challenge when starting, but that is infeasible in practice. Databases are large, complex applications, and starting with a grand plan for how to deal with all of them is a great way to grind the process to a halt and require multiple restarts when your plan breaks apart. The article advises you to start by cataloging every single database instance, and then to catalog all of the sensitive data in those databases. This is the security equivalent of a ‘Cartesian product’ in a database select statement. And just as with database queries, it results in an enormous, unwieldy amount of data. You can labor through the result and determine what to protect, but not how.

    At Securosis, we’re all about simplifying security, and I am a personal advocate of the ‘divide and conquer’ methodology. Start small. Pick the one or two critical databases in your organization, and start there. Your database administrator knows which database is the critical one. Heck, even your CFO knows which one that is: it’s that giant SAP/Oracle one in the corner that he is still pissed off he had to sign the $10 million requisition for.

    Now, here are the basic steps:

    • Patch your databases to address most known security issues. It is highly recommended that you test each patch prior to operational deployment.
    • Configure your databases. Consult the vendor recommendations on security, and balance those suggestions against operational consistency (i.e., don’t break your applications). There are also third-party security practitioners who offer advice on their blogs for free, and free assessment tools that will help a lot.
    • Get rid of the default passwords, remove unneeded user accounts, and make sure that nothing (users, web connections, stored procedures, modules, etc.) is available to ‘public’. A minimal scripted sketch of this last check follows the list.
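
    To make that last bullet concrete, here is a minimal sketch, not a hardened tool, of scripting the check for default passwords and overly broad ‘public’ grants. The Oracle data dictionary views (dba_users_with_defpwd, dba_tab_privs), the cx_Oracle driver, and the connection details are assumptions for illustration; substitute whatever platform, driver, and catalog views your shop actually runs.

        # A minimal sketch of the third bullet above. Assumptions for
        # illustration: Oracle data dictionary views and the cx_Oracle driver.
        import getpass

        import cx_Oracle  # any DB-API 2.0 driver for your database will do

        def report_basic_hygiene(dsn, user):
            conn = cx_Oracle.connect(user, getpass.getpass("DBA password: "), dsn)
            try:
                cur = conn.cursor()

                # Accounts still using vendor-supplied default passwords.
                cur.execute("SELECT username FROM dba_users_with_defpwd")
                for (username,) in cur:
                    print("Default password still set for account:", username)

                # Object privileges granted to PUBLIC; review every one of them.
                cur.execute(
                    "SELECT owner, table_name, privilege "
                    "FROM dba_tab_privs WHERE grantee = 'PUBLIC'"
                )
                for owner, obj, priv in cur:
                    print("PUBLIC can {0} on {1}.{2}".format(priv, owner, obj))
            finally:
                conn.close()

        if __name__ == "__main__":
            # Hypothetical DSN and audit account, shown only as placeholders.
            report_basic_hygiene("prod-db:1521/ORCL", "secaudit")

    Run something like this read-only from a privileged account, and feed what it finds into the configuration policies discussed below.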

    Consider this an education exercise to provide a base understanding of what needs to be addressed and how best to proceed. At this point you should be ready to a) document exactly what your ‘corporate configuration policies’ are, and b) develop a tiered plan of action to tackle databases in descending order of priority. Keep in mind that these are just a fraction of the preventative security controls you might employ, and they do not address active security measures or forensic analysis. You are still a ways off from employing more intermediate and advanced security stuff … like Database Activity Monitoring, auditing, and Data Loss Prevention.

    –Adrian Lane

    Thursday, July 02, 2009

    Three Database Roles: Programmer, DBA, Architect

    By Adrian Lane

    When I interview database candidates, I want to assess their skills in three different areas: how well they can set up and maintain a database, how well they can program against a database, and how well they can design database systems. These coincide with the three roles I typically hire for: database administrator, database programmer, and database architect. Even though I am hiring for just one of these roles, and I don’t expect any single candidate to be fully proficient in all three areas, I do want to understand the breadth of their exposure. It is an indicator of how much empathy they will have for their team members when working on database projects, and of how well they understand the sometimes competing challenges each role faces. While there will always be some overlap, the divisions of responsibility break down as follows:

    • Database administrator - Installs, configures, and manages the database installation. This includes access control, provisioning, and patch management, and typically also providing analysis of resource usage and performance.
    • Database architect - Selects and designs the platforms, and designs or approves schema. It’s the architect’s responsibility to understand how data is used, processed, and stored within the database. They typically select which database platform is appropriate, and make the judgment calls on whether to use partitioning, replication, and other advanced features to support database applications.
    • Database programmer - Responsible for coding the queries and use of the database infrastructure, including selection of data types, and assisting with table design.

    We talk a lot about database security on this blog, but we should probably spend more time talking about the people who affect database security. In my experience database programmers are the least knowledgeable about the database, but have the greatest impact on database security and performance. I have been seeing a disturbing trend of development teams, especially web application programmers, who perform every function in the application and regard the database as a bucket where they dump stuff to save application state. This is reflected in the common choice of smaller, lighter databases that provide less functionality, and in the use of abstraction techniques that clean up the object model but lose native functions that benefit performance, data integrity, and security. Worse, they really don’t care about the details of how it works, as long as their database connection driver is reasonably reliable and the queries are easy to write.

    Why this is important, especially as it pertains to database security, is that you need to view security from these three perspectives and leverage these other practitioners’ skills within the organization. And if you have the luxury of being able to employ all three disciplines, then by all means have them cooperate in the development, deployment, and maintenance of database security. Your architect is going to know where the critical data is and how it moves through the system. Your DBA is going to understand how the databases are configured and which operations would be best moved into the database. If you are not already doing it, I highly recommend that you have your DBAs and architects do a sanity check on developer schema designs, review any application code that uses the database, and support the development team in access control planning and data processing. It’s hard to willingly submit code for review, but it is better to fix it prior to deployment than after.

    –Adrian Lane

    Thursday, June 04, 2009

    Introduction To Database Encryption - The Reboot!

    By Adrian Lane

    Updated June 4th to reflect terminology change.

    This is the Re-Introduction to our Database Encryption series. Why are we re-introducing this series? I’m glad you asked. The more we worked on the separation of duties and key management sections, the more dissatisfied we became. Rich and I got some really good feedback from vendors and end users, and we felt we were missing the mark with this series. And not just because the stuff I drafted when I was sick completely lacked clarity of thought, but there are three specific reasons we were unhappy. The advice we were giving was not particularly pragmatic, the terminology we thought worked didn’t, and we were doing a poor job of aligning end-user goals with available options. So yeah, this is an apology to our audience as the series was not up to our expectations and we failed to achieve some of our own Totally Transparent Research concepts. But we’re ‘fessing up to the problem and starting from scratch.

    So we want to fix these things in two ways. First, we want to change some of the terminology we have been using to describe database encryption. Using ‘media encryption’ and ‘separation of duties’ is confusing the issues, and we want to differentiate between the threat we are trying to protect against and what is being encrypted. And as we are talking to IT, developers, DBAs, and other audiences, we want to reduce confusion as much as possible. Second, we will create a simple guide for people to select a database encryption strategy that addresses their goals. Basically, we are going to outline a decision tree of user requirements and map those to the available database encryption choices. Rich and I think that will help end users both clarify their goals and determine the correct implementation strategy.

    In our original introduction we provided a clear idea of where we wanted to go with this series, but we did adopt our own terminology in order to better encapsulate the database encryption options vendors provide. We chose “Encryption for Separation of Duties” and “Encryption for Media Protection”. This was a bit of an oversimplification, and it mapped to the threat rather than to the feature. Plus, if you asked your RDBMS vendor for ‘media encryption’, they would not know what the heck you were talking about. We are going to change the terminology back to the following:

    1. Database Transparent/External Encryption: Encryption of the entire database. This is provided by native encryption functions within the database. The goal is to prevent exposure of information due to loss of the physical media. This can also be done through drive or OS/file system encryption, although they lack some of the protections of native database encryption. The encryption is invisible to the application and does not require alterations to the code or schema.

    2. Data User Encryption: Encrypting specific columns, tables, or even data elements in the database. The classic example is credit card numbers. The goal is to provide protection against inadvertent disclosure, or to enforce separation of duties. How this is accomplished will depend on how key management and (internal or external) encryption services are used; it will affect the way the application uses the database, but provides more granular access control.

    While we’re confident we’ve described the two options accurately, we’re not convinced the specific terms “database encryption” and “data encryption” are necessarily the best, so please suggest any better options.
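
    To make the distinction concrete, here is a minimal sketch, under stated assumptions, of the second option: the application encrypts a sensitive column before it ever reaches the database, with the key held outside that database. The fetch_key_from_kms helper and the table and column names are hypothetical, and the cryptography library’s Fernet construct is simply a convenient stand-in for an encryption service. The first option needs no such code at all, which is exactly the point: it is invisible to the application.

        # A minimal sketch of option 2, data/user encryption at the application
        # layer. The fetch_key_from_kms() helper and the column being protected
        # are hypothetical; option 1 (transparent/external encryption) requires
        # none of this code because the database or file system handles it.
        from cryptography.fernet import Fernet  # pip install cryptography

        def fetch_key_from_kms():
            # Stand-in for a call to an external key manager. The whole point
            # is that this key never lives inside the database it protects.
            return Fernet.generate_key()

        fernet = Fernet(fetch_key_from_kms())

        card_number = "4111111111111111"
        ciphertext = fernet.encrypt(card_number.encode())

        # The database only ever sees the ciphertext, e.g.:
        #   INSERT INTO payments (customer_id, card_number_enc) VALUES (:1, :2)
        print("stored value:   ", ciphertext.decode())

        # Only code holding the key can recover the value, which is what
        # enforces separation of duties against a DBA browsing the table.
        print("recovered value:", fernet.decrypt(ciphertext).decode())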

    Blanket encryption of all database content for media protection is much easier than encrypting specific columns & tables for separation of duties, but it doesn’t offer the same security benefits. Knowing which to choose will depend upon three things:

    • What do you want to protect?
    • What do you want to protect it from?
    • What application changes and management tasks will you tolerate?

    Thus, the first thing we need to decide when looking at database encryption is what we are trying to protect and why. If we’re just going after the ‘PCI checkbox’, or are worried about losing data by swapping out hard drives, someone stealing the files off the server, or misplacing backup tapes, then database encryption (for media protection) is our answer. If the goal is to protect data in the event of compromised accounts, rogue DBAs, or inadvertent disclosure, then things get a lot more complicated. We will go into the details of ‘why’ and ‘how’ in a future post, as well as the issues of application alterations, after we have introduced the decision tree overview. If you have any comments, good, bad, or indifferent, please share. As always, we want the discussion to be as open as possible.
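
    As a rough sketch of that first fork only, not the full decision tree promised above, the mapping looks something like the following. The threat labels and the function name are placeholders for your own threat model, mirroring the two options defined earlier in this post.

        # A rough sketch of the first fork in the decision, using the two
        # strategies defined earlier. The threat labels are placeholders.
        MEDIA_THREATS = {"lost backup tape", "stolen drive", "stolen database files"}
        INSIDER_THREATS = {"compromised account", "rogue DBA", "inadvertent disclosure"}

        def pick_encryption_strategy(threats, tolerate_app_changes):
            if threats & INSIDER_THREATS:
                if not tolerate_app_changes:
                    return "Data/user encryption is indicated, but expect application changes"
                return "Data/user encryption: specific columns, keys managed outside the database"
            if threats & MEDIA_THREATS:
                return "Transparent/external encryption: whole database, invisible to the application"
            return "Revisit requirements; encryption may not be the control you need"

        print(pick_encryption_strategy({"lost backup tape"}, tolerate_app_changes=False))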

    –Adrian Lane

    Tuesday, April 14, 2009

    Security Inevitabilities

    By Rich

    Despite my intensive research into cryonics, I have to accept that someday I will die. Permanently. I don’t know when, where, or how, but someday I will cease to exist. Heck, even if I do manage to freeze myself (did you know one of the biggest cryonics companies is only 20 minutes from my house?), get resurrected into a cloned 20-year-old version of myself, and eventually upload my consciousness into a supercomputer (so I can play Skynet, since I don’t really like most people), I have to accept that someday Mother Entropy will bitch slap me with the end of the universe.

    There are many inevitabilities in life, and it’s often far easier to recognize these end results than the exact path that leads us to them. Denial is often closely tied to the obscurity of these journeys; when you can’t see how to get from point A to point B (or from Alice to Bob, for you security geeks), it’s all too easy to pretend that Bob Can’t Ever Happen. Thus we find ourselves debating the minutiae, since the result is too far off to comprehend.

    (Note that I’d like credit for not going deep into an analogy about Bob and Alice inevitably making Charlie after a few too many mojitos).

    Security includes no shortage of inevitabilities. Below are just a few that have been circling my brain lately, in no particular order. It’s not a comprehensive list, just a few things that come to mind (and please add your own in the comments). I may not know when they’ll happen, or how, but they will happen:

    • Everyone will use some form of NAC on their networks.
    • Despite PCI, we will move off credit card numbers to a more secure transaction system. It may not be chip and PIN, but it definitely won’t be magnetic stripes.
    • Everyone will use some form of DLP, we’ll call it CMP, and it will only include tools with real content analysis.
    • Log management and SIEM will converge into single products. Completely.
    • UTM will rule the day on the perimeter, and we won’t buy separate boxes for every function anymore.
    • Virtualization and information-centric security will totally fuck up network security, especially internally.
    • Any critical SCADA network will be pulled off the Internet.
    • Database encryption will be performed inside the database with native functionality, with keys managed externally.
    • The WAF vs. secure development debate will end as everyone buys/implements both.
    • We’ll stop pretending web application and database security are different problems.
    • We will encrypt all laptops. It will be built into the hardware.
    • Signature AV will die. Mostly.
    • Chris Hoff will break the cloud.

    –Rich

    Saturday, February 07, 2009

    Database Security for DBAs

    By Rich

    I think I’ve discovered the perfect weight loss technique: a stomach virus. In 48 hours I managed to lose 2 lbs, which isn’t too shabby. Of course I’m already at something like 10% body fat, so I’m not sure how needed the loss was, but I figure if I just write a book about this and hawk it in some infomercial I can probably retire. My wife, who suffered through 3 months of so-called “morning” sickness, wasn’t all that sympathetic for some strange reason.

    On that note, it’s time to shift gears and talk about database security. Or, to be more accurate, talk about talking about database security.

    Tomorrow (Thursday Feb 5th) I will be giving a webcast on Database Security for Database Professionals. This is the companion piece to the webinar I recently presented on Database Security for Security Professionals. This time I flip the presentation around and focus on what the DBA needs to know, presenting from their point of view.

    It’s sponsored by Oracle, presented by NetworkWorld, and you can sign up here.

    I’ll be posting the slides after the webinar, but not for a couple of months as we reorganize the site a bit to better handle static content. Feel free to email me if you want a PDF copy.

    –Rich

    Tuesday, December 16, 2008

    Database Security Webcast Tomorrow

    By Rich

    Tomorrow I’ll be giving the first webcast in a three part series I’m presenting for Oracle. It’s actually a cool concept (the series) and I’m having a bit more fun than usual putting it together. The first session is Database Security for Security Professionals. If you are a security professional and want to learn more about databases, this is targeted right between your eyes. Rather than rehashing the same old issues, we’re going to start with an overview of some database principles and how they mess up our usual approaches to security. Then we’ll dig into those things that the security team can control and influence, and how to work with DBAs. Although we are focusing on Oracle, all the core principles will apply to any database management system.

    And I swear to keep the relational calculus to myself.

    The next webcast flips the story, and we’ll be talking about security principles for DBAs. Yes, you DBAs will finally learn why those security types are so neurotic and paranoid. The final webcast in the series will be a “build your own”. We’ll be soliciting questions and requests ahead of time, and then I’ll crawl into a cave and throw it all together into a complete presentation.

    The webcast tomorrow (December 17th) will be at 11 am PT and you can sign up here.

    –Rich