Wednesday, June 23, 2010

DB Quant: Protect Metrics, Part 4, Web Application Firewalls

By Adrian Lane

Web Application Firewall deployment metrics are next up in the Protect phase. This process is somewhat truncated compared to our other deployment processes, as the database administrator is tasked with only a subset of the overall effort that goes into WAF deployment. Regardless, there is a lot of work to be done for policy development and testing, and the process will be repeated many times over.

Our WAF deployment process is:

  1. Identify
  2. Profile
  3. Test
  4. Review
  5. Document

Identify

Variable | Notes
Time to identify which databases are part of web applications

Profile

Variable | Notes
Time to gather application query and parameter profiles | i.e., what does the web application send to the database? Provide to the WAF team to generate rules/policies.
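To make the profiling step concrete, here is a minimal sketch (in Python, with hypothetical queries and names) of how observed database traffic could be collapsed into a profile of query structures for the WAF team. Real profiling tools do this far more thoroughly; this only illustrates the idea of normalizing literals so structurally identical queries group together.

```python
import re
from collections import Counter

def fingerprint(query: str) -> str:
    """Collapse a SQL statement into a structural fingerprint by replacing
    literals with placeholders, so repeated application queries group together."""
    q = re.sub(r"'[^']*'", "?", query)   # string literals -> ?
    q = re.sub(r"\b\d+\b", "?", q)       # numeric literals -> ?
    return re.sub(r"\s+", " ", q).strip().upper()

# Hypothetical sample of queries observed going from the web app to the database
observed = [
    "SELECT * FROM orders WHERE id = 42",
    "SELECT * FROM orders WHERE id = 99",
    "SELECT name FROM users WHERE email = 'a@b.com'",
]

profile = Counter(fingerprint(q) for q in observed)
for fp, count in profile.items():
    print(f"{count:4d}  {fp}")
```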

Test

Variable | Notes
Time to analyze pre-deployment test results
Optional: Time to retest

Review

Variable | Notes
Variable: Periodically review logs for failures | e.g., check to see if rules broke any functionality
Variable: Repeat Investigate task to adjust rules | i.e., failed tests or policies need to be fixed
Time to investigate and remediate incidents | WAF administrators should be managing most incidents, but DBAs are often involved for investigation and remediation of problems beyond rule failures

Document

Variable | Notes
Time to document WAF rules

Other Posts in Project Quant for Database Security

  1. An Open Metrics Model for Database Security: Project Quant for Databases
  2. Database Security: Process Framework
  3. Database Security: Planning
  4. Database Security: Planning, Part 2
  5. Database Security: Discover and Assess Databases, Apps, Data
  6. Database Security: Patch
  7. Database Security: Configure
  8. Database Security: Restrict Access
  9. Database Security: Shield
  10. Database Security: Database Activity Monitoring
  11. Database Security: Audit
  12. Database Security: Database Activity Blocking
  13. Database Security: Encryption
  14. Database Security: Data Masking
  15. Database Security: Web App Firewalls
  16. Database Security: Configuration Management
  17. Database Security: Patch Management
  18. Database Security: Change Management
  19. DB Quant: Planning Metrics, Part 1
  20. DB Quant: Planning Metrics, Part 2
  21. DB Quant: Planning Metrics, Part 3
  22. DB Quant: Planning Metrics, Part 4
  23. DB Quant: Discovery Metrics, Part 1, Enumerate Databases
  24. DB Quant: Discovery Metrics, Part 2, Identify Apps
  25. DB Quant: Discovery Metrics, Part 3, Config and Vulnerability Assessment
  26. DB Quant: Discovery Metrics, Part 4, Access and Authorization
  27. DB Quant: Secure Metrics, Part 1, Patch
  28. DB Quant: Secure Metrics, Part 2, Configure
  29. DB Quant: Secure Metrics, Part 3, Restrict Access
  30. DB Quant: Monitoring Metrics, Part 1, Database Activity Monitoring
  31. DB Quant: Monitoring Metrics, Part 2, Audit
  32. DB Quant: Protect Metrics, Part 1, DAM Blocking
  33. DB Quant: Protect Metrics, Part 2, Encryption
  34. DB Quant: Protect Metrics, Part 3, Masking

—Adrian Lane

Incite 6/23/2010: Competitive Fire

By Mike Rothman

I’ve always been pretty competitive. For instance, back in high school my friends and I would make boasts about how we’d have more of this or that, and steal the other’s wife, etc. Yes, it was silly high school ego run rampant, but I thought life was a zero sum game back then. Win/win was not in my vocabulary. I win, you lose, that’s it.

I carried that competitive spirit into the first 15 years or so of my working career. At META, it was about my service selling more than yours. About me being able to stake out overlapping coverage areas and winning the research battle. In the start-up world, it was about raising the money and beating the other companies with similar stories & models.

Then in a variety of vendor gigs, each in very competitive market spaces, it was about competing and winning and having a better story and giving the sales team better tools to win more deals. Nothing was ever good enough – not at work, not at home, and not in my own head.

Yeah, I was frackin’ miserable. And made most of the people around me miserable as well.

When I was told my services were no longer needed at CipherTrust, I saw it as an opportunity to go in a different direction. To focus on helping folks do better, as opposed to winning whatever ‘needed’ to be won. It wasn’t exactly a conscious decision, but I knew I needed a change in focus and attitude. For the most part, it worked. I was much happier, I was doing better, and I was less grumpy.

Then I stepped back into corporate life, but to be honest, my heart wasn’t in it. I didn’t care if we lost a specific deal because we should be able to get into a lot of deals and statistically we’d be OK. Of course, I had to mask that indifference, but ultimately for a lot of reasons it didn’t make sense for me to continue in that role. So I left and got back to where I could help folks, and not worry about winning.

But you can’t entirely escape competition. Now I play softball on Sundays with a bunch of old guys like me. But some of them still have that competitive fire burning and to be honest it gets annoying. When someone boots a ground ball or lines out with runners on, these guys get all pissed off. We lost a one-run game last Sunday, after coming back from 3 runs down in the last inning. I was happy with that effort – we didn’t give up. Others were pissed.

Personally, I play softball because it’s fun. I get outside, I run around, I get my couple of at-bats and make a few plays in the field. But when guys get all uppity about not winning or someone making a mistake, it’s demotivating to me. I’ve got to find a way to tune out the negativity and still have fun playing. Or I’ll need to stop, which is the wrong answer. But I am working too hard to be positive (which is not my default mode) to hang around with negatives.

Yes, I like to win. But I don’t need to win anymore. And I’m a lot happier now because of it. But that’s just me.

– Mike.

Photo credits: “win win” uploaded to Flickr by TheTruthAbout…


Incite 4 U

  1. Different NAC strokes for different folks – A few weeks ago, Joel Snyder talked about what went wrong with NAC. It was a good analysis of the market issues. Joel’s conclusion is that there isn’t really a standard set of NAC features, but rather a number of different breeds. Which basically means there is no market – not a consistent one, anyway. No wonder the category has struggled – nobody can agree on what problem the technology is supposed to solve. Joel also points out some of the political issues of deploying a solution that spans network, endpoint, and security teams. This week, NetworkWorld published Joel’s review. He does like some of the products (those based on 802.1X, like Avenda, Enterasys, and Juniper), and has issues with some of the others (ForeScout and Trustwave). But ultimately the review highlights the reality of the market, which is that there isn’t one. – MR

  2. DRM dreams – Designing DRM systems in 1996, I had big hopes that digital lockers would be a popular choice to secure content for people to share on the Internet. I thought everyone from banking systems to media distribution could benefit. By 1998 that dream faded, as nobody was really interested in secure content storage or delivery. But it turns out someone has the same dreams I did: hackers embrace DRM as a way to hide pirated content, as reported on Yahoo! News. Basically pirated video is wrapped up in a protective blanket of encryption, which can then be moved and stored freely, without detection by content analysis tools. Porn, pirated movies, and whatever else can be distributed without fear of being inspected and discovered. And this model works really freakin’ well when the buyer and seller want to keep their activity a secret. Hollywood may have complained bitterly about pirated DVDs, but this particular delivery model will be near impossible to stop. No, Cyber-nanny will not cut it. There are only a handful of ways to catch and prosecute this type of crime. Law enforcement will have to figure out how to police the exchange of decryption keys for money. – AL

  3. Disclosure is religion – I’ve been known to write and talk about the disclosure debate, but I’m starting to wonder if it’s worth the effort. Disclosure has clearly become religion, with everyone believing what they want, nothing more than anecdotal evidence to support anyone’s position, and enough logical fallacies on all sides to fill all the empty heads at a Crossing Over with John Edward show. Tyler Reguly wades in with an informed and reasonable post on the relationship between Full Disclosure and Responsible Disclosure that’s worth a read, but I don’t expect it to change any minds. I worry that even if we ever do get the kinds of studies and data we need to make informed disclosure decisions, they will be ignored faster than evolution in a Texas school book (how’s that for troll bait?!) – RM

  4. Cyber-insurance a messy business – When there are no precedents, things inevitably get messy. As Ed points out on the SecurityCurve blog, an insurer called Colorado Casualty is basically making a pre-emptive strike against the University of Utah to protect against any potential claims from a set of lost tapes (that triggered a $3.3MM disclosure). Is Colorado Casualty wrong? Without precedent, there is no way to know. It seems like your typical insurance company crap of not wanting to pay even when they should, but who knows? And it will take a few years and lots of legal fees to figure out what is right and wrong. Until then, understand that cyber-insurance may not insure you from much of anything. – MR

  5. iPhone encryption trick – If you have an iPhone 3GS or later there is hardware encryption on the device to protect your data. But Apple screwed the pooch on the implementation, which basically made the encryption worthless. But the good news is they seem to have fixed this in the just-released iOS 4 software, although you need to take a couple extra steps to make it work. The new version uses your passcode to protect the encryption keys, assuming they got it right this time. If you buy a new iPhone 4 it is enabled by default if you use a passcode, but as described in this support note, you need to take a couple extra steps to enable the improved encryption if you upgraded your device. I hope this works… – RM

  6. Einstein not so smart? – I guess Stiennon isn’t happy with his infamy in declaring IDS dead 10 or so years ago. So now he’s getting on his soapbox and saying DHS’s Einstein project (basically a mondo-IDS) is all wrong. Of course, he doesn’t offer any solutions in the piece or directions on how to make it better. The issues he points out (information overload, lack of staffing) are real. But you need to monitor to know what is going on. Period. The real gap thus far is how to deal with the amount of data – and more importantly how to fix the issues you find. Calling Einstein stupid doesn’t solve the problem. Richard is a smart guy and it would be great to see less rhetoric and more constructive ideas. Though I’m sure all is divulged in Richard’s book. ;-) – MR

  7. Cloudy value, crystal clear motivation – Mike Vizard has been running a series of posts on application testing, with liberal quotes from Aparna Sharma of Infosys Technologies. The entire mindset – premises and approach – is totally backwards. First, having application developers test their own code is not “a case of the fox guarding the hen house”. Developers are not the ones hacking their own code, so this is a B.S. argument. Second, it’s not a huge burden to run automated tests. Many developers link test cases into nightly builds for component and module sanity tests for both security and quality. Test cases requiring complete builds are usually run by QA. The pain in the ass is figuring out if the results are useful or just more false positive garbage. Third, automated application testing has its place, but it’s not a substitute for manual testing. In fact you do both, leveraging the strengths of each where needed – both for coverage and to reduce costs. Finally, the bias towards manual testing is because it is effective: the preference is to automate when possible. Reviewing code is hard work, and very few people are qualified to do it or like to do it. Remember that development teams create libraries of code, trusted both to function and to be secure, to minimize the need to do automated or manual tests. It’s not like you need the limitless resources of the cloud to perform automated testing – ‘the cloud’ is just a convenient delivery model. – AL

  8. The benefit of copying – So how do you get security kung fu and/or improve your skills? Take some advice from the folks at 37Signals and copy someone you respect. 37S is talking about design, but the same method can apply to security. With the advent of lots of video content available nowadays, you can see someone else do something cool, mostly for free. I guess you could go to a hands-on education class, but I’ve found seeing someone else do something and then screwing it up myself is the best way for me to learn. Check out Mubix’s Practical Exploitation and The Academy Pro (mostly vendor stuff); and we know of a few other folks planning detailed video courses, so we expect the amount of content available to mushroom over the next 18 months. – MR

—Mike Rothman

Tuesday, June 22, 2010

DB Quant: Protect Metrics, Part 3, Masking

By Adrian Lane

Masking is the next phase of protective controls in our series. As a reminder, most firms decide to use either ETL or dynamic masking – we cover both in the metrics and process below, but they require slightly different product evaluation, details of which are beyond our scope here. As with encryption, masking is an add-on tool for databases and applications, so I have removed the ‘optional’ tag from the cost metrics.

It’s worth noting that during the setup phase, masking tools offer numerous options for retaining original data type, format, data ranges, etc. Depending on the tool, you can configure the mask to maintain referential integrity, or emit numeric values that mimic original sums and averages. Plan on spending time refining the masks during setup.
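As a concrete illustration of those preservation options, here is a minimal sketch (all names and values hypothetical) of a mask that keeps the last four digits and the original formatting of a card number, and derives replacement digits deterministically so the same input always masks the same way, which helps preserve referential integrity across tables. Commercial masking tools offer far richer options than this.

```python
import hashlib

def mask_card(pan: str, salt: bytes = b"per-project-secret") -> str:
    """Mask a card number, preserving format and the last 4 digits.
    Deterministic output (same input, same mask) helps keep joins working."""
    digits = [c for c in pan if c.isdigit()]
    # Derive replacement digits from a keyed hash of the original value
    h = hashlib.sha256(salt + "".join(digits).encode()).hexdigest()
    fake = [str(int(c, 16) % 10) for c in h]
    masked = fake[: len(digits) - 4] + digits[-4:]
    out, i = [], 0
    for c in pan:                      # re-apply the original formatting
        out.append(masked[i] if c.isdigit() else c)
        i += c.isdigit()
    return "".join(out)

print(mask_card("4111-1111-1111-1234"))  # same shape, last 4 digits preserved
```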

Our masking process is:

  1. Plan
  2. Acquire
  3. Setup
  4. Deploy & Test
  5. Document

Plan

Variable | Notes
Time to confirm data security and compliance requirements
Time to identify preservation requirements | e.g., last 4 digits of credit card, format, data type
Time to specify masking model | e.g., ETL, in place, etc.
Time to generate baseline | e.g., gather sample data and formats for testing

Acquire

Variable | Notes
Time to evaluate masking products
Cost to acquire masking products/packages
Time to acquire access and authorization to data systems | Credentials to implement mask on sensitive data

Setup

Variable | Notes
Time to install masking tools
Time to select masks and masking options | e.g., to preserve value, consistency, and referential integrity
Time to configure | Implement masks

Deploy & Test

Variable | Notes
Time to perform transformations | e.g., create extraction or view
Time to verify masks | i.e., data is masked, format is preserved, and applications still work
Time to collect sign-offs and approval

Document

Variable | Notes
Time to document masks and configuration settings

Keep in mind that masking is a cycle, not a one-time operation. You’ll need to adjust the metrics for ongoing masking projects, but these will leverage initial investments so you won’t be repeating all costs.


Other Posts in Project Quant for Database Security

  1. An Open Metrics Model for Database Security: Project Quant for Databases
  2. Database Security: Process Framework
  3. Database Security: Planning
  4. Database Security: Planning, Part 2
  5. Database Security: Discover and Assess Databases, Apps, Data
  6. Database Security: Patch
  7. Database Security: Configure
  8. Database Security: Restrict Access
  9. Database Security: Shield
  10. Database Security: Database Activity Monitoring
  11. Database Security: Audit
  12. Database Security: Database Activity Blocking
  13. Database Security: Encryption
  14. Database Security: Data Masking
  15. Database Security: Web App Firewalls
  16. Database Security: Configuration Management
  17. Database Security: Patch Management
  18. Database Security: Change Management
  19. DB Quant: Planning Metrics, Part 1
  20. DB Quant: Planning Metrics, Part 2
  21. DB Quant: Planning Metrics, Part 3
  22. DB Quant: Planning Metrics, Part 4
  23. DB Quant: Discovery Metrics, Part 1, Enumerate Databases
  24. DB Quant: Discovery Metrics, Part 2, Identify Apps
  25. DB Quant: Discovery Metrics, Part 3, Config and Vulnerability Assessment
  26. DB Quant: Discovery Metrics, Part 4, Access and Authorization
  27. DB Quant: Secure Metrics, Part 1, Patch
  28. DB Quant: Secure Metrics, Part 2, Configure
  29. DB Quant: Secure Metrics, Part 3, Restrict Access
  30. DB Quant: Monitoring Metrics, Part 1, Database Activity Monitoring
  31. DB Quant: Monitoring Metrics, Part 2, Audit
  32. DB Quant: Protect Metrics, Part 1, DAM Blocking
  33. DB Quant: Protect Metrics, Part 2, Encryption

—Adrian Lane

Understanding and Selecting SIEM/LM: Deployment Models

By Adrian Lane

We have covered the major features and capabilities of SIEM and Log Management tools, so now let’s discuss architecture and deployment models. Each architecture addresses a specific issue, such as coverage for remote devices, scaling across hundreds of thousands of devices, real-time analysis, or handling millions of events per second. Each has advantages and disadvantages in analysis performance, reporting performance, scalability, storage, and cost.

There are four models to discuss: ‘flat’ central collection, hierarchical, ring, and mesh. As a caveat, none of these deployment models is mutually exclusive. Some regions may deploy a flat model, but send information up to a central location via a hierarchy. These are not absolutes, just guidelines to consider as you design your deployment to solve the specific use cases driving your project.

Flat

The original deployment model for SIM and log management platforms was a single server that collected and consolidated log files. In this model all log storage, normalization, and correlation occurs within a central appliance. All data collection methods (agent, flow, syslog, etc.) are available, but data is always stored in the same central location.

A flat model is far simpler to deploy. All data and policies reside in a single location, so there are no policy or data synchronization issues. But of course ultimately a flat central collection model is limited in scalability, processing, and the quantity of data it can manage. A single installation provides a fixed amount of processing and storage, and reporting becomes progressively harder and slower as data sets grow. Truth be told, we only see this kind of architecture for “checkbox compliance”, predominantly for smaller companies with modest data collection needs.
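For a sense of how simple the flat model really is, here is a minimal sketch of central collection: a single listener receiving syslog over UDP and appending everything to one store. The port and file path are illustrative; a real product adds normalization, indexing, and correlation on top of this.

```python
import socketserver

class SyslogHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # For UDP servers, self.request is the (data, socket) pair
        msg = self.request[0].strip().decode(errors="replace")
        with open("central.log", "a") as store:   # the single central store
            store.write(msg + "\n")

if __name__ == "__main__":
    with socketserver.UDPServer(("0.0.0.0", 5514), SyslogHandler) as server:
        server.serve_forever()
```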

The remaining models address the limitations of this base architecture.

Hierarchical

The hierarchical model consists of a central SIEM server, similar to the flat model above. Rather than communicating directly with endpoints where data is collected, the central SIEM server acts as a parent, and communicates with intermediary collection appliances (children). Each child collects data from some of the devices, typically from a specific region or location. The regional child nodes collect and store data, then normalize events before passing them along to the central SIEM server for aggregation, correlation, and reporting. Raw event data remains on the local child for forensic purposes.

The hierarchical model was introduced to help scale across larger organizations, where it wasn’t practical to send the raw data streams across the network, and some level of storage tiering was required for scaling. The hierarchical model helps divide and conquer data management challenges by distributing load among a larger number of engines, and reduces network overhead by passing only a subset of the captured data to the parent for correlation and analysis. Data storage, backup, and processing are much easier on smaller data sets. Further, construction of reports can be distributed across multiple nodes – important for very large data sets.

There are many variations on this model, but the primary point is that the parent and child nodes each take on different responsibilities. Depending upon the vendor, alerting, filtering, normalization, reporting, and anything else having to do with policy enforcement can be part of the parent or the child, but not both. The good news is you can scale up by adding new child nodes. The downside is that every function handled by the child nodes requires synchronization with the server. For example, alerting is faster from the child node, but requires distribution of the code and policies. Further, alerting from the child node(s) lacks correlation of events to refine the accuracy of alerts. Despite the trade-offs, this hierarchical model is very flexible.
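A minimal sketch of the child node’s role, with the parent address, event format, and normalization schema all assumed for illustration: raw events stay local for forensics, while only normalized events move upstream.

```python
import json, socket, time

PARENT = ("127.0.0.1", 6514)   # stand-in for the parent SIEM address

def normalize(raw: str) -> dict:
    """Map a raw log line into the common schema the parent expects."""
    return {"ts": time.time(), "src": "child-us-east", "msg": raw.strip()}

def handle_event(raw: str, sock: socket.socket) -> None:
    with open("raw_events.log", "a") as f:   # raw data stays on the child
        f.write(raw)
    sock.sendto(json.dumps(normalize(raw)).encode(), PARENT)  # subset upstream

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
handle_event("Jun 23 10:01:02 fw1 DROP tcp 10.0.0.5 -> 8.8.8.8\n", sock)
```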

Ring

In the Ring model – or what Mike likes to call the Moat – you have a central SIEM server ringed by many log collection devices. Each logger in the ring is responsible for collecting data from event sources. These log archives are also used to support distributed reporting. The log devices send a normalized and filtered (so substantially reduced) stream of events to the master SIEM device. The SIEM server sitting in the middle is responsible for correlation of events and analysis. This architecture was largely designed to address scalability limitations with some SIEM offerings. It wasn’t cost effective to scale the SIEM engine to handle mushrooming event traffic, so surrounding the SIEM centerpiece with logging devices allowed it to analyze the most critical events while providing a more cost-effective scaling mechanism.

The upside of this model is that simple (cheaper) high-performance loggers do the bulk of the heavy lifting, and the expensive SIEM components provide the meat of the analysis. This model addresses scalability and data management issues, while reducing the need to distribute code and policies among many different devices.

There are a couple issues with the ring model. The biggest problem remains a lack of integration between the two systems. Management tools for the data loggers and the SIEM may be linked together with some type of dashboard, but you quickly discover the two-headed monster of two totally separate products under the covers. Similarly, log management vendors were trying to graft better analysis and correlation onto their existing products, resulting in a series of acquisitions that provided log management players with SIEM. Either way, you end up with two separate products trying to solve a single problem. This is not a happy “you got your chocolate in my peanut butter” moment, and it will continue to be a thorny issue for customers until vendors fully integrate their SIEM and log management offerings, as opposed to marketing band-aid dashboards as integrated products.

Mesh

The last model we want to discuss is the mesh deployment. The mesh is a group of interrelated systems, each performing full log management and SIEM functions for a small part of the environment. Basically this is a cluster of SIEM/LM appliances; each a functional peer with full analysis, correlation, filtering, storage, and reporting for local events. The servers can all be linked together to form a mesh, depending on customer needs.

While this model is more complex to deploy and administer, and requires a purpose-built data store to manage high-speed storage and analysis, it does solve several problems. For organizations that require segregation of both data and duties, the mesh model is unmatched. It provides the ability to aggregate and correlate specific segments or applications on specific subsets of servers, making analysis and reporting flexible. Unlike the other models, it can divide and conquer processing and storage requirements flexibly depending on the requirements of the business, rather than the scalability limitations of the product being deployed.

Each vendor’s product is capable of implementing two or more of these models, but typically not all of them. Each product’s technical design (particularly the datastore) dictates which deployment models are possible. Additionally, the level of integration between the SIEM and Log Management pieces has an effect as well. As we said in our introduction, every SIEM vendor offers some degree of log management capability, and most Log Management vendors offer SIEM functions. This does not mean the offerings are fully integrated, by any stretch. Deployment and management costs are clearly affected by product integration or lack thereof, so make sure to do your due diligence in the purchase process to understand the underlying product architecture, and the limitations and compromises necessary to make the product work in your environment.

—Adrian Lane

Trustwave, Acquisitions, PCI, and Navigating Conflicts of Interest

By Rich

This morning Trustwave announced their acquisition of Breach Security, the web application firewall vendor.

Trustwave’s been on an acquisition streak for a while now, picking up companies such as Mirage (NAC), Vericept (DLP), BitArmor (encryption), and Intellitactics (log management/SIEM). Notice any trends? All these products have a strong PCI angle, none of the companies were seeing strong sales (Trustwave doesn’t do acquisitions for large multiples of sales), and all were more mid-market focused.

Adding a WAF to the mix makes perfect sense, especially since Trustwave also has web application testing (both controls meet PCI requirement 6.6). Trustwave is clearly looking to become a one-stop shop for PCI compliance. Especially since they hold the largest share of the PCI assessment market.

To be honest, there are concerns about Trustwave and other PCI assessment firms offering both the assessment and remediation services. You know, the old fox guarding the henhouse thing. There’s a reason regulations prohibit financial auditors from offering other services to their clients – the conflicts of interest are extremely difficult to eliminate or even keep under control. When the person making sure you are compliant also sells you tools to help become compliant, we should always be skeptical.

We all know how this goes down. Sales folks will do whatever it takes to hit their numbers (you know, they have BMW payments to make), and few of them have any qualms about telling a client they will be compliant if they buy both their assessment services and a nice package of security tools and implementation services. They’ll use words like “partners” and “holistic” to seem all warm and fuzzy.

We can’t really blame Trustwave and other firms for jumping all over this opportunity. The PCI Council shows no interest in controlling conflicts of interest, and when a breach does happen the investigation in the kangaroo court will show the company wasn’t compliant anyway.

But there is also an upside. We also know that every single client of every single PCI assessment, consulting, or product firm merely wants them to make PCI “go away”, especially in the mid-market. Having firms with a complete package of services is compelling, and companies with big security product portfolios like Symantec, McAfee, and IBM aren’t well positioned to provide a full PCI-related portfolio, even though they have many of the pieces.

If Trustwave can pull all these acquisitions together, make them good enough, and hit the right price point, the odds are they will make a killing in the market. They face three major challenges in this process:

  1. Failing to properly manage the conflicts of interest could become a liability. Unhappy customers could lead to either bad press and word of mouth, or even changes in PCI code to remove the conflicts, which they want to avoid at all costs. The actual assessors and consultants are reasonably well walled off, but they will need to aggressively manage their own sales forces to avoid problems. Ideally account execs will only sell one side of the product line, which could help manage the potential issues.
  2. Customers won’t understand that PCI compliance isn’t the same as general security. Trustwave may get the blame for non-PCI security breaches (never mind the real cardholder data breaches), especially given the PCI Council’s history of playing Tuesday morning QB and saying no breached organization could possibly be compliant (even if they passed their assessment).
  3. Packaging all this together at the right price point for the mid-market won’t be easy. Products need real integration, including leveraging a central management console and reporting engine. This is where the real leverage is – not merely services-based integration, which is not good enough for the mid-market.

So the Breach acquisition is a smart move for Trustwave, and might be good for the market. But as an assessor, Trustwave needs to carefully manage their acquisition strategy in ways mere product mashup shops don’t need to worry about.

—Rich

DB Quant: Protect Metrics, Part 2, Encryption

By Adrian Lane

Continuing the Protect phase, we now dig into one of the most important sections: encryption. There are several forms of database encryption, but the proposed process and associated metrics should encompass both transparent encryption and internal API calls to encrypt columns and tables. Keep in mind that for this project we are only accounting for work associated with the database, and this does not include time spent altering queries and applications to accommodate non-transparent encryption options.

In our previous post introducing the process, we stated that the cost of acquiring encryption products was optional. Actually, that’s not true. You really don’t want to write your own, so you either need to purchase this capability or have already purchased an encryption tool set. The same is true for external key management. If the products have already been purchased, you will need to decide whether to factor all or part of the cost into this estimate.
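To make the distinction concrete: with the non-transparent option, the application encrypts values through an API before they ever hit the column. A minimal sketch, assuming the third-party Python ‘cryptography’ package is available, and with the in-place key generation standing in for a fetch from external key management:

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # stand-in for a fetch from a key manager
f = Fernet(key)

ssn_plain = b"123-45-6789"
ssn_stored = f.encrypt(ssn_plain)        # ciphertext written to the column
print(f.decrypt(ssn_stored).decode())    # plaintext read back by the app
```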

Our encryption process is:

  1. Evaluate
  2. Acquire
  3. Test & Approve
  4. Deploy & Integrate
  5. Document

Evaluate

Variable | Notes
Time to confirm data security requirements
Time to identify encryption method and tools | e.g., native database, OS
Time to identify integration requirements | e.g., external key management, key rotation, disaster recovery considerations

Acquire

Variable | Notes
Time to evaluate encryption tools
Cost to acquire encryption products/packages
Optional: Cost to acquire key management
Variable: Cost of maintenance and support licenses | Native transparent encryption cost is likely to be in addition to the base database license

Test & Approve

Variable | Notes
Time to establish test environment
Time to install and configure encryption tool | Test environment configuration
Time to test | Verify data is encrypted, backup procedures still work, etc.
Time to establish disaster recovery process & procedures | i.e., keys and supporting services need to be accounted for
Time to collect sign-offs and approval | Verify efficacy of encryption, and that systems pass test cases
Time to create database archive | Archive & verify production backup
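One crude but effective check for the “verify data is encrypted” line above: scan the raw datafile for a known plaintext sample. If the sample is visible, the column or tablespace is not actually encrypted on disk. The file path and sample value here are placeholders.

```python
def plaintext_visible(datafile: str, sample: bytes) -> bool:
    with open(datafile, "rb") as f:
        return sample in f.read()

if plaintext_visible("/var/db/datafiles/users01.dbf", b"123-45-6789"):
    print("FAIL: sample value found in cleartext on disk")
else:
    print("PASS: sample not visible in the raw datafile")
```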

Deploy & Integrate

Variable | Notes
Time to install encryption into production environment
Time to install key management server (if used) and generate keys | Keys need to be generated regardless
Time to deploy, encrypt data, and set authorization rights
Time to integrate with applications, backup, and authentication systems

Document

Variable | Notes
Time to document configuration and key management settings

As we said in the introduction, we aren’t including the costs associated with application changes that may be required, depending on which encryption option you deploy. This is something you will definitely want to include in your own metrics, even though those costs are beyond the scope of this framework.


Other Posts in Project Quant for Database Security

  1. An Open Metrics Model for Database Security: Project Quant for Databases
  2. Database Security: Process Framework
  3. Database Security: Planning
  4. Database Security: Planning, Part 2
  5. Database Security: Discover and Assess Databases, Apps, Data
  6. Database Security: Patch
  7. Database Security: Configure
  8. Database Security: Restrict Access
  9. Database Security: Shield
  10. Database Security: Database Activity Monitoring
  11. Database Security: Audit
  12. Database Security: Database Activity Blocking
  13. Database Security: Encryption
  14. Database Security: Data Masking
  15. Database Security: Web App Firewalls
  16. Database Security: Configuration Management
  17. Database Security: Patch Management
  18. Database Security: Change Management
  19. DB Quant: Planning Metrics, Part 1
  20. DB Quant: Planning Metrics, Part 2
  21. DB Quant: Planning Metrics, Part 3
  22. DB Quant: Planning Metrics, Part 4
  23. DB Quant: Discovery Metrics, Part 1, Enumerate Databases
  24. DB Quant: Discovery Metrics, Part 2, Identify Apps
  25. DB Quant: Discovery Metrics, Part 3, Config and Vulnerability Assessment
  26. DB Quant: Discovery Metrics, Part 4, Access and Authorization
  27. DB Quant: Secure Metrics, Part 1, Patch
  28. DB Quant: Secure Metrics, Part 2, Configure
  29. DB Quant: Secure Metrics, Part 3, Restrict Access
  30. DB Quant: Monitoring Metrics, Part 1, Database Activity Monitoring
  31. DB Quant: Monitoring Metrics, Part 2, Audit
  32. DB Quant: Protect Metrics, Part 1, DAM Blocking

—Adrian Lane

Monday, June 21, 2010

FireStarter: Is Full Disk Encryption without Pre-Boot Secure?

By Rich

This FireStarter is more of a real conversation starter than a definitive statement designed to rile everyone up.

Over the past couple months I’ve talked with a few organizations – some of them quite large – deploying full disk encryption for laptops but skipping the pre-boot environment.

For those of you who don’t know, nearly every full drive encryption product works by first booting up a mini-operating system. The user logs into this mini-OS, which then decrypts and loads the main operating system. This ensures that nothing is decrypted without the user’s credentials.
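Conceptually, the mini-OS exists to turn the user’s credential into key material. A toy sketch of the idea (the XOR “wrap” stands in for real key wrapping such as AES-KW; everything here is illustrative): the passphrase derives a key-encryption key, and only that KEK unwraps the volume key. Skip pre-boot, and the KEK must come from material stored on the machine itself.

```python
import hashlib, hmac, os, secrets

def derive_kek(passphrase: str, salt: bytes) -> bytes:
    # Slow KDF so passphrase guessing is expensive
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)

salt = os.urandom(16)
volume_key = secrets.token_bytes(32)   # the key that actually encrypts the disk

kek = derive_kek("correct horse battery", salt)
wrapped = bytes(a ^ b for a, b in zip(volume_key, kek))   # toy wrap, not AES-KW

# At boot, only the right passphrase re-derives the KEK and recovers the key
recovered = bytes(a ^ b for a, b in zip(wrapped, derive_kek("correct horse battery", salt)))
assert hmac.compare_digest(recovered, volume_key)
```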

It can be a bit of a problem for installing software updates, because if the user isn’t logged in you can’t get to the operating system, and if you kick off a reboot after installing a patch it will stall at pre-boot. But every major product has ways to manage this. Typically they allow you to set a “log in once” flag to the pre-boot environment for software updates, but there are a couple others ways to deal with it. I consider this problem essentially solved, based on the user discussions I’ve had.

Another downside is that users need to log into pre-boot before the operating system. Some organizations deploy their FDE to require two logins, but many more synchronize the user’s Windows credentials to the pre-boot, then automatically log into Windows (or whatever OS is being protected). Both seem fine to me, and one of the differentiators between various encryption products is how well they handle user support, password changes, and other authentication issues in pre-boot.

But I’m now hearing of people deploying a FDE product without using pre-boot. Essentially (I think) they reverse the process I just described and automatically log into the pre-boot environment, then have the user log into Windows. I’m not talking about the tricky stuff a near-full-disk-encryption product like Credent uses, but skipping pre-boot altogether.

This seems fracking insane to me. You somewhat reduce the risk of a forensic evaluation of the drive, but lose most of the benefits of FDE.

In every case, the reason given is, “We don’t want to confuse our users.”

Am I missing something here? In my analysis this obviates most of the benefits of FDE, making it a big waste of cash.

Then again, let’s think about compliance. Most regulations say, “Thou shalt encrypt laptop drives.” Thus, this seems to tick the compliance checkbox, even if it’s a bad idea from a security perspective.

Also, realistically, the vast majority of lost drives don’t result in the compromise of data. I’m unaware of any non-targeted breach where a lost drive resulted in losses beyond the cost of dealing with breach reporting. I’m sure there have been some, but none that crossed my desk.

—Rich

Return of the Security Start-up?

By Mike Rothman

As Rich described on Friday, he, Adrian, and I were sequestered at the end of last week working on our evil plans for world domination. But we did take some time for meetings, and we met up with a small company, the proverbial “last company standing” in a relatively mature market. All their competitors have been acquired and every deal they see involves competing with a multi-billion dollar public company.

After a few beers, we reminisced about the good old days when it was cool to deal with start-ups. Where the big companies were at a disadvantage, since it was lame to buy from huge monoliths. I probably had dark hair back then, but after the Internet bubble burst and we went through a couple recessions, most end user organizations opt for big and stable vendors – not small and exciting.

This trend was compounded by the increasing value of suites in maturing markets, and most of security has been maturing rapidly. There is no award for doing system integration on the endpoint or the perimeter anymore. It’s just easier to buy integrated solutions which satisfy requirements from a single vendor. Add in the constant consolidation of innovative companies by the security and big IT aggregators, and there has been a real shift away from start-ups.

But there is a downside of this big company reign. Innovation basically stops at big companies because the aggregators are focused on milking the installed base and not necessarily betting the ranch on new features. Most of the big security companies aren’t very good at integrating acquired technology into their stacks either. So you take an exciting start-up, pay them a lot of money, and then let the technology erode as the big company bureaucracy brings the start-up to its knees. A majority of the brain power leaves and it’s a crap show.

Of course, not every deal goes down like this. But enough do that it’s the exception when an acquisition isn’t a total train wreck a year later.

So back to my small company friends. Winning as a small company is all about managing the perception of risk in doing business with them. There is funding/viability risk, as more than a couple small security companies have gone away over the past few years, leaving customers holding the bag. Most big companies take a look at the balance sheet of a start-up and it’s a horror show (at least relative to what they are used to), so the procurement group blows a gasket when asked to write a substantial check to a start-up. There is also technology risk, in that smaller companies can’t do everything so they might miss the next big thing. Small companies need good answers on both these fronts to have any shot of beating a large entrenched competitor. It’s commonly forgotten, but small companies do innovate, and that cliche about them being more nimble is actually true. Those advantages need to be substantiated during the sales cycle to address those risks.

But end users also face risks outside of the control of a small company. Things like acquisition risk, which is the likelihood of the small company being acquired and then going to pot. And integration risk, where the small company does not provide integration with the other solutions the end user needs, and has no resources to get it done. All of these are legitimate issues facing an end user trying to determine the right product to solve his/her problem.

As an end user, is it worth taking these risks on a smaller company? The answer depends on the sophistication of the requirement. If the requirement can be met out of the box and the current generation of technology meets your needs, then it’s fine to go with the big company. The reality of non-innovation and crappy integration from a big company isn’t a concern. As long as the existing feature set solves your problems, you’ll be OK.

It’s when you are looking at either a less mature market or requirements that are not plain vanilla where the decision becomes a bit murky. Ultimately it rests on your organization’s ability to support and integrate the technology yourself, since you can’t guarantee that the smaller company will survive or innovate for any length of time. But there are risks in working with large companies as well. Don’t forget that acquired products languish or even get worse (relative to the market) once acquired, and the benefits of integration don’t necessarily materialize. So the pendulum swings both ways in evaluating risks relative to procurement.

And you thought risk management was only about dealing with the risk of attack?

There are some tactics end users can use to swing things the right way. Understand that while negotiating the original PO with a small company, you have leverage. You can get them to add features you need or throw in deployment resources or cut the price (especially at the end of the quarter). Once the deal closes (and the check clears), they’ll move onto the next big deal. They have to – the small company is trying to survive. So get what you can before you cut the check.

So back to the topic of this post: are we going to see a return of the security start-up? Can smaller security companies survive and prosper in the face of competition from multi-billion dollar behemoths? We think there is a role for the security start-up, providing innovation and responsiveness to customer needs – something big companies do poorly. But the secret is to find the small companies that act big. Not by being slow, lumbering, and bureaucratic, but by aligning with powerful OEM and reseller partners to broaden market coverage. And having strong technology alliances to deliver a broader product than a small company can deliver themselves.

Yes, it’s possible, but we don’t see a lot of it. There are very few small companies out there doing anything innovative. That’s the real issue. Even if you wanted to work with a small company, finding one that has the right mix of decent product in a growing market, non-horrifying balance sheet and funding prospects, and interesting roadmap is not easy. That’s the real downside of the big company/small company pendulum. For the last few years, fewer and fewer new security companies have been funded (as investors tried to make their existing investments work), and that’s resulted in fewer companies and (much) less innovation.

With the lack of liquidity (no IPO market, few high multiple M&A deals), it’s hard to see how this might change any time soon. VCs won’t jump back in until they think they can make money. There are still a lot of crappy small companies out there trying to get bought, so the buyers can be picky and drive hard bargains. That means end users will be working with bigger companies (with all the heartburn that entails) for the foreseeable future. The market could improve, welcoming small outfits and lots of innovation – it just doesn’t seem likely, at least for a couple of years.

—Mike Rothman

Friday, June 18, 2010

Friday Summary: June 18, 2010

By Rich

Dear Securosis readers,

The Friday Summary is currently unavailable. Our staff is at an offsite in an undisclosed location completing our world domination plans. We apologize for the inconvenience, and instead of our full summary of the week’s events here are a few links to keep you busy. If you need more, Mike Rothman suggests you “find your own &%^ news”.

Mike’s attitude does not necessarily represent Securosis, even though we give him business cards.

Thank you, we appreciate your support, and turn off your WiFi!


—Rich

Thursday, June 17, 2010

DB Quant: Protect Metrics, Part 1, DAM Blocking

By Rich

Now it’s time for the Protect phase, where we start applying database-specific preventative security controls. First up? Back to Database Activity Monitoring… this time in blocking mode.

Our DAM Blocking process is:

  1. Identify
  2. Define
  3. Deploy
  4. Document
  5. Manage

(Manage wasn’t in our original post, but we have added it after additional research and in response to your feedback).

Identify

Variable | Notes
Time to identify databases
Time to identify activity to block | Some of this assessment occurs in the Planning phase
Cost of DAM blocking tool | May already be accounted for

Define

Variable | Notes
Time to select blocking method
Time to create rules and policies
Time to specify incident handling and review
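The “create rules and policies” line is where most of the work lives. A minimal sketch of the shape such rules take (patterns and actions are illustrative, not any vendor’s rule syntax): each rule pairs a match condition with a block, alert, or log action.

```python
import re

RULES = [
    {"name": "no-drop",   "pattern": r"\bDROP\s+TABLE\b",                 "action": "block"},
    {"name": "mass-read", "pattern": r"SELECT\s+\*\s+FROM\s+cardholders", "action": "alert"},
]

def evaluate(statement: str) -> str:
    for rule in RULES:
        if re.search(rule["pattern"], statement, re.IGNORECASE):
            return rule["action"]
    return "log"   # default: record everything else

print(evaluate("drop table users"))            # block
print(evaluate("SELECT * FROM cardholders"))   # alert
print(evaluate("SELECT id FROM orders"))       # log
```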

Deploy

Variable | Notes
Time to integrate blocking
Time to configure and test rules | May include time to build behavioral profiles
Time to deploy rules
Time to evaluate effectiveness

Document

Variable | Notes
Time to document policies and event handling

Manage

Variable | Notes
Time to handle incidents
Time to tune policies

Other Posts in Project Quant for Database Security

  1. An Open Metrics Model for Database Security: Project Quant for Databases
  2. Database Security: Process Framework
  3. Database Security: Planning
  4. Database Security: Planning, Part 2
  5. Database Security: Discover and Assess Databases, Apps, Data
  6. Database Security: Patch
  7. Database Security: Configure
  8. Database Security: Restrict Access
  9. Database Security: Shield
  10. Database Security: Database Activity Monitoring
  11. Database Security: Audit
  12. Database Security: Database Activity Blocking
  13. Database Security: Encryption
  14. Database Security: Data Masking
  15. Database Security: Web App Firewalls
  16. Database Security: Configuration Management
  17. Database Security: Patch Management
  18. Database Security: Change Management
  19. DB Quant: Planning Metrics, Part 1
  20. DB Quant: Planning Metrics, Part 2
  21. DB Quant: Planning Metrics, Part 3
  22. DB Quant: Planning Metrics, Part 4
  23. DB Quant: Discovery Metrics, Part 1, Enumerate Databases
  24. DB Quant: Discovery Metrics, Part 2, Identify Apps
  25. DB Quant: Discovery Metrics, Part 3, Config and Vulnerability Assessment
  26. DB Quant: Discovery Metrics, Part 4, Access and Authorization
  27. DB Quant: Secure Metrics, Part 1, Patch
  28. DB Quant: Secure Metrics, Part 2, Configure
  29. DB Quant: Secure Metrics, Part 3, Restrict Access
  30. DB Quant: Monitoring Metrics, Part 1, Database Activity Monitoring
  31. DB Quant: Monitoring Metrics, Part 2, Audit

—Rich

NSO Quant: Manage IDS/IPS Process Map

By Mike Rothman

After posting half of the manage process map (Firewalls) earlier this week, now we move to managing IDS/IPS devices (remember monitoring servers is in scope, but managing servers is not). The first thing you’ll notice is that this process is a bit more complicated, mostly because we aren’t just dealing with policies/rules, but also attack signatures and other heuristics used to detect attacks. That adds another layer of information required to build the policies that govern use of the device. So we have expanded the definition of the top area to Content Management, which includes both policies/rules and signatures.

Content Management

In this phase, we manage the content that underpins the IDS/IPS. This includes both attack signatures and the policies/rules that control the actions triggered by signature matches.

Policy Management Sub-Process

Policy Review

Given the number of potential monitoring and blocking policies available on an IDS/IPS, it’s important to keep the device up to date. Keep in mind the severe performance hit (and false positive issues) of deploying too many policies on each device. It is a best practice to periodically review IDS/IPS policy and prune rules that are obsolete, duplicative, risky (providing unwanted exposures), or otherwise unneeded. Catalysts for policy review may include signature updates, service requests (new application support, etc.), external advisories (to block a certain attack vector or work around a missing patch, etc.), and policy updates resulting from the operational management of the device (the change management process described below).

Define/Update Policies & Rules

This step involves defining the depth and breadth of the IDS/IPS policies, including the actions (block, alert, log) taken by the device in the event of a signature (or series of signatures) being triggered. Note that as the capabilities of IDS/IPS devices continue to expand, we use the term “signature” generically for matching any specific attack condition. Time-limited policies may also be deployed, to activate (or deactivate) certain policies that are short-term in nature. Logging, alerting, and reporting policies are also defined in this step.

It’s important here to consider the hierarchy of policies that will be implemented on the devices. A sample hierarchy puts organizational policies at the highest level, which may then be supplemented (or even supplanted) by business unit or geographic policies. Those feed the specific policies and/or rules implemented at each location, which then filter down to a particular device. Designing a hierarchy to properly leverage policy inheritance can either dramatically increase or decrease the complexity of the device’s content.

Initial deployment of the policies should include a Q/A process to ensure none of the rules impacts the ability of critical applications to communicate either internally or externally.
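A minimal sketch of how that inheritance might resolve in practice, with hypothetical levels and settings: each level starts from its parent’s policies and may override or extend them, and the device ends up with the merged result.

```python
def resolve(*levels: dict) -> dict:
    """Merge policy levels in order; later (more specific) levels win."""
    effective = {}
    for level in levels:   # org first, device last
        effective.update(level)
    return effective

org    = {"block_known_exploits": True, "log_level": "alert"}
bu     = {"log_level": "full"}           # business unit override
device = {"inline_blocking": False}      # this sensor runs detection-only

print(resolve(org, bu, device))
# {'block_known_exploits': True, 'log_level': 'full', 'inline_blocking': False}
```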

Document Policy Changes

As the planning stage is an ongoing process, documentation is important for operational and compliance purposes. This step lists and details whatever changes have been made to the policies and associated operational standards/guidelines/requirements.

Signature Management Sub-Process

Monitor for Release/Advisory

Identify signature sources for the devices, and then monitor on an ongoing basis for new signatures. Since attacks emerge on a constant basis, it’s important to follow an ongoing process to keep the IDS/IPS devices current.

Evaluate

Perform the initial evaluation of the signature(s) to determine if it applies within your organization, what type of attack it detects, and if it’s relevant to your environment. This is the initial prioritization phase to determine the nature of the new/updated signature(s), its relevance and general priority for your organization, and any possible workarounds.

Acquire

Locate the signature, acquire it, and validate the integrity of the signature file(s). Since most signatures are downloaded these days, this is to ensure the download completed properly.
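A minimal sketch of that integrity check, comparing the downloaded package against a published SHA-256 digest (the file name and digest are placeholders for whatever your signature source actually publishes):

```python
import hashlib

def verify_download(path: str, expected_sha256: str) -> bool:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):   # stream large files
            h.update(chunk)
    return h.hexdigest() == expected_sha256

if not verify_download("sigs-2010-06-23.tar.gz", "ab12..."):  # placeholder digest
    raise SystemExit("Signature package failed integrity check; do not deploy")
```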

Change Management

This phase encompasses additions, deletions, and other changes to the IDS/IPS rules and signatures.

Change Request

Based on either a signature or a policy change within the Content Management process, a change to the IDS/IPS device(s) is requested.

Authorize

Authorization involves ensuring the requestor is allowed to request the change, as well as determining the relative priority of the change to slot into an appropriate change window. Prioritize based on the nature of the signature/policy update and potential risk of the attack occurring. Then build out a deployment schedule based on your prioritization, scheduled maintenance windows, and other factors. This usually involves the participation of multiple stakeholders – ranging from application, network, and system owners to business unit representatives if the change involves downtime or changes to application usage.

Test & Assess Impact

Develop test criteria, perform any required testing, analyze the results, and approve the signature/rule change for release once it meets your requirements. Testing should include signature installation, operation, and performance impact. Changes may be implemented in “log-only” mode to understand their impact before approving them for production deployment.

Approve

With an understanding of the impact of the change(s), the request is either approved or denied.

Deploy Change

Prepare the target device(s) for deployment, deliver the change, and install/activate.

Confirm

Verify that changes were properly deployed, including successful installation and operation. This might include use of vulnerability assessment tools or application test scripts to ensure production systems are not disrupted.

Emergency Update

In some cases, including data breach lockdowns and imminent zero-day attacks, a change to the IDS/IPS signature/policy base must be made immediately. A process to short-cut the full change process should be established and documented, ensuring proper authorization for immediate changes and that they can be rolled back in case of unintended consequences.

Other Considerations

Health Monitoring and Maintenance

This phase involves ensuring the IDS/IPS devices are operational and secure. This includes monitoring the devices for availability and performance. If performance measured here is inadequate, this may drive a hardware upgrade. Additionally, software patches (for either functionality or security) are implemented in this phase. We’ve broken out this step due to the operational nature of the function. This doesn’t relate directly to security or compliance, but can be a significant management cost for these devices, and thus should be modeled separately.

Incident Response/Management

For this Quant project, we are considering the monitoring and management processes as separate, although many organizations (especially managed service providers) consider device management a superset of device monitoring.

So the IDS/IPS management process flow does not include incident investigation, response, validation, or management. Please refer to the monitoring process flow for those activities.


Network Security Operations Quant posts

  1. Announcing NetSec Ops Quant: Network Security Metrics Suck. Let’s Fix Them.
  2. NSO Quant: Monitor Process Map
  3. NSO Quant: Manage Firewall Process Map

—Mike Rothman

Doing Well by Doing Good (and Protecting the Kids)

By Mike Rothman

My kids are getting more sophisticated in their computer usage. I was hoping I could put off the implementation of draconian security controls on their computers for a while. More because I’m lazy and it will dramatically increase the amount of time I spend supporting the in-house computers. But hope is not a strategy, my oldest will be 10 this year, and she is curious – so it’s time.

The first thing I did was configure the Mac’s Parental Controls on the kids’ machine. That was a big pile of fail. Locking down email pretty much put her out of business. All her email went to me, even when I whitelisted a recipient. The web whitelist didn’t work very well either. The time controls worked fine, but I don’t need those because the computer is downstairs. So I turned off Apple’s Parental Controls.

I did some research into the parental control options out there. There are commercial products that work pretty well, as well as some useful free stuff (Blue Coat’s K9 web filter seems highly regarded). But surprisingly enough, I agree with Ed over at SecurityCurve: Symantec is doing a good job with the family security stuff.

They not only have a lot of educational material on their site for kids of all ages, but also a service called Norton Online Family. It’s basically an agent you install on your PCs or Macs to control web browsing and email; it can even filter outbound traffic to make sure private information isn’t sent over the wire. You set the policies through an online service and can monitor activity through the web site.

It’s basically centralized security and management for all your family computers. That’s a pretty good idea. And from what I’ve seen it works well. I haven’t tightened the controls yet to the point of soliciting squeals from the constituents, but so far so good.

But it does raise the question of why a company like Symantec would offer something like this for free. It’s not like companies like NetNanny aren’t getting consumers to pay $40 for the same stuff. Ultimately it’s about both doing the right thing in eliminating any cost barrier to protecting kids online, and building the Big Yellow brand.

Consumers have a choice with their endpoint security. Yes, the yellow boxes catch your eye in the big box retailers, but ultimately the earlier they get to kids and imprint their brand onto malleable brains, the more likely they are to maintain a favorable place there. My kids see a big orange building and think Home Depot. Symantec hopes they see a yellow box and think Symantec and Internet Security. Though more likely they'll think: that's the company that doesn't let me surf pr0n.

As cynical as I am, I join Ed in applauding Symantec, Blue Coat, and all the other companies providing parental control technology without cost.

—Mike Rothman

DB Quant: Monitoring Metrics, Part 2, Audit

By Rich

Our next step in the Monitor phase is Audit. While monitoring is a real-time activity that typically requires third-party products, auditing typically uses native database features. DAM products also offer audit as a core function, but audit is available without them.
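
As a concrete example, native auditing can often be turned on with a handful of statements. The sketch below assumes an Oracle database and a DB-API cursor; the audited objects are hypothetical:

```python
# Oracle-flavored native audit setup; the connection and audited
# objects are illustrative assumptions.
AUDIT_STATEMENTS = [
    "AUDIT SESSION",                                 # record logons/logoffs
    "AUDIT SELECT ON hr.employees BY ACCESS",        # hypothetical sensitive table
    "AUDIT UPDATE, DELETE ON hr.employees BY ACCESS",
]

def enable_native_audit(cursor):
    for stmt in AUDIT_STATEMENTS:
        cursor.execute(stmt)

# Usage, assuming a driver such as cx_Oracle:
#   conn = cx_Oracle.connect(user, password, dsn)  # hypothetical connection
#   enable_native_audit(conn.cursor())
```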

Our Audit process is:

  1. Scope
  2. Define
  3. Deploy
  4. Document and Report

Scope

Variable Notes
Time to identify databases
Time to determine audit requirements Some of this assessment occurs in the Planning phase

Define

Variable Notes
Time to select data collection method
Time to identify users, objects, and transactions to monitor
Time to specify filtering
Cost of storage to support auditing
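
The storage line is easy to rough out with back-of-the-envelope arithmetic; the event volume, record size, retention period, and price below are assumptions that just show the shape of the calculation:

```python
# Back-of-the-envelope audit storage estimate; every input is an assumption.
events_per_day = 2_000_000   # audited statements per day
bytes_per_event = 500        # average audit record size
retention_days = 365         # how long records must be kept
cost_per_gb_month = 0.50     # dollars per GB-month of storage

gb_retained = events_per_day * bytes_per_event * retention_days / 1e9
monthly_cost = gb_retained * cost_per_gb_month
print(f"~{gb_retained:,.0f} GB retained, ~${monthly_cost:,.2f}/month")
```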

Deploy

Variable Notes
Time to set up and configure auditing
Time to integrate with existing systems e.g., SIEM, log management
Time to implement log file cleanup
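
Log file cleanup is typically a small scheduled job; here is a sketch that compresses audit files older than a cutoff into an archive before deleting them, with the paths and retention window as assumptions:

```python
import gzip
import shutil
import time
from pathlib import Path

AUDIT_DIR = Path("/var/db/audit")     # hypothetical audit log location
ARCHIVE_DIR = Path("/archive/audit")  # hypothetical archive location
MAX_AGE_DAYS = 30

cutoff = time.time() - MAX_AGE_DAYS * 86400
ARCHIVE_DIR.mkdir(parents=True, exist_ok=True)

for logfile in AUDIT_DIR.glob("*.aud"):
    if logfile.stat().st_mtime < cutoff:
        # Compress into the archive, then remove the original.
        target = ARCHIVE_DIR / (logfile.name + ".gz")
        with logfile.open("rb") as src, gzip.open(target, "wb") as dst:
            shutil.copyfileobj(src, dst)
        logfile.unlink()
```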

Document and Report

Variable Notes
Time to document
Time to define reports
Time to generate reports Ongoing, depending on reporting cycle

Other Posts in Project Quant for Database Security

  1. An Open Metrics Model for Database Security: Project Quant for Databases
  2. Database Security: Process Framework
  3. Database Security: Planning
  4. Database Security: Planning, Part 2
  5. Database Security: Discover and Assess Databases, Apps, Data
  6. Database Security: Patch
  7. Database Security: Configure
  8. Database Security: Restrict Access
  9. Database Security: Shield
  10. Database Security: Database Activity Monitoring
  11. Database Security: Audit
  12. Database Security: Database Activity Blocking
  13. Database Security: Encryption
  14. Database Security: Data Masking
  15. Database Security: Web App Firewalls
  16. Database Security: Configuration Management
  17. Database Security: Patch Management
  18. Database Security: Change Management
  19. DB Quant: Planning Metrics, Part 1
  20. DB Quant: Planning Metrics, Part 2
  21. DB Quant: Planning Metrics, Part 3
  22. DB Quant: Planning Metrics, Part 4
  23. DB Quant: Discovery Metrics, Part 1, Enumerate Databases
  24. DB Quant: Discovery Metrics, Part 2, Identify Apps
  25. DB Quant: Discovery Metrics, Part 3, Config and Vulnerability Assessment
  26. DB Quant: Discovery Metrics, Part 4, Access and Authorization
  27. DB Quant: Secure Metrics, Part 1, Patch
  28. DB Quant: Secure Metrics, Part 2, Configure
  29. DB Quant: Secure Metrics, Part 3, Restrict Access
  30. DB Quant: Monitoring Metrics: Part 1, Database Activity Monitoring

—Rich

Wednesday, June 16, 2010

Take Our Data Security Survey & Win an iPad

By Rich

One of the biggest problems in security is that we rarely have a good sense of which controls actually improve security outcomes. This is especially true for newer areas like data security, filled with tools and controls that haven’t been as well tested or widely deployed as things like firewalls.

Thanks to all the great feedback you sent in on our drafts, we are happy to kick off our big data security survey. This one is a bit different from most of the others you've seen floating around, because we focus more on the (perceived) effectiveness of controls than on losses & incidents. We do have some incident-related questions, but only what we need to feed into the effectiveness results.

As with most of our surveys, we’ve set this one up so you can take it anonymously, and all the raw results (anonymized, in spreadsheet format) will be released after our analysis.

Since we have a sponsor for this one (Imperva), we actually have a little budget and will be giving away a 32GB WiFi iPad to a random participant. You don't need to provide an email address to take the survey, but you do if you want the iPad. If we get a lot of respondents (say over 200) we'll cough up for more iPads so the odds stay better than the lottery.

Click here to take the survey, and please spread the word. We designed it to only take 10-20 minutes. Even if you aren’t doing a lot with data security, we need your responses to balance the results.

With our surveys we also use something called a “registration code” to keep track of where people found out about them. We use this to get a sense of which social media channels people use. If you take the survey based on this post, please use “Securosis”. If you re-post this link, feel free to make up your own code and email it to us, and we will let you know how many people responded to your referral – get enough and we can give you a custom slice of the data.

Thanks! Our plan is to keep this open for a few weeks.

—Rich

DB Quant: Monitoring Metrics, Part 1, DAM

By Rich

Now that we’ve completed the Secure phase, it’s time to move on to metrics for the Monitor phase. We break this into two parts: Database Activity Monitoring and Auditing.

We initially defined the Database Activity Monitoring process as:

  1. Define
  2. Develop Policies
  3. Deploy
  4. Document

But based on feedback and some overlap with the Planning section, we are updating it to:

  1. Prepare
  2. Deploy
  3. Document
  4. Manage

Prepare

Variable Notes
Cost of DAM tool
Time to identify and profile monitored database Identify the database to monitor and its configuration (e.g., DBMS, platform, connection methods); see the sketch after this table
Time to define rule set Based on policies determined in the Planning phase. If this wasn’t done during planning, move those metrics into this phase.
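
To illustrate the profiling line above, a script can capture the basics the DAM team needs to know about a target. The sketch is PostgreSQL-flavored, and the driver and connection string are assumptions:

```python
import psycopg2  # assumed driver for this PostgreSQL-flavored sketch

def profile_database(dsn):
    """Collect the basics a DAM deployment needs to know about a target."""
    conn = psycopg2.connect(dsn)
    try:
        cur = conn.cursor()
        cur.execute("SELECT version()")
        version = cur.fetchone()[0]
        cur.execute("SHOW port")
        port = cur.fetchone()[0]
        return {"dbms": "PostgreSQL", "version": version, "port": port}
    finally:
        conn.close()

# profile = profile_database("dbname=app host=db1.example.com user=dba")
```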

Deploy

Variable Notes
Time to deploy DAM tool
Time to configure policies
Time to test deployment

Document

Variable Notes
Time to document activation and deployed rules
Time to record code changes in source control system

Manage

Variable Notes
Time to monitor for policy violations See the sketch after this table
Time to handle incidents
Time to tune policies
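
To make the monitoring line concrete: a DAM policy boils down to a predicate over observed statements. The rule below is a toy of our own construction, not any product's actual policy language:

```python
import re

# Toy DAM-style rule: flag access to a sensitive table by anyone other
# than the application service account. All names are assumptions.
SENSITIVE_TABLE = re.compile(r"\bcredit_cards\b", re.IGNORECASE)
ALLOWED_USERS = {"app_service"}

def violates_policy(user, statement):
    return bool(SENSITIVE_TABLE.search(statement)) and user not in ALLOWED_USERS

events = [
    ("app_service", "SELECT card_id FROM credit_cards WHERE card_id = 42"),
    ("jsmith", "SELECT * FROM credit_cards"),
]
for user, stmt in events:
    if violates_policy(user, stmt):
        print(f"ALERT: {user}: {stmt}")
```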

Other Posts in Project Quant for Database Security

  1. An Open Metrics Model for Database Security: Project Quant for Databases
  2. Database Security: Process Framework
  3. Database Security: Planning
  4. Database Security: Planning, Part 2
  5. Database Security: Discover and Assess Databases, Apps, Data
  6. Database Security: Patch
  7. Database Security: Configure
  8. Database Security: Restrict Access
  9. Database Security: Shield
  10. Database Security: Database Activity Monitoring
  11. Database Security: Audit
  12. Database Security: Database Activity Blocking
  13. Database Security: Encryption
  14. Database Security: Data Masking
  15. Database Security: Web App Firewalls
  16. Database Security: Configuration Management
  17. Database Security: Patch Management
  18. Database Security: Change Management
  19. DB Quant: Planning Metrics, Part 1
  20. DB Quant: Planning Metrics, Part 2
  21. DB Quant: Planning Metrics, Part 3
  22. DB Quant: Planning Metrics, Part 4
  23. DB Quant: Discovery Metrics, Part 1, Enumerate Databases
  24. DB Quant: Discovery Metrics, Part 2, Identify Apps
  25. DB Quant: Discovery Metrics, Part 3, Config and Vulnerability Assessment
  26. DB Quant: Discovery Metrics, Part 4, Access and Authorization
  27. DB Quant: Secure Metrics, Part 1, Patch
  28. DB Quant: Secure Metrics, Part 2, Configure
  29. DB Quant: Secure Metrics, Part 3, Restrict Access
  30. DB Quant: Secure Metrics, Part 4, Shield

—Rich