Monday, June 07, 2010

DB Quant: Secure Metrics, Part 2, Configure

By Adrian Lane

The next step in our Secure phase is to securely configure the database, and to make any needed changes to the underlying operating system. Out of the box, databases are highly insecure and require significant tweaking; in practice, checking and adjusting configurations is an ongoing effort. Patches, new features, new attacks, and new functions all drive the need for periodic checks, so you should rerun the assessment and configuration processes at least quarterly.

The majority of the costs will be the time to identify issues and the appropriate settings to address them. Once again, your vendor may offer tools to support configuration changes and administration; these are counted as a capital investment because they are used for general administration and support.

Our process is:

  1. Assess
  2. Prescribe
  3. Fix
  4. Rescan
  5. Document

Remember that this phase relies on the configuration assessment results from the Discovery phase, which is why we don’t include the full assessment here. For some of you, it may make sense to mix and match the process a little to better match how you actually work.

Assess

Variable | Notes
Time to review assessment reports per database | e.g., assessment scans from the Discovery phase
Time to identify policy/standards violations and incorrect settings

Prescribe

Variable | Notes
Time to itemize issues | For tracking/change management purposes
Time to select remediation option
Time to allocate resources, create work order, and create change script as needed

Fix

Variable | Notes
Time to reconfigure database or OS
Time to implement changes and reboot (if necessary)
Time to test change and dependent applications/systems | Confirm the expected behavior of the change and the effect on other applications/systems relying on the database

Rescan (optional)

Variable | Notes
Time to re-assess database configuration | Rerun the scan portion of the Assessment phase to verify changes were implemented

Document

Variable | Notes
Time to document changes
Time to document accepted configuration variances
Time to specify changes to configuration policies or rules | This is the appropriate time to note required changes to policy
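As a rough illustration of how these time variables roll up into a phase cost, here is a sketch. The step names mirror the process above, but the hours and the $85 fully loaded hourly rate are invented for the example, not part of the model:

```python
# Hypothetical roll-up of the Configure phase metrics for one database.
# Hours and rate are illustrative placeholders.

HOURLY_RATE = 85.0  # assumed fully loaded cost per staff hour

# hours spent per variable, keyed by process step
metrics = {
    "Assess":    {"review assessment reports": 2.0, "identify violations": 1.5},
    "Prescribe": {"itemize issues": 1.0, "select remediation": 0.5,
                  "allocate resources / change script": 2.0},
    "Fix":       {"reconfigure database or OS": 3.0, "implement and reboot": 1.0,
                  "test change and dependencies": 4.0},
    "Rescan":    {"re-assess configuration": 1.0},   # optional step
    "Document":  {"document changes": 0.5, "document variances": 0.5,
                  "update policies": 0.5},
}

def phase_cost(metrics, rate):
    """Return (cost per step, total cost) for one database."""
    per_step = {step: sum(hours.values()) * rate for step, hours in metrics.items()}
    return per_step, sum(per_step.values())

per_step, total = phase_cost(metrics, HOURLY_RATE)
for step, cost in per_step.items():
    print(f"{step:10s} ${cost:8.2f}")
print(f"{'Total':10s} ${total:8.2f}")
```

Multiply by database count and by four (quarterly reruns) to approximate an annual figure.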

Other Posts in Project Quant for Database Security

  1. An Open Metrics Model for Database Security: Project Quant for Databases
  2. Database Security: Process Framework
  3. Database Security: Planning
  4. Database Security: Planning, Part 2
  5. Database Security: Discover and Assess Databases, Apps, Data
  6. Database Security: Patch
  7. Database Security: Configure
  8. Database Security: Restrict Access
  9. Database Security: Shield
  10. Database Security: Database Activity Monitoring
  11. Database Security: Audit
  12. Database Security: Database Activity Blocking
  13. Database Security: Encryption
  14. Database Security: Data Masking
  15. Database Security: Web App Firewalls
  16. Database Security: Configuration Management
  17. Database Security: Patch Management
  18. Database Security: Change Management
  19. DB Quant: Planning Metrics, Part 1
  20. DB Quant: Planning Metrics, Part 2
  21. DB Quant: Planning Metrics, Part 3
  22. DB Quant: Planning Metrics, Part 4
  23. DB Quant: Discovery Metrics, Part 1, Enumerate Databases
  24. DB Quant: Discovery Metrics, Part 2, Identify Apps
  25. DB Quant: Discovery Metrics, Part 3, Config and Vulnerability Assessment
  26. DB Quant: Discovery Metrics, Part 4, Access and Authorization
  27. DB Quant: Secure Metrics, Part 1, Patch

—Adrian Lane

FireStarter: Get Ready for Oracle’s New WAF

By Adrian Lane

We have written a lot about Oracle’s acquisition of Secerno: the key points of the acquisition, the Secerno technology, and some of the business benefits Oracle gets with the purchase. We did so mainly because Database Activity Monitoring (DAM) is a technology Rich and I are intimately familiar with, and this acquisition shakes up the entire market. But we suspect there is more. Rich and I have a feeling that this purchase signals Oracle’s mid-term security strategy, with the Secerno platform as the key component. We don’t have any inside knowledge, but there are too many signals to ignore, so we are making a prediction. Our analysis goes something like this:

Quick recap: Oracle acquired a Database Activity Monitoring vendor, and immediately marketed the product as a database firewall, rather than a Database Activity Monitoring product. What Oracle can do with this technology, in the short term, is:

  1. “White list” database queries.
  2. Provide “virtual patching” of the Oracle database.
  3. Monitor activity across most major relational database types.
  4. Tune policies based on monitored traffic.
  5. Block unwanted activity.
  6. Offer a method of analysis with few false positives.

Does any of this sound familiar?

What if I changed the phrase “white list queries” to “white list applications”? If I changed “Oracle database” to “Oracle applications”? What if I changed “block database threats” to “block application threats”?

Does this sound like a Web Application Firewall (WAF) to you?

Place Secerno in front of an application, add some capabilities to examine web app traffic, and it would not take much to create a Web Application Firewall to complement the “database firewall”. They can tackle SQL injection now, and provide very rudimentary IDS. It would be trivial for Oracle to add application white listing, HTML inspection, and XML/SOAP validation. Down the road they could throw in basic XSS protections and call it a WAF. Secerno DAM, plus WAF, plus the assessment capabilities already built into Oracle Management Packs, gives you a poor man’s version of Imperva.
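To make the query white-listing idea concrete, here is a toy sketch. Real DAM products fingerprint the parsed statement structure; the crude text normalization, the sample queries, and the function names below are purely illustrative:

```python
# Toy sketch of SQL query white listing: normalize literals out of a
# statement, then check the resulting "shape" against a known-good set.

import re

def fingerprint(sql: str) -> str:
    """Crude normalization: lowercase, replace literals, collapse whitespace."""
    s = sql.strip().lower()
    s = re.sub(r"'[^']*'", "?", s)   # string literals -> ?
    s = re.sub(r"\b\d+\b", "?", s)   # numeric literals -> ?
    s = re.sub(r"\s+", " ", s)
    return s

# Statements the application is known to issue (the "white list")
ALLOWED = {
    fingerprint("SELECT name, price FROM products WHERE id = 42"),
}

def check(sql: str) -> bool:
    """Allow only statements whose shape matches the white list."""
    return fingerprint(sql) in ALLOWED

# Same statement shape, different literal: allowed
print(check("SELECT name, price FROM products WHERE id = 99"))
# Injected tautology changes the statement shape: blocked
print(check("SELECT name, price FROM products WHERE id = 99 OR '1'='1'"))
```

The same shape-matching idea applied to HTTP requests instead of SQL statements is essentially the application white listing a WAF would do.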

Dude, you’re getting a WAF!

We won’t see much for a while yet, but when we do, it will likely begin with Oracle selling pre-tuned versions of Secerno for Oracle Applications. After a while we will see a couple new analysis options, and shortly thereafter we will be told this is not WAF, it’s better than WAF. How could these other vendors possibly know the applications as well as Oracle? How could they possibly protect them as accurately or efficiently? These WAF vendors don’t have access to the Oracle applications code, so how could they possibly deliver something as effective? We are not trying to be negative here, but we all know how Oracle markets, especially in security:

  1. Oracle is secure – you don’t need X. All vendors of X are irresponsible and beneath consideration.
  2. Oracle has purchased vendor Y in market X because Oracle cares about the security of its customers.
  3. Oracle is the leading provider of X.
  4. Buying anything other than Oracle’s X is irresponsible because other vendors use undocumented APIs and/or inferior techniques.
  5. Product X is now part of the new Oracle Suite and costs 50% more than before, but includes 100% more stuff that you don’t really need but we couldn’t sell stand-alone.

OK, so we went negative. Send your hate mail to Rich. I’ll field the hate mail from the technologists out there who are screaming mad, knowing that there is a big difference between WAF policies and traffic analysis and what Secerno does. Yes and no, but it’s irrelevant from a marketing standpoint. For those who remember Dell’s “Dude” commercials from the early 2000s, they made buying a computer easy and approachable. Oracle will do the same thing with security, making the choice simple to understand, and covering all their Oracle assets. They’d be crazy not to. Market this as a full-featured WAF, blocking malicious threats with “zero false positives”, for everything from Siebel to 11G. True or not, that’s a powerful story, and it comes from the vendor who sold you half the stuff in your data center. It will win the hearts of the security “Check the box” crowd in the short term, and may win the minds of security professionals in the long term.

Do you see it? Does it make sense? Tell me I am wrong!

—Adrian Lane

Friday, June 04, 2010

Friday Summary: June 4, 2010

By Rich

There’s nothing like a crisis to bring out the absolute stupidity in a person… especially if said individual works for a big company or government agency. This week alone we’ve had everything from the ongoing BP disaster (the one that really scares me) to the Israeli meltdown. And I’m sure Sarah Palin is in the mix there someplace.

Crisis communications is an actual field of study, with many examples of how to manage your public image even in the midst of a major meltdown. Heck, I’ve been trained on it as part of my disaster response work. But it seems that everyone from BP to Gizmodo to Facebook is reading the same (wrong) book:

  • Deny that there’s a problem.
  • When the first pictures and videos show up, state that there was a minor incident and you value your customers/the environment/the law/supporters/babies.
  • Quietly go to full lockdown and try to get government/law enforcement to keep people from finding out more.
  • When your lockdown attempts fail, go public and deny there was ever a coverup.
  • When pictures/video/news reports show everyone that this is a big fracking disaster, state that although the incident is larger than originally believed, everything is under control.
  • Launch an advertising campaign with a lot of flowers, babies, old people, and kittens. And maybe some old black and white pictures with farms, garages, or ancestors who would be the first to string you up for those immoral acts.
  • Get caught on tape or in an email/text blaming the kittens.
  • Try to cover up all the documentation of failed audits and/or lies about security and/or safety controls.
  • State that you are in full compliance with the law and take safety/security/fidelity/privacy/kittens very seriously.
  • As the incident blows completely out of control, reassure people that you are fully in control.
  • Get caught saying in private that you don’t understand what the big deal is. It isn’t as if people really need kittens.
  • Blame the opposing party/environmentalists/puppies/your business partners.
  • Lie about a bunch of crap that is really easy to catch. Deny lying, and ignore those pesky videos showing you are (still) lying.
  • State that your statements were taken out of context.
  • When asked about the context, lie.
  • Apologize. Say it will never happen again, and that you would take full responsibility, except your lawyers told you not to.
  • Repeat.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

Project Quant Posts

Research Reports and Presentations

Top News and Posts


Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Michael O’Keefe, in response to Code Re-engineering.

Re-engineering can work, Spolsky inadvertently provides a great example of that, and proves himself wrong. I guess that’s the downside to blogs, and trying to paint things in a black or white manner. He had some good points, one was that when Netscape open sourced the code, it wasn’t working, so the project got off to a slow start. But the success of Mozilla (complete rewrite of Netscape) has since proved him wrong. Once Bill Gates realized the importance of the internet, and licensed the code from Spyglass (I think) for IE, MS started including it on every new release of Windows. In this typical fashion, they slowly whittled away at Netscape’s market share, so Netscape had to innovate. The existing code base was very difficult to work with, even the Netscape engineers admitted that. But when you’re trying to gain market share, speed counts, look at Facebook, and eBay. But eventually you have to make a change, if the code is holding you back. Look at how long it took IE to come out with tabs – ridiculous. And look at Apple’s ability to move to a BSD/Mach/Next (?) kernel. But the best example is still – Mozilla’s Firefox, still ahead of IE, in my opinion.

—Rich

Thursday, June 03, 2010

The Public/Private Pendulum Keeps Swinging

By Mike Rothman

They say the grass is always greener on the other side, and I guess for some folks it is. Most private companies (those which believe they have sustainable businesses, anyway) long for the day when they will be able to trade on the public markets. They know where the Ferrari deal is, and seem to dismiss the angst of Sarbanes-Oxley. On the other hand, most public companies would love the freedom of not having to deal with the quarterly spin cycle and those pesky shareholders who want growth now.

Two examples in the security space show the pendulum in action this week. First is Tripwire’s IPO filing. I love S-1 filings because companies must bare their innards to sell shares to public investors. You get to see all sorts of good stuff, like the fact that Tripwire has grown their business 20-30% annually over the past few years. They’ve been cash flow positive for 6 years, and profitable for the last two (2008 & 2009), although they did show a small loss for Q1 2010.

Given the very small number of security IPOs over the past few years, it’s nice to see a company with the right financial momentum to get an IPO done. But as everyone who’s worked for a public company knows, it’s really about growth – profitable growth. Does 20-30% growth on a fairly small revenue base ($74 million in 2009) make for a compelling growth story?

And more importantly for company analysis, what is the catalyst to increase that growth rate? In the S-1, Tripwire talks about expanding product offerings, growing their customer base, selling more stuff to existing customers, international growth, government growth, and selective M&A as drivers to increase the top line. Ho-hum. From my standpoint, I don’t see anything that gets the company from 20% growth to 50% growth. But that’s just me, and I’m not a stock analyst.

Being publicly listed will enable Tripwire to do deals. They did a small deal last year to acquire SIEM/Log Management technology, but in order to grow faster they need to make some bolder acquisitions. That’s been an issue with the other public security companies that are not Symantec and McAfee – they don’t do enough deals to goose growth enough to make the stock interesting. With Tripwire’s 5,400 customers, you’d figure they’ll make M&A and pumping more stuff into their existing base a key priority once they get the IPO done.

On the other side of the fence, you have SonicWall, which is being taken private by Thoma Bravo Group and a Canadian pension fund. The price is $717 million, about a 28% premium. SonicWall has been public for a long time and has struggled of late. Momentum seems to be returning, but it’s not going to be a high flyer any time soon. So the idea of becoming private, where they only have to answer to their equity holders, is probably attractive.

This is more important in light of SonicWall’s new push into the enterprise. They are putting a good deal of wood behind this Project SuperMassive technology architecture, but breaking into the enterprise isn’t a one-quarter project. It requires continual investment, and public company shareholders are notoriously impatient. SonicWall was subject to all sorts of acquisition rumors before this deal, so it wouldn’t be surprising to see Thoma Bravo start folding other security assets in with SonicWall to make a subsequent public offering, a few years down the line, more exciting.

So the pendulum swings back and forth again. You don’t have to be Carnac the Magnificent to figure there will be more deals, with the big getting bigger via consolidation and technology acquisitions. You’ll also likely see some of the smaller public companies take the path of SafeNet, WatchGuard, Entrust, Aladdin, and now SonicWall, in being taken private. The only thing you won’t see is nothing. The investment bankers have to keep busy, don’t they?

—Mike Rothman

DB Quant: Secure Metrics, Part 1, Patch

By Adrian Lane

Now we move past planning & discovery, and into the actual work of securing databases. The Secure phase is where we implement many of the preventative security measures and establish the secure baseline for database operations. First up is database patching.

For patching, most of the costs are the time and effort to evaluate, test, and apply each patch. Fixed costs are mostly support and maintenance contracts with the database vendor, if applicable (very few patch management products work with databases, so you are usually limited to the DBMS vendor’s tools). Your vendor may offer tools to support patch rollout and administration, which are included as a capital investment cost.

Our process is:

  1. Evaluate
  2. Acquire
  3. Test & Approve
  4. Confirm & Deploy
  5. Document

Evaluate

Variable | Notes
Time to monitor for advisories per database type | Vendor alerts and industry advisories announce patch availability
Time to identify appropriate patches
Time to identify workarounds | Identify workarounds if available, and determine whether they are appropriate
Time to determine priority | e.g., Is this a critical vulnerability? If so, when should you apply the patch?
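As a rough sketch of the “determine priority” step, the decision might look something like this. The CVSS thresholds and patch windows are invented policy choices for illustration, not recommendations from this post:

```python
# Hypothetical priority decision for a database patch advisory.
# Thresholds and windows are illustrative policy choices.

from dataclasses import dataclass

@dataclass
class Advisory:
    cve: str
    cvss: float            # severity score from the advisory
    exploit_public: bool   # is exploit code circulating?
    workaround: bool       # is a viable workaround available?

def patch_window_days(adv: Advisory) -> int:
    """Return how many days the team allows itself to deploy the patch."""
    if adv.cvss >= 9.0 or adv.exploit_public:
        return 7 if adv.workaround else 2   # critical: emergency change
    if adv.cvss >= 7.0:
        return 30
    return 90                               # fold into the quarterly cycle

urgent = Advisory("CVE-2010-0001", 9.3, exploit_public=True, workaround=False)
routine = Advisory("CVE-2010-0002", 5.1, exploit_public=False, workaround=True)
print(patch_window_days(urgent))
print(patch_window_days(routine))
```

Encoding the decision this way also gives you a per-advisory record for the Document step.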

Acquire

Variable | Notes
Time to acquire patch(es)
Costs for maintenance, support, or additional patch management tools | Optional: updates to vendor maintenance contracts, if required

Test & Approve

Variable | Notes
Time to create regression test cases and acceptance criteria | i.e., How will you verify the patch does not break your applications?
Time to set up test environment | Obtain servers, tools, and software for verification; then set up for testing
Time to run tests | May require multiple cycles, depending upon test cases
Time to analyze results
Time to create deployment packages | Optional, if not using stock patches. Approve, label, and archive the tested patch.

Confirm & Deploy

Variable | Notes
Time to schedule and notify | Schedule personnel and communicate downtime to users
Time to install | Take the DB offline, back up, patch the database, and restart
Time to verify | Verify the patch installed correctly and database services are available
Time to clean up | Remove temp files

Document

Variable | Notes
Time to document | Close out trouble tickets and update workflow tracking

Other Posts in Project Quant for Database Security

  1. An Open Metrics Model for Database Security: Project Quant for Databases
  2. Database Security: Process Framework
  3. Database Security: Planning
  4. Database Security: Planning, Part 2
  5. Database Security: Discover and Assess Databases, Apps, Data
  6. Database Security: Patch
  7. Database Security: Configure
  8. Database Security: Restrict Access
  9. Database Security: Shield
  10. Database Security: Database Activity Monitoring
  11. Database Security: Audit
  12. Database Security: Database Activity Blocking
  13. Database Security: Encryption
  14. Database Security: Data Masking
  15. Database Security: Web App Firewalls
  16. Database Security: Configuration Management
  17. Database Security: Patch Management
  18. Database Security: Change Management
  19. DB Quant: Planning Metrics, Part 1
  20. DB Quant: Planning Metrics, Part 2
  21. DB Quant: Planning Metrics, Part 3
  22. DB Quant: Planning Metrics, Part 4
  23. DB Quant: Discovery Metrics, Part 1, Enumerate Databases
  24. DB Quant: Discovery Metrics, Part 2, Identify Apps
  25. DB Quant: Discovery Metrics, Part 3, Config and Vulnerability Assessment
  26. DB Quant: Discovery Metrics, Part 4, Access and Authorization

—Adrian Lane

White Paper Released: Endpoint Security Fundamentals

By Mike Rothman

Endpoint Security is a pretty broad topic. Most folks associate it with traditional anti-virus or even the newfangled endpoint security suites. In our opinion, looking at the issue just from the perspective of the endpoint agent is myopic. To us, endpoint security is as much a program as anything else.

In this paper we discuss endpoint security from a fundamental blocking and tackling perspective. We start with identifying the exposures and prioritizing remediation, then discuss specific security controls (both process and product), and also cover the compliance and incident response aspects.

It’s a pretty comprehensive paper, which means it’s not short. But if you are trying to understand how to comprehensively protect your endpoint devices, this paper will provide a great perspective and allow you to put all your defenses into context. We assembled this document from the Endpoint Security Fundamentals series posted to the blog in early April, all compiled together, professionally edited, and prettified.

Special thanks to Lumension Security for licensing the report.

You can download the paper directly (PDF), or visit the landing page, where you can leave comments or criticism, and track revisions.

—Mike Rothman

Wednesday, June 02, 2010

NSO Quant: Monitor Process Map

By Mike Rothman

It’s been a while since you’ve heard anything about Network Security Operations Quant, but that doesn’t mean we haven’t had the sweatshop working overtime, doing primary research to figure out how organizations manage and monitor their network and security devices. To recap the project scope, we’ll have 5 different “threads”: Monitoring firewalls, IDS/IPS, and servers; and Managing firewalls and IDS/IPS. We know that is not an exhaustive list of everything you do operationally on a daily basis, but we figure this is a good place to start.

To make the content digestible, and to start getting feedback, we are starting with the Monitor process map. While doing the research we found many commonalities in what has to happen at the highest level to monitor these devices. As we dig into the subprocesses over the next few weeks we will uncover the differences between firewalls, IDS/IPS, and servers.

Keep the philosophy of Quant in mind: the high-level process framework is intended to cover all the tasks involved. That doesn’t mean you need to do everything, but it does mean this should be a fairly exhaustive list. Individual organizations can then pick and choose the steps appropriate for them. We could really use some feedback on how well this encompasses all the network and security device monitoring processes. We based this process on our own experience and primary research, but that doesn’t mean we haven’t missed something. If so, let us know in the comments.

Next week, we’ll post a similar process map for the Manage process. Actually two of them, since there are some fundamental differences between managing firewalls and IDS/IPS.

After the major processes are covered, we’ll dive into all the subprocesses within each of these major steps. Then we will decompose each subprocess and define some meaningful metrics to support process optimization.

Plan

In this phase, we define the depth and breadth of our monitoring activities. These are not one-time events, but a process to revisit every quarter, or after any incident that triggers a policy review.

  1. Enumerate: Find all the security, network, and server devices which are relevant to determining the security of the environment.
  2. Scope: Decide which devices are within the scope of monitoring activity. This involves identifying the asset owner; profiling the device to understand data, compliance and/or policy requirements; and assessing the feasibility of collecting data from it.
  3. Develop Policies: Determine the depth and breadth of the monitoring process, what data will be collected from the devices, and the frequency of collection. The process is designed to be extensible beyond firewall, IDS/IPS, and server monitoring (the scope of this project) to include any other kind of network, security, computing, application, or data capture/forensics device.

Policies

For device types in scope, alerting policies will be developed to identify potential incidents requiring investigation and validation. Defining the alerting policies involves a Q/A process to test the effectiveness of the alerts. A tuning process also needs to be built into the policy definitions, as over time the alert policies will need to be changed. The initial subprocesses in this step include:

  • Firewall Monitoring Policy
  • Firewall Alerting Policy
  • IDS/IPS Monitoring Policy
  • IDS/IPS Alerting Policy
  • Server Monitoring Policy
  • Server Alerting Policy

Finally, monitoring is part of a larger security operations process, so policies are required for workflow and incident response. These policies define how the monitoring information is leveraged by other operational teams; as well as how potential incidents are identified, validated, and investigated.

Monitor

In this phase the monitoring policies are put to use, gathering the data and analyzing it to identify areas for validation and potential investigation. All collected data is stored for compliance, trending, and reporting as well.

  1. Collect: Collect alerts and log records based on the policies defined in Phase 1. Can be performed within a single-element manager or abstracted into a broader Security Information and Event Management (SIEM) system for multiple devices and device types.
  2. Store: For both compliance and forensics purposes, the collected data must be stored for future access.
  3. Analyze: The collected data is then analyzed to identify potential incidents based on alerting policies defined in Phase 1. This may involve numerous techniques, including simple rule matching (availability, usage, attack traffic policy violations, time-based rules, etc.) and/or multi-factor correlation based on multiple device types (SIEM).
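Reduced to a toy sketch, the Collect/Store/Analyze loop might look like the following. The field names, retention size, and the repeated-deny rule are all invented for illustration; a real deployment would do this in an element manager or SIEM:

```python
# Toy Collect -> Store -> Analyze loop: retain log records, then apply a
# simple rule match over them. All fields and thresholds are illustrative.

from collections import deque

store = deque(maxlen=10000)   # "Store": retained for compliance/forensics

def collect(record: dict) -> None:
    """Collect: append a normalized log record to the store."""
    store.append(record)

def analyze(records, threshold=5):
    """Analyze: flag source IPs with repeated firewall denies (rule match)."""
    counts = {}
    for r in records:
        if r.get("device") == "firewall" and r.get("action") == "deny":
            counts[r["src"]] = counts.get(r["src"], 0) + 1
    return [ip for ip, n in counts.items() if n >= threshold]

for _ in range(6):
    collect({"device": "firewall", "action": "deny", "src": "10.0.0.9"})
collect({"device": "firewall", "action": "allow", "src": "10.0.0.5"})

print(analyze(store))
```

Anything `analyze` returns would feed the Validate/Investigate step in the next phase.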

Action

If an alert fires in the analyze step, this phase kicks in to understand the issue and determine whether further action/escalation is necessary.

  1. Validate/Investigate: If and when an alert is generated, it must be investigated to validate the attack. Is it a false positive? Is it an issue that requires further action? If so, move to the Action/Escalate step. Determine whether policies need to be tuned based on the accuracy of the alert.
  2. Action/Escalate: Take action to remediate the issue. May involve hand-off/escalation to operations team.

After a certain number of alert validations, a feedback loop determines whether any of the policies must be changed and/or tuned. This is an ongoing process rather than a point-in-time activity, as the dynamic nature of networks and attacks requires ongoing diligence to ensure the monitoring and alerting policies remain relevant and sufficient.

This brings up two big questions we could use some help with:

  1. Does this structure work? At the highest level, we believe monitoring is pretty much monitoring. Is that correct? Or do you fundamentally monitor firewalls differently from IDS/IPS and from servers?
  2. Are we missing anything? Should we move anything? Insert, update, or delete?

We are looking forward to your comments and feedback. Fire away.


Network Security Operations Quant posts

  1. Announcing NetSec Ops Quant: Network Security Metrics Suck. Let’s Fix Them.

—Mike Rothman

Understanding and Selecting a SIEM/LM: Correlation and Alerting

By Adrian Lane

Continuing our discussion of core SIEM and Log Management technology, we now move into event correlation. This capability was the holy grail that drove most investment in early SIEM products, and probably the security technology creating the most consistent disappointment amongst its users. But ultimately the ability to make sense of the wide variety of data streams, and use them to figure out what is under attack or compromised, is essential to any security practice. This means that despite the disappointments, there will continue to be plenty of interest in correlation moving forward.

Correlation

Defining correlation is akin to kicking a hornet’s nest. It immediately triggers agitated debates because there are several published definitions and every expert has their favorite. As usual, we need to revisit the definitions and level-set, not to create controversy (though that tends to happen), but to make things easy to understand. As we search for a pragmatic definition, we need to simplify concepts to make subjects understandable to a wider audience at the expense of precision. We understand our community is not a bunch of shrinking violets, so we welcome your comments and suggestions to make our research more relevant.

Let’s get back to the end-user problem driving SIEM and log management. Ultimately the goal of this technology is to interpret security-related data to improve security, increase efficiency, and/or document security controls. If a single file contained all the information required for security analysis, we would not bother with the collection and association of events from multiple sources. The truth is that each log or event contains a piece of information, which forms part of the puzzle, but lacks context necessary to analyze the big picture. In order to make meaningful decisions about what is going on with our applications and within our network, we need to combine events from different sources. Which events we want, and what pieces of data from those events we need, vary based on the problem we are trying to solve.

So what is correlation? Correlation is the act of linking multiple events together to detect strange behavior. It is the association of different but related events to provide broader context than a single event can provide. Keep in mind that we are using a broad definition of ‘event’ because as the breadth of analysis increases, data may expand beyond traditional events. Seems pretty simple, eh?

Let’s look at an example of how correlation can help achieve one of our key use cases: increasing the efficiency of the security team. In this case an analyst gets events from multiple locations and device types (and/or applications), and is expected to figure out whether there is an issue. The attacker might first scan the perimeter and then target an externally facing web server with a series of known exploits. Upon successfully compromising the web server, the attacker sets up a new user account and starts scanning internally to find more exploitable hosts.

The data is available to catch this attack, but not in a single place. The firewalls see the initial scans. The IDS/IPS sees the series of exploits. And the user directory sees the new account on the compromised server. The objective of correlation is to see all these events come through and recognize that the server has been compromised and needs immediate attention. Easy in concept, very hard in practice.

Historically, the ability to do near-real-time analysis and event correlation was one of the ways SIEM differed from log management, although the lines continue to blur. Most of the steps we have discussed so far (collecting data, then aggregating and normalizing it) help isolate the attributes that link events together to make correlation possible. Once data is in manageable form we apply rules to detect attacks and misuse. These rules are composed of granular criteria (e.g., a specific router, user account, or time window), and determine whether a series of events reaches a threshold requiring corrective action.

But the devil is in the details. First, the technology implements correlation as a linear series of comparisons. Each comparison may be a simple “if X = Y, then do something” check, but we may need to string several of these comparisons together. Second, correlation is built on rules for known attack patterns, which means we need some idea of what we are looking for in order to create the correlation rules. We have to understand attack patterns or the elements of a compliance requirement to determine which device and event types should be linked. Third, we have to factor in time: events do not happen simultaneously, so there is a window of time within which events are likely to be related. Finally, the effectiveness of correlation depends on the quality of data collection, normalization, and the tagging or indexing of information that feeds the correlation rules.
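The web server scenario earlier can be reduced to a minimal sketch of these mechanics: link events from different devices in order, and fire only if the chain completes inside a time window. The event tuples, the pattern, and the 300-second window are all invented for illustration:

```python
# Toy correlation rule: fire when a known attack pattern occurs in order,
# with all steps inside the time window. Events are (timestamp, device, type).

WINDOW = 300  # seconds within which related events are considered linked

# the known attack pattern we are looking for, in order
PATTERN = [("firewall", "scan"), ("ids", "exploit"), ("directory", "new_account")]

def correlate(events, pattern=PATTERN, window=WINDOW):
    """Return True if the pattern occurs in order within the window."""
    idx, start = 0, None
    for ts, device, etype in sorted(events):
        if start is not None and ts - start > window:
            idx, start = 0, None          # window expired; reset the chain
        if (device, etype) == pattern[idx]:
            start = ts if idx == 0 else start
            idx += 1
            if idx == len(pattern):
                return True               # chain complete: raise the alert
    return False

events = [
    (100, "firewall", "scan"),
    (160, "ids", "exploit"),
    (220, "directory", "new_account"),   # all within 300s: correlated
]
print(correlate(events))
```

Even this toy shows the trade-offs described above: the rule only catches the pattern it encodes, and widening the window trades false negatives for false positives.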

Development of rules takes time and understanding, as well as ongoing maintenance and tuning. Sure, your vendor will provide out-of-the-box policies to help get you started, but expect to invest significant time into tweaking existing rules for your environment, and writing new policies for security and compliance to keep pace with the very dynamic security environment. Further complicating matters: more rules and more potentially-linked events to consider increase computational load exponentially. There is a careful balancing act to be performed between the number of policies to implement, the accuracy of the results, and the throughput of the system. These factors may not seem related at first, but generic rules detect more threats at the cost of more false positives, while the more specific the rule – the more precisely tailored to find a particular threat – the fewer new problems it will find.

This is the difficulty in getting correlation working effectively in most environments. As described in the Network Security Fundamentals series, it’s important to define clear goals for any correlation effort and stay focused on them. Trying to boil the ocean always yields disappointing results.

Alerting

Once events are correlated, analysis performed, and weirdness discovered, what do we do? We want to quickly and automatically announce what was discovered, getting information to the right places so action can be taken. This is where alerting comes in.

During policy analysis, when we detect that something strange has occurred, the policy triggers a predefined response. Alerts are the actions we take when policies are violated. Where the alert gets sent, how it’s sent, what information is passed, and the criticality of the event are all definable within the system, and embodied in the rules that form our policies. During policy development we define the response for each suspect event. Tuning policies for compliance and operations management is a significant effort, but the investment is required in order to get SIEM/LM up and running and reap any benefit.

Alert messages are distributed in different ways. Real-time alerts, for rule violations which require immediate attention, can be sent via email, pager, or text message to IT staff. Some alerts are best addressed by programmatic response, and are sent via Simple Network Management Protocol (SNMP) packets, XML messages, or application API calls with sufficient information for the responding application to take instant corrective action. Non-critical events may be logged as informational within the SIEM or log management platform, or sent to workflow/trouble-ticketing systems for future analysis. In most cases alerts rely on additional tools and technologies for broadcast and remediation, but the SIEM platform is configured to provide just the right subset of data for each communication medium.
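The distribution logic above can be sketched as a simple routing table keyed on the criticality defined in the rule. The channel names and criticality levels here are assumptions for the sketch, not any SIEM platform’s actual configuration:

```python
# Illustrative alert router: critical events page a human, high-severity
# events go to a programmatic responder, everything else is ticketed.
def route_alert(alert):
    """Pick a delivery channel based on the criticality set in the rule."""
    if alert["criticality"] == "critical":
        # Immediate human attention via email/pager/text.
        return {"channel": "page", "payload": alert["summary"]}
    if alert["criticality"] == "high":
        # Programmatic response: SNMP trap, XML message, or API call,
        # carrying enough detail for automated corrective action.
        return {"channel": "snmp_trap", "payload": alert}
    # Non-critical events are logged or sent to trouble-ticketing.
    return {"channel": "ticket", "payload": alert["summary"]}
```

Note how each channel gets a different payload shape – the point made above about providing just the right subset of data for each communication medium.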

SIEM/LM platforms tightly associate alerts with the rules, even embedding the alert definitions within the policy management system. This way, as rules are created, their criticality and the appropriate response are defined at the same time. The goal is not a futile attempt to replace an analyst, but to make him/her more effective and efficient – which is the name of the game.

Selection

With SIEM, correlation and alerting are the first areas of the technology you will spend a great deal of time customizing for your organization. Collection, aggregation, and normalization are relatively static built-in features, with the main variances being the number of data types, protocols, and automation supported – leaving little room for tuning and filtering. Correlation and alerting are different, and require much more tuning and configuration to fit business requirements. We will go into much more detail on what to look for during your selection process later in this series, but plan on dedicating a large portion of your proof-of-concept review (and initial installation) to building and tuning your correlation rule set.


Other Posts in Understanding and Selecting SIEM/LM

  1. Introduction.
  2. Use Cases, Part 1.
  3. Use Cases, part 2.
  4. Business Justification.
  5. Data Collection.
  6. Aggregation, Normalization, and Enrichment.

—Adrian Lane

Thoughts on Privacy and Security

By Rich

I was catching up on my reading today, and this post by Richard Bejtlich reminded me of the tension we sometimes see between security and privacy. Richard represents the perspective of a Fortune 5 security operator who is tasked with securing customer information and intellectual property, while facing a myriad of international privacy laws – some of which force us to reduce security for the sake of privacy (read the comments).

I’ve always thought of privacy from a slightly different perspective. Privacy traditionally falls into two categories:

  • The right to be left alone (just ask any teenage boy in the bathroom).
  • The right to control what people know about you.

According to the dictionary on my Mac, privacy is:

the state or condition of being free from being observed or disturbed by other people : she returned to the privacy of her own home.

My understanding is that it is only fairly recently that we’ve added personal information into the mix. We are also in the midst of a massive upheaval of social norms enabled by technology and the distribution and collection of information that changes the scope of “free from being observed.”

Thus, in the information age, privacy is now becoming as much about controlling information about us as it is about physical privacy.

Now let’s mix in security, which I consider a mechanism to enforce privacy – at least in this context. If we think about our interactions with everyone from businesses and governments to other individuals, privacy consists of three components:

  1. Intent: What I intend to do with the information you give me, whether it is the contents of a personal conversation or a business transaction.
  2. Communication: What I tell you I intend to do with said information.
  3. Capability: My ability to maintain and enforce the social (or written) contract defined by my intent and communications.

Thus I see security as a mechanism of capability. The role of “security” is to maintain whatever degree of protection around personal information the organization intends and communicates through their privacy policy – which might be the best or worst in the world, but the role of security is to best enforce that policy, whatever it is.

Companies tend to get into trouble either when they fail to meet their stated policies (due to business or technical/security reasons), or when their intent is incompatible with their legal requirements.

This is how I define privacy on the collection side – but it has nothing to do with protecting or managing your own information, nor does it address the larger societal issues such as changing ownership of information, changing social mores, changes in personal comfort over time, or collection of information in non-contracted situations (e.g., public movement).

The real question then emerges: is privacy even possible?

  • As Adam Shostack noted, our perceptions of privacy change over time. What I deem acceptable to share today will change tomorrow.
  • But once information is shared, it is nearly impossible to retract. Privacy decisions are permanent, no matter how we may feel about them later.
  • There is no perfect security, but once private information becomes public, it is public forever.
  • Isolated data will be aggregated and correlated. It used to require herculean efforts to research and collect public records on an individual. Now they are for sale. Cheap. Online. To anyone.

We share information with everyone, from online retailers, to social networking sites, to the blogs we read. There is no way all of these disparate organizations can effectively protect all our information, even if we wanted them to. Privacy decisions and failures are sticky.

I believe we are in the midst of a vast change in how our society values and defines privacy – one that will evolve over years. This doesn’t mean there’s no such thing as privacy, but it does mean that today we lack consistent mechanisms to control what others know about us.

Without perfect security there cannot be complete privacy, and there is no such thing as perfect security. Privacy isn’t dead, but it is most definitely changing in ways we cannot fully predict.

My personal strategy is to compartmentalize and use a diverse set of tools and services, limiting how much any single one collects on me. It’s probably little more than privacy theater, but it helps me get through the day as I stroll toward an uncertain future.

—Rich

Incite 6/2/2010: Smuggler’s Blues

By Mike Rothman

Given the craziness of my schedule, I don’t see a lot of movies in the theater anymore. Hard to justify the cost of a babysitter for a movie, when we can sit in the house and watch movies (thanks, Uncle Netflix!). But the Boss does take the kids to the movies because it’s a good activity, burns up a couple hours (especially in the purgatory period between the end of school and beginning of camp), and most of the entertainment is pretty good.

Though it does give me some angst to see two credit card receipts from every outing. The first is the tickets, and that’s OK. The movie studios pay lots to produce these fantasies, so I’m willing to pay for the content. It’s the second transaction, from the snack bar, that makes me nuts. My snack bar tab is usually as much as the tickets. Each kid needs a drink, and some kind of candy and possibly popcorn. All super-sized, of course.

And it’s not even the fact that we want to get super sizes of anything. That’s the only option. You can pay $4 for a monstrous soda, which they call small. Or $4.25 for something even bigger. If you can part with $4.50, then you get enough pop to keep a village thirst-free for a month.

And don’t get me started on the popcorn. First of all, I know it’s nutritionally terrible. They may use different oil now, but in the portions they sell, you could again feed a village. But don’t think the movie theaters aren’t looking out for you. If you get the super-duper size, you get free refills of both popcorn and soda. Of course, you’d need to be the size of an elephant to knock down more than two gallons of soda and a feedbag of popcorn, but at least they are giving something back.

So we’ve been trying something a bit different, born of necessity. The Boss can’t eat the movie popcorn due to some food allergies, so she smuggles in her own popcorn. And usually a bottle of water. You know what? It works. It’s not like the 14-year-old ticket attendant is going to give me a hard time.

I know, it’s smuggling, but I don’t feel guilty at all. I’d be surprised if the monstrous soda cost the theater more than a quarter, but they charge $4. So I’m not going to feel bad about sneaking in a small bag of Raisinettes or Goobers with a Diet Coke. I’ll chalk it up to a healthy lifestyle. Reasonable portions and lighter on my wallet. Sounds like a win-win to me.

– Mike.

Photo credits: “Movie Night Party” originally uploaded by Kid’s Birthday Parties


Incite 4 U

  1. Follow the dollar, not the SLA – Great post by Justin James discussing the reality of service level agreements (SLAs). I know I’ve advised many clients to dig in and get preferential SLAs to ensure they get what they contract for, but ultimately it may be cheaper for the service provider to violate the SLA (and pay the fine) than it is to meet the agreement. I remember telling the stories of HIPAA compliance, and the reality that some health care organizations faced millions of dollars of investment to get compliant. But the fines were five figures. Guess what they chose to do. Yes, Bob, the answer was roll the dice. Same goes for SLAs, so there are a couple lessons here. 1) Try to get teeth in your SLA. The service provider will follow the money, so if the fine costs them more, they’ll do the right thing. 2) Have a Plan B. Contingencies and containment plans are critical, and this is just another reason why. When considering services, you cannot make the assumption that the service provider will be acting in your best interest. Unless your best interest is aligned with their best interest. Which is the reality of ‘cloud’. – MR

  2. It just doesn’t matter – I’m always pretty skeptical of poorly sourced articles on the Internet, which is why the Financial Times report of Google ditching Microsoft Windows should be taken with a grain of salt. While I am sometimes critical of Google, I can’t imagine they would really be this stupid. First of all, at least some of the attacks they suffered from China were against old versions of Windows – as in Internet Explorer 6, which even isolated troops of Antarctic chimpanzees know not to touch. Then, unless you are running some of the more-obscure ultra-secure Unix variants, no version of OS X or Linux can stand up to a targeted attacker with the resources of a nation state. Now, if they want some diversity, that’s a different story, but the latest versions of Windows are far more hardened than most of the alternatives – even my little Cupertino-based favorite.– RM

  3. Hack yourself, even if it’s unpopular… – I’ve been talking about security assurance for years. Basically this is trying to break your own defenses and seeing where the exposures are, by any means necessary. That means using live exploits (with care) and/or leveraging social engineering tactics. But when I read stories like this one from Steve Stasiukonis where there are leaks, and the tests are compromised, or the employees actually initiate legal action against the company and pen tester, I can only shake my head. Just to reiterate: the bad guys don’t send messages to the chairman saying “I IZ IN YER FILEZ, READIN YER STUFFS!” They don’t worry about whether their tactics are “illegal human experiments,” they just rob you blind and pwn your systems. Yes, it may take some political fandango to get the right folks on board with the tests, but the alternative is to clean up the mess later. – MR

  4. Walk the walk – A while back we were talking about getting started in security over at The Network Security Podcast, and one bit of consensus was that you should try and spend some time on a help desk, as a developer, or as a systems or network administrator, before jumping into security. Basically spend some time in the shoes of your eventual clients. Jack Daniels suggests going a step further and “think like a defender”. Whenever I see someone whining about how bad we are at security, or how stupid someone is for not making “X” threat their top priority, odds are they either never spent time in an operational IT position, or have since forgotten what it’s like. And for those defenders, quite a few seem to forget the practical realities of keeping users up and running on a daily basis. Hell, same goes for researchers who forget the pressures of developing on budget and target. Whatever your role in security, try to understand what it is like on the other side.– RM

  5. Good enough needs to be good enough… – Interesting and short piece on fudsec.com this week from Duncan Hoopes addressing whether this concept of good enough permeating the web world is a good or bad thing for security. At times like these, the pragmatist in me bubbles to the surface. We have to work with our budgets and resources as they are. We could always use more, but probably aren’t going to get it. So we rely on “good enough” by necessity, not as a primary goal. But the reality is we can never really be done, right? So our constant focus on reacting faster and incident response is driven by the reality that no matter how much we do, it’s not enough. Gosh, it would be great to have HiFi security. You know, whatever you need to really solve the problem. But that never lasts, and soon enough you’d need an AM radio with a single speaker because that’s all the money left in the budget. – MR

  6. Carry on – To my mind, David Mortman’s post on Broken Promises and Mike Rothman’s post on In Search of … Solutions are two parts of the same idea. Does a technology solve, partially or completely, the business problem it’s positioned to solve? Mike complains that vendors trying to pass off a mallet as a mouse trap just doesn’t cut it, and customers need to ask for a better mouse trap. Mort is saying: stop bitching that the mouse trap isn’t perfect, because it at least solves much of the problem. These posts, along with Jack Daniel’s post on Time for a new mantra, are more about the frustrations of the security community’s inability to make meaningful changes. Seriously, being a security professional today is like being an anti-smoking advocate … in 1955. It’s difficult for the business community to care about unknown consequences or unknown damages, or even to believe proposed security precautions will help. But security professionals self-flagellate over our inability to get management to understand the problem, over vendors’ failure to make better products, and over IT departments’ failure to efficiently implement security programs. Ultimately security teams and vendors are not the agents of change – the business has to be, and it will be a long time before businesses embrace security as a required function. –AL

  7. The more social, the less secure – Later today Rich will post some of his ideas on privacy vs. security. So without stealing any of his thunder, let’s take a look specifically at Facebook. Boaz examines the privacy and security debate by candidly assessing what Facebook does or does not need to do relative to security. A vociferous few are calling for Zuckerberg’s head on a stick because monetizing eyeballs usually involves some erosion of privacy. But in reality, whether Facebook’s privacy policy is right or wrong, not restrictive enough, or whatever, like with most other security, 99.99% of users just don’t care. You are dealing with asshats who constantly post pictures and comments that put themselves in compromising positions. You can talk until you are blue in the face, but they won’t change because they don’t see a problem. Maybe they will someday, and maybe they won’t. We security folks see the issue differently, but we are literally the lunatic fringe here. As Boaz says, “For individuals, the risks of collaborative web services are far outweighed by the benefits.” From an enterprise perspective, we must continue to do the right thing to protect our users’ data, but in reality most don’t care until their nudie pictures show up on tmz.com, and then some of them will tell all their friends. – MR

—Mike Rothman

Tuesday, June 01, 2010

DB Quant: Discovery Metrics, Part 4, Access and Authorization

By Adrian Lane

At this point we have set up the access controls strategy in the Planning phase, and collected information on the databases and applications under our control. Now we analyze existing access control and authorization settings. There are two basic efforts in this phase: 1) determining how the system that implements access controls is configured, and 2) determining how permissions are granted by that system. Permissions analysis may be a bit more difficult in this phase, depending on which access control methods you are using. Things are more complicated if you are using domain or local system credentials rather than just internal database credentials, because external accounts may be mapped differently within the database than they appear externally – for example, a standard domain user account that has administrative privileges within the database.

Groups and roles, how each is configured, and how permissions are allocated to applications, service accounts, end users, and administrators all require considerable discovery work and analysis. For all but the smallest organizations, these review items can take weeks to cover. Once again, this task can be performed manually, but we strongly advise vulnerability and configuration assessment tools to support your efforts.

We’ve slightly updated our process to:

  1. Determine Scope
  2. Set up
  3. Scan
  4. Analyze & Report

Determine Scope

Variable Notes
Time to list databases This may be a subset of databases, preferably prioritized
Time to determine authorization methods Database, domain, local, and mixed mode are common options

Setup

Variable Notes
Capital and time costs to acquire and install tools for automated assessments Optional
Time to request and obtain access permissions
Time to establish baselines for group and role configurations Policy is the high-level requirement; rule is the technical query for inspection. Vendors provide these with the tools, but they may require tuning for your internal requirements and environment.
Time to create custom report templates to review permissions Data privacy, operational control, and security require different views of settings to verify authorization settings

Scan

Variable Notes
Time to enumerate groups, roles, and accounts
Time to scan database and domain access configuration
Time to scan password configuration Aging policies, reuse, failed login, and inactivity lockouts
Time to scan passwords for compliance Optional
Time to record results
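As a rough illustration of the password-configuration scan, a check might compare discovered settings against policy thresholds for aging, reuse, and lockouts. The parameter names and policy values below are assumptions for the sketch, not any vendor’s actual configuration keys:

```python
# Hypothetical policy thresholds for password configuration.
PASSWORD_POLICY = {
    "max_age_days": 90,         # password aging
    "reuse_history": 5,         # previous passwords remembered
    "failed_login_lockout": 5,  # failed attempts before lockout
}

def check_password_config(settings):
    """Return the discovered settings that are weaker than policy requires.
    A missing setting is treated as non-compliant where that is the safe
    interpretation."""
    findings = {}
    if settings.get("max_age_days", float("inf")) > PASSWORD_POLICY["max_age_days"]:
        findings["max_age_days"] = settings.get("max_age_days")
    if settings.get("reuse_history", 0) < PASSWORD_POLICY["reuse_history"]:
        findings["reuse_history"] = settings.get("reuse_history", 0)
    if settings.get("failed_login_lockout", float("inf")) > PASSWORD_POLICY["failed_login_lockout"]:
        findings["failed_login_lockout"] = settings.get("failed_login_lockout")
    return findings
```

Assessment tools package many such checks as rules; the time metrics above capture tuning them to your baseline and recording what they find.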

Analyze & Report

Variable Notes
Time to map admin roles Verify DBA permissions are divided among separate roles
Time to review service account and application access rights Verify DB system mapping to domain access
Time to evaluate user accounts and privileges Verify users are assigned the correct groups and roles, and groups and roles have reasonable access
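The account-and-privilege evaluation above boils down to a baseline comparison: are accounts assigned only the groups and roles they should have? The accounts and role names below are invented for illustration; a real review would pull the observed grants from the database’s system catalog via your assessment tool:

```python
# Hypothetical approved account -> role baseline.
BASELINE = {
    "app_service": {"app_read", "app_write"},
    "dba_backup":  {"backup_operator"},
}

def find_violations(observed):
    """Compare observed account->role grants against the approved baseline.
    Returns unknown accounts and any grants beyond what was approved."""
    violations = {}
    for account, roles in observed.items():
        if account not in BASELINE:
            violations[account] = roles        # account not in baseline at all
            continue
        extra = roles - BASELINE[account]      # grants beyond the baseline
        if extra:
            violations[account] = extra
    return violations
```

The same pattern supports the admin-role check in the table above: define the approved separation of DBA duties as the baseline, then flag any account holding roles that should be divided.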

Other Posts in Project Quant for Database Security

  1. An Open Metrics Model for Database Security: Project Quant for Databases
  2. Database Security: Process Framework
  3. Database Security: Planning
  4. Database Security: Planning, Part 2
  5. Database Security: Discover and Assess Databases, Apps, Data
  6. Database Security: Patch
  7. Database Security: Configure
  8. Database Security: Restrict Access
  9. Database Security: Shield
  10. Database Security: Database Activity Monitoring
  11. Database Security: Audit
  12. Database Security: Database Activity Blocking
  13. Database Security: Encryption
  14. Database Security: Data Masking
  15. Database Security: Web App Firewalls
  16. Database Security: Configuration Management
  17. Database Security: Patch Management
  18. Database Security: Change Management
  19. DB Quant: Planning Metrics, Part 1
  20. DB Quant: Planning Metrics, Part 2
  21. DB Quant: Planning Metrics, Part 3
  22. DB Quant: Planning Metrics, Part 4
  23. DB Quant: Discovery Metrics, Part 1, Enumerate Databases
  24. DB Quant: Discovery Metrics, Part 2, Identify Apps
  25. DB Quant: Discovery Metrics, Part 3, Config and Vulnerability Assessment

—Adrian Lane

On “Security engineering: broken promises”

By David Mortman

Recently Michael Zalewski posted a rant about the state of security engineering in Security engineering: broken promises. I posted my initial response to this on Twitter: “Great explanation of the issue, zero thoughts on solutions. Bored now.” I still stand behind that response. As a manager, problems without potential solutions are useless to me. The solutions don’t need to be deep technical solutions – sometimes the solution is to monitor or audit. Sometimes the solution is to do nothing, accept the risk, and make a note of it in case it comes up in conversation or an audit.

But as I’ve mulled over this post over the last two weeks, there is more going on here. There seems to be a prevalent attitude among security practitioners in general, and researchers in particular, that if they can break something it’s completely useless. There’s an old Yiddish saying that loosely translates to: “To a thief there is no lock.” We’re never going to have perfect security, so picking on something for being imperfect is just disingenuous and grandstanding.

We need to be asking ourselves a pragmatic question: Does this technology or process make things better? Just about any researcher will tell you that Microsoft’s SDL has made their lives much harder, and they have to work a lot more to break stuff. Is it perfect? No, of course not! But is it a lot better than it used to be for all involved (except the researchers Microsoft created the SDL to impede)? You betcha. Are CWE and CVSS perfect? No! Were they intended to be? No! But again, they’re a lot better than what we had before. Can we improve them? Yes, CVSS continues to go through revisions and will get better. As will the Risk Management frameworks.

So really, while bitching is fun and all, if you’re not offering improvements, you’re just making things worse.

—David Mortman

FireStarter: In Search of… Solutions

By Mike Rothman

A holy grail of technology marketing is to define a product category. Back in the olden days of 1998, it was all about establishing a new category with interesting technology and going public, usually on nothing more than a crapload of VC money and a few million eyeballs.

Then everything changed. The bubble popped, money dried up, and all those companies selling new products in new categories went bust. IT shops became very risk averse – only spending money on established technologies. But that created a problem, in that analysts had to sell more tetragon reports, which requires new product categories.

My annoyance with these product categories hit a fever pitch last week when LogLogic announced a price decrease on their SEM (security event management) technology. Huh? Seems they dusted off the SEM acronym after years on the shelf. I thought Gartner had decreed that it was SIEM (security information and event management) when it got too confusing between the folks who did SEM and SIM (security information management) – all really selling the same stuff. Furthermore, log management is now part of that deal. Do they dare argue with the great all-knowing oracles in Stamford?

Not that this expanded category definition is controversial. We’ve even posted that log management or SIEM isn’t a stand-alone market – rather it’s the underlying storage platform for a number of applications for security and ops professionals.

The lesson most of us forget is that end users don’t care what you call the technology, as long as you solve their problems. Maybe the project is compliance automation or incident investigation. SIEM/Log Management can be used for both. IT-GRC solutions can fit into the first bucket, while forensic toolkits fit into the latter. Which of course confuses the hell out of most end users. What do they buy? And don’t all the vendors say they do everything anyway?

The security industry – along with the rest of technology – focuses on products, not solutions. It’s about the latest flashing light in the new version of the magic box. Sure, most of the enterprise companies send their folks to solution selling school. Most tech company websites have a “solution” area, but in reality it’s always an afterthought.

Let’s consider the NAC (network access control) market as another example. Lots of folks think Cisco killed the NAC market by making big promises and not delivering. But ultimately, end users didn’t care about NAC – they cared about endpoint assessment and controlling guest access, and they solved those problems through other means.

Again, end users need to solve problems. They want answers and solutions, but they get a steady diet of features and spiels on why one box is better than the competitors. They get answers to questions they don’t ask. No wonder most end users turn off their phones and don’t respond to email.

Vendors spin their wheels talking about product category leadership. Who cares? Actually, Rich reminded me that the procurement people seem to care. We all know how hard it is to get a vendor in the wrong quadrant (or heaven forbid no quadrant at all) through the procurement gauntlet. Although the users are also to blame for accepting this behavior, and the dumb and lazy ones even like it. They wait for a vendor to come in and tell them what’s important, as opposed to figuring out what problem needs to be solved. From where I sit, the buying dynamic is borked, although it’s probably just as screwy in other sectors.

So what to do? That’s a good question, and I’d love your opinion. Should vendors run the risk of not knowing where they fit by not identifying with a set of product categories – and instead focus on solutions and customer problems? Should users stop sending out RFPs for SIEM/Log Management, when what they are really buying is compliance automation? Can vendors stop reacting to competitive speeds and feeds? Can users actually think more strategically, rather than whether to embrace the latest shiny upgrade from the default vendor?

I guess what I’m asking is whether it’s possible to change the buying dynamic. Or should I just quiet down, accept the way the game is played, and try to like it?

—Mike Rothman

Friday, May 28, 2010

The Hidden Costs of Security

By Mike Rothman

When I was abroad on vacation recently, the conversation got to the relative cost of petrol (yes, gasoline) in the States versus pretty much everywhere else. For those of you who haven’t travelled much, fuel tends to be 70-80% more expensive elsewhere. Why is that?

It comes down to the fact that the US Government bears many of the real costs of providing a sufficient stream of petroleum. Those costs take the form of military, diplomatic, and other types of spending in the Middle East to keep the oil flowing. I’m not going to descend into either politics or energy dynamics here, but suffice it to say we’d be investing a crapload more money in alternative energy if US consumers had to directly bear the full brunt of what it costs to pull oil out of the Middle East.

With that thought in the back of my mind, I checked out one of Bejtlich’s posts last weekend which talked about the R&D costs of the bad guys. Basically these folks run businesses like anyone else. They have to invest in their ‘product’, which is finding new vulnerabilities and exploiting them. They also have to invest in “customer service,” which is basically staying invisible once they are inside to avoid detection.

And these costs are significant, but compared to the magnitude of the ‘revenue’ side of their equation, I’m sure they are happy to make the investment. Cyber-fraud is big business.

But what about other hidden costs of providing security? We had a great discussion on Monday with the FireStarter talking about value/loss metrics, but do these risk models take into account some of the costs we don’t necessarily see as part of security?

Like our network traffic. How much bandwidth is wasted on reconnaissance traffic looking for holes in our perimeters? What about the portion of your inbound pipe congested with spam, which you need to analyze and then drop? One of the key reasons anti-spam services took off is that the bandwidth demand of spam was transferred to the service provider.

What would we do differently if we had to allocate those hidden costs to the security team? I know, at the end of the day it’s all just overhead, but what if? Would it change our behavior or our security architectures? I suspect we’d focus much more on providing clean pipes and having more of our security done in the cloud, removing some of these hidden costs from our IT stack. That makes economic sense, and we all know most of what we do ultimately is driven by economics.

How about the costs of cleaning up an incident? Yes, there are some security costs in there from the standpoint of investigation and forensics, but depending on the nature of the attack there will be legal and HR resources required, which usually don’t make it into the incident post-mortem. Or what about the opportunity cost of 1,000 folks losing their authentication tokens and being locked out of the network? Or the time it takes a knowledge worker to jump through hoops to get around aggressive web filtering rules? Or the cost of false positives on the IPS that block legitimate business traffic and break critical applications?

We know how big the security budget is, but we don’t have a firm grasp of what security really costs our businesses. If we did, what would we do differently? I don’t necessarily have an answer, but it’s an interesting question. As we head into Memorial Day weekend here in the US, we obviously need to remember all the soldiers who gave all. But we also need to remember the ripple effect of every action and reaction to the bad guys. Every time I go through a TSA checkpoint in an airport, I’m painfully aware of the billions spent each month around the world to protect air travel, regardless of whether terrorists will ever attack air travel again. I guess the same holds for security: regardless of whether you’re actually being attacked, the costs of being secure add up. Score another one for the bad guys.

—Mike Rothman

DB Quant: Discovery and Assessment Metrics, Part 3, Assess Vulnerabilities and Configuration

By Adrian Lane

By this point we have discovered all databases and identified our key databases based on the sensitivity of their data, importance to business units, and connected applications. Now it’s time to find potential security issues, and decide whether the databases meet our security and configuration requirements. Some of this can be performed manually, but as with network security we strongly advise using vulnerability and configuration assessment tools.

The cost metrics associated with configuration and vulnerability analysis typically run higher the first time the process is put in place. Investigating policies, installing tools, and implementing rules are all time-consuming. Once the process is established, the total amount of work falls off dramatically, with relatively small incremental investments of time for each round of scanning.
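This cost curve is a simple amortization: a large one-time setup investment plus a small incremental cost per scanning cycle. A minimal sketch; all figures below are purely illustrative, not measured values.

```python
# Illustrative amortization of assessment costs in hours; the numbers are hypothetical.

def avg_cost_per_cycle(setup_hours: float, incremental_hours: float, cycles: int) -> float:
    """Average hours per scanning cycle once setup is amortized over all cycles run."""
    return setup_hours / cycles + incremental_hours

# First quarter: the one-time setup dominates.
print(avg_cost_per_cycle(setup_hours=80, incremental_hours=8, cycles=1))   # 88.0
# After two years of quarterly scans, per-cycle cost approaches the incremental cost.
print(avg_cost_per_cycle(setup_hours=80, incremental_hours=8, cycles=8))   # 18.0
```

The point of the model is simply that skipping the quarterly reruns forfeits the payoff on the setup investment.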

As a reminder, the process is:

  1. Define Scans
  2. Setup
  3. Scan
  4. Distribute Results

Define Scans

Variable Notes
Time to list databases This may be a subset of databases, preferably prioritized
Time to gather internal requirements Security, operations, and internal audit groups. These should feed directly from the standards established in the Plan phase
Time to identify tasks/workflow Should be a one-time effort
Time to collect updated vulnerability lists CERT or other threat alerts
Time to collect configuration requirements You should have this from the Plan phase, but may need to update or refine. Also, these need to be updated regularly to account for software patches. This includes patch levels, security checklists from database vendors, and checklists from third parties such as NIST and the Center for Internet Security.
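One lightweight way to keep the collected configuration requirements current across patch cycles is to capture each checklist item as structured data, so a patch-driven update touches one record rather than a whole document. A minimal sketch; the field names and sample entries are assumptions for illustration, not drawn from any actual vendor or CIS checklist.

```python
from dataclasses import dataclass

@dataclass
class ConfigRequirement:
    """One configuration checklist item, traceable back to its source standard."""
    control_id: str       # internal tracking identifier (hypothetical scheme)
    source: str           # e.g., "CIS", "NIST", "vendor checklist"
    setting: str          # configuration parameter being checked
    required_value: str   # value the standard requires
    min_patch_level: str  # patch level at which this check first applies

requirements = [
    ConfigRequirement("DB-001", "CIS", "remote_login", "disabled", "10.1"),
    ConfigRequirement("DB-002", "vendor checklist", "audit_trail", "enabled", "10.2"),
    ConfigRequirement("DB-003", "NIST", "legacy_auth", "disabled", "11.0"),
]

def applicable_checks(reqs, patch_level):
    """Return only the checks that apply at the given patch level.
    (Plain string comparison is a simplification of real version ordering.)"""
    return [r for r in reqs if r.min_patch_level <= patch_level]

print([r.control_id for r in applicable_checks(requirements, "10.2")])
# ['DB-001', 'DB-002']
```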

Setup

Variable Notes
Capital and time costs to acquire and install tools for automated assessments Optional
Time to contact database owners to obtain access
Time to update externally supplied policies and rules Policy is the high-level requirement; a rule is the technical query used for inspection. Vendors provide these with their tools, but they may require tuning for your internal requirements and environment.
Time to create custom rules from internal and external policies Additional policies and rules not provided by an outside party

Scan

Variable Notes
Time to run active scan
Time to scan host configuration This is the host system for the database
Time to scan database patches
Time to scan database configuration Internal scan of database settings
Time to scan database for vulnerabilities (Internal) e.g., access settings, admin roles, use of encryption
Time to scan database for vulnerabilities (External) e.g., network settings, external stored procedures
Variable: Time to rerun scans
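At its core, the configuration portion of these scans is a comparison of observed settings against the rule set built in the previous steps. A minimal, vendor-neutral sketch, assuming settings have already been collected from the database and host; real assessment tools wrap this in per-platform collectors, and the setting names here are hypothetical.

```python
def check_configuration(observed: dict, rules: dict) -> list:
    """Return (setting, expected, actual) for every rule the observed config violates."""
    violations = []
    for setting, expected in rules.items():
        actual = observed.get(setting, "<missing>")
        if actual != expected:
            violations.append((setting, expected, actual))
    return violations

# Hypothetical settings gathered during the scan step.
observed = {"remote_os_authent": "true", "audit_trail": "db", "sql92_security": "true"}
rules    = {"remote_os_authent": "false", "audit_trail": "db", "sql92_security": "true"}

for setting, expected, actual in check_configuration(observed, rules):
    print(f"{setting}: expected {expected}, found {actual}")
# remote_os_authent: expected false, found true
```

Rerunning the scan (the last variable above) is then just re-collecting `observed` and diffing again, which is why the incremental cost stays low.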

Distribute Results

Variable Notes
Time to save scan results
Time to filter and prioritize scan results by requirements Divide data by stakeholder (security, ops, audit)
Time to generate report(s) and distribute
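The filtering step above amounts to routing each finding to the stakeholders who care about it, ordered by severity. A small sketch, assuming findings are tagged with an audience list when the rules are defined; the tags, findings, and severity scale (1 = most severe) are illustrative assumptions.

```python
from collections import defaultdict

# Hypothetical scan findings, each tagged with interested stakeholders and a severity.
findings = [
    {"issue": "default password on admin account", "severity": 1, "audience": ["security", "audit"]},
    {"issue": "missing quarterly patch",           "severity": 2, "audience": ["ops", "security"]},
    {"issue": "audit trail disabled",              "severity": 3, "audience": ["audit"]},
]

def reports_by_stakeholder(findings):
    """Group findings per stakeholder, most severe first."""
    reports = defaultdict(list)
    for f in findings:
        for who in f["audience"]:
            reports[who].append(f)
    for items in reports.values():
        items.sort(key=lambda f: f["severity"])
    return dict(reports)

for who, items in reports_by_stakeholder(findings).items():
    print(who, [f["issue"] for f in items])
```

Each stakeholder then receives only the slice of the scan results relevant to their role, which is the substance of the "filter and prioritize" line item.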

Other Posts in Project Quant for Database Security

  1. An Open Metrics Model for Database Security: Project Quant for Databases
  2. Database Security: Process Framework
  3. Database Security: Planning
  4. Database Security: Planning, Part 2
  5. Database Security: Discover and Assess Databases, Apps, Data
  6. Database Security: Patch
  7. Database Security: Configure
  8. Database Security: Restrict Access
  9. Database Security: Shield
  10. Database Security: Database Activity Monitoring
  11. Database Security: Audit
  12. Database Security: Database Activity Blocking
  13. Database Security: Encryption
  14. Database Security: Data Masking
  15. Database Security: Web App Firewalls
  16. Database Security: Configuration Management
  17. Database Security: Patch Management
  18. Database Security: Change Management
  19. DB Quant: Planning Metrics, Part 1
  20. DB Quant: Planning Metrics, Part 2
  21. DB Quant: Planning Metrics, Part 3
  22. DB Quant: Planning Metrics, Part 4
  23. DB Quant: Discovery Metrics, Part 1, Enumerate Databases
  24. DB Quant: Discovery Metrics, Part 2, Identify Apps

—Adrian Lane