Thursday, June 03, 2010

DB Quant: Secure Metrics, Part 1, Patch

By Adrian Lane

Now we move past planning & discovery, and into the actual work of securing databases. The Secure phase is where we implement many of the preventative security measures and establish the secure baseline for database operations. First up is database patching.

For patching most of the costs are time and effort to evaluate, test, and apply a patch. Fixed costs are mostly support and maintenance contracts with the database vendor, if applicable (very few patch management products work with databases, so you are usually limited to the DBMS vendor’s tools). Your vendor may offer tools to support patch rollout and administration, which are included as a capital investment cost.

Our process is:

  1. Evaluate
  2. Acquire
  3. Test & Approve
  4. Confirm & Deploy
  5. Document

Evaluate

Variable | Notes
Time to monitor for advisories per database type | Vendor alerts and industry advisories announce patch availability
Time to identify appropriate patches |
Time to identify workarounds | Identify workarounds if available, and determine whether they are appropriate
Time to determine priority | e.g., Is this a critical vulnerability; if so, when should you apply the patch?

Acquire

Variable | Notes
Time to acquire patch(es) |
Costs for maintenance, support, or additional patch management tools | Optional: updates to vendor maintenance contracts, if required

Test & Approve

Variable | Notes
Time to create regression test cases and acceptance criteria | i.e., How will you verify the patch does not break your applications?
Time to set up test environment | Obtain servers, tools, and software for verification; then set up for testing
Time to run tests | Variable: may require multiple cycles, depending upon test cases
Time to analyze results |
Time to create deployment packages | Optional – if not using stock patches. Approve, label, and archive the tested patch.

Confirm & Deploy

Variable | Notes
Time to schedule and notify | Schedule personnel & communicate downtime to users
Time to install | Take the DB offline, back up, apply the patch, and restart
Time to verify | Verify the patch installed correctly and database services are available
Time to clean up | Remove temp files

Document

Variable | Notes
Time to document | Close out trouble tickets and update workflow tracking
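
Taken together, the variables in these five tables feed a simple roll-up: total the hours per phase and multiply by a loaded labor rate. Here is a minimal Python sketch of that roll-up; the task names, hours, and hourly rate are purely illustrative assumptions, not numbers from the Quant model:

```python
# Minimal patch-cycle cost model: sum hours per phase, multiply by a
# blended hourly rate. All names and numbers below are illustrative.
HOURLY_RATE = 85.0  # assumed blended cost per staff hour

phase_hours = {
    "evaluate": {"monitor_advisories": 2.0, "identify_patches": 1.0,
                 "identify_workarounds": 1.5, "determine_priority": 0.5},
    "acquire": {"download_patch": 0.5},
    "test_approve": {"build_test_cases": 4.0, "setup_environment": 3.0,
                     "run_tests": 6.0, "analyze_results": 2.0,
                     "create_packages": 1.0},
    "confirm_deploy": {"schedule_notify": 1.0, "install": 2.0,
                       "verify": 1.0, "clean_up": 0.5},
    "document": {"close_tickets": 0.5},
}

def phase_cost(tasks, rate=HOURLY_RATE):
    """Cost of one phase: total task hours times the hourly rate."""
    return sum(tasks.values()) * rate

def cycle_cost(phases, rate=HOURLY_RATE):
    """Total variable cost of one patch cycle (fixed costs excluded)."""
    return sum(phase_cost(tasks, rate) for tasks in phases.values())

if __name__ == "__main__":
    for name, tasks in phase_hours.items():
        print(f"{name:15s} ${phase_cost(tasks):8.2f}")
    print(f"{'total':15s} ${cycle_cost(phase_hours):8.2f}")
```

Note that fixed costs (maintenance contracts, patch management tools) sit outside this per-cycle total; they amortize across all cycles in a period.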

Other Posts in Project Quant for Database Security

  1. An Open Metrics Model for Database Security: Project Quant for Databases
  2. Database Security: Process Framework
  3. Database Security: Planning
  4. Database Security: Planning, Part 2
  5. Database Security: Discover and Assess Databases, Apps, Data
  6. Database Security: Patch
  7. Database Security: Configure
  8. Database Security: Restrict Access
  9. Database Security: Shield
  10. Database Security: Database Activity Monitoring
  11. Database Security: Audit
  12. Database Security: Database Activity Blocking
  13. Database Security: Encryption
  14. Database Security: Data Masking
  15. Database Security: Web App Firewalls
  16. Database Security: Configuration Management
  17. Database Security: Patch Management
  18. Database Security: Change Management
  19. DB Quant: Planning Metrics, Part 1
  20. DB Quant: Planning Metrics, Part 2
  21. DB Quant: Planning Metrics, Part 3
  22. DB Quant: Planning Metrics, Part 4
  23. DB Quant: Discovery Metrics, Part 1, Enumerate Databases
  24. DB Quant: Discovery Metrics, Part 2, Identify Apps
  25. DB Quant: Discovery Metrics, Part 3, Config and Vulnerability Assessment
  26. DB Quant: Discovery Metrics, Part 4, Access and Authorization

—Adrian Lane

White Paper Released: Endpoint Security Fundamentals

By Mike Rothman

Endpoint Security is a pretty broad topic. Most folks associate it with traditional anti-virus or even the newfangled endpoint security suites. In our opinion, looking at the issue just from the perspective of the endpoint agent is myopic. To us, endpoint security is as much a program as anything else.

In this paper we discuss endpoint security from a fundamental blocking and tackling perspective. We start with identifying the exposures and prioritizing remediation, then discuss specific security controls (both process and product), and also cover the compliance and incident response aspects.

It’s a pretty comprehensive paper, which means it’s not short. But if you are trying to understand how to comprehensively protect your endpoint devices, this paper will provide a great perspective and allow you to put all your defenses into context. We assembled this document from the Endpoint Security Fundamentals series posted to the blog in early April, all compiled together, professionally edited, and prettified.

Special thanks to Lumension Security for licensing the report.

You can download the paper directly (PDF), or visit the landing page, where you can leave comments or criticism, and track revisions.

—Mike Rothman

Wednesday, June 02, 2010

NSO Quant: Monitor Process Map

By Mike Rothman

It’s been a while since you’ve heard anything about Network Security Operations Quant, but that doesn’t mean we haven’t had the sweatshop working overtime, doing primary research to figure out how organizations manage and monitor their network and security devices. To recap the project scope, we’ll have 5 different “threads”: Monitoring firewalls, IDS/IPS, and servers; and Managing firewalls and IDS/IPS. We know that is not an exhaustive list of everything you do operationally on a daily basis, but we figure this is a good place to start.

To make the content digestible, and also to start getting feedback, we are starting with the Monitor process map. When doing the research we found that there are many commonalities in what has to happen at the highest level to monitor these devices. As we dig into the subprocesses over the next few weeks we will uncover differences between firewalls, IDS/IPS, and servers.

Keep the philosophy of Quant in mind: the high level process framework is intended to cover all the tasks involved. That doesn’t mean you need to do everything, but does mean this should be a fairly exhaustive list. Individual organizations can then pick and choose those steps which are appropriate for them. We could really use some feedback on how well this encompasses all the network and security device monitoring processes. We based this process on our own experience and primary research, but that doesn’t mean we haven’t missed something. If so, let us know in the comments.

Next week, we’ll post a similar process map for the Manage process. Actually two of them, since there are some fundamental differences between managing firewalls and IDS/IPS.

After the major processes are covered, we’ll dive into all the subprocesses within each of these major steps. Then we will decompose each subprocess and define some meaningful metrics to support process optimization.

Plan

In this phase, we define the depth and breadth of our monitoring activities. These are not one-time events, but a process to revisit every quarter, or after any incident that triggers a policy review.

  1. Enumerate: Find all the security, network, and server devices which are relevant to determining the security of the environment.
  2. Scope: Decide which devices are within the scope of monitoring activity. This involves identifying the asset owner; profiling the device to understand data, compliance and/or policy requirements; and assessing the feasibility of collecting data from it.
  3. Develop Policies: Determine the depth and breadth of the monitoring process, what data will be collected from the devices, and the frequency of collection. The process is designed to be extensible beyond firewall, IDS/IPS, and server monitoring (the scope of this project) to include any other kind of network, security, computing, application, or data capture/forensics device.

Policies

For device types in scope, alerting policies will be developed to identify potential incidents requiring investigation and validation. Defining the alerting policies involves a Q/A process to test the effectiveness of the alerts. A tuning process also needs to be built into the policy definitions, as over time the alert policies will need to be changed. The initial subprocesses in this step include:

  • Firewall Monitoring Policy
  • Firewall Alerting Policy
  • IDS/IPS Monitoring Policy
  • IDS/IPS Alerting Policy
  • Server Monitoring Policy
  • Server Alerting Policy

Finally, monitoring is part of a larger security operations process, so policies are required for workflow and incident response. These policies define how the monitoring information is leveraged by other operational teams; as well as how potential incidents are identified, validated, and investigated.

Monitor

In this phase the monitoring policies are put to use, gathering the data and analyzing it to identify areas for validation and potential investigation. All collected data is stored for compliance, trending, and reporting as well.

  1. Collect: Collect alerts and log records based on the policies defined in Phase 1. Can be performed within a single-element manager or abstracted into a broader Security Information and Event Management (SIEM) system for multiple devices and device types.
  2. Store: For both compliance and forensics purposes, the collected data must be stored for future access.
  3. Analyze: The collected data is then analyzed to identify potential incidents based on alerting policies defined in Phase 1. This may involve numerous techniques, including simple rule matching (availability, usage, attack traffic policy violations, time-based rules, etc.) and/or multi-factor correlation based on multiple device types (SIEM).
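
As a toy illustration of the Collect and Analyze steps, here is a Python sketch of simple rule matching over collected log records. The record fields and the failed-login rule are hypothetical stand-ins for whatever your alerting policies actually define:

```python
from collections import defaultdict

# Toy version of the Collect -> Analyze loop: count failed-login records
# per source and flag any source that crosses a threshold in the batch.
FAILED_LOGIN_THRESHOLD = 3  # assumed policy value

def analyze(records, threshold=FAILED_LOGIN_THRESHOLD):
    """Return the set of sources whose failed-login count meets the threshold."""
    counts = defaultdict(int)
    for rec in records:  # each record is a dict parsed from a device log
        if rec.get("event") == "failed_login":
            counts[rec["src"]] += 1
    return {src for src, n in counts.items() if n >= threshold}

logs = [
    {"src": "10.0.0.5", "event": "failed_login"},
    {"src": "10.0.0.5", "event": "failed_login"},
    {"src": "10.0.0.9", "event": "accept"},
    {"src": "10.0.0.5", "event": "failed_login"},
]
print(analyze(logs))  # {'10.0.0.5'}
```

A SIEM generalizes this pattern across device types and adds correlation, but the shape – collect, match against policy, emit candidates for validation – is the same.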

Action

If an alert fires in the analyze step, this phase kicks in to understand the issue and determine whether further action/escalation is necessary.

  1. Validate/Investigate: If and when an alert is generated, it must be investigated to validate the attack. Is it a false positive? Is it an issue that requires further action? If so, move on to the Action/Escalate step. Determine whether policies need to be tuned based on the accuracy of the alert.
  2. Action/Escalate: Take action to remediate the issue. May involve hand-off/escalation to operations team.

After a certain number of alert validations, a feedback loop determines whether any of the policies must be changed and/or tuned. This is an ongoing process rather than a point-in-time activity, as the dynamic nature of networks and attacks requires ongoing diligence to ensure the monitoring and alerting policies remain relevant and sufficient.
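
That feedback loop can be as simple as tracking validation outcomes per rule and flagging the noisy ones for tuning. A Python sketch, where the 50% false-positive threshold and the rule names are made up for illustration:

```python
# Feedback-loop sketch: track validation outcomes per alerting rule and
# flag rules whose false-positive rate suggests they need tuning.
FP_TUNE_THRESHOLD = 0.5  # assumed: tune any rule over 50% false positives

def rules_to_tune(outcomes, threshold=FP_TUNE_THRESHOLD):
    """outcomes: {rule_name: list of booleans, True = confirmed incident}."""
    flagged = []
    for rule, results in outcomes.items():
        if not results:
            continue  # no validations yet: nothing to judge
        fp_rate = results.count(False) / len(results)
        if fp_rate > threshold:
            flagged.append(rule)
    return flagged

history = {
    "port_scan":   [True, False, True, True],    # 25% FP: keep as-is
    "dns_anomaly": [False, False, False, True],  # 75% FP: needs tuning
}
print(rules_to_tune(history))  # ['dns_anomaly']
```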

This brings up two big questions we could use some help with:

  1. Does this structure work? At the highest level, we believe monitoring is pretty much monitoring. Is that correct? Or do you fundamentally monitor firewalls differently from IDS/IPS and from servers?
  2. Are we missing anything? Should we move anything? Insert, update, or delete?

We are looking forward to your comments and feedback. Fire away.


Network Security Operations Quant posts

  1. Announcing NetSec Ops Quant: Network Security Metrics Suck. Let’s Fix Them.

—Mike Rothman

Understanding and Selecting a SIEM/LM: Correlation and Alerting

By Adrian Lane

Continuing our discussion of core SIEM and Log Management technology, we now move into event correlation. This capability was the holy grail that drove most investment in early SIEM products, and probably the security technology creating the most consistent disappointment amongst its users. But ultimately the ability to make sense of the wide variety of data streams, and use them to figure out what is under attack or compromised, is essential to any security practice. This means that despite the disappointments, there will continue to be plenty of interest in correlation moving forward.

Correlation

Defining correlation is akin to kicking a hornet’s nest. It immediately triggers agitated debates because there are several published definitions and every expert has their favorite. As usual, we need to revisit the definitions and level-set, not to create controversy (though that tends to happen), but to make things easy to understand. As we search for a pragmatic definition, we need to simplify concepts to make subjects understandable to a wider audience at the expense of precision. We understand our community is not a bunch of shrinking violets, so we welcome your comments and suggestions to make our research more relevant.

Let’s get back to the end-user problem driving SIEM and log management. Ultimately the goal of this technology is to interpret security-related data to improve security, increase efficiency, and/or document security controls. If a single file contained all the information required for security analysis, we would not bother with the collection and association of events from multiple sources. The truth is that each log or event contains a piece of information, which forms part of the puzzle, but lacks context necessary to analyze the big picture. In order to make meaningful decisions about what is going on with our applications and within our network, we need to combine events from different sources. Which events we want, and what pieces of data from those events we need, vary based on the problem we are trying to solve.

So what is correlation? Correlation is the act of linking multiple events together to detect strange behavior. It is the association of different but related events to provide broader context than a single event can provide. Keep in mind that we are using a broad definition of ‘event’ because as the breadth of analysis increases, data may expand beyond traditional events. Seems pretty simple, eh?

Let’s look at an example of how correlation can help achieve one of our key use cases: increasing the efficiency of the security team. In this case an analyst gets events from multiple locations and device types (and/or applications), and is expected to figure out whether there is an issue. The attacker might first scan the perimeter and then target an externally facing web server with a series of known exploits. Upon successfully compromising the web server, the attacker sets up a new user account and starts scanning internally to find more exploitable hosts.

The data is available to catch this attack, but not in a single place. The firewalls see the initial scans. The IDS/IPS sees the series of exploits. And the user directory sees the new account on the compromised server. The objective of correlation is to see all these events come through and recognize that the server has been compromised and needs immediate attention. Easy in concept, very hard in practice.

Historically, the ability to do near-real-time analysis and event correlation was one of the ways SIEM differed from log management, although the lines continue to blur. Most of the steps we have discussed so far (collecting data, then aggregating and normalizing it) help isolate the attributes that link events together to make correlation possible. Once data is in manageable form we apply rules to detect attacks and misuse. These rules consist of granular criteria (e.g., specific router, user account, time, etc.), and determine whether a series of events reaches a threshold requiring corrective action.

But the devil is in the details. First, the technology implements correlation as a linear series of comparisons. Each comparison may be a simple case of “if X = Y, then do something,” but we may need to string several of these comparisons together. Second, correlation is built on rules for known attack patterns. This means we need some idea of what we are looking for to create the correlation rules; we have to understand attack patterns or elements of a compliance requirement in order to determine which device and event types should be linked. Third, we have to factor in time, because events do not happen simultaneously, so there is a window of time within which events are likely to be related. Finally, the effectiveness of correlation also depends on the quality of data collection, normalization, and the tagging or indexing of information that feeds the correlation rules.
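
To make those points concrete, here is a deliberately simplified Python sketch of a linear correlation rule: three event types (perimeter scan, exploit alert, new account) must appear for the same host, in order, within a time window. The event types, field layout, and 10-minute window are all illustrative assumptions, not any vendor's rule language:

```python
# Simplified correlation rule: flag a host when a firewall scan, an IDS
# exploit alert, and a new-account event all reference it within a window.
WINDOW = 600  # seconds: assumed window within which events are related

SEQUENCE = ["fw_scan", "ids_exploit", "new_account"]

def correlate(events, window=WINDOW):
    """events: list of (timestamp, event_type, host) tuples, any order."""
    hits = set()
    by_host = {}
    for ts, etype, host in sorted(events):  # time order, then per-host
        by_host.setdefault(host, []).append((ts, etype))
    for host, evs in by_host.items():
        idx, first_ts = 0, None  # walk the required sequence in order
        for ts, etype in evs:
            if etype == SEQUENCE[idx]:
                if idx == 0:
                    first_ts = ts
                idx += 1
                if idx == len(SEQUENCE):
                    if ts - first_ts <= window:
                        hits.add(host)
                    break
    return hits

events = [
    (100, "fw_scan", "web01"),
    (160, "ids_exploit", "web01"),
    (400, "new_account", "web01"),
    (100, "fw_scan", "web02"),  # scan alone does not trigger the rule
]
print(correlate(events))  # {'web01'}
```

Even this toy shows why rule count matters: every additional linked event type multiplies the comparisons the engine must evaluate per incoming record.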

Development of rules takes time and understanding, as well as ongoing maintenance and tuning. Sure, your vendor will provide out-of-the-box policies to help get you started, but expect to invest significant time into tweaking existing rules for your environment, and writing new policies for security and compliance to keep pace with a very dynamic security environment. Further complicating matters: more rules, and more potentially-linked events to consider, increase computational load exponentially. There is a careful balancing act to be performed between the number of policies to implement, the accuracy of the results, and the throughput of the system. These goals pull against each other: generic rules detect more threats, but at a cost of more false positives. The more specific the rule, and the more precisely tailored to find a particular threat, the less likely it is to catch new problems.

This is the difficulty in getting correlation working effectively in most environments. As described in the Network Security Fundamentals series, it’s important to define clear goals for any correlation effort and stay focused on them. Trying to boil the ocean always yields disappointing results.

Alerting

Once events are correlated, analysis performed, and weirdness discovered, what do we do? We want to quickly and automatically announce what was discovered, getting information to the right places so action can be taken. This is where alerting comes in.

During policy analysis, when we detect that something strange has occurred, the policy triggers a predefined response. Alerts are the actions we take when policies are violated. Where the alert gets sent, how it’s sent, what information is passed, and the criticality of the event are all definable within the system, and embodied in the rules that form our policies. During policy development we define the response for each suspect event. Tuning policies for compliance and operations management is a significant effort, but the investment is required in order to get SIEM/LM up and running and reap any benefit.

Alert messages are distributed in different ways. Real-time alerts, for rule violations which require immediate attention, can be sent via email, pager, or text message to IT staff. Some alerts are best addressed by programmatic response, and are sent via Simple Network Management Protocol (SNMP) packets, XML messages, or application API calls with sufficient information for the responding application to take instant corrective action. Non-critical events may be logged as informational within the SIEM or log management platform, or sent to workflow/trouble-ticketing systems for future analysis. In most cases alerts rely on additional tools and technologies for broadcast and remediation, but the SIEM platform is configured to provide just the right subset of data for each communication medium.
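
The routing logic described above reduces to a severity-to-channel lookup defined alongside each rule. This Python fragment is illustrative only; the channel names and alert fields are assumptions, and a real platform would hand the payload off to its email, SNMP, or ticketing integrations:

```python
# Alert-routing sketch: pick a delivery channel from the criticality
# defined alongside the rule. Channel and field names are illustrative.
ROUTES = {
    "critical": "page_oncall",     # immediate human attention
    "high":     "email_security",  # same-day review
    "low":      "ticket_queue",    # logged for later analysis
}

def route_alert(alert, routes=ROUTES):
    """Return (channel, payload) for an alert dict with a 'severity' key."""
    channel = routes.get(alert["severity"], "ticket_queue")  # safe default
    payload = {k: alert[k] for k in ("rule", "host", "severity")}
    return channel, payload

channel, payload = route_alert(
    {"rule": "ids_exploit_burst", "host": "web01", "severity": "critical"})
print(channel)  # page_oncall
```

Binding the route to the rule at definition time is what lets criticality and response be set in one place, as described below.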

SIEM/LM platforms tightly associate alerts with the rules, even embedding the alert definitions within the policy management system. This way as rules are created their criticality and the appropriate response are defined at the same time. Not in a futile attempt to replace an analyst, but in order to make him/her more effective and efficient, which is the name of the game.

Selection

With SIEM, correlation and alerting are the first areas of the technology you will spend a great deal of time customizing for your organization. Collection, aggregation, and normalization are relatively static built-in features, with the main variances being the number of data types, protocols, and automation supported – leaving little room for tuning and filtering. Correlation and alerting are different, and require much more tuning and configuration to fit business requirements. We will go into much more detail on what to look for during your selection process later in this series, but plan on dedicating a large portion of your proof-of-concept review (and initial installation) to building and tuning your correlation rule set.


Other Posts in Understanding and Selecting SIEM/LM

  1. Introduction.
  2. Use Cases, Part 1.
  3. Use Cases, part 2.
  4. Business Justification.
  5. Data Collection.
  6. Aggregation, Normalization, and Enrichment.

—Adrian Lane

Thoughts on Privacy and Security

By Rich

I was catching up on my reading today, and this post by Richard Bejtlich reminded me of the tension we sometimes see between security and privacy. Richard represents the perspective of a Fortune 5 security operator who is tasked with securing customer information and intellectual property, while facing a myriad of international privacy laws – some of which force us to reduce security for the sake of privacy (read the comments).

I’ve always thought of privacy from a slightly different perspective. Privacy traditionally falls into two categories:

  • The right to be left alone (just ask any teenage boy in the bathroom).
  • The right to control what people know about you.

According to the dictionary on my Mac, privacy is:

the state or condition of being free from being observed or disturbed by other people : she returned to the privacy of her own home.

My understanding is that it is only fairly recently that we’ve added personal information into the mix. We are also in the midst of a massive upheaval of social norms enabled by technology and the distribution and collection of information that changes the scope of “free from being observed.”

Thus, in the information age, privacy is now becoming as much about controlling information about us as it is about physical privacy.

Now let’s mix in security, which I consider a mechanism to enforce privacy – at least in this context. If we think about our interactions with everyone from businesses and governments to other individuals, privacy consists of three components:

  1. Intent: What I intend to do with the information you give me, whether it is the contents of a personal conversation or a business transaction.
  2. Communication: What I tell you I intend to do with said information.
  3. Capability: My ability to maintain and enforce the social (or written) contract defined by my intent and communications.

Thus I see security as a mechanism of capability. The role of “security” is to maintain whatever degree of protection around personal information the organization intends and communicates through their privacy policy – which might be the best or worst in the world, but the role of security is to best enforce that policy, whatever it is.

Companies tend to get into trouble either when they fail to meet their stated policies (due to business or technical/security reasons), or when their intent is incompatible with their legal requirements.

This is how I define privacy on the collection side – but it has nothing to do with protecting or managing your own information, nor does it address the larger societal issues such as changing ownership of information, changing social mores, changes in personal comfort over time, or collection of information in non-contracted situations (e.g., public movement).

The real question then emerges: is privacy even possible?

  • As Adam Shostack noted, our perceptions of privacy change over time. What I deem acceptable to share today will change tomorrow.
  • But once information is shared, it is nearly impossible to retract. Privacy decisions are permanent, no matter how we may feel about them later.
  • There is no perfect security, but once private information becomes public, it is public forever.
  • Isolated data will be aggregated and correlated. It used to require herculean efforts to research and collect public records on an individual. Now they are for sale. Cheap. Online. To anyone.

We share information with everyone, from online retailers, to social networking sites, to the blogs we read. There is no way all of these disparate organizations can effectively protect all our information, even if we wanted them to. Privacy decisions and failures are sticky.

I believe we are in the midst of a vast change in how our society values and defines privacy – one that will evolve over years. This doesn’t mean there’s no such thing as privacy, but it does mean that today we lack consistent mechanisms to control what others know about us.

Without perfect security there cannot be complete privacy, and there is no such thing as perfect security. Privacy isn’t dead, but it is most definitely changing in ways we cannot fully predict.

My personal strategy is to compartmentalize and use a diverse set of tools and services, limiting how much any single one collects on me. It’s probably little more than privacy theater, but it helps me get through the day as I stroll toward an uncertain future.

—Rich

Incite 6/2/2010: Smuggler’s Blues

By Mike Rothman

Given the craziness of my schedule, I don’t see a lot of movies in the theater anymore. Hard to justify the cost of a babysitter for a movie, when we can sit in the house and watch movies (thanks, Uncle Netflix!). But the Boss does take the kids to the movies because it’s a good activity, burns up a couple hours (especially in the purgatory period between the end of school and beginning of camp), and most of the entertainment is pretty good.

Though it does give me some angst to see two credit card receipts from every outing. The first is the tickets, and that’s OK. The movie studios pay lots to produce these fantasies, so I’m willing to pay for the content. It’s the second transaction, from the snack bar, that makes me nuts. My snack bar tab is usually as much as the tickets. Each kid needs a drink, and some kind of candy and possibly popcorn. All super-sized, of course.

And it’s not even the fact that we want to get super sizes of anything. That’s the only option. You can pay $4 for a monstrous soda, which they call small. Or $4.25 for something even bigger. If you can part with $4.50, then you get enough pop to keep a village thirst-free for a month.

And don’t get me started on the popcorn. First of all, I know it’s nutritionally terrible. They may use different oil now, but in the portions they sell, you could again feed a village. But don’t think the movie theaters aren’t looking out for you. If you get the super-duper size, you get free refills of both popcorn and soda. Of course, you’d need to be the size of an elephant to knock down more than two gallons of soda and a feedbag of popcorn, but at least they are giving something back.

So we’ve been trying something a bit different, born of necessity. The Boss can’t eat the movie popcorn due to some food allergies, so she smuggles in her own popcorn. And usually a bottle of water. You know what? It works. It’s not like the 14-year-old ticket attendant is going to give me a hard time.

I know, it’s smuggling, but I don’t feel guilty at all. I’d be surprised if the monstrous soda cost the theater more than a quarter, but they charge $4. So I’m not going to feel bad about sneaking in a small bag of Raisinettes or Goobers with a Diet Coke. I’ll chalk it up to a healthy lifestyle. Reasonable portions and lighter on my wallet. Sounds like a win-win to me.

– Mike.

Photo credits: “Movie Night Party” originally uploaded by Kid’s Birthday Parties


Incite 4 U

  1. Follow the dollar, not the SLA – Great post by Justin James discussing the reality of service level agreements (SLAs). I know I’ve advised many clients to dig in and get preferential SLAs to ensure they get what they contract for, but ultimately it may be cheaper for the service provider to violate the SLA (and pay the fine) than to meet the agreement. I remember the stories of HIPAA compliance, and the reality that some health care organizations faced millions of dollars of investment to get compliant. But the fines were five figures. Guess what they chose to do. Yes, Bob, the answer was to roll the dice. Same goes for SLAs, so there are a couple lessons here. 1) Try to get teeth in your SLA. The service provider will follow the money, so if the fine costs them more, they’ll do the right thing. 2) Have a Plan B. Contingencies and containment plans are critical, and this is just another reason why. When considering services, you cannot assume the service provider will act in your best interest – unless your best interest is aligned with theirs. Which is the reality of ‘cloud’. – MR

  2. It just doesn’t matter – I’m always pretty skeptical of poorly sourced articles on the Internet, which is why the Financial Times report of Google ditching Microsoft Windows should be taken with a grain of salt. While I am sometimes critical of Google, I can’t imagine they would really be this stupid. First of all, at least some of the attacks they suffered from China were against old versions of Windows – as in Internet Explorer 6, which even isolated troops of Antarctic chimpanzees know not to touch. Then, unless you are running some of the more obscure ultra-secure Unix variants, no version of OS X or Linux can stand up to a targeted attacker with the resources of a nation state. Now, if they want some diversity, that’s a different story, but the latest versions of Windows are far more hardened than most of the alternatives – even my little Cupertino-based favorite. – RM

  3. Hack yourself, even if it’s unpopular… – I’ve been talking about security assurance for years. Basically this is trying to break your own defenses and seeing where the exposures are, by any means necessary. That means using live exploits (with care) and/or leveraging social engineering tactics. But when I read stories like this one from Steve Stasiukonis, where there are leaks and the tests are compromised, or the employees actually initiate legal action against the company and pen tester, I can only shake my head. Just to reiterate: the bad guys don’t send messages to the chairman saying “I IZ IN YER FILEZ, READIN YER STUFFS!” They don’t worry about whether their tactics are “illegal human experiments” – they just rob you blind and pwn your systems. Yes, it may take some political fandango to get the right folks on board with the tests, but the alternative is to clean up the mess later. – MR

  4. Walk the walk – A while back we were talking about getting started in security over at The Network Security Podcast, and one bit of consensus was that you should try to spend some time on a help desk, as a developer, or as a systems or network administrator, before jumping into security. Basically, spend some time in the shoes of your eventual clients. Jack Daniels suggests going a step further and “thinking like a defender”. Whenever I see someone whining about how bad we are at security, or how stupid someone is for not making “X” threat their top priority, odds are they either never spent time in an operational IT position, or have since forgotten what it’s like. And for those defenders, quite a few seem to forget the practical realities of keeping users up and running on a daily basis. Hell, same goes for researchers who forget the pressures of developing on budget and on target. Whatever your role in security, try to understand what it is like on the other side. – RM

  5. Good enough needs to be good enough… – Interesting and short piece on fudsec.com this week from Duncan Hoopes, addressing whether the concept of good enough permeating the web world is a good or bad thing for security. At times like these, the pragmatist in me bubbles to the surface. We have to work with our budgets and resources as they are. We could always use more, but probably aren’t going to get it. So we rely on “good enough” by necessity, not as a primary goal. But the reality is we can never really be done, right? So our constant focus on reacting faster and incident response is driven by the reality that no matter how much we do, it’s not enough. Gosh, it would be great to have HiFi security. You know, whatever you need to really solve the problem. But that never lasts, and soon enough you’d need an AM radio with a single speaker, because that’s all the money left in the budget. – MR

  6. Carry on – To my mind, David Mortman’s post on Broken Promises and Mike Rothman’s post on In Search of … Solutions are two parts of the same idea. Does a technology solve, partially or completely, the business problem it’s positioned to solve? Mike complains that vendors trying to pass off a mallet as a mouse trap just don’t cut it, and customers need to ask for a better mouse trap. Mort is saying: stop bitching that the mouse trap isn’t perfect, because it at least solves much of the problem. These posts, along with Jack Daniel’s post on Time for a new mantra, are more about the frustrations of the security community’s inability to make meaningful changes. Seriously, being a security professional today is like being an anti-smoking advocate … in 1955. It’s difficult for the business community to care about unknown consequences or unknown damages, or even to believe proposed security precautions will help. But security professionals self-flagellate over our inability to get management to understand the problem, vendors’ failure to make better products, and IT departments’ failure to efficiently implement security programs. Ultimately security teams and vendors are not the agents of change – the business has to be, and it will be a long time before businesses embrace security as a required function. – AL

  7. The more social, the less secure – Later today Rich will post some of his ideas on privacy vs. security. So without stealing any of his thunder, let’s take a look specifically at Facebook. Boaz examines the privacy and security debate by candidly assessing what Facebook does or does not need to do relative to security. A vociferous few are calling for Zuckerberg’s head on a stick because monetizing eyeballs usually involves some erosion of privacy. But in reality, whether Facebook’s privacy policy is right or wrong, not restrictive enough, or whatever, like with most other security, 99.99% of users just don’t care. You are dealing with asshats who constantly post pictures and comments that put themselves in compromising positions. You can talk until you are blue in the face, but they won’t change because they don’t see a problem. Maybe they will someday, and maybe they won’t. We security folks see the issue differently, but we are literally the lunatic fringe here. As Boaz says, “For individuals, the risks of collaborative web services are far outweighed by the benefits.” From an enterprise perspective, we must continue to do the right thing to protect our users’ data, but in reality most don’t care until their nudie pictures show up on tmz.com, and then some of them will tell all their friends. – MR

—Mike Rothman

Tuesday, June 01, 2010

DB Quant: Discovery Metrics, Part 4, Access and Authorization

By Adrian Lane

At this point we have set up the access controls strategy in the Planning phase, and collected information on the databases and applications under our control. Now we analyze existing access control and authorization settings. There are two basic efforts in this phase: 1) determining how the system that implements access controls is configured, and 2) determining how permissions are granted by that system. Permissions analysis may be a bit more difficult in this phase, depending on which access control methods you are using. Things are more complicated if you are using domain or local system credentials rather than internal database credentials alone. Those external credentials may also be mapped differently within the database than they appear externally – for example, a standard user account on the domain may hold administrative privileges within the database.

Groups and roles, how each is configured, and how permissions are allocated to applications, service accounts, end users, and administrators all require considerable discovery work and analysis. For all but the smallest organizations, these review items can take weeks to cover. Once again, this task can be performed manually, but we strongly advise vulnerability and configuration assessment tools to support your efforts.

We’ve slightly updated our process to:

  1. Determine Scope
  2. Setup
  3. Scan
  4. Analyze & Report

Determine Scope

Variable Notes
Time to list databases This may be a subset of databases, preferably prioritized
Time to determine authorization methods Database, domain, local, and mixed mode are common options

Setup

Variable Notes
Capital and time costs to acquire and install tools for automated assessments Optional
Time to request and obtain access permissions
Time to establish baselines for group and role configurations Policy is the high-level requirement; rule is the technical query for inspection. Vendors provide these with the tools, but they may require tuning for your internal requirements and environment.
Time to create custom report templates to review permissions Data privacy, operational control, and security each require a different view of the configuration to verify authorization settings

Scan

Variable Notes
Time to enumerate groups, roles, and accounts
Time to scan database and domain access configuration
Time to scan password configuration Aging policies, reuse, failed login, and inactivity lockouts
Time to scan passwords for compliance Optional
Time to record results
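The enumeration step above can be sketched with per-platform catalog queries. The system views named below (`pg_roles`, `dba_users`, `dba_roles`, `sys.server_principals`, `sys.database_principals`) are real, but the column selections and the helper function are simplified for illustration – an assessment tool would wrap these with connection handling and result recording:

```python
# Illustrative catalog queries for enumerating accounts and roles per platform.
# The view names are genuine system catalogs; the selections are deliberately
# minimal for this sketch.
CATALOG_QUERIES = {
    "postgresql": {
        "accounts": "SELECT rolname FROM pg_roles WHERE rolcanlogin",
        "roles": "SELECT rolname FROM pg_roles WHERE NOT rolcanlogin",
    },
    "oracle": {
        "accounts": "SELECT username FROM dba_users",
        "roles": "SELECT role FROM dba_roles",
    },
    "sqlserver": {
        "accounts": "SELECT name FROM sys.server_principals WHERE type IN ('S','U')",
        "roles": "SELECT name FROM sys.database_principals WHERE type = 'R'",
    },
}

def queries_for(platform: str) -> dict:
    """Return the enumeration queries for a platform, or raise if unsupported."""
    try:
        return CATALOG_QUERIES[platform.lower()]
    except KeyError:
        raise ValueError(f"no enumeration queries defined for {platform!r}")
```

Running each query and recording the results per database gives you the raw material for the Analyze & Report step.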

Analyze & Report

Variable Notes
Time to map admin roles Verify DBA permissions are divided among separate roles
Time to review service account and application access rights Verify DB system mapping to domain access
Time to evaluate user accounts and privileges Verify users are assigned the correct groups and roles, and groups and roles have reasonable access
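The “map admin roles” check – verifying DBA permissions are divided among separate roles – can be approximated as a separation-of-duties test over the enumerated role memberships. This is a minimal sketch; the role names and data shape are invented for illustration:

```python
def separation_violations(role_members, admin_roles):
    """Flag accounts holding more than one administrative role.

    role_members: mapping of role name -> list of member accounts
    admin_roles:  set of role names considered administrative
    Returns a mapping of offending account -> set of admin roles it holds.
    """
    holders = {}
    for role, members in role_members.items():
        if role in admin_roles:
            for account in members:
                holders.setdefault(account, set()).add(role)
    return {acct: roles for acct, roles in holders.items() if len(roles) > 1}
```

Any account that appears in the result is a candidate for splitting duties across separate administrative accounts.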

Other Posts in Project Quant for Database Security

  1. An Open Metrics Model for Database Security: Project Quant for Databases
  2. Database Security: Process Framework
  3. Database Security: Planning
  4. Database Security: Planning, Part 2
  5. Database Security: Discover and Assess Databases, Apps, Data
  6. Database Security: Patch
  7. Database Security: Configure
  8. Database Security: Restrict Access
  9. Database Security: Shield
  10. Database Security: Database Activity Monitoring
  11. Database Security: Audit
  12. Database Security: Database Activity Blocking
  13. Database Security: Encryption
  14. Database Security: Data Masking
  15. Database Security: Web App Firewalls
  16. Database Security: Configuration Management
  17. Database Security: Patch Management
  18. Database Security: Change Management
  19. DB Quant: Planning Metrics, Part 1
  20. DB Quant: Planning Metrics, Part 2
  21. DB Quant: Planning Metrics, Part 3
  22. DB Quant: Planning Metrics, Part 4
  23. DB Quant: Discovery Metrics, Part 1, Enumerate Databases
  24. DB Quant: Discovery Metrics, Part 2, Identify Apps
  25. DB Quant: Discovery Metrics, Part 3, Config and Vulnerability Assessment

—Adrian Lane

On “Security engineering: broken promises”

By David Mortman

Recently Michal Zalewski posted a rant about the state of security engineering in Security engineering: broken promises. I posted my initial response to this on Twitter: “Great explanation of the issue, zero thoughts on solutions. Bored now.” I still stand behind that response. As a manager, problems without potential solutions are useless to me. The solutions don’t need to be deep technical solutions – sometimes the solution is to monitor or audit. Sometimes the solution is to do nothing, accept the risk, and make a note of it in case it comes up in conversation or an audit.

But as I’ve mulled over this post over the last two weeks, there is more going on here. There seems to be a prevalent attitude among security practitioners in general, and researchers in particular, that if they can break something it’s completely useless. There’s an old Yiddish saying that loosely translates to: “To a thief there is no lock.” We’re never going to have perfect security, so picking on something for being imperfect is just disingenuous and grandstanding.

We need to be asking ourselves a pragmatic question: Does this technology or process make things better? Just about any researcher will tell you that Microsoft’s SDL has made their lives much harder, and they have to work a lot more to break stuff. Is it perfect? No, of course not! But is it a lot better than it used to be for all involved (except the researchers Microsoft created the SDL to impede)? You betcha. Are CWE and CVSS perfect? No! Were they intended to be? No! But again, they’re a lot better than what we had before. Can we improve them? Yes, CVSS continues to go through revisions and will get better. As will the Risk Management frameworks.

So really, while bitching is fun and all, if you’re not offering improvements, you’re just making things worse.

—David Mortman

FireStarter: In Search of… Solutions

By Mike Rothman

A holy grail of technology marketing is to define a product category. Back in the olden days of 1998, it was all about establishing a new category with interesting technology and going public, usually on nothing more than a crapload of VC money and a few million eyeballs.

Then everything changed. The bubble popped, money dried up, and all those companies selling new products in new categories went bust. IT shops became very risk averse – only spending money on established technologies. But that created a problem, in that analysts had to sell more tetragon reports, which requires new product categories.

My annoyance with these product categories hit a fever pitch last week when LogLogic announced a price decrease on their SEM (security event management) technology. Huh? Seems they dusted off the SEM acronym after years on the shelf. I thought Gartner had decreed that it was SIEM (security information and event management) when it got too confusing between the folks who did SEM and SIM (security information management) – all really selling the same stuff. Furthermore, log management is now part of that deal. Do they dare argue with the great all-knowing oracles in Stamford?

Not that this expanded category definition is controversial. We’ve even posted that log management or SIEM isn’t a stand-alone market – rather it’s the underlying storage platform for a number of applications for security and ops professionals.

The lesson most of us forget is that end users don’t care what you call the technology, as long as you solve their problems. Maybe the project is compliance automation or incident investigation. SIEM/Log Management can be used for both. IT-GRC solutions can fit into the first bucket, while forensic toolkits fit into the latter. Which of course confuses the hell out of most end users. What do they buy? And don’t all the vendors say they do everything anyway?

The security industry – along with the rest of technology – focuses on products, not solutions. It’s about the latest flashing light in the new version of the magic box. Sure, most of the enterprise companies send their folks to solution selling school. Most tech company websites have a “solution” area, but in reality it’s always an afterthought.

Let’s consider the NAC (network access control) market as another example. Lots of folks think Cisco killed the NAC market by making big promises and not delivering. But ultimately, end users didn’t care about NAC – they cared about endpoint assessment and controlling guest access, and they solved those problems through other means.

Again, end users need to solve problems. They want answers and solutions, but they get a steady diet of features and spiels on why one box is better than the competitors. They get answers to questions they don’t ask. No wonder most end users turn off their phones and don’t respond to email.

Vendors spin their wheels talking about product category leadership. Who cares? Actually, Rich reminded me that the procurement people seem to care. We all know how hard it is to get a vendor in the wrong quadrant (or heaven forbid no quadrant at all) through the procurement gauntlet. Although the users are also to blame for accepting this behavior, and the dumb and lazy ones even like it. They wait for a vendor to come in and tell them what’s important, as opposed to figuring out what problem needs to be solved. From where I sit, the buying dynamic is borked, although it’s probably just as screwy in other sectors.

So what to do? That’s a good question, and I’d love your opinion. Should vendors run the risk of not knowing where they fit by not identifying with a set of product categories – and instead focus on solutions and customer problems? Should users stop sending out RFPs for SIEM/Log Management, when what they are really buying is compliance automation? Can vendors stop reacting to competitive speeds and feeds? Can users actually think more strategically, rather than whether to embrace the latest shiny upgrade from the default vendor?

I guess what I’m asking is whether it’s possible to change the buying dynamic. Or should I just quiet down, accept the way the game is played, and try to like it?

—Mike Rothman

Friday, May 28, 2010

The Hidden Costs of Security

By Mike Rothman

When I was abroad on vacation recently, the conversation got to the relative cost of petrol (yes, gasoline) in the States versus pretty much everywhere else. For those of you who haven’t travelled much, fuel tends to be 70-80% more expensive elsewhere. Why is that?

It comes down to the fact that the US Government bears many of the real costs of providing a sufficient stream of petroleum. Those look like military, diplomatic, and other types of spending in the Middle East to keep the oil flowing. I’m not going to descend into either politics or energy dynamics here, but suffice it to say we’d be investing a crapload more money in alternative energy if US consumers had to directly bear the full brunt of what it costs to pull oil out of the Middle East.

With that thought in the back of my mind, I checked out one of Bejtlich’s posts last weekend which talked about the R&D costs of the bad guys. Basically these folks run businesses like anyone else. They have to invest in their ‘product’, which is finding new vulnerabilities and exploiting them. They also have to invest in “customer service,” which is basically staying invisible once they are inside to avoid detection.

And these costs are significant, but compared to the magnitude of the ‘revenue’ side of their equation, I’m sure they are happy to make the investment. Cyber-fraud is big business.

But what about other hidden costs of providing security? We had a great discussion on Monday with the FireStarter talking about value/loss metrics, but do these risk models take into account some of the costs we don’t necessarily see as part of security?

Like our network traffic. How much bandwidth is wasted on reconnaissance traffic looking for holes in our perimeters? What about the amount of your inbound pipe congested with spam, which you need to analyze and then drop? One of the key reasons anti-spam services took off is that the bandwidth demand of spam was transferred to the service provider.

What would we do differently if we had to allocate those hidden costs to the security team? I know, at the end of the day it’s all just overhead, but what if? Would it change our behavior or our security architectures? I suspect we’d focus much more on providing clean pipes and having more of our security done in the cloud, removing some of these hidden costs from our IT stack. That makes economic sense, and we all know most of what we do ultimately is driven by economics.

How about the costs of cleaning up an incident? Yes, there are some security costs in there from the standpoint of investigation and forensics, but depending on the nature of the attack there will be legal and HR resources required, which usually don’t make it into the incident post-mortem. Or what about the opportunity cost of 1,000 folks losing their authentication tokens and being locked out of the network? Or the time it takes a knowledge worker to jump through hoops to get around aggressive web filtering rules? Or the cost of false positives on the IPS that block legitimate business traffic and break critical applications?

We know how big the security budget is, but we don’t have a firm grasp of what security really costs our businesses. If we did, what would we do differently? I don’t necessarily have an answer, but it’s an interesting question. As we head into Memorial Day weekend here in the US, we need to remember obviously, all the soldiers who give all. But we also need to remember the ripple effect of every action and reaction to the bad guys. Every time I go through a TSA checkpoint in an airport, I’m painfully aware of the billions spent each month around the world to protect air travel, regardless of whether terrorists will ever attack air travel again. I guess the same analogy can be used with security. Regardless of whether you’re actually being attacked, the costs of being secure add up. Score another one for the bad guys.

—Mike Rothman

DB Quant: Discovery and Assessment Metrics, Part 3, Assess Vulnerabilities and Configuration

By Adrian Lane

By this point we have discovered all databases and identified our key databases based on the sensitivity of their data, importance to business units, and connected applications. Now it’s time to find potential security issues, and decide whether the databases meet our security and configuration requirements. Some of this can be performed manually, but as with network security we strongly advise vulnerability and configuration assessment tools.

The cost metrics associated with configuration and vulnerability analysis typically run higher the first time the process is put in place. Investigating policies, installing tools, and implementing rules are all time-consuming. Once the process is established the total amount of work falls off dramatically, with relatively small incremental investments of time for each round of scanning.

As a reminder, the process is:

  1. Define Scans
  2. Setup
  3. Scan
  4. Distribute Results

Define Scans

Variable Notes
Time to list databases This may be a subset of databases, preferably prioritized
Time to gather internal requirements Security, operations, and internal audit groups. These should feed directly from the standards established in the Plan phase
Time to identify tasks/workflow Should be a one-time effort
Time to collect updated vulnerability lists CERT or other threat alerts
Time to collect configuration requirements You should have this from the Plan phase, but may need to update or refine. Also, these need to be updated regularly to account for software patches. This includes patch levels, security checklists from database vendors, and checklists from third parties such as NIST and the Center for Internet Security.

Setup

Variable Notes
Capital and time costs to acquire and install tools for automated assessments Optional
Time to contact database owners to obtain access
Time to update externally supplied policies and rules Policy is the high-level requirement; rule is the technical query for inspection. Vendors provide these with the tools, but they may require tuning for your internal requirements and environment.
Time to create custom rules from internal and external policies Additional policies and rules not provided by an outside party

Scan

Variable Notes
Time to run active scan
Time to scan host configuration This is the host system for the database
Time to scan database patches
Time to scan database configuration Internal scan of database settings
Time to scan database for vulnerabilities (Internal) e.g., access settings, admin roles, use of encryption
Time to scan database for vulnerabilities (External) e.g., network settings, external stored procedures
Time to rerun scans Variable: may require multiple cycles

Distribute Results

Variable Notes
Time to save scan results
Time to filter and prioritize scan results by requirements Divide data by stakeholder (security, ops, audit)
Time to generate report(s) and distribute
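The filter-and-prioritize step above amounts to routing findings to the right stakeholder and sorting by severity. A minimal sketch, with category names and the data shape invented for illustration:

```python
def route_findings(findings, routing):
    """Group scan findings by stakeholder, highest severity first.

    findings: list of dicts with 'category' and numeric 'severity' keys
    routing:  mapping of finding category -> stakeholder (security, ops, audit)
    """
    buckets = {stakeholder: [] for stakeholder in set(routing.values())}
    for finding in findings:
        stakeholder = routing.get(finding["category"])
        if stakeholder:
            buckets[stakeholder].append(finding)
    for items in buckets.values():
        items.sort(key=lambda f: f["severity"], reverse=True)
    return buckets
```

Each bucket then feeds the report template for that stakeholder, so security, ops, and audit each see only the results they own.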

Other Posts in Project Quant for Database Security

  1. An Open Metrics Model for Database Security: Project Quant for Databases
  2. Database Security: Process Framework
  3. Database Security: Planning
  4. Database Security: Planning, Part 2
  5. Database Security: Discover and Assess Databases, Apps, Data
  6. Database Security: Patch
  7. Database Security: Configure
  8. Database Security: Restrict Access
  9. Database Security: Shield
  10. Database Security: Database Activity Monitoring
  11. Database Security: Audit
  12. Database Security: Database Activity Blocking
  13. Database Security: Encryption
  14. Database Security: Data Masking
  15. Database Security: Web App Firewalls
  16. Database Security: Configuration Management
  17. Database Security: Patch Management
  18. Database Security: Change Management
  19. DB Quant: Planning Metrics, Part 1
  20. DB Quant: Planning Metrics, Part 2
  21. DB Quant: Planning Metrics, Part 3
  22. DB Quant: Planning Metrics, Part 4
  23. DB Quant: Discovery Metrics, Part 1, Enumerate Databases
  24. DB Quant: Discovery Metrics, Part 2, Identify Apps

—Adrian Lane

Friday Summary: May 28, 2010

By Adrian Lane

We get a lot of requests to sponsor this blog. We got several this week. Not just the spammy “Please link with us,” or “Host our content and make BIG $$$” stuff. And not the PR junk that says “We are absolutely positive your readers would just love to hear what XYZ product manager thinks about data breaches,” or “We just released 7.2.2.4 version of our product, where we changed the order of the tabs in our web interface!” Yeah, we get fascinating stuff like that too. Daily. But that’s not what I am talking about. I am talking about really nice, personalized notes from vendors and others interested in supporting the Securosis site. They like what we do, they like that we are trying to shake things up a bit, and they like the fact that we are honest in our opinions. So they write really nice notes, and they ask if they can give us money to support what we do.

To which we rather brusquely say, “No”.

We don’t actually enjoy doing that. In fact, that would be easy money, and we like as much easy money as we can get. More easy money is always better than less. But we do not accept either advertising on the site or sponsorship because, frankly, we can’t. We just cannot have the freedom to do what we do, or promote security in the way we think best, if we accept payments from vendors for the blog. It’s like the classic trade-off in running your own business: sacrifice of security for the freedom to do things your own way. We don’t say “No,” to satisfy some sadistic desire on our part to be harsh. We do it because we want the independence to write what we want, the way we want.

Security is such a freakin’ red-headed stepchild that we have to push pretty hard to get companies, vendors, and end users to do the right thing. We are sometimes quite emphatic, to knock someone off the rhythm of that PowerPoint presentation they have delivered a hundred times, somehow without ever critically examining its content or message. If we don’t, they will keep yakking on and on about how they address “Advanced Persistent Threats”. Sometimes we spotlight the lack of critical reasoning on a customer’s part to expose the fact that they are driven by politics, without a real plan for securing their environment. We do accept sponsorship of events and white papers, but only after the content has gone through community review and everyone has had a chance to contribute. Many vendors and a handful of end users who talk with us on the phone know we can be pretty harsh at times, and they still ask if they can economically support our research. And we still say, “No”. But we appreciate the interest, and we thank you all for participating in our work.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

  • Rich: Code Re-engineering. This applies to so much more than code. I’ve been on everything from mountain rescues to woodworking projects where the hardest decision is to stop patching and nuke it from orbit. We are not mentally comfortable throwing away hours, days, or years of work; and the ability to step back, analyze, and start over is rare in any society.
  • Mike Rothman: Code Re-engineering. Adrian shows his development kung fu. He should get pissed off more often.
  • David Mortman: Gaming the Tetragon.
  • Adrian Lane: The Secerno Technology. Just because you need to understand what this is now that Oracle has their hands on it.

Other Securosis Posts

Favorite Outside Posts

Project Quant Posts

Research Reports and Presentations

Top News and Posts

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Jack, in response to FireStarter: The Only Value/Loss Metric That Matters.

All of the concerns that have been raised about estimating impact are legitimate. Part of the problem with many approaches to-date, however, is that they’ve concentrated on asset value and not clearly differentiated that from asset liability. Another challenge is that we tend to do a poor job of categorizing how loss materializes.

What I’ve had success with in FAIR is to carve loss into two components–Primary and Secondary. Primary loss occurs directly as a result of an event (e.g., productivity loss due to an application being down, investigation costs, replacement costs, etc.), while Secondary loss occurs as a consequence of stakeholder reactions to the event (e.g., fines/judgments, reputation effects, the costs associated with managing both of those, etc.). I also sub-categorize losses as materializing in one or more of six forms (productivity, response, replacement, competitive advantage, fines/judgments, and reputation).

With the clarity provided by differentiating between the Primary and Secondary loss components, and the six forms of loss, I find it much easier to get good estimates from the business subject matter experts (e.g., Legal, Marketing, Operations, etc.). To make effective use of these estimates we use them as input to PERT distribution functions, which then become part of a Monte Carlo analysis.

Despite what some people might think, this is actually a very straightforward process, and simple spreadsheet tools remove the vast majority of the complexity. Besides results that stand up to scrutiny, another advantage is that a lot of the data you get from the business SME’s is reusable from analysis to analysis, which streamlines the process considerably.
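Jack’s PERT-plus-Monte-Carlo approach can be sketched in a few lines. In this illustration the triangular distribution stands in for the PERT distribution he describes, and the loss figures and form names are invented – real inputs would come from the business SMEs:

```python
import random

def simulate_annual_loss(loss_forms, iterations=10000, seed=42):
    """Monte Carlo over per-form (low, most likely, high) loss estimates.

    loss_forms: mapping of loss form -> (low, mode, high) dollar estimates.
    Returns rough median and 90th-percentile total loss across iterations.
    """
    rng = random.Random(seed)  # fixed seed so the sketch is reproducible
    totals = []
    for _ in range(iterations):
        # random.triangular(low, high, mode) approximates the PERT draw
        totals.append(sum(rng.triangular(low, high, mode)
                          for low, mode, high in loss_forms.values()))
    totals.sort()
    return {"median": totals[iterations // 2],
            "p90": totals[int(iterations * 0.9)]}

# Hypothetical estimates for two of the six loss forms:
result = simulate_annual_loss({
    "productivity": (1_000, 5_000, 20_000),
    "response": (500, 2_000, 8_000),
})
```

As the comment notes, a simple spreadsheet (or a script like this) removes most of the apparent complexity, and the SME estimates are reusable across analyses.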

—Adrian Lane

Thursday, May 27, 2010

Understanding and Selecting SIEM/LM: Aggregation, Normalization, and Enrichment

By Adrian Lane

In the last post on Data Collection we introduced the complicated process of gathering data. Now we need to understand how to put it into a manageable form for analysis, reporting, and long-term storage for forensics.

Aggregation

SIEM platforms collect data from thousands of different sources because these events provide the data we need to analyze the health and security of our environment. In order to get a broad end-to-end view, we need to consolidate what we collect onto a single platform. Aggregation is the process of moving data and log files from disparate sources into a common repository. Collected data is placed into a homogeneous data store – typically purpose-built flat file repositories or relational databases – where analysis, reporting, and forensics occur; and archival policies are applied.

The process of aggregation – compiling these dissimilar event feeds into a common repository – is fundamental to Log Management and most SIEM platforms. Data aggregation can be performed by sending data directly into the SIEM/LM platform (which may be deployed in multiple tiers), or an intermediary host can collect log data from the source and periodically move it into the SIEM system. Aggregation is critical because we need to manage data in a consistent fashion: security, retention, and archive policies must be systematically applied. Perhaps most importantly, having all the data on a common platform allows for event correlation and data analysis, which are key to addressing the use cases we have described.

There are some downsides to aggregating data onto a common platform. The first is scale: analysis becomes exponentially harder as the data set grows. Centralized collection means huge data stores, greatly increasing the computational burden on the SIEM/LM platform. Technical architectures can help scale, but ultimately these systems require significant horsepower to handle an enterprise’s data. Systems that utilize central filtering and retention policies require all data to be moved and stored – typically multiple times – increasing the burden on the network.

Some systems scale using distributed processing, where filtering and analysis occur outside the central repository, typically at the distributed data collection point. This reduces the compute burden on the central server and allows processing to occur on smaller, more manageable data sets. It does require that policies, along with the code to process them, be distributed and kept current throughout the network. Distributed agent processes are a handy way to “divide and conquer”, but increase IT administration requirements. This strategy also adds a computational burden on the data collection points, degrading their performance and potentially slowing them enough to drop incoming data.
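The distributed-collection pattern above can be sketched minimally: filter at the edge, then forward in batches to the central repository. The function names, predicate, and batch size are illustrative, not any vendor’s API:

```python
def collect_and_forward(events, keep, batch_size=100):
    """Edge-side collection: drop unwanted events locally, then batch the
    rest for transfer to the central SIEM/LM repository.

    events:     iterable of raw events
    keep:       predicate applied at the collection point (the distributed
                filtering policy pushed out from the central server)
    batch_size: number of events shipped per transfer
    Returns the list of batches that would be forwarded.
    """
    kept = [event for event in events if keep(event)]
    return [kept[i:i + batch_size] for i in range(0, len(kept), batch_size)]
```

Filtering before forwarding is exactly the trade-off described: less network and central-server load, in exchange for keeping the `keep` policy current on every collector.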

Data Normalization

If the process of aggregation is to merge dissimilar event feeds into one common platform, normalization takes it one step further by reducing the records to just common event attributes. As we mentioned in the data collection post, most data sources collect exactly the same base event attributes: time, user, operation, network address, and so on. Facilities like syslog not only group the common attributes, but provide means to collect supplementary information that does not fit the basic template. Normalization is where known data attributes are fed into a generic template, and anything that doesn’t fit is simply omitted from the normalized event log. After all, to analyze we want to compare apples to apples, so we throw away the oranges for the sake of simplicity.

Depending upon the SIEM or Log Management vendor, the original non-normalized records may be kept in a separate repository for forensics purposes prior to later archival or deletion, or they may simply be discarded. In practice, discarding original data is a bad idea, since the full records are required for any kind of legal enforcement. Thus, most products keep the raw event logs for a user-specified period prior to archival. In some cases, the SIEM platform keeps a link to the original event in the normalized event log which provides ‘drill-down’ capability to easily reference extra information collected from the device.

Normalization allows for predictable and consistent storage for all records, and indexes these records for fast searching and sorting, which is key when battling the clock in investigating an incident. Additionally, normalization allows for basic and consistent reporting and analysis to be performed on every event regardless of the data source. When the attributes are consistent, event correlation and analysis – which we will discuss in our next post – are far easier.

Technically, normalization is no longer a requirement on current platforms. Normalization was a necessity in the early days of SIEM, when storage and compute power were expensive commodities, and SIEM platforms used relational database management systems for back-end data management. Advances in indexing and searching unstructured data repositories now make it feasible to store full source data, retaining the original records and eliminating normalization overhead.
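As a rough sketch of that projection onto a common template (field names invented for illustration, including the drill-down link back to the raw record mentioned above):

```python
# The generic template: attributes every source is expected to share.
COMMON_FIELDS = ("time", "user", "operation", "source_ip")

def normalize(raw_event):
    """Project a raw event onto the common template.

    Attributes outside the template are dropped, mirroring classic SIEM
    normalization; a reference to the raw record is kept so analysts can
    drill down to the full original event.
    """
    normalized = {field: raw_event.get(field) for field in COMMON_FIELDS}
    normalized["raw_ref"] = raw_event.get("id")  # link for drill-down
    return normalized
```

Note that supplementary attributes (the “oranges”) survive only in the raw event store, which is why discarding the originals forfeits both forensics detail and legal standing.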

Enriching the Future

In reality, we are seeing a number of platforms doing data enrichment, adding supplemental information (like geo-location, transaction numbers, application data, etc.) to logs and events to enhance analysis and reporting. Enabled by cheap storage and Moore’s Law, and driven by ever-increasing demand to collect more information to support security and compliance efforts, we expect more platforms to increase enrichment. Data enrichment requires a highly scalable technical architecture, purpose-built for multi-factor analysis and scale, making tomorrow’s SIEM/LM platforms look very similar to current business intelligence platforms.

But that just scratches the surface in terms of enrichment, because data from the analysis can also be added to the records. Examples include identity matching across multiple services or devices, behavioral detection, transaction IDs, and even rudimentary content analysis. It is somewhat like having the system take notes and extrapolate additional meaning from the raw data, making the original record more complete and useful. This is a new concept for SIEM, so what enrichment will ultimately encompass is anyone’s guess. But as the core functions of SIEM have standardized, we expect vendors to introduce new ways to derive additional value from the sea of data they collect.
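A toy sketch of the enrichment idea: supplement a normalized event with derived attributes such as geo-location and an identity match. The lookup tables below are hypothetical stand-ins for real geo-location and identity services:

```python
# Hypothetical sketch of data enrichment. A real platform would call
# geo-location and identity services; here simple lookup tables stand in.

GEO_TABLE = {"10.": "internal", "192.0.2.": "example-net"}
IDENTITY_TABLE = {"scott": "S. Tiger (Finance)"}

def enrich(event: dict) -> dict:
    """Return a copy of the event with supplemental attributes added."""
    enriched = dict(event)
    addr = event.get("address") or ""
    for prefix, location in GEO_TABLE.items():
        if addr.startswith(prefix):
            enriched["geo"] = location
            break
    user = event.get("user")
    if user in IDENTITY_TABLE:
        enriched["identity"] = IDENTITY_TABLE[user]
    return enriched

event = {"time": "2010-06-03T10:15:02", "user": "scott",
         "operation": "SELECT", "address": "10.0.0.5"}
print(enrich(event))
```

The original attributes are untouched; enrichment only adds to the record, which is what keeps the enriched log useful for both analysis and forensics.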


Other Posts in Understanding and Selecting SIEM/LM

  1. Introduction.
  2. Use Cases, Part 1.
  3. Use Cases, Part 2.
  4. Business Justification.
  5. Data Collection.

—Adrian Lane

Wednesday, May 26, 2010

DB Quant: Discovery And Assessment Metrics (Part 2) Identify Apps

By Adrian Lane

Now that we know where the databases are located, we need to find sensitive data inside them, determine how applications connect to databases, and identify which database features and functions the applications depend on. Applications are often inflexible, requiring particular user accounts or connection types to function properly. They may even be coded to use database features that the security team considers vulnerabilities. Data discovery is key: you must know the type and location of sensitive data before you can establish controls. The entire scanning process requires special access provided by the owners of the databases, as well as the platforms and networks that support them.

For some of you in small and medium businesses, especially in cases where you are the sole database administrator, these granular steps will seem like overkill. For mid-to-large enterprises, with hundreds of databases supporting thousands of applications with sensitive data scattered throughout them, these steps are necessary for forming security policies and meeting compliance. Also consider that some of the automated scanning tools behave like a virus or an attacker, requiring both credentials to access the DB and coordination with security countermeasures and staff.

As a reminder, the process is as follows:

  1. Plan
  2. Setup
  3. Identify Dependent Applications
  4. Identify Database Owners
  5. Discover Data
  6. Document

Plan

Variable Notes
Time to assemble list of databases Feeds from the Enumerate Databases step
Time to define data types of interest The sensitive data you want to discover, such as credit card numbers
Time to map locations and schedule scans Databases will reside on different domains, subnets, etc. This is the time to develop a scanning plan based on location

Setup

Variable Notes
Capital and time to acquire tools for discovery automation Optional – DB discovery tools from previous phase may provide this
Time to define patterns, expressions, and signatures e.g., what sensitive data looks like
Time to contact business units & network staff
Time to configure discovery tool Optional
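The "define patterns, expressions, and signatures" step above can be sketched in code. This is an illustrative example, not any discovery tool's actual signature format: a regular expression flags candidate credit card numbers, and a Luhn checksum cuts false positives. Real tools ship far richer signature sets:

```python
import re

# Illustrative signature for "what sensitive data looks like":
# 13-16 digits, optionally separated by spaces or hyphens, validated
# with the Luhn checksum to reduce false positives.

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(candidate: str) -> bool:
    """Standard Luhn check: double every second digit from the right."""
    digits = [int(c) for c in candidate if c.isdigit()]
    digits.reverse()
    total = 0
    for i, d in enumerate(digits):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text: str) -> list:
    """Return regex matches that also pass the checksum."""
    return [m.group() for m in CARD_PATTERN.finditer(text)
            if luhn_valid(m.group())]

sample = "order notes: card 4111 1111 1111 1111, ref 1234567890123"
print(find_card_numbers(sample))  # the 13-digit ref fails the checksum
```

Tuning this trade-off between broad patterns (more noise) and strict validation (possible misses) is exactly the "adjust rules and repeat scans" work captured in the Discover Data step below.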

Identify Dependent Applications

Variable Notes
Time to schedule and perform review/run scan
Time to identify applications using the database Based on connections and/or service account credentials
Time to catalog application dependencies and connection types Most items can be discovered without DB credentials
Time to repeat steps As needed

Identify Database Owners

Variable Notes
Time to identify database owners The real-world owner, not just the DBA account name
Time to obtain access and credentials Usually a dedicated account is established for this analysis

Discover Data

Variable Notes
Time to schedule and run scan For automated scans
Time to compile table/schema locations For manual discovery
Time to examine schema and data For manual discovery
Time to adjust rules and repeat scans For automated scans
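A minimal sketch of the scan itself, using an in-memory SQLite database as a stand-in for a real enterprise DBMS. The column-name hints are hypothetical; a production discovery tool would also sample the data itself against content signatures:

```python
import sqlite3

# Illustrative data discovery pass: walk every table in a database and
# flag columns whose names suggest sensitive data. The hints below are
# examples only; real tools also inspect the stored values.

NAME_HINTS = ("ssn", "card", "salary", "dob")

def discover(conn: sqlite3.Connection) -> list:
    """Return (table, column) pairs whose names match a sensitive hint."""
    findings = []
    tables = [row[0] for row in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    for table in tables:
        # PRAGMA table_info rows are (cid, name, type, notnull, dflt, pk)
        for _, col_name, *_ in conn.execute(f"PRAGMA table_info({table})"):
            if any(hint in col_name.lower() for hint in NAME_HINTS):
                findings.append((table, col_name))
    return findings

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (id INTEGER, name TEXT, ssn TEXT)")
conn.execute("CREATE TABLE orders (id INTEGER, card_number TEXT)")
print(discover(conn))
```

The output of a pass like this feeds directly into the Document step: a list of where sensitive data appears to live, which the report then organizes by name, type, and location.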

Document

Variable Notes
Time to filter results and compile report Gather data names, types, and locations
Time to generate report(s)

Other Posts in Project Quant for Database Security

  1. An Open Metrics Model for Database Security: Project Quant for Databases
  2. Database Security: Process Framework
  3. Database Security: Planning
  4. Database Security: Planning, Part 2
  5. Database Security: Discover and Assess Databases, Apps, Data
  6. Database Security: Patch
  7. Database Security: Configure
  8. Database Security: Restrict Access
  9. Database Security: Shield
  10. Database Security: Database Activity Monitoring
  11. Database Security: Audit
  12. Database Security: Database Activity Blocking
  13. Database Security: Encryption
  14. Database Security: Data Masking
  15. Database Security: Web App Firewalls
  16. Database Security: Configuration Management
  17. Database Security: Patch Management
  18. Database Security: Change Management
  19. DB Quant: Planning Metrics, Part 1
  20. DB Quant: Planning Metrics, Part 2
  21. DB Quant: Planning Metrics, Part 3
  22. DB Quant: Planning Metrics, Part 4
  23. DB Quant: Discovery Metrics, Part 1, Enumerate Databases

—Adrian Lane

Quick Wins with DLP Presentation

By Rich

Yesterday I gave this presentation as a webcast for McAfee, but somehow my last 8 slides got dropped from the deck. So, as promised, here is a PDF of the slides.

McAfee is hosting the full webcast deck over at their blog. Since we don’t host vendor materials here at Securosis, here is the subset of my slides. (You might still want to check out their full deck, since it also includes content from an end user).

Presentation: Quick Wins with DLP

—Rich