Wednesday, May 12, 2010

Incite 5/12/2010: The Power of Unplugging

By Mike Rothman

I’m crappy at vacations. It usually takes me a few days to unwind and relax, and then I blink and it’s time to go home and get back into the mess of daily life. But it’s worse than that – even when I’m away, I tend to check email and wade through my blog posts and basically not really disconnect. So the guilt is always there. As opposed to enjoying what I’m doing, I’m worried about what I’m not doing and how much is piling up while I’m away. This has to stop. It’s not fair to the Boss or the kids or even me. I drive pretty hard and I’ve always walked the fine line between passion and burnout. I’m happy to say I’m making progress, slowly but surely.

Hard to find the right place for this... Thanks to Rich and Adrian, you probably didn’t notice I’ve been out of the country for the past 12 days and did zero work. But I was and it was great. Leaving the US really forces me to unplug, mostly because I’m cheap. I don’t want to pay $1.50 a minute for cell service and I don’t want to pay the ridonkulous data roaming fees. So I don’t. I just unplug.

OK, not entirely. When we get to the hotel at night, I usually connect to the hotel network to clean out my email, quickly peruse the blog feeds, and call the kids (Skype FTW). But WiFi is usually $25-30 per day and locked to a single device, so I probably only connected half the days we were away.

The impact on my experience was significant. When I was on the tour bus, or at dinner with my friends, or at an attraction – I didn’t have my head buried in the iWhatever. I was engaged. I was paying attention. And it was great.

I always prided myself on being able to multi-task, which really means I’m proficient at doing a lot of things poorly at the same time. When you don’t have the distractions or interruptions or other shiny objects, it’s amazing how much richer the experience is. No matter what you are doing.

Regardless of the advantages, I suspect unplugging will always remain a battle for me, even on vacation. Going out of the US makes unplugging easy. The real challenge will be later this summer, when we do a family vacation. I may just get a prepaid phone and forward my numbers there, so I have emergency communications, but I don’t have the shiny objects flashing at me…

But now that I’m thinking about it, why don’t more of us unplug during the week? Not for days at a time, but hours. Why can’t I take a morning and turn off email, IM, and even the web, and just write? Or think. Or plan world domination. Right, the only obstacle is my own weakness. My own need to feel important by getting email and calls and responding quickly.

So that’s going to be my new thing. For a couple-hour period every week, I’m going to unplug. Am I crazy? Would that work for you? It’s an interesting question. Let’s see how it goes.

– Mike

Photo credits: “Unplug for safety” originally uploaded by mag3737


Incite 4 U

  1. Attack of the Next Generation Firewalls… – Everyone hates the term ‘next generation’, but every vendor seems to want to convince the market they’ve got the next best widget and it represents the new new thing. Example 1 is McAfee’s announcement of the next version of Firewall Enterprise, which adds application layer protection. Not sure why that’s next generation, but whatever. It makes for good marketing. Example 2 is SonicWall’s SuperMassive project, which is a great name, but seems like an impedance mismatch, given SonicWall’s limited success in the large enterprise. And it’s the large enterprise that needs 40Gbps throughput. My point isn’t to poke at marketing folks. OK, maybe a bit. But for end users, you need to parse and purge any next generation verbiage and focus on your issues. Then deploy whatever generation addresses the problems. – MR

  2. Cry Havoc and Let Slip the Lawyers – I really don’t know what to think of the patent system anymore. On one hand are the trolls who buy IP, wait for someone else to actually make a product, and then sue their behinds. On the other, patents do serve a valuable role in society by providing economic incentive for innovation, but only when managed well. I’m on the road and thus haven’t had a chance to dig into F5’s lawsuit against Imperva for patent infringement on the WAF. So I don’t know if this is the real deal or a play to bleed funds or sow doubt with prospects, but I do know who will win in the end… the lawyers. – RM

  3. Bait and Switch – According to The Register, researchers have successfully demonstrated an attack to bypass all AV protection. “It works by sending them a sample of benign code that passes their security checks and then, before it’s executed, swaps it out with a malicious payload.” and “If a product uses SSDT hooks or other kind of kernel mode hooks on similar level to implement security features it is vulnerable.” I do not know what the real chances for success are, but the methodology is legit. SSDT has been used for a while now as an exploit path, but this is the first time that I have heard of someone tricking what are essentially non-threadsafe checker utilities. A simple code change to the scheduler priorities will fix the immediate issue, but undoubtedly with side effects to application responsiveness. What most interests me about this is that it illustrates a classic problem we don’t see all that often: timing attacks. Typically this type of hack requires intimate knowledge of how the targeted code works, so it is less common. I am betting we’ll see this trick applied to other applications in the near future. – AL

  4. Just Do Something… – Like most of you, I get a lot of questions about how to get started in information security. For some reason, folks think those of us who are reasonably high profile in the business know some kind of shortcut to getting established. We’ve already talked about the benefits of social networking, but ultimately this post from Adam nails it. Just do something. Volunteer at your church. Help out the kids’ nursery school or your favorite charity. They have computers and Internet connectivity, so they’ve got security problems. If you are willing to trade time for experience, you can learn and get established in this space. But certainly not if you view getting a security job as a chicken/egg problem. – MR

  5. It’s not what you know, it’s what you think you know – Hilarious post by James Iry titled “A Brief, and Mostly Wrong History of Programming Languages”. I especially love the comments on COBOL. But I think the post is missing a couple of important landmarks:

    1. September 1973, Lotfi Zadeh publishes a paper on Fuzzy Logic. An inadvertent side effect is the discovery of Zadeh’s theorem, proving it is possible to simultaneously be a supergenius and the village idiot.
    2. April 1994, Kernighan and Ritchie finally admit that Unix and C are a hoax: “We stopped when we got a clean compile on the following syntax: for(;P("\n"),R--;P("|"))for(e=C;e--;P("_"+(*u++/8)%2))P("| "+(*u/4)%2);”.
    3. November 1995, James Gosling quietly releases the “Oak white paper” and no one notices. After a scolding from marketing executives (“What kind of a stupid $^@&#% name is ‘Oak’?”), the white paper is re-launched as the “Java white paper” in December of that year to international acclaim.
    4. April 1974, honorable mention to Professor Stonebraker, who launches the Ingres Relational Database with the QUEL and SQL programming languages. Ingres hires dedicated programmer-monks to fulfill the revolutionary vision. The resultant code is so dazzling and stupendous that they forget to hire a sales team and go bankrupt. – AL
  6. More Who DAT Fail Impact – It’s been a few weeks since the McAfee DAT update fiasco, and since I was out of pocket for two weeks I’m catching up on it. I wonder if anyone took Rob Graham up on his offer to analyze the real number of failed machines. We also saw McAfee’s financial results suffer (earnings transcript), and you have to wonder whether customers looking at big McAfee renewals will look elsewhere. Finally, McAfee is going to help customers clean up, which seems like either a blank check (if done right) or a marketing ploy (if done wrong), but either way the old adage that credibility takes years to build and only seconds to lose applies here. Set your clocks for three months from now: MFE’s next financial announcement should be interesting. – MR

  7. Happy Birthday LoveBug – Can it really be 10 years since the ILOVEYOU virus hit… hard? I still tell the story about getting the virus sent to me by the Chairman of RSA (Chuck Stuckey) and it keeps getting big laughs. But what have we learned over the past decade? We live in a dynamic world. Once we close one attack vector, the bad guys find the next. It’s an arms race, baby, and there is no end in sight. So remember LoveBug, get nostalgic for a minute, and then get back to work. Because blended threats won’t wait and zombies don’t sleep. We need more than a can of Raid to deal with today’s bugs. – MR

  8. Thoughts on Minimalism – Being out of the country always gives me perspective on the “reality” that is life in the US. Just driving around my neighborhood really brought it home. We’ve got space, we live in relatively big houses, we’ve got relative wealth, and we are still unhappy. At least most of us. So stumbling across this post on the ZenHabits blog about minimalism provided a good reminder that stuff doesn’t make us happy. The point here is to be happy with what you have and stop making yourself crazy trying to get that stuff you probably don’t need anyway. I talk about this a lot, and I don’t do particularly well in practicing what I preach, but at least I recognize where I’m trying to go, and maybe one day I’ll even get there. – MR

—Mike Rothman

Monday, May 10, 2010

DB Quant: Planning Metrics (Part 2)

By Adrian Lane

Today we will identify key cost metrics for planning Authentication, Access, and Authorization (AAA). Crafting access strategies is time-consuming, and it is difficult to provide data security without imposing overly burdensome setup and management tasks. Meeting compliance requirements, and implementing segregation of duties to prevent fraud, make the process even more demanding. While the process we described for the planning phase is strategic, there are still plenty of moving parts to account for.

We previously defined the process as:

  1. Determine Requirements: Figure out internal functions and external compliance requirements (e.g., authentication for PCI systems).
  2. Define Policies: For users, objects and repositories.
  3. Define Implementation
    1. Define approved authentication mechanisms.
    2. Define allowed access control mechanisms.
  4. Document

After some more review, we have refined it a bit:


Determine Requirements

Variable Notes
Time to identify business groups and functions
Time to locate internal business requirements for Access/Authentication/Authorization
Time to identify/gather external security and compliance requirements

Define Policies for Users, Objects, and Repositories

Variable Notes
Time to specify business functions – Only major business functions, e.g., accounting for General Ledger access vs. AR
Time to map business functions to logical roles
Time to determine object and data ownership – Again, only for major applications. Typically this is ERP, CRM, and HR
Time to determine necessary administrative roles – The different DBA accounts needed to support segregation of duties

Define Implementation Strategy

Variable Notes
Time to identify which organizations will be supported – Both internal and external
Time to define approved authentication mechanisms
Time to define allowed access control mechanisms
Time to identify legitimate and unwanted access methods – The different ways of connecting with the DB, e.g., ODBC over SSL with approved port numbers
Time to define database administrator roles

Document

Variable Notes
Time to document standard
Time to distribute standard and educate team members

Other Posts in Project Quant for Database Security

  1. An Open Metrics Model for Database Security: Project Quant for Databases
  2. Database Security: Process Framework
  3. Database Security: Planning
  4. Database Security: Planning, Part 2
  5. Database Security: Discover and Assess Databases, Apps, Data
  6. Database Security: Patch
  7. Database Security: Configure
  8. Database Security: Restrict Access
  9. Database Security: Shield
  10. Database Security: Database Activity Monitoring
  11. Database Security: Audit
  12. Database Security: Database Activity Blocking
  13. Database Security: Encryption
  14. Database Security: Data Masking
  15. Database Security: Web App Firewalls
  16. Database Security: Configuration Management
  17. Database Security: Patch Management
  18. Database Security: Change Management
  19. Planning Metrics, Part 1

—Adrian Lane

FireStarter: Secure Development Lifecycle—You’re Doing It Wrong

By Adrian Lane

I wrote last Monday’s FireStarter on Process and Peer Pressure because there were a few things bothering me that I needed to get out of my system, but I saved a lot for later. I didn’t really intend to write this followup so soon, but I saw that Cisco announced their own Software Development Lifecycle. I wanted to make some statements on SDL later this year when I begin publishing more concrete Secure Software Development Lifecycle (SSDL in Securosis parlance, SDL for most organizations) guidelines, but Cisco’s announcement changes things. I worry that sheer inertia will prompt the industry as a whole to rubber-stamp SDLs. Before you know it, HR reps will be including “SDL certification” requirements on every engineering job description, without a clue what they are demanding or why, so let’s stop this train before it runs too far off the tracks.

If you are thinking about incorporating Secure Development Lifecycle practices for software development, that’s great. If you have read about Microsoft’s SDL, witnessed Microsoft’s success, seen Cisco’s endorsement, and believe their model will work for you, just stop. It’s not going to work for you. It’s based on a lot of factors and assumptions that do not pertain to you. It’s not a template for your requirements.

Adopting MS-SDL wholesale is a little like a child putting on adult clothes because they want to be ‘big’. You cannot drop that particular process into your development organization and have it fit. More likely you will break everything. Your team will need to change their skills and priorities, and though it sounds cliche, people are resistant to change. Existing processes need to be adjusted to accommodate secure development processes and techniques. You will need new tools, or to augment existing ones. You will need a whole new class of metrics and tracking. And everything you pick the first time will need several iterations of alteration and adjustment before you get it right – this isn’t Microsoft’s first attempt either.

It’s not that the SDL is bad – it isn’t. Microsoft did an excellent job with their SDL. It’s very well thought out, incorporates most effective defect detection techniques, has clearly evolved through several revisions, and includes intelligent tradeoffs in places where there is no single ‘right’ answer. But it is their SDL, not yours. If you take the SDL Microsoft has described and try to implement it, you will fail. I am talking to the 99% of people out there who would think about implementing SDL and think “Hey, Microsoft published this new thingie for free; let’s use it and save ourselves the time and money!” Wrong.

Here’s why:

  1. Too Big: This process is geared for very large firms, with lots of resources, and a genuine desire to get better. You may not need all of it and frankly it would be overwhelming to start with. This is huge – I mean really huge. You are not going to swallow this elephant in a single gulp.
  2. Organic Evolution: Microsoft’s success is not just the introduction of process and techniques. It was not just hiring a handful of really good people and helping to educate the development staff. The MS-SDL reflects several years of focused evolution, and in software that is a lot. They spent a long time looking at the code and figuring out what was wrong. They developed their own tools to help discover problems. They developed software to help track their progress and provide metrics to demonstrate what worked and what did not. They evolved their own threat modeling. They tried, revised, fixed and re-implemented most of what they do several times over. Don’t think their publishing a guide can save you this pain – it cannot.
  3. Resources: People, Tools, and Time are the three classic resources you have when you build code. Resources are scarce. Always. OK, if you have billions of dollars in the bank, or you are a bank, you might not be quite as pinched for resources. But developing quality code is expensive. Microsoft had the money to hire some of the best people, to buy or build the best tools, a willingness to take additional time for security before releasing software, and then hire some more of the best people. Your developers work nights and weekends to get the release out the door and collapse in a heap, dreaming about all the things they wanted to do before the code was released. Cisco? Yeah, they can do this. You? You don’t have the resources to do everything, so you need to pick and choose.
  4. Appropriateness of Techniques: Your program calls for white box testing. Great, but you don’t own critical code you rely on. You leverage open source where you can get code, but off-the-shelf software and even Microsoft tools do not provide source code. If you have four thousand web pages, and most of them don’t filter input values, do you really think you are going to fix this in the current release cycle, or are you going to deploy a WAF? If you are starting an application from scratch, your first step will be threat modeling. If you have a huge existing application, forget threat modeling for now – pen testing is probably much more effective and efficient. And it’s not just which techniques, but how you use them. Within all these techniques, there are many variations and supporting requirements that need tweaking so they can work for you. We discuss these tradeoffs in the Use Case portion of Understanding and Selecting a Web Application Security Program white paper, but the point is that the right choice for you is different than the right choice for Microsoft or Cisco, and you can’t discover what’s right for your environment by reading their SDLs.
  5. Do what Microsoft did, not what they do: Using the SDL as your program is a really bad thing to do. I really hope people don’t take this as a slam against Microsoft – my point is to follow Microsoft’s example, rather than their SDL. You need to do what Microsoft did, because you cannot simply jump ahead to what Microsoft is doing today. Microsoft’s journey to where they are now is far more interesting and useful than the specific tools and techniques they eventually settled on. You can learn by example, sure, but you need to answer the same questions – sometimes with different answers, as dictated by your own constraints – to evolve your own process from the beginning. That comes from hard work and analysis, with lots of trial and error mixed in.

There is no shortcut, no secret sauce, no package of Instant Code-Be-Secure. One size does not fit all. Admire what Microsoft has done, for their customers and for the community, learn, and then figure what is relevant to you. You don’t have to completely start from scratch, but you have got work to do in figuring out how you are going to build your own Secure Development Lifecycle.

—Adrian Lane

Friday, May 07, 2010

Friday Summary: May 7, 2010

By Rich

Yesterday I finished up a presentation for the Secure360 Conference: “Putting the Fun in Dysfunctional – How the Security Industry Works, and Why It’s Your Fault”. This is a combination of a bunch of things I’ve been thinking about for a while, mostly focused on cognitive science and economics. Essentially, security makes a heck of a lot more sense once you start trying to understand why people make the decisions they do, which is a combination of their own internal workings and external forces. Since it’s very hard to change how people think (in terms of process, not opinion), the best way to induce change is to modify the forces that drive their decision making.

I have a section in the presentation on cognitive bias, which is our tendency to make errors in judgement due to how our brains work. It’s pretty fascinating stuff, and essential knowledge for anyone who wants to improve their critical thinking. Here are some examples relevant to the practice of security (from Wikipedia):

  • Framing by using a too-narrow approach and description of the situation or issue.
  • Hindsight bias, sometimes called the “I-knew-it-all-along” effect, is the inclination to see past events as being predictable.
  • Confirmation bias is the tendency to search for or interpret information in a way that confirms one’s preconceptions – this is related to cognitive dissonance.
  • Self-serving bias is the tendency to claim more responsibility for successes than failures. It may also manifest itself as a tendency for people to evaluate ambiguous information in a way beneficial to their interests.
  • Bandwagon effect: the tendency to do (or believe) things because many other people do (or believe) the same. Related to groupthink, herd behavior, and mania.
  • Base rate fallacy: ignoring available statistical data in favor of particulars.
  • Focusing effect: prediction bias which occurs when people place too much importance on one aspect of an event – this causes errors when attempting to predict the utility of a future outcome.
  • Loss aversion: “the disutility of giving up an object is greater than the utility associated with acquiring it”.
  • Outcome bias: the tendency to judge a decision based on its eventual outcome, rather than by the information available when it was made.
  • Post-purchase rationalization: the tendency to persuade oneself that a purchase was a good value.
  • Status quo bias: the preference for things to stay the same (see also loss aversion and endowment effect).
  • Zero-risk bias: preference for reducing a small risk to zero, over a greater reduction in a larger risk.

Cognitive bias also has interesting ties to logical fallacies, another essential area for any good security pro or skeptic.

Not that understanding psychology and economics solves all our problems, but they sure help reduce the frustration. And applied to ourselves, understanding can really improve our ability to analyze information and make decisions. Cool stuff.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

  • Rich: 2010 DBIR to include cases from U.S. Secret Service This is simply awesome! The Secret Service is analyzing all their cases from the past couple years using Verizon’s framework. This is a gold mine for those of us who care about real world security (disclosure – I’m on the board of the VERIS project for Verizon, but I am not compensated in any way).
  • Adrian Lane: What Egress Filters Should I Use? Branden Williams offers a pragmatic discussion of egress filtering.

Project Quant Posts

Research Reports and Presentations

Top News and Posts

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Betsy Nichols, in response to Thoughts on Data Breach History.

Very interesting presentation. The OSF is doing amazing work in two areas: data breaches and vulnerabilities. It is amazing what they have accomplished with a volunteer community. They are definitely a worthwhile cause that merits broad support from all of us who benefit from their work.

You and other interested folks in the Securosis community may be interested in some of the quantitative analysis I have done using the OSF DataLossDB. You can see it at www.metricscenter.net. (No login necessary.) Just go to the Dashboards area of the site. I have posted two that are based on the DataLossDB.

The first dashboard is titled Public Data Breaches which is solely based on the DataLossDB and presents some basic stats.

The second dashboard is titled Stock Price Impact. This looks at mashing up data from the DataLossDB with Google Finance data to get insight on the question “What is the impact of a breach on a public company’s stock price”.

—Rich

Thursday, May 06, 2010

DB Quant: Planning Metrics (Part 1)

By Rich

We are finally starting to roll into the final phase of our Project Quant for Databases project. So far we have defined a database security process framework, and rolled through each of the phases to detail the steps. We have mentioned dozens of ways you can invest time and money, and we need to create specific measurements for each. As a reminder, we are keeping this more abstract than Project Quant for Patch Management, because more processes are included, and we learned that key metrics are more important to most of you than the list of hundreds of metrics we’d otherwise end up with.

This final step is actually the real meat of the project: we start defining the metrics that align with the process cycles. Today we’ll open with the first series of metrics for the Plan phase. As a reminder, this phase includes:

  • Configuration Standards
  • Authentication, Access, and Authorization Standards (AAA)
  • Monitoring Policies
  • Classification Policies

Today, we’ll focus only on the Configuration Standards steps, which we initially identified as:

  1. Identify Standards
  2. Specify Internal Standard
  3. Choose Strategy
  4. Document

Those still seem to work, so let’s dig into the steps and metrics.

Identify Standards


Determine Requirements

Variable Notes
Time to identify and collect configuration sources – e.g., CIS benchmarks, NIST guidelines, vendor configuration guides
Time to locate any existing internal standards
Time to identify/gather internal security requirements
Time to identify/gather compliance requirements
Time to research practices to meet requirements

Specify Internal Standard

Variable Notes
Time to determine standards requirements

Choose Implementation

Variable Notes
Time to determine settings, controls, and configurations to meet standard
Time to determine controls priorities
Time to determine responsible party
Time to determine verification method

Document

Variable Notes
Time to document standard
Time to distribute standard and educate team members

Other Posts in Project Quant for Database Security

  1. An Open Metrics Model for Database Security: Project Quant for Databases
  2. Database Security: Process Framework
  3. Database Security: Planning
  4. Database Security: Planning, Part 2
  5. Database Security: Discover and Assess Databases, Apps, Data
  6. Database Security: Patch
  7. Database Security: Configure
  8. Database Security: Restrict Access
  9. Database Security: Shield
  10. Database Security: Database Activity Monitoring
  11. Database Security: Audit
  12. Database Security: Database Activity Blocking
  13. Database Security: Encryption
  14. Database Security: Data Masking
  15. Database Security: Web App Firewalls
  16. Database Security: Configuration Management
  17. Database Security: Patch Management
  18. Database Security: Change Management

—Rich

Help Build the Mother of All Data Security Surveys

By Rich

I spend a heck of a lot of time researching, writing, and speaking about data security. One area that’s been very disappointing is the quality of many of the surveys. Most either try to quantify losses (without using a verifiable loss model), measure general attitudes to inspire some BS hype press release, or assess some other fuzzy aspect you can spin any way you want.

This bugs me, and it’s been on my to-do list to run a better survey myself. When a vendor (Imperva) proposed the same thing back at RSA (meaning we’d have funding) and agreed to our Totally Transparent Research process, it was time to promote it to the top of the stack.

So we are kicking off our first big data security study. Following in the footsteps of the one we did for patch management, this survey will focus on hard metrics – our goal is to avoid general attitude and unquantifiable loss guesses, and focus on figuring out what people are really doing about data security.

As with all our surveys, we are soliciting ideas and feedback before we run it, and will release all the raw results.

Here are my initial ideas on how we might structure the questions:

  • We will group the questions to match the phases in the Pragmatic Data Security Cycle, since we need some structure to start with.
  • For each phase, we will list out the major technologies and processes, then ask which one organizations have adopted.
  • For technologies, we will ask which they’ve researched, budgeted for, purchased, deployed in a limited manner (such as testing), deployed in initial production, and deployed in full production (organization wide).
  • For processes, we will ask about maturity from ad-hoc through fully formalized and documented, similar to what we did for patch management.
  • For the tools and processes, we’ll ask if they were implemented due to a specific compliance deficiency during an assessment.

I’m also wondering if we should ask how many breaches or breach disclosures were directly prevented by the tool (estimates). I’m on the fence about this, because we would need to tightly constrain the question to avoid the results being abused in some way.

Those are my rough ideas – what do you think? Anything else you want to see? Is this even in the right direction? And remember – raw (anonymized) results will be released, so it’s kind of like your chance to run a survey and have someone else bear the costs and do all the work…

FYI The sponsor gets an exclusive on the raw results for 45 days or so, but they will be released free after that. We have to pay for these things somehow.

—Rich

Wednesday, May 05, 2010

Download Our Kick-Ass Database Encryption and Tokenization Paper

By Rich

It’s kind of weird, but our first white paper to remain unsponsored is also the one I consider our best yet. Adrian and I have spent nearly two years pulling this one together – with more writes, re-writes, and do-overs than I care to contemplate.

We started with a straight description of encryption options, before figuring out that it’s all too complex, and what people really need is a better way to make sense of the options and figure out which will work best in their environments. So we completely changed our terminology, and came up with an original way to describe and approach the encryption problem – we realized that deciding how to best encrypt a database really comes down to managing credentialed vs. non-credentialed users.

Then, based on talking with users & customers, we noticed that tokenization was being thrown into the mix, so we added it to the “decision tree” and technology description sections. And to help it all make sense, we added a bunch of use cases (including a really weird one based on an actual situation Adrian found himself in).

We are (finally) pretty darn happy with this report, and don’t want to leave it in a drawer until someone decides to sponsor.

On the landing page you can leave comments, or you can just download the paper.

We could definitely use some feedback – we expect to update this material fairly frequently – and feel free to spread the word…

—Rich

Tuesday, May 04, 2010

Database Security Fundamentals: Encryption

By Adrian Lane

Continuing our theme of quick and effective database security measures, we now move into the data protection phase. The most common (and potentially most effective) security measure for data at rest is encryption. Since we are shooting for fast and effective, we are looking at some form of transparent encryption. Almost every database has transparent encryption built in, and it is effective for securing data files and archives from snooping. Several vendors also offer forms of transparent encryption at the OS/file system level, which behave in a very similar manner, so we will consider those options as well. It’s ironic that I am writing this post today, as I just completed the final editorial sweep through the Securosis Database Encryption & Tokenization paper. Rich and I will be releasing it tomorrow (Cinco de Mayo), so if you want a much deeper dive into the technology tradeoffs and variations, check the paper out when it becomes available. (Shameless plug: if you are interested in sponsoring the paper, let us know.)

There are a handful of business reasons to use data encryption for databases: to buttress access controls in order to protect against unwanted insider access, to protect data at rest, or to comply with an industry or government regulation. Only the last two are covered by transparent encryption, as the first requires encryption at the application layer. Application level encryption requires code changes, database changes, and application recertification, so I exclude it from this Fundamentals series. Encryption embedded within disk drives is transparent, and it protects files on the disk as well. However, purchasing encrypted drives is a significant investment, does not protect exports or tape archives, and does not protect databases moving around virtual environments. Since we are focused on quick wins here, I am limiting the discussion to transparent database options – either using native database capabilities, or through OS/file system support.

Native database encryption features are embedded within the database. The encryption operations are handled behind the scenes, with no changes to the tables, columns, indices, or queries. Enabling the feature is at most an add-on package, but in some cases as simple as a handful of DDL statements. The database encrypts the data just prior to writing to disk, and decrypts when processing authenticated queries for encrypted data. Key management is either handled internally (with keys stored within system tables and only accessible by DBAs), or externally (with a dedicated key management server). Internal key storage is easier to manage, and simpler in disaster recovery scenarios, at the expense of weaker security. In either case, keys are used without end user interaction (or even knowledge).
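To make “a handful of DDL statements” concrete, here is a minimal sketch of what enabling native transparent encryption can look like, using SQL Server 2008-style TDE driven from Python via pyodbc. The DSN, passphrase, certificate, and database names are hypothetical placeholders, and other databases (Oracle, DB2) use different but similarly brief syntax, so treat this as an illustration rather than a recipe for your platform.

```python
# Minimal sketch: enabling SQL Server 2008-style Transparent Data Encryption (TDE).
# The DSN, credentials, passphrase, and object names are hypothetical placeholders.
import pyodbc

conn = pyodbc.connect("DSN=FinanceDB;UID=dba_admin;PWD=example", autocommit=True)
cur = conn.cursor()

# 1. Create a master key and certificate in the master database to protect the
#    database encryption key (DEK).
cur.execute("USE master")
cur.execute("CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Str0ng-Passphrase-Here!'")
cur.execute("CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE certificate for FinanceDB'")

# 2. Create the DEK in the target database and turn encryption on.
cur.execute("USE FinanceDB")
cur.execute("CREATE DATABASE ENCRYPTION KEY WITH ALGORITHM = AES_256 "
            "ENCRYPTION BY SERVER CERTIFICATE TDECert")
cur.execute("ALTER DATABASE FinanceDB SET ENCRYPTION ON")
conn.close()

# Back up the certificate and its private key immediately; without them the
# database and its backups cannot be restored.
```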

File/OS encryption works by intercepting the database’s writes to disk and encrypting data blocks before storing them. Conversely, data is decrypted as the database requests information from disk. Keys are stored within key management services embedded within the encryption product rather than the database, or provided by external key management products. Keep in mind that this type of product can be applied to specific folders where the data is stored, not just database files. File/OS encryption is attractive for its ability to address both database and non-database data security issues.

Two options are not a lot, but both transparent options are effective and offer the same business benefits. The choice comes down to four factors, in order of importance: performance, cost, versatility, and comfort level.

  1. How much does the solution impact transactional throughput?
  2. How much does it cost?
  3. How many different problems does it solve?
  4. How easy is it to use?

Or at least this should be the order of importance, but from experience I know some people reverse that order because they know the database and are comfortable with a particular UI.

If you are the sole DBA, how comfortable you are with the interface, or how easy it is for you to use, will be the biggest factor, because your time is more important than the other factors. If you have been using Sybase for years and are happy with their tools, odds are you will choose that. Regardless, if you have the opportunity, running a couple of performance benchmarks is very handy for getting an idea of how much impact encryption will have. It may be 3%, or 12%. Nobody notices 3%, but 12% may mean calls from users. Run some basic performance tests between a) your unaltered database, b) the database vendor’s solution active, and c) an external tool. Understanding the impact on typical database transaction processing really helps with decision making. Get some pricing estimates from vendors. If there are others in your IT organization who already use file/OS encryption, ask them about usage and performance. Yes, this makes this a two-day task instead of a one-day implementation, but it’s worth it. Testing setup and execution will take at least a day, but will give you greater confidence in your decision and make the final rollout a lot easier.
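If it helps, here is a rough sketch of the kind of benchmark harness I have in mind. The connection strings and query mix are placeholders; point it at your unaltered database and at each encryption option, and substitute the transactions your applications actually run.

```python
# Rough benchmark sketch: run the same query mix against an unencrypted baseline
# and each encryption option, then report the relative overhead.
# DSNs and queries are placeholders for your own environment.
import time
import pyodbc

QUERIES = [
    "SELECT COUNT(*) FROM orders WHERE order_date > '2010-01-01'",
    "SELECT customer_id, SUM(total) FROM orders GROUP BY customer_id",
]

def run_batch(conn_str, iterations=50):
    """Return elapsed seconds for `iterations` passes over the query mix."""
    conn = pyodbc.connect(conn_str)
    cur = conn.cursor()
    start = time.time()
    for _ in range(iterations):
        for query in QUERIES:
            cur.execute(query)
            cur.fetchall()
    elapsed = time.time() - start
    conn.close()
    return elapsed

baseline = run_batch("DSN=TestDB_Plain")
for label, dsn in [("native", "DSN=TestDB_NativeTDE"), ("file/OS", "DSN=TestDB_FileCrypto")]:
    overhead = (run_batch(dsn) - baseline) / baseline * 100
    print("%s encryption overhead: %.1f%%" % (label, overhead))
```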

  1. Select: The question of what type of transparent encryption to select – internal database native or external file/OS – is a murky one. Weigh your options and make your selection. Acquire the tool or license.
  2. Define Scope: Column level, table level, or entire database? Understand what data you will apply encryption to, read the documentation, and generate your configuration scripts.
  3. Configure & Install: Once you have reached this step, you should be able to implement database encryption within an afternoon. Obviously, the first step in the process is to make sure you have a verified backup prior to the installation process. Once you have installed or configured the encryption engine, the first major step will be to generate the keys. Select a good passphrase (not password) to protect the keys. Produce a verified backup of the key archive. If the keys are stored in a system table, take a fresh backup of the database. If the keys are in an external key management service, before you go any further, make sure you have that backed up and can restore successfully.
  4. Encrypt: You have everything set up, so now you need to encrypt the data: turn on encryption. This will take some time, as your chosen tool must read, encrypt, then rewrite every single block of data to be encrypted. A large database means you may have the database offline for several hours, so plan accordingly. Once encrypted, data will be automatically encrypted as it is written to disk, so there is very little need for you to do anything else – except wait until the initial encryption process is complete.
  5. Verify: Now that the database is encrypted, bring the database back online. Verify that applications continue to function normally. You should also perform a test recovery of the backup in a test environment to ensure that the database archive, key management and access controls can be properly synchronized in a disaster recovery situation.
  6. Document: At this point you are done except for a few clerical tasks. If you applied encryption to a subset of the database, document which tables or columns. If the passphrase for the keys needs to be entrusted to someone in case you are hit by a bus on the way home from work, do so now. If transparent encryption is being used for regulatory compliance, document how the solution is being used and inform auditors, so they can complete their checklist.

I am a big fan of transparent encryption. It is fast, easy, and effective for addressing one or two very real threats. That said, it’s not a secret that I don’t always see eye-to-eye with the vendor community. I have strong opinions, having worked both sides of the fence, and my comments on transparent database solutions tend to generate friction. When I say that Type A transparent encryption is not always the best answer, I get the same reaction as if I told someone their baby was profoundly ugly. When I say that one option performs better than another – and we are talking about minute differences of a few percentage points of CPU overhead – I get email trying to “educate me” on “the real story”. Blah! The fight for market share between the various vendors can be tenacious, with the players uncharacteristically vociferous over minor points. Don’t let the rhetoric fool you, and don’t base your decision on FUD. Choose the solution that best satisfies your business drivers. If two solutions are basically equivalent, choose the one you are most comfortable with and move forward. You are not really going to make a bad choice, and lock-in of technologies is pretty minimal with transparent solutions, so you can always swap down the road. Regardless, transparent encryption is a very good solution for media protection, and it’s a quick way to satisfy most PCI auditors.

Lastly, both internal and external options allow for encrypting the entire database, or selected columns/tables. While it sounds better to only encrypt the minimum amount of data possible to reduce overhead, in practice there is not much performance gain in limiting what you encrypt. In my experience, once you reach three columns or tables, performance is about the same as encrypting everything. Also, it is possible that other tables or views contain sensitive information, or that indices leak information. For this Fundamentals series, encrypt the entire database. It’s easier and there are fewer chances for mistakes.


Index to other posts in the Database Security Fundamentals series.

  1. Introduction.
  2. Access and Authorization.
  3. Connections and Access Points.
  4. Patching.
  5. Configuration.
  6. Transaction Audit.
  7. Event Monitoring.

—Adrian Lane

Thoughts on Data Breach History

By Rich

I’ve been writing about data breaches for a long time now – ever since I received my first notification (from egghead.com) in 2002. For about 4 or 5 years now I’ve been giving various versions of my “Involuntary Case Studies in Data Breaches” presentation, where we dig into the history of data breaches and spend time detailing some of the more notable ones, from breach to resolution.

Two weeks ago I presented the latest iteration at the Source Boston conference (video here), and it is materially different from the version I gave at the first Source event. I did some wicked cool 3D visualization in the presentation, making it too big to post, so I thought I should at least post some of the conclusions and lessons. (I plan to make a video of the content, but that’s going to take a while).

Here are some interesting points that arise when we look over the entire history of data breaches:

  • Without compliance, there are no economic incentives to report breaches. When losing personally identifiable information (PII), the breached entity only suffers losses from fines and breach reporting costs. The rest of the system spreads out the cost of the fraud. For loss of intellectual property, there is no incentive to make the breach public.
  • Lost business is a myth. Consumers rarely change companies after a breach, even if that’s what they claim when responding to surveys.
  • I know of no cases where a lost laptop, backup tape, or other media resulted in fraud, even though that’s the most commonly reported breach category. Web application hacking and malware are the top categories for breaches that result in fraud.
  • SQL injection using xp_cmdshell was the source of the biggest pre-TJX credit card breach (CardSystems Solutions in 2004: 40 million transactions). This is the same technique Albert Gonzalez used for Heartland, Hannaford, and a handful of other companies in 2008. We never learn, even when there are plenty of warning signs.
  • Our controls are poorly aligned with the threat – for example, nearly all DLP deployments focus on email, even though that’s one of the least common vectors for breaches and other losses.
  • The more a company tries to spin and wheedle out of a breach, the worse the PR (and possibly legal) consequences.
  • We will never be perfect, but most of our security relies on us never making a mistake. Defense in depth is broken, since every layer is its own little spear to the heart.
  • Most breaches are discovered by outsiders – not the breached company (real breaches, not lost media).

The history is pretty clear – we have no chance of being perfect, and since we focus too much on walls and not enough on response, the bad guys get to act with near impunity. We do catch some of them, but only in the biggest breaches and mostly due to greed and mistakes (just like meatspace crime).

If you think this is interesting, I highly recommend you support the Open Security Foundation, which produces the DataLossDB. I found out that only a handful of hard-working volunteers maintain our only public record of breaches. Once I get our PayPal account fixed (it’s tied to my corporate credit card, which was used in some fraud – ironic, yes, I know!) we’ll be sending some beer money their way.

—Rich

Monday, May 03, 2010

Understanding and Selecting SIEM/LM: Use Cases, Part 2

By Adrian Lane

Use Case #2: Improve Efficiency

Turn back the clock about 5 months – you were finalizing your 2010 security spending, and then you got the news: budgets are going down again. At least they didn’t make you cut staff during the “right-sizing” at the end of 2008, eh? Of course, budget and resources be damned, you are still on the hook to secure the new applications, which will require some new security gadgets and generate more data.

And we cannot afford to forget the audit deficiencies detailed in your friendly neighborhood assessor’s last findings. Yes, those have to be dealt with too, and sometime in the first quarter, because the audit is scheduled for early May. This may seem like an untenable situation, but it’s all too real. Security professionals now must continue looking for opportunities to improve efficiency and do more with less.

As we look deeper into this scenario, there are a couple of inevitable situations we have got to deal with:

  • Compliance requirements: Government and industry regulations force us to demonstrate compliance – which means gathering log files, filtering out unneeded events, and distilling transaction data into human-readable reports to prove you’re doing things right. IT and Security must help Audit determine which events are meaningful, so regulatory controls are based upon complete and accurate information, and internal and external audit teams define how this data is presented.
  • Nothing gets shut down: No matter how hard we try, we cannot shut down old security devices that protect a small portion of the environment. Thus every new device and widget increases the total amount of resources required to keep the environment operational. Given the number of new attack vectors clamoring for new protection mechanisms, this problem is going to get worse, and may never get better.
  • Cost center reality: Security is still an overhead function and as such, it’s expected to work as efficiently as possible. That means no matter what the demands, there will always be pressure to cut costs.

So this use case is all about how SIEM/LM can improve efficiency of existing staff, allowing them to manage more devices which are detecting more attacks, all while reducing the time from detection to remediation. A tall order, sure, but let’s look at the capabilities we have to accomplish this:

  • Data aggregation: Similar to our react faster use case, having access to more data means less time is wasted moving between systems (swivel chair management). This increases efficiency and should allow security analysts to support more devices.
  • Dashboards: Since a picture is worth a thousand words, a well architected security dashboard has to be worth more than that. When trying to support an increasing number of systems, the ability to see what’s happening and gain context with an overview of the big picture is critical.
  • Alerts: When your folks need to increase their efficiency, they don’t have a lot of time to waste chasing down false positives and investigating dead ends. So having the ability to fire alerts based on real events rather than gut feel will save everyone a lot of time.
  • Forensic investigations: Once the problem is verified, it becomes about finding root cause as quickly as possible. The SIEM/LM solution can provide the context and information needed to dig into the attack and figure out the extent of the damage – it’s about working smarter, not harder.
  • Automated policy implementation: Some SIEM/LM tools can build automated policies based on observed traffic. This baseline (assuming it represents normal and healthy traffic) enables the system to start looking for ‘not normal’ activity, which then may require investigation. A toy sketch of this baselining idea follows the list.
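Here is a toy sketch of the core logic such a policy engine applies: learn what normal looks like, then alert on deviations. The event counts are fabricated, and real products baseline far more attributes than a single event type.

```python
# Toy baseline/alert logic: learn a per-hour event-count baseline, then flag
# hours that deviate by more than three standard deviations.
# The historical counts and observations below are fabricated sample data.
from statistics import mean, stdev

HISTORY = [1200, 1150, 1300, 1250, 1180, 1220, 1275]  # failed logins per hour, last week

def build_baseline(history):
    return mean(history), stdev(history)

def check(count, baseline, threshold=3.0):
    avg, sd = baseline
    if sd and abs(count - avg) / sd > threshold:
        return "ALERT: %d events vs. baseline %.0f (+/- %.0f)" % (count, avg, sd)
    return "normal: %d events" % count

baseline = build_baseline(HISTORY)
for observed in (1240, 4100):   # the second value simulates a brute-force spike
    print(check(observed, baseline))
```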

This use case is really about doing more with what you already have, which has been demanded of security professionals for years. There has been no lack of tools and products to solve problems, but the resources and expertise to take best advantage of those capabilities can be elusive. Without a heavy dose of automation, and most importantly a significant investment to get the SIEM/LM system configured appropriately, there is no way we can keep up with the bad folks.


Use Case #3: Compliance Automation

You know the feeling you get when you look at your monthly calendar, and it shows an upcoming audit? Whatever you were planning to do goes out the window, as you spend countless hours assembling data, massaging it, putting it into fancy checklists and pie charts, and getting ready for the visit from the auditor.

Some organizations have folks who just focus on documenting security controls, but that probably isn’t you. So you’ve got to take time from the more strategic or even generally operational tasks you’ve been working on to prepare for the audit. And it gets worse, since every regulation has its own vernacular and rule set – even though they are talking about the same sets of security controls. So there is little you can leverage from last month’s PCI audit to help prepare for next month’s HIPAA assessment.

And don’t forget that compliance is not just about technology. There are underlying business processes in play that can put private data at risk, which have to be documented and substantiated as well. This requires more domain expertise than any one person or team possesses. The need to collaborate on a mixture of technical and non-technical tasks makes preparing for an audit that much harder and resource intensive.

Also keep in mind the opportunity cost of getting ready for audits. For one, time spent in Excel and PowerPoint massaging data is time you aren’t working on protecting information or singing the praises of your security program. And managing huge data sets for multi-national organizations across potentially hundreds of sites requires ninja-level Microsoft Office skills. Drat, don’t have that.

As if things weren’t hard enough, regulatory audits tend to be more subjective than objective, which means your auditor’s opinion will make the difference between the rubber stamp and a book of audit deficiencies that will keep your team busy for two years. So getting as detailed as possible and backing up your interpretations of the regulations with data helps make your case. And providing that data takes time. Right, time you don’t have.

So this use case focuses on the need to automate compliance, provide mechanisms to automate preparation to the greatest degree possible, and standardize the formats of the reports based on what works. We are trying to move from many audits and many redundant preparations, to one control and one report supporting many regulations/audits.

The features in most SIEM/LM sets to address this use case are:

  • Data aggregation: Once again, having centralized access to data from many devices and computing platforms dramatically reduces the need to manually gather information, and lets you start focusing on analysis as quickly as possible.
  • Pre-built compliance reports & policies: Of course, you aren’t the only company dealing with PCI, so these vendors have built reports for the leading regulations directly into their products. To be clear, it’s not like you can hit a button and make the auditor go away. But you at least have a place to start, with data types mapped to specific regulations (a toy illustration of such a mapping follows the list).
  • Secure archival of events: Substantiation is all about the opinion of the auditor and your ability to convince him/her that the controls are in place and effective. Having an archive of relevant events and other analysis provides a means to use data (as opposed to speculation) to prove your point.
  • Workflow and collaboration with SoD: Compliance reporting is a process which requires management and collaboration. SIEM/LM tools generally have some simple workflow built in to track who is doing what, and make sure folks don’t step on each other’s toes during preparation. They also help enforce separation of duties (SoD) to ensure there is no question of the integrity of the reporting.
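As a toy illustration of what mapping data types to specific regulations means in practice, the sketch below tags collected events with the controls they substantiate and rolls them up per requirement. The control IDs, mappings, and events are invented for the example; real products ship far richer mappings for PCI, HIPAA, SOX, and the rest.

```python
# Invented example: roll archived events up by the compliance requirements they substantiate.
from collections import defaultdict

# Illustrative mapping of internal event types to regulatory requirements.
CONTROL_MAP = {
    "failed_admin_login": ["PCI 10.2.4", "HIPAA 164.308(a)(5)"],
    "firewall_config_change": ["PCI 1.1.1"],
    "db_privilege_grant": ["PCI 7.1", "SOX ITGC-Access"],
}

events = [
    {"type": "failed_admin_login", "host": "db01", "count": 14},
    {"type": "db_privilege_grant", "host": "db01", "count": 2},
    {"type": "firewall_config_change", "host": "fw-edge", "count": 1},
]

report = defaultdict(int)
for event in events:
    for requirement in CONTROL_MAP.get(event["type"], []):
        report[requirement] += event["count"]

for requirement in sorted(report):
    print("%-25s %d supporting events" % (requirement, report[requirement]))
```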

Based on what we are seeing, most SIEM/LM projects aim to address one of these three scenarios. But knowing what problem you are trying to solve is only the first requirement before you can select a product. You need to get everyone else on board with the decision, and that requires business justification, which is our next topic.

—Adrian Lane

You Should Ignore the NetworkWorld DLP Review

By Rich

I’m catching up on my reading, and finally got a chance to peruse the NetworkWorld DLP Review. Here’s why I think you need to toss this one straight into the hopper:

  1. It only includes McAfee and Sophos – other vendors declined to participate.
  2. The reviewers state the bulk of their review was focused on test driving the management interface.
  3. The review did not test accuracy.
  4. The review did not test performance.
  5. The review did not compare “like” products – even the McAfee and Sophos offerings are extremely different, and little effort was made to explain these differences and what they mean to real world deployments.

In other words, this isn’t really a review and should not inform buying decisions. This is like trying to decide which toaster to buy based on someone else’s opinion of how pretty the knobs are.

I’m not saying anything about the products themselves, and don’t read anything between the lines that isn’t there. This is about NetworkWorld publishing a useless review that could mislead readers.

—Rich

Optimism and Cautions on OpenDLP

By Rich

I’m starting to think I shouldn’t take vacations. Aside from the Symantec acquisition of PGP and GuardianEdge last week, someone went off and released the first open source DLP tool.

It’s called OpenDLP, and version 0.1 is currently available over Google Code. People have asked me for a long time why there aren’t any FOSS DLP options out there, and it’s nice to finally see someone put in the non-trivial effort and release a tool. DLP isn’t easy to create, and Andrew Gavin deserves major credit for kicking off the project.

First, let’s classify OpenDLP. It is an agent-based content discovery/data-at-rest tool. You install an agent on endpoints, which then scans local storage and sends results to a central management server. The agent is a C program, and the management server runs on Apache/MySQL. The tool supports regular expressions and scanning of plain text files.
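To give a sense of what agent-side regex discovery boils down to (this is an illustration of the technique, not OpenDLP’s actual code), here is a small sketch that walks a directory tree and flags plain text files containing patterns that look like US Social Security or Visa card numbers. The patterns and scan path are placeholders.

```python
# Illustration of regex-based data-at-rest discovery (not OpenDLP's code):
# walk a directory tree and report plain-text files matching sensitive-data patterns.
import os
import re

PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "visa_card": re.compile(r"\b4\d{3}([ -]?)\d{4}\1\d{4}\1\d{4}\b"),
}

def scan_tree(root):
    findings = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", errors="ignore") as fh:  # plain text only, like OpenDLP 0.1
                    text = fh.read()
            except OSError:
                continue
            for label, pattern in PATTERNS.items():
                if pattern.search(text):
                    findings.append((path, label, len(pattern.findall(text))))
    return findings

for path, label, count in scan_tree("/home/shared"):  # placeholder path
    print("%s: %d possible %s match(es)" % (path, count, label))
```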

Benefits

  • Free.
  • You can customize the code.
  • Communications are encrypted with SSL.
  • Supports any version of Windows you are likely to run.
  • Includes agent management, and the agent is designed to be non-intrusive.
  • Supports full regular expressions for building policies.

Limitations

  • Scans stored data on endpoints only. Might be usable on Windows servers, but I would test very carefully first.
  • Unable to scan non-plain-text or compressed files, including current versions of Office (the .XXXx XML formats).
  • No advanced content analysis – regex only, which limits the types of content this will work for.
  • Requires NetBIOS… which some environments ban.
  • I have been told via email (not from a DLP vendor, for the record) that the code may be a bit messy… which I’d consider a security concern.

Thus this is a narrow implementation of DLP – that’s not a criticism, just a definition.

I don’t have a large enough environment to give this a real test, but considering that it is a 0.1 version I think we should give it a little breathing space to improve. The to-do list already includes adding .zip file support, for example. I think it’s safe to say that (assuming the project gathers support) we will see it improve over time.

In summary, it’s too soon to deploy this in any production capacity, but it is definitely worth checking out and contributing to. I really hope the project succeeds and matures.

—Rich

FireStarter: For Secure Code, Process Is a Placebo—It’s All about Peer Pressure

By Adrian Lane

The other day it hit me: process is not that important to secure code development. Waterfall? Doesn’t matter. Agile process? Secondary. They only frame the techniques that create success. Saying a process helps create secure code is like saying a cattle chute tames a wild Brahma bull. Guidelines, steps, and procedures do little to alter code security – they only determine which code gets worked on. To motivate developers to improve security, try less carrot and more stick. Heck, process is not even a carrot – it’s more like those nylon dividers at the airport that keep polite people from pushing and shoving to the front of the line. No, if you want developers to write secure code, use peer pressure.

Peer pressure is the most effective technique we have for producing secure code. That’s it. Use it every chance you get. It’s the right thing to do.

Don’t believe me? You think pair coding is about cross training? Please. It’s about peer pressure. Co-workers will realize you suck at coding, and publicly ridicule you for failing to validate input variables. So you up your game and double-check what you are supposed to deliver. Quality assurance teams point out places in the code that you screwed up, and bug counts come up during your raise review. Peer pressure. No developer wants his or her API banned because hackers trampled over it like fans at a Who concert.

If you have taken management classes, you have heard about the Hawthorne Effect, discovered through factory studies in the 1920s and ’30s. In attempts to increase worker output, researchers adjusted working conditions, specifically looking for the lighting level that produced the highest productivity. What they found, however, was that productivity had nothing to do with the light level per se – it went up whenever the light level changed. It was a study, so supervisors paid attention whenever the lighting changed, to monitor the results. When the workers knew they were being watched, their productivity went up. Peer pressure.

Why do you think we have daily scrum meetings? We do it so you remember what you are supposed to be working on, and we do it in front of all your peers so you feel the shame of falling behind. That’s why we ask everyone in the room to participate. These little sessions are especially helpful at waking up those 20-something team members who were up all night partying with their ‘bros’, or drinking Guinness and watching Manchester United till the wee hours of the morning. You know who you are.

We have ‘Sprints’ for the same reason universities have exams: to get you to do the coursework. It’s your opportunity to say, “Oh, S$^)#, I forgot to read those last 8 chapters,” and start cramming for the exam. Only at work, it’s the deadline that gets you cramming. 30-day sprints just provide more opportunities to prod developers with the stick than, say, 180-day waterfall cycles.

I think Kent Beck had it wrong when he said that unacknowledged fear is the root cause of all software project failures. I think fear of the wrong things causes project failures. We specify priorities so we understand the very minimum we are responsible for, and we work like crazy to get the basics done. Specify security as the primary requirement, verify people are doing their jobs, and you get results.

External code review? Peer pressure. Quality assurance? Peer pressure. Automated build failures? Peer pressure. The Velocity concept? Peer pressure. Testers fuzzing your code? Still peer pressure. Sure, creating stories, checklists, milestones, and threat analyses sets direction – but none of those is a driver. Process frames the techniques we use, and the techniques alter behavior. The techniques that promote peer pressure – manifested as fear or pride – are the most effective drivers we have.

Disagree? Tell me why.

—Adrian Lane

Friday, April 30, 2010

Understanding and Selecting SIEM/LM: Use Cases, Part 1

By Adrian Lane

When you think about it, security success in today’s environment comes down to a handful of key imperatives. First we need to improve the security of our environment. We are losing ground to the bad guys, so we’ve got to get better at quickly figuring out what’s being attacked and stopping it.

Next we’ve got to do more with less. Yes, it seems the global economy is improving, but we can’t expect to get back to the halcyon days of spend first, ask questions later – ever. With more systems under management we have more to worry about and less time to spend poring over reports, looking for the proverbial needle in the haystack. Given the number of new attacks – counted by any measure you like – we’ve got to increase the efficiency of our resource utilization.

Finally, auditors show up a few times a year, and they want their reports. Summary reports, detail reports, and reports that validate other reports. The entire auditor dance focuses on convincing the audit team that the proper security controls are implemented and effective. That involves a tremendous amount of data gathering, analysis, and reporting just to set up, with continued tweaking over time. It’s basically a full-time job to get ready for the audit, dropped on folks who already have full-time jobs. So we’ve got to automate those functions to the greatest degree possible.

Yes, there are lots of other reasons organizations embrace SIEM and Log Management technology, but these three make up the vast majority of the projects we see funded. So let’s dig into each use case and understand exactly what problem we are trying to solve.

Use Case #1: React Faster

Imagine the typical day of a security analyst. They sit down at their desk, check out their monitors, and start seeing events scroll past. A lot of events, probably millions. Their job is to look at that information and figure out what’s wrong and identify the root cause of each problem.

They probably have alerts set up to report critical issues within their individual system consoles, in an effort to cull down the millions of events into some finite set of things to investigate – per system. So the analyst goes back and forth between the firewall, IPS, and network traffic analysis consoles. If a WAF is deployed, or a database activity monitoring product, they have to deal with that as well. An office chair that swivels easily is a good investment to keep your neck from wearing out.

Security analysts tend to be pretty talented folks, so they do find stuff, based on their understanding of the networks and devices and their own familiarity with ‘normal’, which allows them to recognize ‘not normal’. Some events just look weird but cannot be captured in a policy or rule. Successful reviews hinge on the ability of the human analyst to interpret alerts across the various systems and identify attacks.

The issues with this scenario are numerous:

  • Too much data, not enough information: With anywhere from 10-2000 devices to monitor, each generating a couple thousand logs and/or alerts a day, there is plenty of data. The analyst has to turn that data into information, which is a tall order for anyone.
  • Poor signal-to-noise ratio: With that much data, the analyst is likely to investigate only the most obvious attacks. And without some way to reduce the number of alerts to deal with, there will be lots of false positives to wade through, hurting productivity.
  • No “situational awareness”: The new new term in security circles is situational awareness: the concept that anomalous situations get lost in a sea of detail unless the bigger business context is considered. With only raw events to wade through, a human analyst loses context and cannot keep track of the big picture.
  • Too many tools to isolate root cause: Without centralizing data from multiple systems, there is no way to know whether an IPS alert was related to a web attack or some other issue. So the analyst needs to move quickly from system to system to validate and confirm the attack, and to understand the depth of the issue. That approach isn’t particularly efficient, and in an incident, time is the enemy.

We’ve written on numerous occasions about the need to react faster, since we can’t predict where the next attack is coming from. The promise of SIEM and Log Management solutions is to help us react faster – and better – and make the world a better place, right? The features and functions a security analyst will employ are:

  • Data aggregation: SIEM/LM solutions aggregate data from many sources, including network, security, servers, databases, applications, etc. – providing the ability to monitor everything. Having all of the events in one place helps avoid missing subtle but important ones.
  • Correlation: Correlation looks for common attributes, and links events together into meaningful bundles. Being able to look at all events in a particular window of time, or everything a specific user did, gives us a meaningful way to investigate security events. This technology provides the ability to perform a variety of correlation techniques to integrate different sources, in order to turn data into useful information. Check out our more detailed view of correlation.
  • Alerting: Automated analysis of correlated events can produce more substantial and detailed alerts, and help identify what needs to be investigated right now (a toy example of this flow follows this list).
  • Dashboards: With liberal use of eye candy, SIEM/LM tools take event data and turn it into fancy charts. These charts can assist the analyst in seeing patterns, and more importantly in seeing activity that is not a standard pattern, or not visible when looking at individual log entries.
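
As a concrete (if toy) illustration of how aggregation, correlation, and alerting fit together, here is a sketch that bundles events from several devices and raises an alert when an IPS alert and repeated firewall denies come from the same source IP within a five-minute window. The event fields and the rule are invented for this example – real correlation engines are policy-driven and far more sophisticated.

```python
# Toy illustration of the aggregation -> correlation -> alerting flow.
# The event fields and the rule (an IPS alert plus 3+ firewall denies from
# the same source IP within five minutes) are invented for this example.
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)

def correlate(events):
    """events: dicts with 'time', 'source', 'type', 'src_ip', aggregated from many devices."""
    by_ip = defaultdict(list)
    for event in sorted(events, key=lambda e: e["time"]):
        by_ip[event["src_ip"]].append(event)

    alerts = []
    for ip, evts in by_ip.items():
        for anchor in (e for e in evts if e["type"] == "ips_alert"):
            related = [e for e in evts if abs(e["time"] - anchor["time"]) <= WINDOW]
            denies = [e for e in related if e["type"] == "fw_deny"]
            if len(denies) >= 3:
                alerts.append({"src_ip": ip, "events": related})
    return alerts

sample = [
    {"time": datetime(2010, 4, 30, 9, 0), "source": "fw1", "type": "fw_deny", "src_ip": "10.1.1.5"},
    {"time": datetime(2010, 4, 30, 9, 1), "source": "fw1", "type": "fw_deny", "src_ip": "10.1.1.5"},
    {"time": datetime(2010, 4, 30, 9, 2), "source": "fw2", "type": "fw_deny", "src_ip": "10.1.1.5"},
    {"time": datetime(2010, 4, 30, 9, 3), "source": "ips1", "type": "ips_alert", "src_ip": "10.1.1.5"},
]

for alert in correlate(sample):
    print("ALERT:", alert["src_ip"], "-", len(alert["events"]), "correlated events")
```

The value for the analyst is that the alert arrives with the related events already bundled, instead of being reassembled by hand across three or four consoles.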

So ultimately this use case provides the security analyst with a set of automatic eyes and ears to wade through all the data and help identify what’s most important and requires attention now.

This is the first white paper that Mike and I have written together, and as you can tell, we’re kinda verbose. As such I am splitting this post into two segments, with the other use cases coming Monday; we will follow up with the business justification. Later in this series, we’ll discuss specifically how to address this use case using the SIEM/LM toolset, and manage expectations for the amount of time and effort required to build the system and to feed it on an ongoing basis.

—Adrian Lane

Friday Summary: April 30, 2010

By Adrian Lane

Project Management Judo

In It’s not about risk, Shrdlu got me thinking about the problem of perception. A few years back, I noticed one of my IT staff doing something odd. Every couple of weeks, over a period of many months, I would see this person walk into a room with marketing and sales people to attend a half-hour meeting. I was pretty sure the IT staffer did not know these people and had nothing to do with marketing or sales efforts. We were not running any joint projects at the time, so I could not figure out why he was meeting with these other teams. At some point curiosity overcame me and I asked what was going on. The IT guy told me they were figuring out how to set up credit card purchases for online software sales. Uh, what?

It had started innocently enough. Someone in sales asked the IT guy if they could have some space on a public FTP server, outside the firewall, to host customer reference documents and user guides. Just benign PDF files. Eager to help, IT made it happen. And it was a success. Soon a sales manager asked for a ‘help’ email account, so an email server was set up on the same box. Marketing got wind of this, and placed their own sales support docs on the server, but asked for a web interface to the documents. Done. A few months later the VP of sales thought there was a lead generation opportunity, so he asked for a sign-in page with logins forwarded to the sales team. Marketing asked if it was possible to simply share the marketing folder to the collateral server to make it easier to push content, and it was finished by day’s end. Each new request was completed as asked. Customers said it would be great if they could pay for some of our upgrades online, so someone in sales said “Absolutely!” and asked the IT guy how quickly taking credit cards could be set up. This is the point I enter the story.

I call this a “lose-lose, with a side of bad news” situation. I found that I had an unsecured server outside the firewall – with FTP, email, file sharing, and a web server – opening a gaping hole into the network. Worse, the service was already a success, with several groups dependent upon it. I was about to shut down this entire unsanctioned and insecure operation, piss off sales and marketing, and gently admonish an employee who really did nothing but try to be helpful. To further tweak everyone involved, I was playing Scrooge, killing off their Christmas dreams of generating Internet sales before the end of Q4.

What started as a simple repository rapidly evolved into a full-service portal, with each step introducing visible benefits, but security threats not entirely obvious to those requesting the services. And honestly, they did not care, as the customers were happy. Marketing was happy. Sales was happy. IT Guy was happy. Me? Not so much.

Shrdlu points out that “The onus to demonstrate benefit is on those who propose the action be taken.” I get this. In spades. The side of the coin opposite “Mr. Happy Go-getter” is “Mr. Negative Boat-anchor”. It sucks to be the boat anchor. But someone has to be the adult and say ‘No’. Or maybe not say ‘No’ out loud, but make someone else say it for you. There are ways to do this without being labeled “not a team player”. It’s really quite easy to dream up new ways to generate revenue, and everyone wants to make more money. You want to make more money for the company, don’t you? (Try answering that Porcupine Question, in front of your CEO, when a sales guy drops it into your lap.) Pointing out the flaws and telling people it’s a bad idea makes you the bad guy who keeps the company from being successful, or positions you as the impediment to success. But asking the right questions or providing alternative perspectives – in a positive way – can make you the smart, cautious person who saved the company from serious problems. It’s tough to sit through project scoping meetings and think about what could go wrong when your peers are all wide-eyed and dreamy about some cool new web service.

Based on some hard-learned lessons, I would modify Shrdlu’s point: you need to find clever ways to make the presenter of the action address the risks. You need to develop some IT Project Judo moves that place both the good and the bad at the feet of those who propose the actions. It’s all in how you go about it.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

Project Quant Posts

Research Reports and Presentations

Top News and Posts

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Anton Chuvakin, answering Adrian’s comment on Understanding and Selecting SIEM/Log Management: Introduction.

Do you know of a SIEM vendor that does not offer Log Management today?

No, there isn’t any. They all learned the lessons and build/bought LM (all except vendor N, I think :-)). Everything else you say is 100% true, IMHO. However, the opposite is just not true. A lot of smaller log mgt tools vendors have truly nothing to do with a grand vision of SIEM. Think Prism, GFI, even Sawmill, and many others. So, there is no credible SIEM without LM, but there is plenty of LM without SIEM. As I said in the recent paper, “everybody who has logs needs LM”, but not everybody is mature enough to use a SIEM. Even splunk is very useful for LM and is clearly not a SIEM.

—Adrian Lane