Saturday, February 21, 2009

Friday Summary, February 20, 2009

By Rich


Last Friday Adrian sent me an IM that he was just about finished with the Friday summary. The conversation went sort of like this:

Me: I thought it was my turn?
Adrian: It is. I just have a lot to say.

It’s hard to argue with logic like that.

This is a very strange week here at Securosis Central. My wife was due to deliver our first kid a few days ago, and we feel like we’re now living (and especially sleeping) on borrowed time. It’s funny how procreation is the most fundamental act of any biological creature, yet when it happens to you it’s, like, the biggest thing ever! Sure, our parents, most of our siblings, and a good chunk of our friends have already been through this particular rite of passage, but I think it’s one of those things you can never understand until you go through it, no matter how much crappy advice other people give you or books you read.

Just like pretty much everything else in life.

I suppose I could use this as a metaphor for the first time you suffer a security breach or something, but it’s Friday and I’ll spare you my over-pontification. Besides, there’s all sorts of juicy stuff going on out there in the security world, and far be it from me to waste your time with random drivel when I already do that the other 6 days of the week. Especially since you need to go disable JavaScript in Adobe Acrobat.

Onto the week in review:

Webcasts, Podcasts, Outside Writing, and Conferences:

Favorite Securosis Posts:

Favorite Outside Posts:

Top News and Posts:

Blog Comment of the Week: Sharon on New Database Configuration Assessment Options

IMO mValent should be compared with CMDB solutions. They created a compliance story which in those days (PCI) resonates well. You probably know this as well as I (now I’m just giving myself some credit) but database vulnerability assessment should go beyond the task of reporting configuration options and which patches are applied. While those tasks are very important I do see the benefits of looking for actual vulnerabilities. I do not see how Oracle will be able to develop (or buy), sell and support a product that can identify security vulnerabilities in its own products. Having said that, I am sure that many additional customers would look and evaluate mValent. The CMDB giants (HP, IBM and CA) should expect more competitive pressure.

–Rich

Wednesday, February 18, 2009

New Database Configuration Assessment Options

By Adrian Lane

Oracle has acquired mValent, the configuration management vendor. mValent provides an assessment tool to examine the configuration of applications. Actually, they do quite a bit more than that, but I want to focus on the value to database security and compliance in this post. This is a really good move on Oracle’s part, as it fills a glaring hole that has existed in their security and compliance offerings for some time. I have never understood why Oracle did not provide this as part of OEM, as every Oracle event I have attended in the last 5 years has had sessions where DBAs swap scripts to assess their databases. Regardless, they have finally filled the gap. The acquisition provides them with a platform to implement their own best practice guidelines, and gives customers a way to implement their own security, compliance, and operational policies around the database and (I assume) other application platforms. Sadly, many companies have not automated their database configuration assessments, so the market remains wide open, which makes this a timely acquisition.

While the value proposition for this technology will be spun by Oracle’s marketing team in a few dozen different ways (change management, compliance audits, regulatory compliance, application controls, application audits, compliance automation, etc.), don’t get confused by all the terms. When it comes down to it, this is an assessment of application configuration, and it provides value in a number of ways: security, compliance, and operations management. The basic platform can be used in many different ways, depending upon how you bundle the policy sets and distribute reports.

Also keep in mind that a ‘database audit’ and ‘database auditing’ are two completely different things. Database auditing is about examining transactions. What we are talking about here is how the database is configured and deployed. To avoid the deliberate market confusion on the vendors’ part, here at Securosis we will stick to the terms Vulnerability Assessment and Configuration Assessment to describe the work being performed.

Tenable Network Security has also announced on their blog that they now have the ability to perform credentialed scans of the database. This means that Nessus is no longer just a pen-test style patch level checker, but a credentialed configuration assessment tool. By ‘credentialed’ I mean that the scanning tool has a user name and password with some access rights to the database. This type of assessment provides a lot more functionality, because far more information is available to you than through a penetration test. This is a necessary progression for the product, as the ports (quite specifically the database ports) no longer return sufficient information for a good assessment of patch levels, or any of the important configuration details.
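To make “credentialed configuration assessment” concrete, here is a minimal sketch of the pattern in Python. It uses sqlite3 purely as a stand-in for a real database login, and the policy baseline is invented for illustration; real scanners ship hundreds of vendor-specific checks.

```python
import sqlite3

# Hypothetical baseline a credentialed scanner might enforce once it can
# actually log in and query the database -- none of this is visible to a
# port scan. Settings and expected values are illustrative, not a real
# benchmark.
POLICY = [
    ("secure_delete", 1),  # overwrite deleted records with zeros
    ("foreign_keys", 1),   # enforce referential integrity
]

def get_setting(conn, name):
    # Real scanners query vendor-specific views (e.g., v$parameter on
    # Oracle); PRAGMA is the sqlite stand-in used here.
    return conn.execute(f"PRAGMA {name}").fetchone()[0]

def assess(conn):
    findings = []
    for name, expected in POLICY:
        actual = get_setting(conn, name)
        if actual != expected:
            findings.append(f"{name}: expected {expected}, found {actual}")
    return findings

conn = sqlite3.connect(":memory:")  # stand-in for a credentialed login
conn.execute("PRAGMA foreign_keys=ON")
for finding in assess(conn):
    print("FAIL:", finding)
```

Run as-is, this flags secure_delete (sqlite defaults it off), which is the whole point: the scanner can see and compare settings only because it is inside the database, not knocking on a port.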

If you want to produce meaningful compliance reports, this is the type of scan you need to provide. While I occasionally rip Tenable because this is something they should have done two years ago, it really is a great advancement for them, as it opens up the compliance and operations management buying centers. Tenable must be considered a serious player in this space, as theirs is a low cost, high value option. They will continue to win market share as they flesh out their policy set to include more industry best practices and compliance tests.

Oracle will represent an attractive option for many customers, and they should be able to immediately leverage their existing relationships. While not cutting edge or best-of-breed in this class, I expect many customers will adopt it, either because it is bundled with what they are already buying, or because the investment is considered lower risk when you go with the world’s largest business software vendor. On the opposite end of the spectrum, companies who do not view this as business critical but still want thorough scans will employ the cost-effective Tenable solution. Vendors like Fortinet, with their database security appliance, and Application Security, with their AppDetective product, will be further pressed to differentiate their offerings to compete with the perceived top and bottom ends of the market. Things should get interesting in the months to come.

–Adrian Lane

A Small, Necessary, Legal Change For National Cybersecurity

By Rich

I loved being a firefighter. In what other job do you get to speed around running red lights, chop someone’s door down with an axe, pull down their ceiling, rip down their walls, cut holes in their roof with a chainsaw, soak everything they own with water, and then have them stop by the office a few days later to give you the cookies they baked for you?

[Photo: TOPOFF2 010_2.jpg]

Now, if you try and do any of those things when you’re off duty and the house isn’t on fire, you tend to go to jail. But on duty and on fire? The police will arrest the homeowner if they get in your way.

Society has long accepted that there are times when the public interest outweighs even the most fundamental private rights. Thus I think it is long past time we applied this principle to cybersecurity and authorized appropriate intervention in support of national (and international) security.

One of the major problems we have in cybersecurity today is that the vulnerabilities of the many are the vulnerabilities of everyone. All those little unpatched home systems out there are the digital equivalent of burning houses in crowded neighborhoods. Actually, it’s probably closer to a mosquito-infested pool an owner neglects to maintain. Whatever analogy you want to use, in all cases it’s something that, if it were the physical world, someone would come to legally take care of, even if the owner tried to stop them.

But we know of multiple cases on the Internet where private researchers (and likely government agencies) have identified botnets or other compromised systems being used for active attack, yet due to legal fears they can’t go and clean the systems. Even when they know they have control of the botnet and can erase it and harden the host, they legally can’t. Our only option seems to be individually informing ISPs, which may or may not take action, depending on their awareness and subscriber agreements.

Here’s what I propose. We alter the law and empower an existing law enforcement agency to proactively clean or isolate compromised systems. This agency will be mandated to work with private organizations who can aid in their mission. Like anything related to the government, it needs specific budget, staff, and authority that can’t be siphoned off for other needs.

When a university or other private researcher discovers some botnet they can shut down and clean out, this law enforcement agency can review and authorize action. Everyone involved is shielded from being sued short of gross negligence. The same agency will also be empowered to work with international (and national) ISPs to take down malicious hosting and service providers (legally, of course). Again, this specific mission must be mandated and budgeted, or it won’t work.

Right now the bad guys operate with impunity, and law enforcement is woefully underfunded and undermandated for this particular mission. By engaging with the private sector and dedicating resources to the problem, we can make life a heck of a lot harder for the bad guys. Rather than just trying to catch them, we devote as much or more effort to shutting them down.

Call me an idealist.

(I don’t have any digital pics from my firefighting days, so that’s a more-recent hazmat photo. The bandana is to keep sweat out of my eyes; it’s not a daily fashion choice.)

–Rich

Tuesday, February 17, 2009

Selective Inverse Recency Bias In Security

By Rich

Nate Silver is one of those rare researchers with the uncanny ability to send your brain spinning off on unintended tangents totally unrelated to the work he’s actually documenting. His work is fascinating more for its process than its conclusions, and often generates new introspections applicable to our own areas of expertise. Take this article in Esquire where he discusses the concept of recency bias as applied to financial risk assessments.

Recency bias is the tendency to skew data and analysis towards recent events. In the economic example he uses, he compares the risk of a market crash in 2008 using data from the past 60 years vs. the past 20. The difference is staggering: from one major downturn every 8 years (using 60 years of data) to one every 624 years (using only 20 years of data). As with all algorithms, input selection deeply skews output results, with the potential for cataclysmic conclusions.
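To see how much the lookback window drives the answer, here is a toy calculation in Python. The crash years are invented to mirror the shape of Silver’s argument, not his actual data; the point is only that a short window that happens to miss the rare events predicts they essentially never occur.

```python
# Invented "major downturn" years -- illustrative only, not Silver's data.
events = [1950, 1957, 1962, 1970, 1974, 1980, 1984, 1987]

def downturn_interval(events, now, window):
    """Estimated years between downturns, using only the last `window` years."""
    hits = [y for y in events if now - window <= y < now]
    # No observed events in the window -> the naive estimate is "never"
    return window / len(hits) if hits else float("inf")

now = 2008
for window in (60, 20):
    interval = downturn_interval(events, now, window)
    label = ("effectively never" if interval == float("inf")
             else f"one every {interval:.0f} years")
    print(f"{window}-year lookback: {label}")
```

The 60-year window sees all eight events (one every 8 years); the 20-year window sees none and concludes downturns effectively never happen. Same history, opposite conclusions.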

In the information security industry I believe we just as frequently suffer from selective inverse recency bias- giving greater credence to historical data over more recent information, while editing out the anomalous events that should drive our analysis more than the steady state. Actually, I take that back, it isn’t just information security, but safety and security in general, and it is likely of a deep evolutionary psychological origin. We cut out the bits and pieces we don’t like, while pretending the world isn’t changing.

Here’s what I mean- in security we often tend to assume that what’s worked in the past will continue to work in the future, even though the operating environment around us has completely changed. At the same time, we allow recency bias to intrude and selectively edit out our memories of negative incidents after some arbitrary time period. We assume what we’ve always done will always work, forgetting all those times it didn’t work.

From an evolutionary psychology point of view (assuming you go in for that sort of thing) this makes perfect sense. For most of human history what worked for the past 10, 20, or 100 years still worked well for the next 10, 20, or 100 years. It’s only relatively recently that the rate of change in society (our operating environment) accelerated to high levels of fluctuation in a single human lifetime. On the opposite side, we’ve likely evolved to overreact to short term threats over long term risks- I doubt many of our ancestors were the ones contemplating the best reaction to the tiger stalking them in the woods; our ancestors clearly got their asses out of there at least fast enough to procreate at some point.

We tend to ignore long term risks and environmental shifts, then overreact to short term incidents.

This is fairly pronounced in information security where we need to carefully balance historical data with our current environment. Over the long haul we can’t forget historical incidents, yet we also can’t assume that what worked yesterday will work tomorrow.

It’s important to use the right historical data in general, and more recent data in specific. For example, we know major shifts in technology lead to major new security threats. We know that no matter how secure we feel, incidents still occur. We know that human behavior doesn’t change, people will make mistakes, and are predictably unpredictable.

On the other hand, firewalls only stop a fraction of the threats we face, application security is now just as important as network security, and successful malware utilizes new distribution channels and propagation vectors.

Security is always a game of balance. We need to account for the past, without assuming its details are useful when defending against specific future threats.

–Rich

Saturday, February 14, 2009

Friday Summary, 13th of February, 2009

By Adrian Lane

It’s Friday the 13th, and I am in a good mood. I probably should not be, given that every conversation seems to center around some negative aspect of the economy. I started my mornings this week talking with one person after another about a possible banking collapse, and then moved to a discussion of Sirius/XM going under. Others are furious about the banking bailout as it rewards failure. On Tuesday of this week I was invited to speak at a business luncheon on data security and privacy, so I headed down the hill to find the sides of the road filled with cars and ATVs for sale. Cheap. I get to the parking lot and find it empty but for a couple of pickup trucks, all for sale. The restaurant we are supposed to meet at shuttered its doors the previous night and went out of business. We move two doors down to the pizza joint, where the TV is on and the market is down 270 points and will probably be worse by the end of the day. Still, I am in a good mood. Why? Because I feel like I was able to help people.

During the lunch we talked about data security and how to protect yourself online, and the majority of these business owners had no idea about the threats to them, both physical and electronic, and no idea what to do about them. They do now. What was surprising was that everyone seemed to have recently been the victim of a scam, or someone else in their family had been. One person had their checks photographed at a supermarket, and someone made impressive forgeries. One had their ATM account breached with no clue as to how or why. Another had false credit card charges. Despite all the bad news, I am in a good mood because I think I helped some people stay out of future trouble simply by sharing information you just don’t see in the newspapers or mainstream press.

This leads me to the other point I wanted to discuss: Rich posted this week on “An Analyst Conundrum” and I wanted to make a couple additional points. No, not just about my being cheap … although I admit there are a group of people who capture the prehistoric moths that fly out of my wallet during the rare opening … but that is not the point of this comment. What I wanted to say is we take this Totally Transparent Research process pretty seriously, and we want all of our research and opinions out in the open. We like being able to share where our ideas and beliefs come from. Don’t like it? You can tell us and everyone else who reads the blog we are full of BS, and what’s more, we don’t edit comments. One other amazing aspect of conducting research in this way has been comments on what we have not said. More specifically, every time I have pulled content I felt was important but confused the overall flow of the post, readers pick up on it. They make note of it in the comments. I think this is awesome! Tells me that people are following our reasoning. Keeps us honest. Makes us better. Right or wrong, the discussion helps the readers in general, and it helps us know what your experiences are.

Rich would prefer that I write faster and more often than I do, especially with the white papers. But odd as it may seem, I have to believe the recommendations I make, otherwise I simply cannot put the words down on paper. No passion, no writing. The quote Rich referenced was from an email I sent him late Sunday night after struggling with recommending a particular technology over another; I quite literally could not finish the paper until I had solved that puzzle in my own mind. If I don’t believe it based upon what I know and have experienced, I cannot put it out there. And I don’t really care if you disagree with me, as long as you let me know why what I said is wrong, and how I screwed up. Moreover, I especially don’t care if the product vendors or security researchers are mad at me. For every vendor that is irate with what I write, there is usually one who is happy, so it’s a zero-sum game. And if security researchers were not occasionally annoyed with me there would be something wrong, because we tend to be a rather cranky group when others do not share our personal perspective of the way things are. I would rather have end users be aware of the issues and walk into any security effort with their eyes open. So I feel good about getting these last two series completed, as I think the advice is good and will help people in their jobs. Hopefully you will find what we do useful!

On to the week in review:

Webcasts, Podcasts, Outside Writing, and Conferences:

Favorite Securosis Posts:

Favorite Outside Posts:

Top News and Posts:

Blog Comment of the Week:

Jack on The Business Justification for Data Security: Measuring Potential Loss:
A question/observation regarding the “qualifiable losses” you describe: Isn’t the loss of “future business” a manifestation of damaged reputation? Likewise, reduced “customer loyalty”? After all, it seems to me that reputation is nothing more than how others view an organization’s value/liability proposition and/or the moral/ethical/competence of its leadership. It’s this perception that then determines customer loyalty and future business. With this in mind, there are many events (that aren’t security-related) that can cause a shift in perceived value/liability, etc., and a resulting loss of market share, growth, cost of capital, etc. In my conversations with business management, many companies (especially larger ones) experience such events more frequently than most people realize, it’s just that (like most other things) the truly severe ones are less frequent. These historical events can provide a source of data regarding the practical effect of reputation events that can be useful in quantified or qualified estimates.

Next week … an all-Rich Friday post!

–Adrian Lane

Friday, February 13, 2009

The Business Justification for Data Security: Additional Positive Benefits

By Adrian Lane

So far in this series we have discussed how to assess both the value of the information your company uses and the potential losses should your data be stolen. The bad news is that security spending only mitigates some portion of the threats; it cannot eliminate them. While we would like our solutions to eradicate threats, it’s usually more complicated than that. Fortunately there is some good news: security spending commonly addresses other areas of need, and has additional tangible benefits that should be factored into the overall evaluation. For example, the collection, analysis, and reporting capabilities built into most data security products, when used with a business processing perspective, supplement existing applications and systems in management, audit, and analysis. Security investment can also readily be leveraged to reduce compliance costs, improve systems management, efficiently analyze workflows, and gain a better understanding of how data is used and where it is located. In this post, we want to make short mention of some of the positive, tangible aspects of security spending that you should consider. We will put this into the toolkit at the end of the series, but for now, we want to discuss cost savings and other benefits.

Reduced compliance/audit costs

Regulatory initiatives require that certain processes be monitored for policy conformance, with subsequent verification to ensure those policies and controls align appropriately with compliance guidelines. As most security products examine business processes for suspected misuse or security violations, there is considerable overlap with compliance controls. Certain provisions in the Gramm-Leach-Bliley Act (GLBA), Sarbanes-Oxley (SOX), and the Health Insurance Portability and Accountability Act (HIPAA) call for security, process controls, or transactional auditing. While data security tools and products focus on security and appropriate use of information, policies can be structured to address compliance as well.

Let’s look at a couple of ways security technologies assist with compliance:

  • Access controls assist with separation of duties between operational, administrative, and auditing roles.
  • Email security products provide pretexting protection as required by GLBA.
  • Activity Monitoring solutions perform transactional analysis, and with additional policies can provide process controls for end-of-period adjustments (SOX), as well as addressing ‘safeguard’ requirements in GLBA (see the sketch after this list).
  • Security platforms separate the roles of data collection, data analysis, and policy enforcement, and can direct alerts to appropriate audiences outside security.
  • Collection of audit logs, combined with automated filtering and encryption, address common data retention obligations.
  • DLP, DRM, and encryption products assist in compliance with HIPAA and appropriate use of student records (FERPA).
  • Filtering, analysis, and reporting help reduce audit costs by providing auditors with necessary information to quickly verify the efficacy and integrity of controls; gathering this information is typically an expensive portion of an audit.
  • Auditing technologies provide a view into transactional activity, and establish the efficacy and appropriateness of controls.
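As a flavor of how one of these controls reduces audit legwork, here is a small Python sketch of the end-of-period adjustment check mentioned above. The record format, close date, and alert routing are all invented for illustration.

```python
from datetime import date

# Hypothetical control: flag manual adjustments posted after the books
# closed for the period -- the sort of end-of-period check SOX reviewers
# ask about. Records and the close date are invented.
PERIOD_CLOSE = date(2009, 1, 31)

records = [
    {"id": 101, "type": "adjustment", "posted": date(2009, 1, 30), "user": "jsmith"},
    {"id": 102, "type": "adjustment", "posted": date(2009, 2, 3), "user": "jdoe"},
    {"id": 103, "type": "sale", "posted": date(2009, 2, 3), "user": "jdoe"},
]

def post_close_adjustments(records):
    # Keep only adjustment transactions dated after the period close
    return [r for r in records
            if r["type"] == "adjustment" and r["posted"] > PERIOD_CLOSE]

for r in post_close_adjustments(records):
    # In practice the alert routes to internal audit, not just security.
    print(f"ALERT: post-close adjustment #{r['id']} by {r['user']} on {r['posted']}")
```

The same policy engine that catches misuse for security catches the control failure for the auditor, which is exactly the overlap being described.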

Reduced TCO

Data security products collect information and events that have relevance beyond security. By design they provide a generic tool for the collection, analysis, and reporting of events that serve regulatory, industry, and business processing controls; automating much of the analysis and integrating with other knowledge management and response systems. As a result they can enhance existing IT systems in addition to their primary security functions. The total cost of ownership is reduced for both security and general IT systems, as the two reinforce each other – possibly without requiring additional staff. Let’s examine a few cases:

  • Automating inspection of systems and controls on financial data reduces manual inspection by Internal Audit staff.
  • Systems Management benefits from automating tedious inspection of information services, verifying that services are configured according to best practices; this can reduce breaches and system downtime, and ease the maintenance burden.
  • Security controls can ensure business processes are followed and detect failure of operations, generating alerts in existing trouble ticketing systems.

Risk reduction

Your evaluation process focuses on determining if you can justify spending some amount of money on a certain product or to address a specific threat. That laser focus is great, but data security is an enterprise issue, so don’t lose sight of the big picture. Data security products overlap with general risk reduction, similar to the way these products reduce TCO and augment other compliance efforts. When compiling your list of tradeoffs, consider other areas of risk & reward as well.

  • Assessment and penetration technologies discover vulnerabilities and reduce exposure; keeping data and applications safe helps protect networks and hosts.
  • IT systems interconnect and share data. Stopping threats in one area of business processing can improve reliability and security in connected areas.
  • Discovery helps analysts process and understand risk exposure by locating data, recording how it is used throughout the enterprise, and ensuring compliance with usage policies.

Also keep in mind that we are providing a model to help you justify security expenditures, but that does not mean our goal is to promote security spending. Our approach is pragmatic, and if you can achieve the same result without additional security products to support your applications, we are all for that. In much the same way that security can reduce TCO, some products and platforms have security built in, avoiding the need for additional security expenditures. We recognize that data security choices are typically the last to be made, after deployment of the applications for business processing, and after infrastructure choices to support the business applications. But if you’re lucky enough to have built-in tools, use them.

–Adrian Lane

Adrian Appears on the Network Security Podcast

By Rich

[Photo: Pepper the Wonder Cat]

I can’t believe I forgot to post this, but Martin was off in Chicago for work this week and Adrian joined me as guest host for the Network Security Podcast. We recorded live at my house, so the audio may sound a little different. If you listen really carefully, you can hear an appearance by Pepper the Wonder Cat, our Chief of Everything Officer here at Securosis.

The complete episode is here: Network Security Podcast, Episode 137, February 10, 2009 Time: 32:50

Show Notes:

–Rich

Los Alamos Missing Computers

By Adrian Lane

Yahoo! News is reporting that the Los Alamos nuclear weapons research facility is missing some 69 computers, according to a watchdog group that released an internal memo. Either they have really bad inventory controls, or a kleptomaniac running around the lab. Even for a mid-sized organization this is a lot, especially given the nature of their business. Granted, the senior manager says this does not mean there was a breach of classified information, and I guess I should give him the benefit of the doubt, but I have never worked at a company where sensitive information did not flow like water around the organization, regardless of policy. The requirement may be to keep classified information off unclassified systems, but unless those systems are audited, how would you know? How could you verify what is missing?

We talk a lot about endpoint security and the need to protect laptops, but really, if you work for an organization that deals with incredibly sensitive information (you know, like nuclear secrets) you need to encrypt all of your media, whether or not it is mobile. There are dozens of vendors that offer software encryption, and most of the disk manufacturers are coming out with encrypted drives. And if you read this blog, you are probably aware that we are proponents of DLP in certain cases; this type of policy enforcement for the movement of classified information would be a good example. You would think organizations such as this would be ahead of the curve in this area, but apparently not.
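To illustrate the principle rather than any particular product, here is a minimal Python sketch of software encryption at rest using the third-party cryptography package. Full-disk and self-encrypting-drive products operate far below this level; the point is only that what lands on the media is unreadable without the key.

```python
# Requires the third-party package: pip install cryptography
from cryptography.fernet import Fernet

# In practice the key lives in a key manager, never beside the data.
key = Fernet.generate_key()
f = Fernet(key)

secret = b"sensitive contents that should never hit disk in cleartext"
token = f.encrypt(secret)   # this ciphertext is what lands on the media

# Only a holder of the key can recover the plaintext.
assert f.decrypt(token) == secret
print("ciphertext sample:", token[:40].decode(), "...")
```

Lose the laptop, the drive, or 69 computers, and the exposure question collapses to a much easier one: was the key on the same media?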

–Adrian Lane

Thursday, February 12, 2009

An Analyst Conundrum

By Rich

Since we’ve jumped on the Totally Transparent Research bandwagon, sometimes we want to write about how we do things over here, and what leads us to make the recommendations we do. Feel free to ignore the rest of this post if you don’t want to hear about the inner turmoil behind our research…

One of the problems we often face as analysts is that we find ourselves having to tell people to spend money (and not on us, which for the record, we’re totally cool with). Plenty of my industry friends pick on me for frequently telling people to buy new stuff, including stuff that’s sometimes considered of dubious value. Believe me, we’re not always happy heading down that particular toll road. Not only have Adrian and I worked the streets ourselves, collectively holding titles ranging from lowly PC tech and network admin to CIO, CTO, and VP of Engineering, but as a small business we maintain all our own infrastructure and don’t have any corporate overlords to pick up the tab.

Besides that, you wouldn’t believe how incredibly cheap the two of us are. (Unless it involves a new toy.)

I’ve been facing this conundrum for my entire career as an analyst. Telling someone to buy something is often the easy answer, but not always the best answer. Plenty of clients have been annoyed over the years by my occasional propensity to vicariously spend their money.

On the other hand, it isn’t like all our IT is free, and there really are times you need to pull out the checkbook. And even when free software or services are an option, they might end up costing you more in the long run, and a commercial solution may come with the lowest total cost of ownership.

We figure one of the most important parts of our job is helping you figure out where your biggest bang for the buck is, but we don’t take dispensing this kind of recommendation lightly. We typically try to hammer at the problem from all angles and test our conclusions with some friends still in the trenches. And keep in mind that no blanket recommendation is best for everyone and all situations- we have to write for the mean, not the deviation.

But in some areas, especially web application security, we don’t just find ourselves recommending a tool- we find ourselves recommending a bunch of tools, none of which are cheap. In our Building a Web Application Security series we’ve really been struggling to find the right balance and build a reasonable set of recommendations. Adrian sent me this email as we were working on the last part:

I finished what I wanted to write for part 8. I was going to finish it last night but I was very uncomfortable with the recommendations, and having trouble justifying one strategy over another. After a few more hours of research today, I have satisfied my questions and am happy with the conclusions. I feel that I can really answer potential questions of why we recommend this strategy opposed to some other course of action. I have filled out the strategy and recommendations for the three use cases as best I can.

Yes, we ended up having to recommend a series of investments, but before doing that we tried to make damn sure we could justify those recommendations. Don’t forget, they are written for a wide audience and your circumstances are likely different. You can always call us on any bullshit, or better yet, drop us a line to either correct us, or ask us for advice more fitting to your particular situation (don’t worry, we don’t charge for quick advice – yet).

–Rich

Recent Data Breaches- How To Limit Malicious Outbound Connections

By Rich

Word is slowly coming through industry channels that the attackers in the Heartland breach exfiltrated sniffed data via an outbound network connection. While not surprising, I did hear that the connection wasn’t encrypted- the bad guys sent the data out in cleartext (I’ll leave it to the person who passed this on to identify themselves if they want). Rumor from 2 independent sources is the bad guys are an organized group out of St. Petersburg (yes, Russia, as cliche as that is).

This is similar to a whole host of breaches- including (probably) TJX. While I’m not so naive as to think you can stop all malicious outbound connections, I do think there’s a lot we can do to make life harder on the bad guys.

[Photo: Endless Hole, Alaskan Glacier]

First, you need to lock down your outbound connections using a combination of current and next-generation firewalls. You should isolate your transaction network to enforce tighter controls on it than on the rest of your business network. Traditional firewalls can lock down most outbound ports/protocols, but struggle with nested/stealth channels or all the stuff shoveled over port 80. Next-gen firewalls and web gateways (I hate the name, but don’t have a better one) like Palo Alto Networks or Mi5 Networks can help. Regular web gateways (Websense and McAfee/Secure Computing) are also good, but vary more in their outbound control capabilities and tend to be more focused on malware prevention (not counting their DLP products, which we’ll talk about in a second).

The web gateway and next-gen firewalls will cover your overall network, while you lock down the transaction side with tighter traditional firewall rules and by segmenting it off.

Next, use DLP to sniff for outbound cardholder data. The bad guys don’t seem to be encrypting, and DLP will alert on that in a heartbeat (and maybe block it, depending on the channel). You’ll want to proxy with your web gateway to sniff SSL (and only some web gateways can do this) and set the DLP to alert on unauthorized encryption usage. That might be a real pain in the ass if you have a lot of unmanaged encryption outside of SSL. Also, to do the outbound SSL proxy you need to roll out a gateway certificate to all your endpoints and suppress browser alerts via group policies.
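For a sense of what “sniffing for outbound cardholder data” means mechanically, here is a minimal sketch of the content inspection step in Python: find 16-digit candidates in cleartext traffic and Luhn-validate them to weed out random numbers. Real DLP engines add proximity analysis, blocking, SSL interception, and far better pattern handling; the payload below is invented.

```python
import re

def luhn_valid(digits: str) -> bool:
    # Standard Luhn checksum: double every second digit from the right,
    # subtracting 9 when the doubled digit exceeds 9.
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:
            d = d * 2 - 9 if d > 4 else d * 2
        total += d
    return total % 10 == 0

# Loose candidate pattern: 16 digits, optionally space- or dash-separated.
CANDIDATE = re.compile(r"\b(?:\d[ -]?){15}\d\b")

def find_pans(payload: str):
    for match in CANDIDATE.finditer(payload):
        digits = re.sub(r"[ -]", "", match.group())
        if luhn_valid(digits):
            yield digits

# 4111111111111111 is a well-known test number that passes the Luhn check.
outbound = "POST /drop HTTP/1.1\r\n\r\ncard=4111 1111 1111 1111&exp=0311"
for pan in find_pans(outbound):
    print("ALERT: possible cleartext PAN in outbound data:", pan)
```

This is also why the cleartext detail in the Heartland rumor matters: pattern matching like this is trivial against unencrypted traffic, and useless against traffic the attacker encrypts, which is what the SSL proxying and unauthorized-encryption alerts are for.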

I also recommend DLP content discovery to reduce where you have unencrypted stored data (yes, you do have it, even if you think you don’t).

As you’ve probably figured out by now, some of this will be very difficult to implement on an existing network, especially one that hasn’t been managed tightly. Thus I suggest you focus on your processing/transaction paths and start walling those off first. In the long run, that will reduce both your risks and your compliance and audit costs.

–Rich

Tuesday, February 10, 2009

The Business Justification for Data Security: Understanding Potential Loss

By Adrian Lane

Rich posted the full research paper last week, but as not everyone wants to read the full 30 pages, we decided to continue posting excerpts here. We still encourage comments, as this will be a living document for us, and we will expand it in the future. Here is Part Four:

Understanding Potential Losses

Earlier we deliberately decoupled potential losses from risk impact, even though loss is clearly the result of a risk incident. Since this is a business justification model rather than a risk management model, it allows us to account for major types of potential loss that are the result of multiple types of risk and simplifies our overall analysis. We will highlight the major loss categories associated with data security and, as with our risk estimates, break them out into quantitative and qualitative categories. These loss categories can be directly correlated back to risk estimates, and it may make sense to walk through that exercise at times, but as we complete our business justification you’ll see why it isn’t normally necessary.

If data is stolen in a security breach, will it cost you a million dollars? A single dollar? Will you even notice? Under “Data Loss Models”, we introduced a method for estimating the value of the data your company possesses, to underscore what is at stake. Now we will provide a technique for estimating costs to the business in the event of a loss. We look at some types of loss and their impacts. Some of these have hard costs that can be estimated with a reasonable degree of accuracy. Others are more nebulous, so assigning monetary values doesn’t make sense. But don’t forget that although we may not be able to fully quantify such losses, we cannot afford to ignore them, because unquantifiable costs can be just as damaging.

Quantified vs. Qualified Losses

As we discussed with noisy threats, it is much easier to justify security spending based on quantifiable threats with a clear impact on productivity and efficiency. With data security, quantification is the rare exception, and real losses typically combine quantified and qualified elements. For example, a data breach at your company may not be evident until long after the fact. You don’t lose access to the data, and you might not suffer direct losses. But if the incident becomes public, you could then face regulatory and notification costs. Stolen customer lists and pricing sheets, stolen financial plans, and stolen source code can all reduce competitive advantage and impact sales — or not, depending on who stole what. Data stolen from your company may be used to commit fraud, but the fraud itself might be committed elsewhere. Customer information used in identity theft causes your customers major hassles, and if they discover your firm was the source of the information, you may face fines and legal battles over liability. As these costs can account for the majority of the total, despite the difficulty of estimating their impact, we must still account for the potential loss to justify spending to prevent or reduce it.

We offer two approaches to combining quantified and qualified potential losses. In the first, you walk through each potential loss category and either assign an estimated monetary value, or rate it on our 1-5 scale. This method is faster, but doesn’t help correlate the potential loss with your tolerance. In the second method, you create a chart like the one below, where all potential losses are rated on a 1-5 scale, with either value ranges (for quantitative loss) in the cells, or qualitative statements describing the level of loss. This method takes longer, since you need to identify five measurement points for each loss category, but allows you to more easily compare potential losses against your tolerance, and identify security investments to bring the potential losses (or their likelihood) down to an acceptable level.

| Loss | 1 | 2 | 3 | 4 | 5 |
|------|---|---|---|---|---|
| Notification costs (total, not per record) | $0-$1,000 | $1,001-$10,000 | $10,001-$100,000 | $100,001-$500,000 | >$500,000 |
| Reputation damage | No negative publicity | Single negative press mention, local/online only | Ongoing negative press <2 weeks, local/online only; single major outlet mention | Ongoing sustained negative press >2 weeks, including multiple major outlets; measurable drop in customer activity | Sustained negative press in major outlets or on a national scale; material drop in customer activity |
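A sketch of the second method in code may help: encode the bands from the table above, rate an estimate, and compare it against a tolerance threshold. The bands come from the table; the tolerance value and the $75,000 estimate are hypothetical.

```python
# Bands for notification costs, taken from the 1-5 table above.
NOTIFICATION_BANDS = [  # (upper bound in dollars, rating)
    (1_000, 1),
    (10_000, 2),
    (100_000, 3),
    (500_000, 4),
    (float("inf"), 5),
]

def rate_loss(estimate, bands):
    # Return the rating of the first band the estimate fits in.
    for upper, rating in bands:
        if estimate <= upper:
            return rating

TOLERANCE = 3      # hypothetical: ratings above this call for mitigation
estimate = 75_000  # hypothetical projected notification cost

rating = rate_loss(estimate, NOTIFICATION_BANDS)
verdict = "mitigate" if rating > TOLERANCE else "within tolerance"
print(f"Notification cost ${estimate:,} rates {rating}/5: {verdict}")
```

Once each loss category is banded this way, comparing potential losses against tolerance, and seeing which investments pull a rating down a notch, becomes a mechanical exercise rather than an argument.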

Potential Loss Categories

Here are our recommended assessment categories for potential loss, categorized by quantifiable vs. only qualifiable:

Quantifiable potential data security losses:

  • Notification Costs: CA SB 1386 and associated state mandates require informing customers in the event of a data breach. Notification costs can be estimated in advance, and include contacting customers, as well as any credit monitoring services to identify fraudulent events. The cost is roughly linear with the total number of records compromised.
  • Compliance Costs: Most companies are subject to federal regulations or industry codes they must adhere to. Loss of data and data integrity issues are generally violations. HIPAA, GLBA, SOX, and others include data verification requirements and fines for failure to comply.
  • Investigation & Remediation Costs: An investigation into how the data was compromised, and the associated costs to remediate the relevant security weaknesses, have a measurable cost to the organization.
  • Contracts/SLAs: Service level agreements about quality or timeliness of services are common, as are confidentiality agreements. Businesses that provide data services rely upon the completeness, accuracy, and availability of data; falling short in any one area may violate SLAs and/or subject the company to fines or loss of revenues.
  • Credit: Loss of data and compromise of IT systems are both viewed as indications of investment risk by the financial community. The resulting impact on interest rates and availability of funds may affect profitability.
  • Future Business & Accreditation: Data loss, compliance failures, or compliance penalties may impair ability to bid on contracts or even participate in certain ventures due to loss of accreditation. This can be a permanent or temporary loss, but the effects are tangible. Note that future business is also a qualitative loss — here we refer to definitive measurements, such as exclusions from business markets, as opposed to potential losses due to customer loyalty/concern.
  • Continuity of Business: Denial of Service impairs customer service and interferes with business. These are often measurable for transaction-based businesses.

Qualifiable Potential Data Security Losses

  • Reputation Damage: The reputation of a company affects its value in a number of ways. New customers often seek out firms they know and trust. Investors are likely to buy stock from companies which are trustworthy and operate effectively. Risks to reputation affect both, but it’s generally impossible to attribute an impact to a single event because other events, non-risk factors, and general market forces all feed into customer behavior.
  • Customer Loyalty: How the data loss is perceived by customers has an effect on customer and brand loyalty. If the loss of the data is viewed as preventable, and the inconvenience or financial cost to customers is high, some customers will stop doing business with the company.
  • Loss of Sales: Your customer contact information and pricing sheets in the hands of your competitor provide ample data for targeted sales campaigns. Any successes come at your expense.
  • Competitive Advantage: R&D expenditure to create a new and competitive product can be devalued if that research, source code, process, or ingredient list is stolen; but since you aren’t blocked from still bringing the product to market, the lost benefit is not fully quantifiable.
  • Future Business: You cannot accurately predict lost future business, unless you restrict it to market/ecosystem/contract exclusion as mentioned above. We’ve seen single breach disclosures put a company out of business, while other companies see sales growth despite major public breaches.

Exponential loss growth

While a single incident might result in minimal losses, a string of ongoing incidents is likely to exponentially increase losses- especially in qualitative areas such as reputation damage and lost future business. Despite what most of the surveys claim, there is very little evidence of correlation between single data breaches and lost business, or even stock price. For example, TJX suffered one of the largest data breaches in history, but sales increased steadily through the incident. It’s clear customers either didn’t pay attention, or felt that the security controls implemented after the incident made TJX a safer place to shop. But if TJX suffered an ongoing string of data breaches over a period of months, at some point there would be a material loss of business.

When providing a business justification for security spending, you do not need to account for every single aspect of loss. Nor do you need to show that the majority of data value is at risk. Instead, you need to understand, and be able to show, that valuable data is at risk, and examine the potential benefits of security and the reduction of loss in relation to the cost of the investment. Simple monetary damages may be small, but the potential for loss can still be considerable. If you can show that mitigating the risk vector associated with data theft also accomplishes operational goals, the argument is even stronger. If the investment also accounts for compliance controls, or makes a business process more efficient, the effort may pay for itself.

–Adrian Lane

Do You Use DLP? We Should Talk

By Rich

As an analyst, I’ve been covering DLP since before there was anything called DLP. I like to joke that I’ve talked with more people that have evaluated and deployed DLP than anyone else on the face of the planet. Yes, it’s exactly as exciting as it sounds.

But all those references were fairly self-selected. They’ve either been Gartner clients, or our current enterprise clients, who were (or are) typically looking for help with product selection or dealing with some sort of problem. Many of the rest are vendor-supplied references. This combination skews the conversations towards people picking products, people with problems, or those a vendor thinks will make them look good.

I’m currently working on an article for Information Security magazine on “Real-World DLP”, and I’m hunting for some new references to expand that field a bit. If you are using DLP, successfully or not, and are willing to talk confidentially, please drop me a line. I’m looking for real-world stories, good and bad. If you are willing to go on the record, we’re also looking for good quote sources. The focus of the article is more on implementation than selection, and will be vendor-neutral.

To be honest, one reason I’m putting this out in the open is to see if my normal reference channels are skewed. It’s time to see how our current positions and assumptions play out on the mean streets of reality.

Of course I’ll be totally pissed if I’ve been wrong this entire time and have to retract everything I’ve ever written on DLP.

Update: Oh yeah, my email address is rmogull, that is with two ‘L’s, at securosis dot com. Please let me know.

–Rich

Saturday, February 07, 2009

Friday Summary: February 6, 2009

By Adrian Lane

Here it is Friday again, and it feels like just a few minutes ago that I was writing the last Friday summary. This week has been incredibly busy for both of us. Rich has been out for the count most of this week with a stomach virus, wandering his own house like a deranged zombie. (This was not really a hack; they were just warning Rich’s neighborhood.) As the county cordoned off his house with yellow tape and flagged him as a temporary bio-hazard, I thought it best to forgo this week’s face-to-face Friday staff meeting and get back on track with our blogging. Between the business justification white paper we launched this week and being on the road for client meetings, we’re way behind. A few items of interest …

It appears that data security is really starting to enter the consciousness of the common consumer. Or at least it is being marketed to them. There were even more advertisements in the San Jose airport this week than ever: the ever-present McAfee & Cisco ads were joined by Symantec and Compuware. The supermarket has Identity Theft protection pamphlets from not one but two vendors. The cherry on top of this security sundae was the picture of John Walsh in the in-flight magazine, hawking “CA Internet Security Suite Plus 2009”. I was shocked. Not because CA has a consumer security product, or because they are being marketed along with Harry Potter commemorative wands, holiday meat platters, and low quality electronics. No, it was John Walsh’s smiling face that surprised me. Because John Walsh “Trusts CA Security Products to keep your family safe”. WTF? Part of me is highly offended. The other part of me thinks this is absolutely brilliant marketing. We have moved way beyond a features and technology discussion, and into JV-celebrities telling us what security products we should buy. If it were only that easy.

Myopia Alert: I am sure there are others in the security community who have gone off on this rant as well, but as I did not find a reference anywhere else, I thought this topic was worth posting. A word of caution for you PR types out there who are raising customer and brand awareness through security webinars and online security fora: you might want to have some empathy for your audience. If your event requires a form to be filled out, you are going to lose a big chunk of your audience because people who care about security care about their privacy as well. The audience member will bail, or your outbound call center will be dialing 50 guys named Jack Mayhoff. Further, if that entry form requires JavaScript and a half dozen cookies, you are going to lose a bigger percentage of your audience because JavaScript is a feature and a security threat rolled into one. Finally, if the third-party vendor you use to host the event does not support Firefox or Safari, you will lose the majority of your audience. I am not trying to be negative, but want to point out that while Firefox, Safari and Opera may only constitute 25% of the browser market, they are used by 90% of the people who care about security.

Final item I wanted to talk about: our resident wordsmith and all-around good guy Chris Pepper forwarded Rich and me a Slashdot link about how free Monty Python material on YouTube has caused their DVD sales to skyrocket. Both Rich and I have raised similar points here in the past, and we even referred to this phenomenon in the Business Justification for Security Spending paper, in discussing why it can be hard to understand damages. While organizations like the RIAA feel this is counter-intuitive, it makes perfect sense to me and anyone else who has ever tried guerilla marketing, or seen the effects of viral marketing. Does anyone know if the free South Park episodes did the same for South Park DVD sales? I would be interested. Oh, and Chris also forwarded Le Wrath di Kahn, which was both seriously funny and really works as opera (the art form- I didn’t test it in the browser).

On to the week in review:

Webcasts, Podcasts, Outside Writing, and Conferences:

  • Rich did a webcast with Roxana Bradescu of Oracle on Information Security for Database Professionals. Here is the sign-up link, and I will post a replay link later when I get one from Oracle.

Favorite Securosis Posts:

  • Rich: Launch of the Business Justification for Security Spending white paper. Whew!
  • Adrian: The Business Justification for Data Security post on Risk Estimation. I knew this was going to cause some of the risk guys to go apoplectic, but we were not building a full-blown risk management model, and frankly, risk calculations made every model so complex no one could use it as a tool.

Favorite Outside Posts:

Top News and Posts:

Blog Comment of the Week:

From Chris Hayes on the Risk Estimation for Business Justification for Data Security post:

Up to this point in the series, this “business justification” security investment model appears to be nothing more than a glorified cost benefit analysis wrapped up in risk/business terminology with a little bit of controls analysis thrown in.

I hope that the remaining posts will include something along the lines of “business justification” in the context of business goal alignment, risk tolerance levels of decision makers, and differentiating between vulnerability, risk, threat event frequency, and loss event frequency- terms which your model dances around.

I look forward to reading the remaining posts before passing final judgment. In the mean time, I would encourage readers to take a look at the FAIR methodology as well as look at the Open Group’s recently published “Risk Taxonomy” technical standard.

As opposed to what? Lots to consider on this topic.

It’s a really nice day here in Phoenix, so it is time to go outside and enjoy some sun.

–Adrian Lane

Database Security for DBAs

By Rich

I think I’ve discovered the perfect weight loss technique: a stomach virus. In 48 hours I managed to lose 2 lbs, which isn’t too shabby. Of course I’m already at something like 10% body fat, so I’m not sure how needed the loss was, but I figure if I just write a book about this and hawk it in some infomercial I can probably retire. My wife, who suffered through 3 months of so-called “morning” sickness, wasn’t all that sympathetic for some strange reason.

On that note, it’s time to shift gears and talk about database security. Or, to be more accurate, talk about talking about database security.

Tomorrow (Thursday Feb 5th) I will be giving a webcast on Database Security for Database Professionals. This is the companion piece to the webinar I recently presented on Database Security for Security Professionals. This time I flip the presentation around and focus on what the DBA needs to know, presenting from their point of view.

It’s sponsored by Oracle, presented by NetworkWorld, and you can sign up here.

I’ll be posting the slides after the webinar, but not for a couple of months as we reorganize the site a bit to better handle static content. Feel free to email me if you want a PDF copy.

–Rich

Friday, February 06, 2009

The Business Justification for Data Security- Version 1.0

By Rich

We’ve been teasing you with previews, but rather than handing out more bits and pieces, we are excited to release the complete version of the Business Justification for Data Security.

This is version 1.0 of the report, and we expect it to continue to evolve as we get more public feedback. Based on some of that initial feedback, we’d like to emphasize something before you dig in. Keep in mind that this is a business justification tool, designed to help you align potential data security investments with business needs, and to document the justification to make a case with those holding the purse strings. It’s not meant to be a complete risk assessment model, although it does share many traits with risk management tools.

We’ve also designed this to be both pragmatic and flexible- you shouldn’t need to spend months with consultants to build your business justification. For some projects, you might complete it in an hour. For others, maybe a few days or weeks as you wrangle business unit heads together to force them to help value different types of information.

For those of you who don’t want to read a 38-page paper, we’re going to continue to post the guts of the model as blog posts, and we also plan on blogging additional content, such as more examples and use cases.

We’d like to especially thank our exclusive sponsor, McAfee, who also set up a landing page here with some of their own additional whitepapers and content. As usual, we developed the content completely independently, and it’s only thanks to our sponsors that we can release it for free (and still feed our families). This paper is also released in cooperation with the SANS Institute, will be available in the SANS Reading Room, and we will be delivering a SANS webcast on the topic on March 17th.

This was one of our toughest projects, and we’re excited to finally get it out there. Please post your feedback in the comments, and we will be crediting reviewers that advance the model when we release the next version.

And once again, thanks to McAfee, SANS, and (as usual) Chris Pepper, our fearless editor.

–Rich