Tuesday, February 03, 2009

The Business Justification for Data Security: Risk Estimation

By Adrian Lane

This is the third part of our Business Justification for Data Security series (Part 1, Part 2), and if you have been following the series this far, you know that Rich and I have complained about how difficult this paper was to write. Our biggest problem was fitting risk into the model. In fact, we experimented with and ultimately rejected a couple of models because the reduction of risk vs. any given security investment was non-linear. And there were many threats and many different responses, few of which were quantifiable, making the whole effort 'guesstimate' soup. In the end, risk became our 'witching rod': a guide to how we balance value vs. loss, but just one of the tools we use to examine investment decisions.

Measuring and understanding the risks to information

If data security were a profit center, we could shift our business justification discussion from the value of information right into assessing its potential for profit. But since that isn't the case, we are forced to examine potential reductions in value as a guide to whether action is warranted. The approach we need to take is to understand the risks that directly threaten the value of data and the security safeguards that counter those risks.

There’s no question our data is at risk; from malicious attackers and nefarious insiders to random accidents and user errors, we read about breaches and loss nearly every day. But while we have an intuitive sense that data security is a major issue, we have trouble getting a handle on the real risks to data in a quantitative sense. The number of possible threats and ways to steal information is staggering, but when it comes to quantifying risks, we lack much of the information needed for an accurate understanding of how these risks impact us.

Combining quantitative and qualitative risk estimates

We'll take a different approach to looking at risk: we will focus on quantifying the things that we can, qualifying the things we can't, and combining them in a consistent framework. While we can measure some risks, such as the odds of losing a laptop, it's nearly impossible to measure other risks, such as a database breach via a web application due to a new vulnerability. If we limit ourselves only to what we can precisely measure, we won't be able to account for many real risks to our information. We still include quantitative assessments wherever possible, since they are a powerful tool for understanding risk, influencing decisions, and validating the overall model.

For our business justification model, we deliberately simplify the risk assessment process to give us just what we need to understand the need for data security investments. We start by listing the pertinent risk categories, then the likelihood or annual rate of occurrence for each risk, followed by severity ratings broken out for confidentiality, integrity, and availability. For risk events we can predict with reasonable accuracy, such as lost laptops with sensitive information, we can use real numbers. In the example below, we know the Annualized Rate of Occurrence (ARO), so we plug that value in. For less predictable risks, we just rate them from "low" to "high". We then mark off our currently estimated (or measured) levels in each category. For qualitative measures we will use a 1-5 scale, but this is arbitrary; use whatever scale provides a level of granularity that assists understanding.

Risk Estimation: Credit Card Data (Sample):

Risk                    Likelihood/ARO   C   I   A   Total
Lost Laptop             43               4   1   3   51
Database Breach (Ext)   2                5   3   2   12

(C/I/A = severity of impact on Confidentiality, Integrity, and Availability)

This is the simplified risk scorecard for the business justification model. The totals aren't meant to compare one risk category to another, but to derive estimated totals we will use in our business justification to show potential reductions from the evaluated investment. While different organizations face different risk categories, we've included the most common data security risks here, and in Section 6 we show how it integrates into the overall model.
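
To make the scorecard concrete, here is a minimal sketch of how the rows above could be represented and totaled. The class and field names, the simple additive total, and the mixed ARO/1-5 likelihood column are illustrative assumptions, not a prescribed formula:

```python
# A minimal sketch of the risk scorecard above. The additive total and
# the mixed ARO / 1-5 likelihood column are illustrative assumptions;
# use whatever scale and combination rule make sense for you.
from dataclasses import dataclass

@dataclass
class RiskEstimate:
    risk: str
    likelihood: float     # ARO where we can measure it, otherwise a 1-5 rating
    confidentiality: int  # severity, 1-5
    integrity: int        # severity, 1-5
    availability: int     # severity, 1-5

    @property
    def total(self) -> float:
        # Totals compare the same risk before and after an investment,
        # not one risk category against another.
        return self.likelihood + self.confidentiality + self.integrity + self.availability

scorecard = [
    RiskEstimate("Lost Laptop", 43, 4, 1, 3),           # total 51
    RiskEstimate("Database Breach (Ext)", 2, 5, 3, 2),  # total 12
]

for r in scorecard:
    print(f"{r.risk}: {r.total}")
```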

Common data security risks

The following is an outline of the major categories for information loss. Any time you read about a data breach, one or more of these events occurred. This list isn't intended to be comprehensive, but rather to provide a good overview of common data security risk categories to give you a jump start on implementing the model. Rather than discuss each and every threat vector, we will present logical groups to illustrate that the risks and potential solutions tend to be very similar within each specific category. The following are the principal categories to consider:

Lost Media

This category describes data at rest, residing on some form of media, that has been lost or stolen. Media includes disk drives, tape, USB/memory sticks, laptops, and other devices. This category encompasses the majority of cases of data loss. Typical security measures for this class include media encryption, media "sanitizing", and in some cases endpoint Data Loss Prevention technology.

  • Lost disks/backup tape
  • Lost/stolen laptop
  • Information leaked through decommissioned servers/drives
  • Lost memory stick/flash drive
  • Stolen servers/workstations

Inadvertent Disclosure

This category includes data being accidentally exposed in some way that leads to unwanted disclosure. Examples include email to unwanted recipients, posting confidential data to web sites, unsecured Internet transmissions, lack of access controls, and the like. Safeguards include email & web security platforms, DLP, and access control systems. Each is effective, but only against certain threat types. Process and workflow controls are also needed to help catch human error.

  • Data accidentally leaked through email (Sniffed, wrong address, un-purged document metadata)
  • Data leaked by inadvertent exposure (Posted to the web, open file shares, unprotected FTP, or otherwise placed in an insecure location)
  • Data leaked by unsecured connection
  • Data leaked through file sharing (file sharing programs are used to move large files efficiently, and possibly illegally)

External Attack/Breach

This category describes instances of data theft where company systems and applications are compromised by a malicious attacker, affecting confidentiality and integrity. Typical attacks include compromised accounts/passwords, SQL injection, buffer overflows, web site attacks, trojans, viruses, network "sniffers", and others. Successful compromise often results in installation of additional malicious code. While not the most frequent, this category includes the most damaging data breaches and is most likely to result in fraud. Any security precaution may assist in detection; but assessment, penetration testing, data encryption, and application security are common preventative controls, with application & database monitoring, WAF, and flow-based detection popular as detective controls.

  • Data theft through compromised account (weak passwords)
  • Database breach (Databases are extraordinarily complex applications. The term “database breach” applies to many different types of attacks on a database server)
  • Web application breach (logic flaw, exploit)
  • Database breach by insider (employee, partner, contractor)
  • Breach via compromised endpoint

Remember that this evaluation is risk-based; we'll cover potential loss measurements in the next section. While this might seem counterintuitive, this method allows us to account for security controls that reduce potential losses from multiple risk categories and reduce complexity. Remember - we are focusing on business justification, not a comprehensive risk management system. We wanted to couple quantifiable and qualitative elements; otherwise every justification project would become a 2-year risk assessment.

–Adrian Lane

Saturday, January 31, 2009

Friday Summary - Jan 30, 2009

By Adrian Lane

A couple of people forwarded me this interview, and if you have not read it, it is really worth your time. It’s an amazing interview with Matt Knox, a developer with Direct Revenue who authored adware during his employ with them. For me this is important as it highlights stuff I figured was going on but really could not prove. It also exposes much of the thought process behind the developers at Microsoft, and it completely altered my behavior for ’sanitizing’ my PCs. For me, this all started a few years ago (2005?) when my Windows laptop was infected with this stuff. I discovered something was going on because there was ongoing activity in the background when the machine was idle, and it started to affect machine responsiveness.

The mysterious performance degradation was difficult to find as I could not locate a specific application responsible, and the process monitors provided with Windows are wholly inadequate. I found that there were processes running in the background unassociated with any application, and unassociated with Windows. I did find files that were associated with these processes, and it was clear they did not belong on the machine. When I went to delete them, they popped up again within minutes- with new names! I was able to find multiple registry entries, and the behavior suggested that multiple pieces of code monitored each other for health and availability, and fixed each other if one was deleted. Even if I booted in safe mode I had no confidence that I could completely remove this … whatever it was … from the machine. At that point in time I knew I needed to start over.

How this type of software could have gotten into the registry and installed itself in such a manner was perplexing to me. Being a former OS developer, I started digging, and that’s when I got mad. Mr. Knox uses the word ‘promiscuous’ to describe the OS calls, and that was exactly what it was. There were API calls to do pretty much anything you wanted to do, all without so much as a question being asked of the user or of the installing party. You get a clear picture of the mentality of the developers who wrote the IE and Windows OS code back then- there were all sorts of nifty ways to ‘do stuff’, for anyone who wanted to, and not a shred of a hint of security. All of these ‘features’ were for someone else’s benefit! They could use my resources at will- as if they had the keys to my house, and when I left, they were throwing a giant party at my expense. What really fried me was that, while I could see these processes and registry entries, none of the anti-virus or anti-malware tools would detect them. So if I wanted to secure my machine, it was up to me to do it.

So I said this changed my behavior. Here’s how:

  • Formatted the disk and reinstalled the OS
  • Switched to Firefox full time. A few months later I discovered Flashblock and NoScript.
  • I stopped paying for desktop anti-virus and used free stuff or nothing at all. It didn’t work for the desktop, and email AV addressed my real concern.
  • I found a process monitor that gave me detailed information on what was running and what resources were being used.
  • I cataloged every process on the Windows machine, and kept a file that described each process’ function so I could cross-check and remove stuff that was not supposed to be there.
  • I began manually starting everything (non-core) through the services panel if I needed it. Not only did this help me detect stuff that should not be running, it reduced risks associated with poorly secured applications that leave services sitting wide open on a port.
  • Uninstalled WebEx, RealPlayer, and several other suspects after using them.
  • I kept all of my original software nearby and planned to re-install, from CD or DVD, fresh every year. Until I got VMware.
  • I used a virtual partition for risky browsing whenever possible.

I now use a Mac, and run my old licensed copies of Windows in Parallels. Surprised?

Here is the week’s security summary:

Webcasts, Podcasts, Outside Writing, and Conferences:

Favorite Securosis Posts:

Favorite Outside Posts:

Top News and Posts:

Blog Comment of the Week:

Good comment from Jack Pepper on “PCI isn’t meant to protect cardholder …” post:

“Why is this surprising? the PCI standard was developed by the card industry to be a “bare minimum” standard for card processing. If anyone in the biz thinks PCI is more that “the bare minimum standard for card processing”, they are mistaken. I tell people that PCI compliance is like a high school diploma: if you don’t have one, people suspect you’re an idiot. If you do have one, no one is impressed.”

Dead on target. Until next week…

–Adrian Lane

Friday, January 30, 2009

Policies and Security Products

By Adrian Lane

Where do the policies in your security product come from? With the myriad of tools and security products on the market, where do the pre-built policies come from? I am not speaking of AV in this post- rather I'm looking at IDS, VA, DAM, DLP, WAF, pen testing, SIEM, and many others that use a set of policies to address security and compliance problems. The question is: who decides what is appropriate? In every sales engagement, customer meeting, and analyst meeting I have ever participated in for security products, this question came up.

This post is intended more for IT professionals who are considering security products, so I am gearing it for that audience. When drafting the web application security program series last month, a key topic that kept coming up over and over again from security practitioners was: “How can you recommend XYZ security solution when you know that the customer is going to have to invest a lot for the product, but also a significant amount in developing their own policy set?” This is both an accurate observation and the right question to be asking. While we stand by our recommendations for reasons stated in the original series, it would be a disservice to our IT readers if we did not discuss this in greater detail. The answer is an important consideration for anyone selecting a security tool or suite.

When I used to develop database security products, policy development was one of the tougher issues for us to address on the vendor side. Once aware of a threat, on average it took 2.5 ‘man-days’ to develop a policy with a test case and complete remediation information [prior to QA]. This becomes expensive when you have hundreds of policies being developed for different problem sets. Policy coverage and how policies were generated were common competitive topics, and a basic function of the product, so almost every vendor invests heavily in this area. Moreover, most vendors market their security ‘research teams’ that find exploits, develop test code, and provide remediation steps. This domain expertise is one of the areas where vendors provide value in the products that they deliver, but when it comes down to it, vendor insight is a fraction of the overall source of information. With monitoring and auditing, policy development was even harder: the business use cases were more diverse and the threats not completely understood. Sure, we could return the ubiquitous who-what-when-where-to-from kind of stuff, but how did that translate to business need?

If you are evaluating products or interested in augmenting your policy set, where do you start? With vulnerability research, there are several resources that I like to use:

Vendor best practices - Almost every platform vendor, from Apache to SAP, offers security best practices documents. These guidelines on how to configure and operate their product form the basis for many programs. They cover operational issues that reduce risk, discuss common exploits, and reference specific security patches. These documents are updated during each major release cycle, so make sure you periodically review them for new additions, or for how the vendor recommends new features be configured and deployed. What’s more, while the vendor may not be forthcoming with exploit details, they are the best source of information for remediation and patch data.

CERT/Mitre - Both have fairly comprehensive lists of vulnerabilities in specific products. Both provide a neutral description of what the threat is. Neither has great detail on the actual exploit, nor complete remediation information. It is up to the development team to figure out the details.

Customer feedback/peer review - If you are a vendor of security products, customers have applied the policies and know what works for them. They may have modified the code that you use to remediate a situation, and that may be a better solution than what your team implemented, and/or it may be too specific to their environment for use in a generalized product. If you are running your own IT department, what have your peers done? Next time you are at a conference or user group, ask. Regardless, vendors learn from their other customers what works to address issues, and you can too.

3rd party relationships (consultants, academia, auditors) - When it comes to development of policies related to GLBA or SOX, which are outside the expertise of most security vendors, it’s particularly valuable to leverage third party consulting relationships to augment policies with their deep understanding of how best to approach the problem. In the past I have used relationships with major consulting firms to help analyze the policies and reports we provided. This was helpful, as they really did tell us when some of our policies were flat out bull$(#!, what would work, and how things could work better. If you have these relationships already in place, carve out a few hours so they can help review and analyze policies.

Research & Experience - Most companies have dedicated research teams, and this is something you should look for. They do this every day and they get really good at it. If your vendor has a recognized expert in the field on staff, that’s great too. That person may be quite helpful to the overall research and discovery process of threats and problems with the platforms and products you are protecting. The reality is that they are more likely on the road speaking to customers, press and analysts rather than really doing the research. It is good that your vendor has a dedicated team, but their experience is just one part of the big picture.

User groups - With many of the platforms, especially Oracle, I learned a lot from regional DBAs who supported databases within specific companies or specific verticals. In many cases they did not have or use a third party product, rather they had a bunch of scripts that they had built up over many years, modified, and shared with others. They shared tips on not only what they were required to do, but how they implemented them. This typically included the trial-and-error discussion of how a certain script or policy was evolved over time to meet timeliness or completeness of information requirements from other team members. Use these groups and attend regional meetings to get a better idea of how peers solve problems. Amazing wealth of knowledge, freely shared.

General frameworks - To meet compliance efforts, frameworks commonly provide checklists for compliance and security. The bad news is that the lists are generic, but the good news is they provide a good start for understanding what you need to consider, and help you prepare for pre-vendor engagements and POCs.

Compliance - Policies are typically created to manage compliance with existing policies or regulations. Compliance requirements allow some latitude in how you interpret how PCI or FISMA applies to your organization. What works, how it is implemented, what the auditors find suitable, and what is easy for them to use all play a part in the push & pull of policy development, and this is one of the primary reasons to consider this effort an added expense of deploying third party products.

I want to stress that you should use this as a guide to review the methods that product vendors use to develop their policies, but my intention is to make sure you clearly understand that you will need to develop your own as well. In the case of web application security, it’s your application, and it will be tough to avoid. This post may help you dig through vendor sales and marketing literature to determine what can really help you and what is “pure puffery”, but ultimately you need to consider the costs of developing your own policies for the products you choose. Why? You can almost never find off-the-shelf policies that meet all of your needs. Security or compliance may not be part of your core business, and you may not be a domain expert in all facets of security, but for certain key areas I recommend that you invest in supplementing the off-the-shelf policies included with your security tools. Policies are best if they are yours, grounded in your experience, and tuned to your organizational needs. They provide historical memory, and form a knowledge repository for other company members to learn from. Policies can guide management efforts, assurance efforts, and compliance efforts. Yes, this is work, and potentially a lot of work paid in increments over time. If you do not develop your own policies, and this type of effort is not considered within your core business, then you are reliant on third parties (service providers or product vendors) for the production of your policies.

Hopefully you will find this helpful.

–Adrian Lane

Submit A Top Ten Web Hacking Technique

By Rich

Last week Jeremiah Grossman asked if I’d be willing to be a judge to help select the Top Ten Web Hacking Techniques for 2008, along with Chris Hoff (not sure who that is), H D Moore, and Jeff Forristal.

Willing? Heck, I’m totally, humbly, honored.

This year’s winner will receive a free pass to Black Hat 2009, which isn’t too shabby.

We are up to nearly 70 submissions, so keep ‘em coming.

–Rich

The Most Powerful Evidence That PCI Isn’t Meant To Protect Cardholders, Merchants, Or Banks

By Rich

I just read a great article on the Heartland breach, which I’ll talk more about later. There is one quote in there that really stands out:

End-to-end encryption is far from a new approach. But the flaw in today's payment networks is that the card brands insist on dealing with card data in an unencrypted state, forcing transmission to be done over secure connections rather than the lower-cost Internet. This approach avoids forcing the card brands to have to decrypt the data when it arrives.

While I no longer think PCI is useless, I still stand by the assertion that its goal is to reduce the risks of the card companies first, and only peripherally reduce the real risk of fraud. Thus cardholders, merchants, and banks carry both the bulk of the costs and the risks. And here’s more evidence of its fundamental flaws.

Let’s fix the system instead of just gluing on more layers that are more costly in the end. Heck, let’s bring back SET!

–Rich

Thursday, January 29, 2009

The Network Security Podcast, Episode 136

By Rich

I managed to constrain my rants this week, staying focused on the issues as Martin and I covered our usual range of material. I think we were in top form in the first part of the show, where we focused on the economics of breaches and discussed loss numbers vs. breach notification statistics.

Here are the show notes, and as usual the episode is here: Network Security Podcast, Episode 136, January 27, 2009 Time: 27:43

Show Notes:

–Rich

Inherent Role Conflicts In National Cybersecurity

By Rich

I spent a lot of time debating with myself if I should wade into this topic. Early in my analyst career I loved to talk about national cybersecurity issues, but I eventually realized that, as an outsider, all I was doing was expending ink and oxygen, and I wasn’t actually contributing anything. That’s why you’ve probably noticed we spend more time on this blog talking about pragmatic security issues and dispensing practical advice than waxing poetic about who should get the Presidential CISO job or dispensing advice to President Obama (who, we hate to admit, probably doesn’t read the blog). Unless or until I, or someone I know, gets “the job”, I harbor no illusions that what I write and say reaches the right ears.

But as a student of history, I’m fascinated by the transition we, of all nations, face due to our continuing reliance on the Internet to run everything from our social lives, to the global economy, to national defense. Rather than laying out my 5 Point Plan for Solving Global Cyber-Hunger and Protecting Our Children, I’m going to talk about some more generic issues that I personally find compelling.

One of the more interesting problems, and one that all nations face, is the inherent conflicts between the traditional roles of those that safeguard society. Most nations rely on two institutions to protect them- the military and the police.

The military serves two roles: to protect the institution of the nation state from force, and to project power (protecting national assets, including lines of commerce, that extend outside national boundaries). Militaries are typically externally focused entities, even in fascist states, that only turn inward when domestic institutions don’t have the capacity to manage situations- though they play a variable domestic role even in the most liberal of democratic societies.

The police also hold dual roles: to enforce the law, and ensure public safety. Of course the law and public safety overlap to different degrees in different political systems.

Seems simple enough, and fundamentally these institutions have existed since nearly the dawn of society. Even when it appears that the institutions are one and the same, that’s typically in name only since the skills sets involved don’t completely overlap, especially in the past few hundred years. Cops deal with crime, soldiers with war.

The Internet is blasting those barriers, and we have yet to figure out how to structure the roles and responsibilities to deal with Internet-based threats. The Internet doesn’t respect physical boundaries, and its anonymity disguises actors. The exact same attack by the exact same threat actor could be either a crime, or an act of war, depending on the perspective. One of the core problems we face in cybersecurity today is structuring the roles and responsibilities for those institutions that defend and protect us. With no easy lines, we see ongoing turf battles and uncoordinated actions.

The offensive role is still relatively well defined- it’s a responsibility of the military, should be coordinated with physical power projection capacity, and the key issue is over which specific department has responsibility. There’s a clear turf battle over offensive cyber operations here in the U.S., but that’s normal (explaining why every service branch has their own Air Force, for example). I do hope we get our *%$& together at some point, but that’s mere politics.

The defensive role is a mess. Under normal circumstances the military protects us from external threats, and law enforcement from internal threats (yes, I know there are grey areas, but roll with me here). Many/most cyberattacks are criminal acts, but that same criminal act may also be a national security threat. We can usually classify a threat by action, intent, and actor. Is the intent financial gain? Odds are it’s a crime. Is the actor a nation state? Odds are it’s a national security issue. Does the action involve tanks or planes crossing a border? It’s usually war. (Terrorism is one of the grey areas- some say it’s war, others crime, and others a bit of both depending on who is involved).

But a cyberattack? Even if it’s from China it might not be China acting. Even if it’s theft of intellectual property, it might not be a mere crime. And just who the heck is responsible for protecting us? Through all of history the military responds through use of force, but you don’t need me to point out how sticky a situation that is when we’re talking cyberspace. Law enforcement’s job is to catch the bad guys, but they aren’t really designed to protect national borders, never mind non-existent national borders. Intelligence services? It isn’t like they are any better aligned. And through all this I’m again shirking the issues of which agencies/branches/departments should have which responsibilities.

Thus we need to start thinking a little differently, and we may find that we need to develop new roles and responsibilities as we drive deeper into the information age. Cybersecurity isn’t only a national security problem or a law enforcement problem, it’s both. We need some means to protect ourselves from external attacks of different degrees at the national level, since just telling every business to follow best practices isn’t exactly working out. We need a means of projecting power that’s short of war, since playing defense only is a sure way to lose. And right now, most countries can’t figure out who should be in charge or what they should be doing. I highly suspect we’ll see new roles develop, especially in the area of counter-intelligence style activity to disrupt offensive operations ranging from taking out botnets, to disrupting cybercrime economies, to counterespionage issues relating to private business.

As I said in the beginning, this is a fascinating problem, and one I wish I was in a position to contribute towards, but Phoenix is a bit outside the Beltway, and no one will give me the President’s new Blackberry address. Even after I promised to stop sending all those LOLCatz forwards.

–Rich

Wednesday, January 28, 2009

The Business Justification for Data Security: Information Valuation Examples

By Rich

In our last post, we mentioned that we’d be giving a few examples for data valuation. This is the part of the post where I try and say something pithy, but I’m totally distracted by the White House press briefing on MSNBC, so I’ll cut to the chase:

As a basic exercise, let's take a look at several common data types, discuss how they are used, and qualify their value to the organization. Several of these clearly have a high value to the organization, but others vary. Frequency of use and audience are different for every company. Before you start deriving values, you need to sit down with executives and business unit managers to find out what information you rely on in the first place, then use these valuation scenarios to help rank the information, and then feed the rest of the justification model.

Credit card numbers

Holding credit card data is essential for many organizations - it's a common requirement for dispute resolution, and because most merchants sell products on the Internet, card data is subject to PCI DSS requirements. In addition to serving this primary function, customer support and marketing metrics derive value from the data. This information is used by employees and customers, but not shared with partners.

Data                 Value   Frequency   Audience
Credit Card Number   4       2           3

Healthcare information (financial)

Personally Identifiable Information is a common target for attackers, and a key element for fraud since it often contains financial or identifying information. For organizations such as hospitals, this information is necessary and used widely for treatment. While the access frequency may be moderate (or low, when a patient isn't under active treatment), it is used by patients, hospital staff, and third parties such as clinicians and insurance personnel.

Data             Value   Frequency   Audience
Healthcare PII   5       3           4

Intellectual property

Intellectual Property can take many forms, from patents to source code, so the values associated with this type of data vary from company to company. In the case of a publicly traded company, this may be project-related or investment information that could be used for insider trading. The value would be moderate for the employees that use this information, but high near the end of the quarter and other disclosure periods, when it’s also exposed to a wider audience.

Data                               Value   Frequency   Audience
Financial IP (normal)              3       2           1
Financial IP (disclosure period)   5       2           2

Trade secrets

Trade secrets are another data type to consider. While the audience may be limited to a select few individuals within the company, with low frequency of use, the business value may be extraordinarily high.

Data            Value   Frequency   Audience
Trade Secrets   5       1           1

Sales data
The value of sales data for completed transactions varies widely by company. Pricing, customer lists, and contact information are used widely throughout and between companies. In the hands of a competitor, this information could pose a serious threat to sales and revenue.

Data         Value   Frequency   Audience
Sales Data   2       5           4


Customer Metrics
The value of customer metrics varies radically from company to company. Credit card issuers, for example, may rate this data as having moderate value as it is used for fraud detection as well as sold to merchants and marketers. The information is used by employees and third party purchasers, and provided to customers to review spending.

Data               Value   Frequency   Audience
Customer Metrics   4       2           3

You can create more categories, and even bracket dollar value ranges if you find them helpful in assigning relative value to each data type in your organization. But we want to emphasize that these are qualitative and not quantitative assessments, and they are relative within your organization rather than absolute. The point is to show that your business uses many forms of information. Each type is used for different business functions and has its own value to the organization, even if it is not in dollars.
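
For those who like to see it spelled out, here is a rough sketch that pulls the example ratings above into one place and ranks them, with value as the primary indicator and frequency plus audience as modifiers. The ranking rule itself is an arbitrary illustration, not part of the model:

```python
# Rough sketch: collect the example ratings above and rank them, with
# value as the primary indicator and frequency + audience as modifiers.
# The ranking rule is an arbitrary illustration, not part of the model.
examples = {
    # name: (value, frequency, audience), each rated 1-5
    "Credit Card Number":               (4, 2, 3),
    "Healthcare PII":                   (5, 3, 4),
    "Financial IP (normal)":            (3, 2, 1),
    "Financial IP (disclosure period)": (5, 2, 2),
    "Trade Secrets":                    (5, 1, 1),
    "Sales Data":                       (2, 5, 4),
    "Customer Metrics":                 (4, 2, 3),
}

def rank_key(item):
    value, frequency, audience = item[1]
    # Value dominates; broader and more frequent use nudges the ranking.
    return (value, frequency + audience)

for name, (value, frequency, audience) in sorted(examples.items(), key=rank_key, reverse=True):
    print(f"{name}: value={value}, frequency={frequency}, audience={audience}")
```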

–Rich

The Business Justification For Data Security: Data Valuation

By Rich

Man, nothing feels better than finishing off a few major projects. Yesterday we finalized the first draft of the Business Justification paper this series is based on, and I also squeezed out my presentation for IT Security World (in March) where I’m talking about major enterprise software security. Ah, the thrills and spills of SAP R/3 vs. Netweaver security!

In our first post we provided an overview of the model. Today we’re going to dig into the first step- data valuation. For the record, we’re skipping huge chunks of the paper in these posts to focus on the meat of the model- and our invitation for reviewers is still open (official release date should be within 2 weeks).

We know our data has value, but we can't assign a definitive or fixed monetary value to it. We want to use the value to justify spending on security, but trying to tie it to purely quantitative models for investment justification is impossible. We can use educated guesses but they're still guesses, and if we pretend they are solid metrics we're likely to make bad risk decisions. Rather than focusing on difficult (or impossible) to measure quantitative value, let's start our business justification framework with qualitative assessments. Keep in mind that just because we aren't quantifying the value of the data doesn't mean we won't use other quantifiable metrics later in the model. Just because you cannot completely quantify the value of data, that doesn't mean you should throw all metrics out the window.

To keep things practical, let's select a data type and assign an arbitrary value to it. To keep things simple you might use a range of numbers from 1 to 3, or "Low", "Medium", and "High" to represent the value of the data. For our system we will use a range of 1-5 to give us more granularity, with 1 being a low value and 5 being a high value.

Another two metrics help account for business context in our valuation: frequency of use and audiences. The more often the data is used, the higher its value (generally). The audience may be a handful of people at the company, or may be partners & customers as well as internal staff. More use by more people often indicates higher value, as well as higher exposure to risk. These factors are important not only for understanding the value of information, but also the threats and risks associated with it – and so our justification for expenditures. These two items will not be used as primary indicators of value, but will modify an “intrinsic” value we will discuss more thoroughly below. As before, we will assign each metric a number from 1 to 5, and we suggest you at least loosely define the scope of those ranges. Finally, we will examine three audiences that use the data: employees, customers, and partners; and derive a 1-5 score.

The value of some data changes based on time or context, and for those cases we suggest you define and rate it differently for the different contexts. For example, product information before product release is more sensitive than the same information after release.

As an example, consider student records at a university. The value of these records is considered high, and so we would assign a value of five. While the value of this data is considered "High" as it affects students financially, the frequency of use may be moderate because these records are accessed and updated mostly during a predictable window – at the beginning and end of each semester. The number of audiences for this data is two, as the records are used by various university staff (financial services and the registrar's office), and the student (customer). Our tabular representation looks like this:

Data             Value   Frequency   Audience
Student Record   5       2           2
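
As a loose illustration of defining the scope of those ranges, here is a small sketch that derives the frequency and audience scores for the student record example. The bucket boundaries and the monthly access figure are hypothetical assumptions; only the resulting 5/2/2 ratings come from the example above:

```python
# Loosely defining the 1-5 ranges, as suggested above. The buckets and
# the monthly access figure are hypothetical; only the resulting 5/2/2
# ratings come from the student record example.

def audience_score(employees: bool, customers: bool, partners: bool) -> int:
    # Simplest mapping: one point per audience group that uses the data.
    return sum([employees, customers, partners])

def frequency_score(accesses_per_month: int) -> int:
    # Hypothetical buckets, from rarely used (1) to constantly used (5).
    for limit, score in [(10, 1), (100, 2), (1_000, 3), (10_000, 4)]:
        if accesses_per_month <= limit:
            return score
    return 5

student_record = {
    "value": 5,  # qualitative: high value, affects students financially
    "frequency": frequency_score(accesses_per_month=80),                          # -> 2
    "audience": audience_score(employees=True, customers=True, partners=False),   # -> 2
}
print(student_record)  # {'value': 5, 'frequency': 2, 'audience': 2}
```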

In our next post (later today) we’ll give you more examples of how this works.

–Rich

Tuesday, January 27, 2009

Credit Card (Paper) Security Fail

By Rich

I’m consistently impressed with the stupidity of certain financial institutions. Take credit card companies and the issuing banks. We’re in the middle of a financial meltdown driven by failures in the credit system and easy credit, yet you still can’t check out at Target (or nearly anyplace else) without the annoying offer for your 10% discount if you just apply for a card on the spot.

I also hate the “checks” they are always mailing me to transfer balances or otherwise use a credit for something I might use cash for. Any fraudster getting his or her hands on them can have a field day.

That’s why I’m highly amused by the latest offer to my wife. The envelope arrived with her name and address on the outside, and someone else’s pre-printed checks on the inside.

I guess the sorting machine mixed things up, and hopefully her checks went to someone trustworthy.

–Rich

Saturday, January 24, 2009

Friday Summary- January 23, 2009

By Rich

Warning- today’s introduction includes my political views.

History

Whatever your political persuasion, there’s no denying the magnitude of this week. While we are far from eliminating racism and bias in this country, or the world at large, we passed an incredibly significant milestone in civil rights. My (pregnant) wife and I were sitting on the couch, watching a replay of President Obama’s speech, when she turned to me and said, “you know, our child will never know a world where we didn’t have a black president”.

Change

One thing I think we here in the US forget is just how much we change with the transition to each new administration, especially when control changes hands between parties. We see it as the usual continuity of progress, but it’s very different to the outside world. In my travels to other countries I’m amazed at their amazement at just how quickly we, as a nation, flip and flop. In the matter of a day our approach to foreign policy completely changes- never mind domestic affairs. We have an ability to completely remake ourselves to the world. It’s a hell of a strategic advantage, when you really think about it.

In a matter of 3 days we’re seeing some of the most material change since the days of Nixon. Our government is reopening, restoring ethical boundaries, and reintroducing itself to the world.

Faith

When Bush was elected in 2000 I was fairly depressed. He seemed so lacking in capacity I couldn’t understand his victory. Then, after 9/11, I felt like I was living in a different country. An angry country, that no longer respected diversity of belief or tolerance. A country where abuse of power and disdain for facts and transparency became the rule of our executive branch, if not (immediately) the rule of law.

I was in Moscow during the election and was elated when Obama won, despite the almost surreal experience of being in a rival nation. When I watched the inauguration I felt, for the first time in many years, that I again lived in the country I thought I grew up in- my faith restored.

Talking with my friends of all political persuasions, it’s clear that this is also a transition of values. Transparency is back; something sorely lacking from both the public and private sector for far longer than Bush was in office. Accountability and sacrifice are creeping their heads over the wall. And lurking along the edges of the dark clouds above us are self-sacrifice and unity of purpose. I’m excited. I’m more excited about what this means to our daily and professional lives than just our governance. Will my hopes be dashed by reality? Probably, but I’d rather plunge in head first than cower at home, shopping off Amazon.

Oh- and there was like this really huge security breach this week, some worm is running rampant and taking over all our computers, and some idiots keep downloading pirated software with a Mac trojan.

Here is the week’s security summary:

Webcasts, Podcasts, Outside Writing, and Conferences:

Favorite Securosis Posts:

Favorite Outside Posts:

  • Adrian: Hoff’s ruminating on Cloud security of Core services. The series of posts has been interesting. I follow many of these blog posts made on dozens of different web sites, but only for the occasionally humorous debate. Not because I care about the nuts and bolts of how Cloud computing will work, how we define it, or where it is going. The CIO in me loves the thought of minimal risk for trying & adopting software and services. I am interested in the flexibility of adoption. I do not need to perform rigorous evaluations of hardware, software, and environmental considerations- just determine how it meets my business needs, how easy it is to use, and whether the pricing model works for me. After a while if I don’t like it, I switch. Stickiness is no longer an investment issue, but a contract issue. And I am only afraid of these services not being in my core if I run out of choices in the vendor community. I know there are a lot more things I do need to consider, and I cannot assume 100% divestiture of responsibilities for compliance and whatnot, but wow, the perception of risk reduction in platform selection drops so much that I am likely to jump forward without a full understanding of other risks I may inherit because of these perceived benefits. Not that it’s ideal, but it is likely.
  • Rich: Sharon on Will the Real PII Stand Up? He raises a great issue that there are a bunch of definitions of PII in different contexts, and an increasingly complex regulatory environment with multiple standards.

Top News and Posts:

Blog Comment of the Week:

We didn’t post much, but the comments were great this week. Merchantgrl on the Heartland Breach post:

They were breached a while ago and they just happened to pick that day to finally announce it?

Several people have brought up the Trustwave audit of April 2008. To be compliant, they need ‘REGULAR’ testing. https://www.pcisecuritystandards.org/security_standards/pci_dss.shtml

Requirement 11: Regularly test security systems and processes. What was there schedule for testing? audits?

Rafal is right- the financial implications are huge. Given the magnitude, and the lack of information being released on their new 2008breach.com site, it makes you wonder.

–Rich

How Much Security Will You Tolerate?

By Adrian Lane

I have found a unique way to keep anyone from using my iMac. While family & friends love the display, they do not use my machine. Many are awed that they can run Windows in parallel to the Mac OS, and the sleek appearance and minimal footprint have created many believers- but after a few seconds they step away from the keyboard. Why? Because they cannot browse the Internet. My copy of Firefox has NoScript, Flashblock, cookie acknowledgement, and a couple of other security related add-ons. But having to click the Flash logo, or to acknowledge a cookie, is enough to make them leave the room. “I was going to read email, but I think I will wait until I fly home”.

I have been doing this so long I never even notice. I never stopped to think that every web page requires a couple extra mouse clicks to use, but I always accepted that it was worth it. The advantages to me in terms of security are clear. And I always get that warm glow when I find myself on a site for the first time and see 25 Flash icons littering the screen and a dozen cookie requests for places I have never heard of. But I recognize that I am in the minority. The added work seems to so totally ruin the experience that it completely turns them off to the Internet. My wife even refused to use my machine, and while I think the authors of NoScript deserve special election into the Web Security Hall of Fame (which, given the lack of funding, currently resides in Rich’s server closet), the common user thinks of NoScript as a curse.

And for the first time I think I fully understand their perspective, which is the motivation for this post. I too have discovered my tolerance limit. I was reading rsnake’s post on the RequestPolicy Firefox extension. This looks like a really great idea, but acts like a major work inhibitor. For those not fully aware, I will simply say most web sites make requests for content from more than just one site. In a nutshell, you implicitly trust not just the web site you are currently visiting, but whoever provides content on the page. The plugin’s approach is a good one, but it pushed me over the limit of what I am willing to accept.

For every page I display I am examining cookies, Flash, and site requests. I know that web security is one of the major issues we face, but the per-page analysis now takes longer than the time I spend on many pages looking for specific content. Given that I do a large percentage of my research on the web, visiting 50-100 sites a day, this is over the top for me. If you are doing any form of risky browsing, I recommend you use it selectively. Hopefully we will see a streamlined version, as it is a really good idea.

I guess the question in my mind is how much security will we tolerate? Even security professionals are subject to the convenience factor.

–Adrian Lane

Thursday, January 22, 2009

The Business Justification For Data Security

By Rich

You’ve probably noticed that we’ve been a little quieter than usual here on the blog. After blasting out our series on Building a Web Application Security Program, we haven’t been putting up much original content.

That’s because we’ve been working on one of our tougher projects over the past 2 weeks. Adrian and I have both been involved with data (information-centric) security since long before we met. I was the first analyst to cover it over at Gartner, and Adrian spent many years as VP of Development and CTO in data security startups. A while back we started talking about models for justifying data security investments. Many of our clients struggle with the business case for data security, even though they know the intrinsic value. All too often they are asked to use ROI or other inappropriate models.

A few months ago one of our vendor clients asked if we were planning on any research in this area. We initially thought they wanted yet-another ROI model, but once we explained our positions they asked to sign up and license the content. Thus, in the very near future, we will be releasing a report (also distributed by SANS) on The Business Justification for Data Security. (For the record, I like the term information-centric better, but we have to acknowledge the reality that “data security” is more commonly used).

Normally we prefer to develop our content live on the blog, as with the application security series, but this was complex enough that we felt we needed to form a first draft of the complete model, then release it for public review. Starting today, we’re going to release the core content of the report for public review as a series of posts. Rather than making you read the exhaustive report, we’re reformatting and condensing the content (the report itself will be available for free, as always, in the near future). Even after we release the PDF we’re open to input and intend to continuously revise the content over time.

The Business Justification Model

Today I’m just going to outline the core concepts and structure of the model. Our principal position is that you can’t fully quantify the value of information; it changes too often, and doesn’t always correlate to a measurable monetary amount. Sure, it’s theoretically possible, but practically speaking we assume the first person to fully and accurately quantify the value of information will win the Nobel Prize.

Our model is built on the foundation that you quantify what you can, qualify the rest, and use a structured approach to combine those results into an overall business justification. We purposely designed this as a business justification model, not a risk/loss model. Yes, we talk about risk, valuation, and loss, but only in the context of justifying security investments. That’s very different from a full risk assessment/management model.

Our model follows four steps:

  1. Data Valuation: In this step you quantify and qualify the value of the data, accounting for changing business context (when you can). It’s also where you rank the importance of data, so you know if you are investing in protecting the right things in the right order.
  2. Risk Estimation: We provide a model to combine qualitative and quantitative risk estimates. Again, since this is a business justification model, we show you how to do this in a pragmatic way designed to meet this goal, rather than bogging you down in near-impossible endless assessment cycles. We provide a starting list of data-security specific risk categories to focus on.
  3. Potential Loss Assessment: While it may seem counter-intuitive, we break potential losses from our risk estimate since a single kind of loss may map to multiple risk categories. Again, you’ll see we combine the quantitative and qualitative. As with the risk categories, we also provide you with a starting list.
  4. Positive Benefits Evaluation: Many data security investments also contain positive benefits beyond just reducing risk/losses. Reduced TCO and lower audit costs are just two examples.

After walking through these steps we show how to match the potential security investment to these assessments and evaluate the potential benefits, which is the core of the business justification. A summarized result might look like:

  • Investing in DLP content discovery (data at rest scanning) will reduce our PCI related audit costs by 15% by providing detailed, current reports of the location of all PCI data. This translates to $xx per annual audit.
  • Last year we lost 43 laptops, 27 of which contained sensitive information. Laptop full drive encryption for all mobile workers effectively eliminates this risk. Since Y tool also integrates with our systems management console and tells us exactly which systems are encrypted, this reduces our risk of an unencrypted laptop slipping through the gaps by 90% (a rough sketch of this line item follows the list).
  • Our SOX auditor requires us to implement full monitoring of database administrators of financial applications within 2 fiscal quarters. We estimate this will cost us $X using native auditing, but the administrators will be able to modify the logs, and we will need Y man-hours per audit cycle to analyze logs and create the reports. Database Activity Monitoring costs $Y, which is more than native auditing, but by correlating the logs and providing the compliance reports it reduces the risk of a DBA modifying a log by Z%, and reduces our audit costs by 10%, which translates to a net potential gain of $ZZ.
  • Installation of DLP reduces the chance of protected data being placed on a USB drive by 60%, the chances of it being emailed outside the organization by 80%, and the chance an employee will upload it to their personal webmail account by 70%.
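
As a loose illustration, the laptop line item above can be expressed with the numbers that are actually measurable. The sketch below deliberately stops short of forcing everything into ALE-style dollar figures; the 43/27 counts and the 90% reduction come from the summary, and everything else is illustrative structure:

```python
# A minimal sketch of the laptop line item above, keeping to what is
# actually measurable. The 43/27 counts and the 90% reduction come from
# the summary; everything else is illustrative structure.

laptops_lost_per_year = 43
laptops_with_sensitive_data = 27   # the quantifiable exposure
risk_reduction = 0.90              # full drive encryption + coverage reporting

expected_exposures_before = laptops_with_sensitive_data
expected_exposures_after = laptops_with_sensitive_data * (1 - risk_reduction)

print(f"Laptops lost per year: {laptops_lost_per_year}")
print(f"Sensitive-data exposures per year: "
      f"{expected_exposures_before} -> {expected_exposures_after:.1f}")
```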

We’ll be detailing more of the sections in the coming days, and releasing the full report early next month. But please let us know what you think of the overall structure. Also, if you want to take a look at a draft (and we know you) drop us a line…

We’re really excited to get this out there. My favorite parts are where we debunk ROI and ALE.

–Rich

Tuesday, January 20, 2009

Heartland Payment Systems Attempts To Hide Largest Data Breach In History Behind Inauguration

By Rich

Brian Krebs of the Washington Post dropped me a line this morning on a new article he posted. Heartland Payment Systems, a credit card processor, announced today, January 20th, that up to 100 million credit cards may have been disclosed in what is likely the largest data breach in history. From Brian’s article:

Baldwin said 40 percent of transactions the company processes are from small to mid-sized restaurants across the country. He declined to name any well-known establishments or retail clients that may have been affected by the breach. Heartland called U.S. Secret Service and hired two breach forensics teams to investigate. But Baldwin said it wasn’t until last week that investigators uncovered the source of the breach: A piece of malicious software planted on the company’s payment processing network that recorded payment card data as it was being sent for processing to Heartland by thousands of the company’s retail clients. “The transactional data crossing our platform, in terms of magnitude… is about 100 million transactions a month,” Baldwin said. “At this point, though, we don’t know the magnitude of what was grabbed.”

I want you to roll that number around on your tongue a little bit. 100 Million transactions per month. I suppose I’d try to hide behind one of the most historic events in the last 50 years if I were in their shoes.

“Due to legal reviews, discussions with some of the players involved, we couldn’t get it together and signed off on until today,” Baldwin said. “We considered holding back another day, but felt in the interests of transparency we wanted to get this information out to cardholders as soon as possible, recognizing of course that this is not an ideal day from the perspective of visibility.”

In a short IM conversation Brian mentioned he called the Secret Service today for a comment, and was informed they were a little busy.

We’ll talk more once we know more details, but this is becoming a more common vector for attack, and by our estimates is the most common vector of massive breaches. TJX, Hannaford, and Cardsystems, three of the largest previous breaches, all involved installing malicious software on internal networks to sniff cardholder data and export it.

This was also another case that was discovered by initially detecting fraud in the system that was traced back to the origin, rather than through their own internal security controls.

–Rich