Securosis

Research

Workers “stealing company data”?

Just ran across this article on workers “stealing company data” on the BBC news web site. The story is based upon a recent Ponemon study (who else?) of former employees and the likelihood they will steal company information. It turns out that most of those polled will in fact take something with them. The Ponemon numbers are not surprising, as this tracks closely with traditional forms of employee theft across most industries. What got me shaking my head was the sheer quantity of FUD being thrown out with the raw data. A “surging wave” of activity? You bet there is! And it tightly corresponds to the number of layoffs. I am guessing when I say that the point Kevin Rowney of Vontu (now Symantec) was trying to make is that companies do very little to protect information from insiders, especially during layoffs. But the author makes it sound as if insider theft is bringing about the collapse of western civilization.

What I don’t believe we can do here is try to justify security spending by saying “Look at these losses in revenue! They are staggering! We’re getting killed by insider theft!” These companies are in trouble to begin with, which is why they are laying people off. Ex-employees may be taking information because their accounts are still active, or they may have left with it at the time they were fired. But just because the employee walked out with the information does not necessarily mean that the company suffered a loss. That data has to be used in some manner that affects the value of the company, or results in lost sales. And the ability of ex-employees to do this, especially in this economy, is probably going down, not up. The employee who has backup tapes in their closet may dream about “sticking it” to their former employer, but odds are high that the information the employee has will never result in the company suffering damages. Heck, they would actually have to land a new job before that could happen. 
I know some HR reps who probably envision their ex-employees contacting their underground ‘connections’ to sell off backup tapes, but how many employees do you really think can carry this off? You think they are going to sell it on eBay? Call a competitor? We have seen how that turns out. No use, no loss.

“I had a very strong work ethic. The problem was my ethics in work.”

There is also a huge double standard here, where most companies propagate the very activity they decry. When I worked at a brokerage, one of our biggest fears was that an employee would steal one of our “books of business” and take it to another brokerage; that was when I first learned about the difficulties in protecting data from insiders and enforcing proper use. On the flip side, it was expected that every broker who interviewed had their own “book of business”. If they didn’t, they were ‘losers’ or some other expletive right out of Glengarry Glen Ross. Having existing relationships that could immediately bring clients into the organization was one of the top 5 considerations for employment. Most salesmen, attorneys, financiers and executives are considered not just for the skills they possess, but the relationships they have, and the knowledge they bring to the position. That knowledge is typically in their heads, Rolodexes and iPhones. I am not saying that they did not have paper or electronic backups as well, as 15% of the respondents admitted they did. My point is that companies cry foul that they are the victims of insider theft, but in reality they fired or laid off an employee, and that employee took a job with a competitor. I have trouble calling that an insider attack.


New Database Configuration Assessment Options

Oracle has acquired mValent, the configuration management vendor. mValent provides an assessment tool to examine the configuration of applications. Actually, they do quite a bit more than that, but I wanted to focus on the value to database security and compliance in this post. This is a really good move on Oracle’s part, as it fills a glaring hole that they have had for some time in their security and compliance offerings. I have never understood why Oracle did not provide this as part of OEM, as every Oracle event I have been to in the last 5 years has had sessions where DBAs swap scripts to assess their databases. Regardless, they have finally filled the gap. It provides them with a platform to implement their own best practice guidelines, and gives customers a way to implement their own security, compliance and operational policies around the database and (I assume) other application platforms. Sadly, many companies have not automated their database configuration assessments, so the market remains wide open, and this is a timely acquisition. While the value proposition for this technology will be spun by Oracle’s marketing team in a few dozen different ways (change management, compliance audits, regulatory compliance, application controls, application audits, compliance automation, etc.), don’t get confused by all of the terms. When it comes down to it, this is an assessment of application configuration. And it does provide value in a number of ways: security, compliance and operations management. The basic platform can be used in many different ways, all depending upon how you bundle the policy sets and distribute reports. Also keep in mind that a ‘database audit’ and ‘database auditing’ are two completely different things. Database auditing is about examining transactions. What we are talking about here is how the database is configured and deployed. 
To avoid the deliberate market confusion on the vendors’ part, here at Securosis we will stick to the terms Vulnerability Assessment and Configuration Assessment to describe the work that is being performed. Tenable Network Security has also announced on their blog that they now have the ability to perform credentialed scans of the database. This means that Nessus is no longer just a pen-test style patch level checker, but a credentialed, peer-based configuration assessment. By ‘credentialed’ I mean that the scanning tool has a user name and password with some access rights to the database. This type of assessment provides a lot more functionality, because far more information is available to you than through a penetration test. This is a necessary progression for the product, as the ports, quite specifically the database ports, no longer return sufficient information for a good assessment of patch levels, or any of the important configuration details. If you want to produce meaningful compliance reports, this is the type of scan you need to provide. While I occasionally rip Tenable, as this is something they should have done two years ago, it is really a great advancement for them, as it opens up the compliance and operations management buying centers. Tenable must be considered a serious player in this space, as this is a low cost, high value option. They will continue to win market share as they flesh out the policy set to include many of the industry best practices and compliance tests. Oracle will represent an attractive option for many customers, and they should be able to immediately leverage their existing relationships. While not cutting edge or best-of-breed in this class, I expect many customers will adopt it, as it will be bundled with what they are already buying, or the investment will be considered lower risk because you are going with the world’s largest business software vendor. 
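To make this concrete, here is a minimal sketch of what a credentialed configuration assessment boils down to. The parameter names and policy rules below are illustrative only, not drawn from any vendor's benchmark, and the settings dict stands in for results the scanner would pull over its authenticated database connection:

```python
# A credentialed scan logs into the database and compares configuration
# parameters against a policy. Here the fetch step is stubbed with sample
# data; a real tool would query the system catalog with its credentials.

POLICY = {
    "password_min_length": lambda v: int(v) >= 8,
    "audit_trail": lambda v: v.lower() in ("db", "os", "xml"),
    "remote_os_authent": lambda v: v.lower() == "false",
}

def check_config(settings, policy=POLICY):
    """Return the list of (parameter, value) pairs that violate policy."""
    violations = []
    for param, is_compliant in policy.items():
        value = settings.get(param)
        if value is None or not is_compliant(value):
            violations.append((param, value))
    return violations

# Sample settings, as if returned by a credentialed query:
sample = {
    "password_min_length": "6",
    "audit_trail": "db",
    "remote_os_authent": "true",
}
```

A real tool ships hundreds of such checks, grouped into policy sets for the security, compliance, and operations reports discussed above.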
On the opposite end of the spectrum, companies who do not view this as business critical but still want thorough scans will employ the cost-effective Tenable solution. Vendors like Fortinet, with their database security appliance, and Application Security’s AppDetective product, will be further pressed to differentiate their offerings to compete with the perceived top and bottom ends of the market. Things should get interesting in the months to come.


Friday Summary, 13th of February, 2009

It’s Friday the 13th, and I am in a good mood. I probably should not be, given that every conversation seems to center around some negative aspect of the economy. I started my mornings this week talking with one person after another about a possible banking collapse, and then moved to a discussion of Sirius/XM going under. Others are furious about the banking bailout, as it’s rewarding failure. Tuesday of this week I was invited to speak at a business luncheon on data security and privacy, so I headed down the hill to find the sides of the road filled with cars and ATVs for sale. Cheap. I get to the parking lot and find it empty but for a couple of pickup trucks, all for sale. The restaurant we were supposed to meet at shuttered its doors the previous night and went out of business. We move two doors down to the pizza joint, where the TV is on and the market is down 270 points, and will probably be worse by the end of the day. Still, I am in a good mood. Why? Because I feel like I was able to help people. During the lunch we talked about data security and how to protect yourself online, and the majority of these business owners had no idea about the threats to them, both physical and electronic, and no idea what to do about them. They do now. What was surprising was that everyone seemed to have recently been the victim of a scam, or someone else in their family had been. One person had their checks photographed at a supermarket, and someone made impressive forgeries. One had their ATM account breached, but no clue as to how or why. Another had false credit card charges. Despite all the bad news, I am in a good mood because I think I helped some people stay out of future trouble simply by sharing information you just don’t see in the newspapers or mainstream press. This leads me to the other point I wanted to discuss: Rich posted this week on “An Analyst Conundrum” and I wanted to make a couple of additional points. 
No, not just about my being cheap … although I admit there is a group of people who capture the prehistoric moths that fly out of my wallet during its rare openings … but that is not the point of this comment. What I wanted to say is that we take this Totally Transparent Research process pretty seriously, and we want all of our research and opinions out in the open. We like being able to share where our ideas and beliefs come from. Don’t like it? You can tell us and everyone else who reads the blog that we are full of BS, and what’s more, we don’t edit comments. One other amazing aspect of conducting research in this way has been comments on what we have not said. More specifically, every time I have pulled content I felt was important but confused the overall flow of the post, readers pick up on it. They make note of it in the comments. I think this is awesome! It tells me that people are following our reasoning. It keeps us honest. It makes us better. Right or wrong, the discussion helps the readers in general, and it helps us know what your experiences are. Rich would prefer that I write faster and more often than I do, especially with the white papers. But odd as it may seem, I have to believe the recommendations I make, otherwise I simply cannot put the words down on paper. No passion, no writing. The quote Rich referenced was from an email I sent him late Sunday night after struggling with recommending a particular technology over another; I quite literally could not finish the paper until I had solved that puzzle in my own mind. If I don’t believe it based upon what I know and have experienced, I cannot put it out there. And I don’t really care if you disagree with me, as long as you let me know why what I said is wrong, and how I screwed up. What’s more, I especially don’t care if the product vendors or security researchers are mad at me. For every vendor that is irate with what I write, there is usually one who is happy, so it’s a zero sum game. 
And if security researchers were not occasionally annoyed with me there would be something wrong, because we tend to be a rather cranky group when others do not share our personal perspective of the way things are. I would rather have the end users be aware of the issues and walk into any security effort with their eyes open. So I feel good in getting these last two series completed, as I think it is good advice and I think it will help people in their jobs. Hopefully you will find what we do useful! On to the week in review:

Webcasts, Podcasts, Outside Writing, and Conferences:
- In a nepotistic extravaganza during Martin’s absence, this week’s network podcast included both Rich & Adrian, with Rich sharing a few rumors on the Heartland breach.
- Adrian was interviewed by SC Magazine on the Los Alamos Lab’s missing computers.
- Rich wrote up the Mac OS X Security Update for TidBITS.
- Macworld released their Security Superguide, with Rich & Chris as authors. Much to their surprise!
- Rich participated in an SC Magazine webcast on PCI.
- Rich moderated the WhiteHatWorld.com Thought Leadership Roundtable on Cloud Computing Security. (Sorry, replay link isn’t up yet.)

Favorite Securosis Posts:
- Rich: Recent Breaches- How To Limit Malicious Outbound Connections. There are a couple of great comments with additional information, including one from Big Bad Mike Rothman, who is not dead yet.
- Adrian: An Analyst Conundrum for, well, the ten or so reasons I mentioned above.

Favorite Outside Posts:
- Adrian: Showing some love for Dre … Talking about why WAF


Los Alamos Missing Computers

Yahoo! News is reporting that the Los Alamos nuclear weapons research facility is missing some 69 computers, according to a watchdog group that released an internal memo. Either they have really bad inventory controls, or they have a kleptomaniac running around the lab. Even for a mid-sized organization, this is a lot, especially given the nature of their business. Granted, the senior manager says this does not mean there was a breach of classified information, and I guess I should give him the benefit of the doubt, but I have never worked at a company where sensitive information did not flow like water around the organization, regardless of policy. The requirement may be to keep classified information off unclassified systems, but unless those systems are audited, how would you know? How could you verify what is missing? We talk a lot about endpoint security and the need to protect laptops, but really, if you work for an organization that deals with incredibly sensitive information (you know, like nuclear secrets) you need to encrypt all of the media, whether it is mobile or not. There are dozens of vendors that offer software encryption, and most of the disk manufacturers are coming out with encrypted drives. And you are probably aware, if you read this blog, that we are proponents of DLP in certain cases; this type of policy enforcement for the movement of classified information would be a good example. You would think organizations such as this would be ahead of the curve in this area, but apparently not.


The Business Justification for Data Security: Additional Positive Benefits

So far in this series we have discussed how to assess both the value of the information your company uses, and some potential losses should your data be stolen. The bad news is that security spending only mitigates some portion of the threats, but cannot eliminate them. While we would like our solutions to eradicate threats, it’s usually more complicated than that. Fortunately there is some good news: security spending commonly addresses other areas of need, and has additional tangible benefits that should be factored into the overall evaluation. For example, the collection, analysis, and reporting capabilities built into most data security products – when used with a business processing perspective – supplement existing applications and systems in management, audit and analysis. Security investment can also readily be leveraged to reduce compliance costs, improve systems management, efficiently analyze workflows, and gain a better understanding of how data is used and where it is located. In this post, we want to make short mention of some of the positive & tangible aspects of security spending that you should consider. We will put this into the toolkit at the end of the series, but for now, we want to discuss cost savings and other benefits.

Reduced compliance/audit costs

Regulatory initiatives require that certain processes be monitored for policy conformance, as well as subsequent verification to ensure those policies and controls align appropriately with compliance guidelines. As most security products examine business processes for suspected misuse or security violations, there is considerable overlap with compliance controls. Certain provisions in the Gramm-Leach-Bliley Act (GLBA), Sarbanes-Oxley (SOX), and the Health Insurance Portability and Accountability Act (HIPAA) call for security, process controls, or transactional auditing. 
While data security tools and products focus on security and appropriate use of information, policies can be structured to address compliance as well. Let’s look at a couple of ways security technologies assist with compliance:
- Access controls assist with separation of duties between operational, administrative, and auditing roles.
- Email security products provide pretexting protection as required by GLBA.
- Activity Monitoring solutions perform transactional analysis, and with additional policies can provide process controls for end-of-period adjustments (SOX) as well as address ‘safeguard’ requirements in GLBA.
- Security platforms separate the roles of data collection, data analysis, and policy enforcement, and can direct alerts to appropriate audiences outside security.
- Collection of audit logs, combined with automated filtering and encryption, addresses common data retention obligations.
- DLP, DRM, and encryption products assist in compliance with HIPAA and appropriate use of student records (FERPA).
- Filtering, analysis, and reporting help reduce audit costs by providing auditors with the information necessary to quickly verify the efficacy and integrity of controls; gathering this information is typically an expensive portion of an audit.
- Auditing technologies provide a view into transactional activity, and establish the efficacy and appropriateness of controls.

Reduced TCO

Data security products collect information and events that have relevance beyond security. By design they provide a generic tool for the collection, analysis, and reporting of events that serve regulatory, industry, and business processing controls; automating much of the analysis and integrating with other knowledge management and response systems. As a result they can enhance existing IT systems in addition to their primary security functions. 
The total cost of ownership is reduced for both security and general IT systems, as the two reinforce each other – possibly without requiring additional staff. Let’s examine a few cases:
- Automating inspection of systems and controls on financial data reduces manual inspection by Internal Audit staff.
- Systems Management benefits from automated inspection of information services, verifying that services are configured according to best practices; this can reduce breaches and system downtime, and ease the maintenance burden.
- Security controls can ensure business processes are followed and detect failure of operations, generating alerts in existing trouble ticketing systems.

Risk reduction

Your evaluation process focuses on determining whether you can justify spending some amount of money on a certain product or to address a specific threat. That laser focus is great, but data security is an enterprise issue, so don’t lose sight of the big picture. Data security products overlap with general risk reduction, similar to the way these products reduce TCO and augment other compliance efforts. When compiling your list of tradeoffs, consider other areas of risk & reward as well:
- Assessment and penetration technologies discover vulnerabilities and reduce exposure; keeping data and applications safe helps protect networks and hosts.
- IT systems interconnect and share data. Stopping threats in one area of business processing can improve reliability and security in connected areas.
- Discovery helps analysts process and understand risk exposure by locating data, recording how it is used throughout the enterprise, and ensuring compliance with usage policies.

Also keep in mind that we are providing a model to help you justify security expenditures, but that does not mean our goal is to promote security spending. Our approach is pragmatic, and if you can achieve the same result without additional security products to support your applications, we are all for that. 
In much the same way that security can reduce TCO, some products and platforms have security built in, thus avoiding the need for additional security expenditures. We recognize that data security choices are typically the last to be made, after deployment of the applications for business processing, and after infrastructure choices to support the business applications. But if you’re lucky enough to have built-in tools, use them.


The Business Justification for Data Security: Understanding Potential Loss

Rich posted the full research paper last week, but as not everyone wants to read the full 30 pages, we decided to continue posting excerpts here. We still encourage comments, as this will be a living document for us, and we will expand it in the future. Here is Part Four:

Understanding Potential Losses

Earlier we deliberately decoupled potential losses from risk impact, even though loss is clearly the result of a risk incident. Since this is a business justification model rather than a risk management model, this allows us to account for major types of potential loss that result from multiple types of risk, and simplifies our overall analysis. We will highlight the major loss categories associated with data security and, as with our risk estimates, break them out into quantitative and qualitative categories. These loss categories can be directly correlated back to risk estimates, and it may make sense to walk through that exercise at times, but as we complete our business justification you’ll see why it isn’t normally necessary. If data is stolen in a security breach, will it cost you a million dollars? A single dollar? Will you even notice? Under “Data Loss Models”, we introduced a method for estimating the value of the data your company possesses, to underscore what is at stake. Now we will provide a technique for estimating costs to the business in the event of a loss. We look at some types of loss and their impacts. Some of these have hard costs that can be estimated with a reasonable degree of accuracy. Others are more nebulous, so assigning monetary values doesn’t make sense. But don’t forget that although we may not be able to fully quantify such losses, we cannot afford to ignore them, because unquantifiable costs can be just as damaging.

Quantified vs. Qualified Losses

As we discussed with noisy threats, it is much easier to justify security spending based on quantifiable threats with a clear impact on productivity and efficiency. 
With data security, quantification is often the rare exception, and real losses typically combine quantified and qualified elements. For example, a data breach at your company may not be evident until long after the fact. You don’t lose access to the data, and you might not suffer direct losses. But if the incident becomes public, you could then face regulatory and notification costs. Stolen customer lists and pricing sheets, stolen financial plans, and stolen source code can all reduce competitive advantage and impact sales — or not, depending on who stole what. Data stolen from your company may be used to commit fraud, but the fraud itself might be committed elsewhere. Customer information used in identity theft causes your customers major hassles, and if they discover your firm was the source of the information, you may face fines and legal battles over liability. As these can account for a majority of total costs, despite the difficulty in obtaining an estimate of the impact, we must still account for the potential loss to justify spending to prevent or reduce it. We offer two approaches to combining quantified and qualified potential losses. In the first, you walk through each potential loss category and either assign an estimated monetary value, or rate it on our 1-5 scale. This method is faster, but doesn’t help correlate the potential loss with your tolerance. In the second method, you create a chart like the one below, where all potential losses are rated on a 1-5 scale, with either value ranges (for quantitative loss) in the cells, or qualitative statements describing the level of loss. This method takes longer, since you need to identify five measurement points for each loss category, but allows you to more easily compare potential losses against your tolerance, and identify security investments to bring the potential losses (or their likelihood) down to an acceptable level. 
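As a rough illustration of the second method, here is a sketch that maps a quantified loss estimate onto the 1-5 scale, so it can be compared directly against qualitative ratings and your tolerance. The band boundaries are the sample notification-cost ranges; they are examples, not recommendations:

```python
from bisect import bisect_left

# Upper bounds (in dollars) of ratings 1 through 4; anything above the
# last boundary rates a 5. These are the notification-cost bands from the
# sample chart, and are purely illustrative.
NOTIFICATION_COST_BANDS = [1_000, 10_000, 100_000, 500_000]

def loss_rating(amount, bands=NOTIFICATION_COST_BANDS):
    """Map a dollar loss estimate onto the 1-5 qualitative scale."""
    return bisect_left(bands, amount) + 1
```

For example, `loss_rating(250_000)` returns 4, matching the $100,001-$500,000 band; each loss category would get its own set of bands.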
Loss | 1 | 2 | 3 | 4 | 5
--- | --- | --- | --- | --- | ---
Notification costs (total, not per record) | $0-$1,000 | $1,001-$10,000 | $10,001-$100,000 | $100,001-$500,000 | >$500,000
Reputation damage | No negative publicity | Single negative press mention, local/online only | Ongoing negative press <2 weeks, local/online only, or single major outlet mention | Ongoing sustained negative press >2 weeks, including multiple major outlets; measurable drop in customer activity | Sustained negative press in major outlets or on a national scale; material drop in customer activity

Potential Loss Categories

Here are our recommended assessment categories for potential loss, categorized as quantifiable vs. only qualifiable. Quantifiable potential data security losses:
- Notification Costs: CA 1386 and associated state mandates require informing customers in the event of a data breach. Notification costs can be estimated in advance, and include contact with customers, as well as any credit monitoring services to identify fraudulent events. The cost is roughly linear with the total number of records compromised.
- Compliance Costs: Most companies are subject to federal regulations or industry codes they must adhere to. Loss of data and data integrity issues are generally violations. HIPAA, GLBA, SOX, and others include data verification requirements and fines for failure to comply.
- Investigation & Remediation Costs: An investigation into how the data was compromised, and the associated costs to remediate the relevant security weaknesses, have a measurable cost to the organization.
- Contracts/SLAs: Service level agreements about quality or timeliness of services are common, as are confidentiality agreements. Businesses that provide data services rely upon the completeness, accuracy, and availability of data; falling short in any one area may violate SLAs and/or subject the company to fines or loss of revenues.
- Credit: Loss of data and compromise of IT systems are both viewed as indications of investment risk by the financial community. 
The resulting impact on interest rates and availability of funds may affect profitability.
- Future Business & Accreditation: Data loss, compliance failures, or compliance penalties may impair the ability to bid on contracts, or even to participate in certain ventures, due to loss of accreditation. This can be a permanent or temporary loss, but the effects are tangible. Note that future business is also a qualitative loss — here we


Friday Summary: February 6, 2009

Here it is Friday again, and it feels like just a few minutes ago that I was writing the last Friday Summary. This week has been incredibly busy for both of us. Rich has been out for the count most of this week with a stomach virus, wandering his own house like a deranged zombie. (This was not really a hack; they were just warning Rich’s neighborhood.) As the county cordoned off his house with yellow tape and flagged him as a temporary bio-hazard, I thought it best to forgo this week’s face-to-face Friday staff meeting, and get back on track with our blogging. Between the business justification white paper that we launched this week, and being on the road for client meetings, we’re way behind. A few items of interest … It appears that data security is really starting to enter the consciousness of the common consumer. Or at least it is being marketed to them. There were even more advertisements in the San Jose airport this week than ever: the ever-present McAfee & Cisco ads were joined by Symantec and Compuware. The supermarket has identity theft protection pamphlets from not one but two vendors. The cherry on top of this security sundae was the picture of John Walsh in the in-flight magazine, hawking “CA Internet Security Suite Plus 2009”. I was shocked. Not because CA has a consumer security product, or because they are being marketed along with Harry Potter commemorative wands, holiday meat platters and low quality electronics. No, it was John Walsh’s smiling face that surprised me. Because John Walsh “Trusts CA Security Products to keep your family safe”. WTF? Part of me is highly offended. The other part of me thinks this is absolutely brilliant marketing. We have moved way beyond a features and technology discussion, and into JV-celebrities telling us what security products we should buy. If it were only that easy. 
Myopia Alert: I am sure there are others in the security community who have gone off on this rant as well, but as I did not find a reference anywhere else, I thought this topic was worth posting. A word of caution for you PR types out there who are raising customer and brand awareness through security webinars and online security fora: you might want to have some empathy for your audience. If your event requires a form to be filled out, you are going to lose a big chunk of your audience, because people who care about security care about their privacy as well. The audience member will bail, or your outbound call center will be dialing 50 guys named Jack Mayhoff. Further, if that entry form requires JavaScript and a half dozen cookies, you are going to lose a bigger percentage of your audience, because JavaScript is a feature and a security threat rolled into one. Finally, if the third-party vendor you use to host the event does not support Firefox or Safari, you will lose the majority of your audience. I am not trying to be negative, but want to point out that while Firefox, Safari and Opera may only constitute 25% of the browser market, they are used by 90% of the people who care about security. Final item I wanted to talk about: our resident wordsmith and all around good guy Chris Pepper forwarded Rich and me a Slashdot link about how free Monty Python material on YouTube has caused their DVD sales to skyrocket. Both Rich and I have raised similar points here in the past, and we even referred to this phenomenon in the Business Justification for Security Spending paper, when discussing why it can be hard to understand damages. While organizations like the RIAA feel this is counter-intuitive, it makes perfect sense to me and anyone else who has ever tried guerrilla marketing, or seen the effects of viral marketing. Does anyone know if the free South Park episodes did the same for South Park DVD sales? I would be interested. 
Oh, and Chris also forwarded Le Wrath di Kahn, which was both seriously funny and really works as opera (the art form; I didn’t test it in the browser). On to the week in review:

Webcasts, Podcasts, Outside Writing, and Conferences:
- Rich did a webcast with Roxana Bradescu of Oracle on Information Security for Database Professionals. Here is the sign-up link, and I will post a replay link later when I get one from Oracle.

Favorite Securosis Posts:
- Rich: Launch of the Business Justification for Security Spending white paper. Whew!
- Adrian: The Business Justification for Data Security post on Risk Estimation. I knew this was going to cause some of the risk guys to go apoplectic, but we were not building a full-blown risk management model, and frankly, risk calculations made every model so complex no one could use it as a tool.

Favorite Outside Posts:
- Adrian: Informative post by Robert Graham on shellcode in software development. Write once, run anywhere malware? Anyone?
- Rich: XKCD was a riot. [What my friend John Kelsey used to call “Lead Pipe Cryptanalysis”]

Top News and Posts:
- Nine million, in cold hard cash, stolen from ATMs around the world. Wow.
- I will be blogging more on this in the future: Symantec and Ask.com joint effort. Marketing hype or real consumer value?
- Very informative piece on how assumptions about what should be secured and what we can ignore are often the places where we fail: Addicted to insecurity.
- At least 600k US jobs lost in January.
- Google thought everyone was serving malware.
- This is an atrocious practice: the EULA tells you you can’t use your firewall, and they can take all your bandwidth.
- RBS breach was massive, and fast.

Blog Comment of the Week: From Chris Hayes on the Risk Estimation for Business Justification for Data Security post: Up to this point in the series, this “business justification” security investment model appears to be nothing more then a glorified cost benefit analysis wrapped up in risk/business


The Business Justification for Data Security: Risk Estimation

This is the third part of our Business Justification for Data Security series (Part 1, Part 2), and if you have been following the series this far, you know that Rich and I have complained about how difficult this paper was to write. Our biggest problem was fitting risk into the model. In fact, we experimented with and ultimately rejected a couple of models, because the reduction of risk for any given security investment was non-linear. And there were many threats and many different responses, few of which were quantifiable, making the whole effort 'guesstimate' soup. In the end, risk became our 'witching rod': a guide to how we balance value vs. loss, but just one of the tools we use to examine investment decisions.

Measuring and understanding the risks to information

If data security were a profit center, we could shift our business justification discussion from the value of information right into assessing its potential for profit. But since that isn't the case, we are forced to examine potential reductions in value as a guide to whether action is warranted. The approach we need to take is to understand the risks that directly threaten the value of data, and the security safeguards that counter those risks. There's no question our data is at risk; from malicious attackers and nefarious insiders to random accidents and user errors, we read about breaches and loss nearly every day. But while we have an intuitive sense that data security is a major issue, we have trouble getting a handle on the real risks to data in a quantitative sense. The number of possible threats and ways to steal information is staggering, but when it comes to quantifying risks, we lack much of the information needed for an accurate understanding of how these risks impact us.
Combining quantitative and qualitative risk estimates

We'll take a different approach to looking at risk: we will focus on quantifying the things that we can, qualifying the things we can't, and combining them in a consistent framework. While we can measure some risks, such as the odds of losing a laptop, it's nearly impossible to measure other risks, such as a database breach via a web application due to a new vulnerability. If we limit ourselves only to what we can precisely measure, we won't be able to account for many real risks to our information. Including quantitative assessments where we can helps validate the overall model, since they are a powerful tool for understanding risk and influencing decisions. For our business justification model, we deliberately simplify the risk assessment process to give us just what we need to understand the need for data security investments. We start by listing the pertinent risk categories, then the likelihood or annual rate of occurrence for each risk, followed by severity ratings broken out for confidentiality, integrity, and availability. For risk events we can predict with reasonable accuracy, such as lost laptops with sensitive information, we can use real numbers. In the example below, we know the Annualized Rate of Occurrence (ARO), so we plug that value in. For less predictable risks, we just rate them from "low" to "high". We then mark off our currently estimated (or measured) levels in each category. For qualitative measures we will use a 1-5 scale, but this is arbitrary; use whatever scale provides a level of granularity that assists understanding.

Risk Estimation: Credit Card Data (Sample):

| Risk | Likelihood/ARO | Impact: C | Impact: I | Impact: A | Total |
| Lost Laptop | 43 | 4 | 1 | 3 | 51 |
| Database Breach (Ext) | 2 | 5 | 3 | 2 | 12 |

This is the simplified risk scorecard for the business justification model.
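The scorecard arithmetic is deliberately trivial: likelihood (or ARO) plus the three impact scores. As a minimal sketch of the scorecard above (the category names and numbers come from the sample table; nothing here is a full risk model):

```python
# Minimal sketch of the simplified risk scorecard described above.
# Likelihood is either a measured ARO (lost laptops) or a qualitative
# 1-5 estimate (external database breach); C/I/A are 1-5 severities.

def score_risk(likelihood, c, i, a):
    """Combine likelihood/ARO with C-I-A severity into a rough total."""
    return likelihood + c + i + a

risks = {
    # name: (likelihood or ARO, confidentiality, integrity, availability)
    "Lost Laptop": (43, 4, 1, 3),
    "Database Breach (Ext)": (2, 5, 3, 2),
}

for name, (likelihood, c, i, a) in risks.items():
    print(f"{name}: {score_risk(likelihood, c, i, a)}")
# Lost Laptop: 51
# Database Breach (Ext): 12
```

As the post notes, the totals are only useful for before/after comparison of a single risk category against a proposed investment, not for ranking categories against each other.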
The totals aren't meant to compare one risk category to another, but to derive estimated totals we will use in our business justification, to show potential reductions from the evaluated investment. While different organizations face different risk categories, we've included the most common data security risks here, and in Section 6 we show how this integrates into the overall model.

Common data security risks

The following is an outline of the major categories for information loss. Any time you read about a data breach, one or more of these events occurred. This list isn't intended to be comprehensive, but rather to provide a good overview of common data security risk categories, to give you a jump start on implementing the model. Rather than discuss each and every threat vector, we will present logical groups, to illustrate that the risks and potential solutions tend to be very similar within each specific category. The following are the principal categories to consider:

Lost Media

This category describes data at rest, residing on some form of media, that has been lost or stolen. Media includes disk drives, tape, USB/memory sticks, laptops, and other devices. This category encompasses the majority of cases of data loss. Typical security measures for this class include media encryption, media "sanitizing", and in some cases endpoint Data Loss Prevention technology.

  • Lost disks/backup tapes
  • Lost/stolen laptops
  • Information leaked through decommissioned servers/drives
  • Lost memory sticks/flash drives
  • Stolen servers/workstations

Inadvertent Disclosure

This category covers data being accidentally exposed in some way that leads to unwanted disclosure. Examples include email to unwanted recipients, posting confidential data to web sites, unsecured Internet transmissions, lack of access controls, and the like. Safeguards include email & web security platforms, DLP, and access control systems. Each is effective, but only against certain threat types.
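DLP tools like those mentioned above generally start with content analysis: pattern matching plus a validation step to cut false positives. As a rough illustration only (this is not any vendor's actual detection method, and real products use far more context), here is how a scanner might flag credit-card-like strings using the public Luhn checksum; the regex and the 13-16 digit range are simplifying assumptions:

```python
import re

def luhn_valid(digits: str) -> bool:
    """Luhn checksum, used to filter credit-card-like number strings."""
    total = 0
    for pos, ch in enumerate(reversed(digits)):
        d = int(ch)
        if pos % 2 == 1:      # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_candidates(text: str):
    """Return 13-16 digit runs that pass the Luhn check."""
    return [m for m in re.findall(r"\b\d{13,16}\b", text) if luhn_valid(m)]

# 4111111111111111 is a well-known test card number; "1234" is ignored.
print(find_card_candidates("order ref 1234, card 4111111111111111"))
```

The point of the checksum step is the "only against certain threat types" caveat: pattern matching catches accidental exposure of well-structured data, not a determined insider who transforms it.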
Process and workflow controls are also needed to help catch human error.

  • Data accidentally leaked through email (sniffed, wrong address, un-purged document metadata)
  • Data leaked by inadvertent exposure (posted to the web, open file shares, unprotected FTP, or otherwise placed in an insecure location)
  • Data leaked over an unsecured connection
  • Data leaked through file sharing (file sharing programs are used to move large files efficiently, and possibly illegally)

External Attack/Breach

This category describes instances of data theft where company systems and applications are compromised by a malicious attacker, affecting confidentiality


Friday Summary – Jan 30, 2009

A couple of people forwarded me this interview, and if you have not read it, it is really worth your time. It's an amazing interview with Matt Knox, a developer with Direct Revenue who authored adware during his employ with them. For me this is important, as it highlights stuff I figured was going on but really could not prove. It also exposes much of the thought process behind the developers at Microsoft, and it completely altered my behavior for 'sanitizing' my PCs. For me, this all started a few years ago (2005?) when my Windows laptop was infected with this stuff. I discovered something was going on because there was ongoing activity in the background when the machine was idle, and it started to affect machine responsiveness. The mysterious performance degradation was difficult to track down, as I could not locate a specific application responsible, and the process monitors provided with Windows are wholly inadequate. I found that there were processes running in the background unassociated with any application, and unassociated with Windows. I did find files that were associated with these processes, and it was clear they did not belong on the machine. When I went to delete them, they popped up again within minutes, with new names! I was able to find multiple registry entries, and the behavior suggested that multiple pieces of code monitored each other for health and availability, and fixed each other if one was deleted. Even if I booted in safe mode, I had no confidence that I could completely remove this … whatever it was … from the machine. At that point I knew I needed to start over. How this type of software could have gotten into the registry and installed itself in such a manner was perplexing to me. Being a former OS developer, I started digging, and that's when I got mad. Mr. Knox uses the word 'promiscuous' to describe the OS calls, and that is exactly what they were.
There were API calls to do pretty much anything you wanted, all without so much as a question being asked of the user or the installing party. You get a clear picture of the mentality of the developers who wrote the IE and Windows OS code back then: there were all sorts of nifty ways to 'do stuff' for anyone who wanted to, and not a shred of a hint of security. All of these 'features' were for someone else's benefit! They could use my resources at will, as if they had the keys to my house and were throwing a giant party at my expense whenever I left. What really fried me was that, while I could see these processes and registry entries, none of the anti-virus or anti-malware tools would detect them. So if I wanted to secure my machine, it was up to me to do it. I said this changed my behavior; here's how:

  • Formatted the disk and reinstalled the OS.
  • Switched to Firefox full time. A few months later I discovered Flashblock and NoScript.
  • Stopped paying for desktop anti-virus and used free stuff or nothing at all. It didn't work for the desktop, and email AV addressed my real concern.
  • Found a process monitor that gave me detailed information on what was running and what resources were being used.
  • Cataloged every process on the Windows machine, and kept a file that described each process' function, so I could cross-check and remove stuff that was not supposed to be there.
  • Began manually starting everything non-core through the services panel only when I needed it. Not only did this help me detect stuff that should not be running, it reduced the risks associated with poorly secured applications that leave services sitting wide open on a port.
  • Uninstalled WebEx, RealPlayer, and several other suspects after each use.
  • Kept all of my original software nearby, and planned to re-install fresh from CD or DVD every year. Until I got VMware.
  • Used a virtual partition for risky browsing whenever possible.
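The process catalog described above is really just a whitelist diff: a file of known-good processes, compared against whatever is actually running. A hypothetical sketch (the process names and the known-good list are invented for illustration, and a real check would also verify paths and signatures, not just names):

```python
# Hypothetical sketch of the process-catalog habit described above:
# keep a catalog of known-good process names with a note on what each
# one does, then diff it against a snapshot of running processes.

known_good = {
    "explorer.exe": "Windows shell",
    "svchost.exe": "Service host",
    "firefox.exe": "Browser",
}

def unexpected(running_processes):
    """Return the running processes that are not in the catalog."""
    return sorted(p for p in running_processes if p not in known_good)

snapshot = ["explorer.exe", "svchost.exe", "aurora.exe", "firefox.exe"]
print(unexpected(snapshot))
# ['aurora.exe']
```

Matching on names alone is exactly what the rename-on-delete malware above defeated, which is why the author's catalog recorded what each process was *for*, not just that it existed.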
I now use a Mac, and run my old licensed copies of Windows in Parallels. Surprised? Here is the week's security summary: Webcasts, Podcasts, Outside Writing, and Conferences: Martin, Rich, and I talk about the White House homeland security agenda, phishing, and the Monster.com security breach on the Network Security Podcast #136. Don't forget to submit any hacks or exploits for Black Hat 2009 consideration. Favorite Securosis Posts: Rich: the Inherent Role Conflicts in National Cyber-security post. Adrian: the post on Policies and Security Products: something you need to consider in any security product investment. Favorite Outside Posts: Adrian: Rafal's post on network security: not ready to give up, but surely in need of a change in focus. Rich: Like Adrian said, the philosecurity interview with Matt Knox is a really interesting piece. Top News and Posts: Very interesting piece from Hackademics on IE's "clickjacking protection". Additional worries about upcoming Conficker worm payloads. Can't be all security: this is simply astounding: Exxon achieves $45 billion in 2008. Not in revenue, in profit. The disk drive companies are marketing built-in encryption. While I get a little bristly when it's marketed as protecting the consumer while actually going into server arrays, this is a very good idea, and will eventually end up in consumer drives. Yeah! More on DarkMarket and the undercover side of the operation. Police are still after the culprits in the Heartland breach. Again? Monster.com has another breach. They have a long way to go before they catch LexisNexis, but they're trying. The Red Herring site has been down all week … wondering if they have succumbed to market conditions. Blog Comment of the Week: Good comment from Jack Pepper on the "PCI isn't meant to protect cardholder …" post: "Why is this surprising? The PCI standard was developed by the card industry to be a "bare minimum" standard for card processing. If anyone in the biz thinks PCI is more than "the bare


Policies and Security Products

Where do the policies in your security product come from? With the myriad of tools and security products on the market, who builds the pre-built policies they ship with? I am not speaking of AV in this post, but rather of IDS, VA, DAM, DLP, WAF, pen testing, SIEM, and the many other products that use a set of policies to address security and compliance problems. The question is: who decides what is appropriate? This came up in every sales engagement, customer meeting, and analyst meeting I have ever participated in for security products. This post is intended more for IT professionals who are considering security products, so I am gearing it for that audience. When drafting the web application security program series last month, a key question that kept coming up over and over again from security practitioners was: "How can you recommend XYZ security solution when you know that the customer is going to have to invest a lot for the product, but also a significant amount in developing their own policy set?" This is both an accurate observation and the right question to be asking. While we stand by our recommendations for the reasons stated in the original series, it would be a disservice to our IT readers if we did not discuss this in greater detail. The answer is an important consideration for anyone selecting a security tool or suite. When I used to develop database security products, policy development was one of the tougher issues for us to address on the vendor side. Once we were aware of a threat, it took on average 2.5 'man-days' to develop a policy with a test case and complete remediation information [prior to QA]. This becomes expensive when you have hundreds of policies being developed for different problem sets. Policy coverage and how policies were generated was a common competitive topic, and a basic function of the product, so nearly every vendor invests heavily in this area.
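To make the "policy plus test case plus remediation" cost concrete, here is an illustrative sketch of what a single policy record might carry. The structure, field names, and the check itself are assumptions for illustration, not any vendor's actual format:

```python
# Illustrative sketch of a vendor policy record: a description, a test
# case (here just a callable against a config dict), and remediation
# guidance. Field names and the example check are hypothetical.

from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Policy:
    name: str
    description: str
    check: Callable[[Dict], bool]   # returns True when the target passes
    remediation: str

default_password = Policy(
    name="DB-001: default credentials",
    description="Database account still uses a vendor default password.",
    check=lambda cfg: cfg.get("password") not in {"manager", "change_on_install"},
    remediation="Set a unique password and lock or expire the default account.",
)

print(default_password.check({"password": "manager"}))   # fails the policy
print(default_password.check({"password": "x9!k2q"}))    # passes
```

Multiply the research, test case, and remediation write-up behind each record by hundreds of policies, and the 2.5 man-day figure explains why policy coverage is a real differentiator.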
Moreover, most vendors market their security 'research teams' that find exploits, develop test code, and provide remediation steps. This domain expertise is one of the areas where vendors provide value in the products they deliver, but when it comes down to it, vendor insight is only a fraction of the overall pool of information. With monitoring and auditing, policy development was even harder: the business use cases were more diverse, and the threats not completely understood. Sure, we could return the ubiquitous who-what-when-where-to-from kind of stuff, but how did that translate to business need? If you are evaluating products or interested in augmenting your policy set, where do you start? With vulnerability research, there are several resources that I like to use:

Vendor best practices – Almost every platform vendor, from Apache to SAP, offers security best practices documents. These guidelines on how to configure and operate their product form the basis for many programs. They cover operational issues that reduce risk, discuss common exploits, and reference specific security patches. These documents are updated during each major release cycle, so make sure you periodically review them for new additions, and for how the vendor recommends new features be configured and deployed. What's more, while the vendor may not be forthcoming with exploit details, they are the best source of information for remediation and patch data.

CERT/MITRE – Both have fairly comprehensive lists of vulnerabilities in specific products, and both provide a neutral description of each threat. Neither has great detail on the actual exploit, nor complete remediation information; it is up to the development team to figure out the details.

Customer feedback/peer review – If you are a vendor of security products, customers have applied the policies and know what works for them.
They may have modified the code that you use to remediate a situation, and that may be a better solution than what your team implemented, or it may be too specific to their environment for use in a generalized product. If you are running your own IT department, what have your peers done? Next time you are at a conference or user group, ask. Regardless, vendors learn from other customers what works to address issues, and you can too.

3rd party relationships (consultants, academia, auditors) – When it comes to developing policies related to GLBA or SOX, which are outside the expertise of most security vendors, it is particularly valuable to leverage third-party consulting relationships to augment policies with their deep understanding of how best to approach the problem. In the past I have used relationships with major consulting firms to help analyze the policies and reports we provided. This was helpful, as they really did tell us when some of our policies were flat out bull$(#!, what would work, and how things could work better. If you have these relationships already in place, carve out a few hours so they can help review and analyze your policies.

Research & experience – Most security vendors have dedicated research teams, and this is something you should look for. They do this every day, and they get really good at it. If your vendor has a recognized expert in the field on staff, that's great too; that person may be quite helpful to the overall research and discovery process for threats and problems with the platforms and products you are protecting. In reality, though, they are more likely on the road speaking to customers, press, and analysts than actually doing the research. It is good that your vendor has a dedicated team, but their experience is just one part of the big picture.

User groups – With many of the platforms, especially Oracle, I learned a lot from regional DBAs who supported databases within specific companies or verticals.
In many cases they did not have or use a third-party product; rather, they had a bunch of scripts they had built up over many years, modified, and shared with others. They shared tips on not only what


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.