Submit A Top Ten Web Hacking Technique

Last week Jeremiah Grossman asked if I’d be willing to be a judge to help select the Top Ten Web Hacking Techniques for 2008, along with Chris Hoff (not sure who that is), HD Moore, and Jeff Forristal. Willing? Heck, I’m totally, humbly honored. This year’s winner will receive a free pass to Black Hat 2009, which isn’t too shabby. We are up to nearly 70 submissions, so keep ‘em coming.


Policies and Security Products

Where do the policies in your security product come from? With the myriad of tools and security products on the market, where do the pre-built policies come from? I am not speaking of AV in this post- rather of IDS, VA, DAM, DLP, WAF, pen testing, SIEM, and the many other products that use a set of policies to address security and compliance problems. The question is: who decides what is appropriate? In every sales engagement, customer meeting, and analyst meeting I have ever participated in for security products, this question came up. This post is intended mostly for IT professionals who are considering security products, so I am gearing it for that audience.

When drafting the web application security program series last month, a key question that kept coming up from security practitioners was: “How can you recommend XYZ security solution when you know the customer is going to have to invest a lot in the product, but also a significant amount in developing their own policy set?” This is both an accurate observation and the right question to be asking. While we stand by our recommendations for the reasons stated in the original series, it would be a disservice to our IT readers if we did not discuss this in greater detail. The answer is an important consideration for anyone selecting a security tool or suite.

When I used to develop database security products, policy development was one of the tougher issues for us to address on the vendor side. Once aware of a threat, it took on average 2.5 ‘man-days’ to develop a policy with a test case and complete remediation information [prior to QA]. This becomes expensive when you have hundreds of policies being developed for different problem sets. Policy coverage and how policies were generated was a common competitive topic, and a basic function of the product, so nearly every vendor invests heavily in this area. Moreover, most vendors market their security ‘research teams’ that find exploits, develop test code, and provide remediation steps. This domain expertise is one of the areas where vendors provide value in the products they deliver, but when it comes down to it, vendor insight is only a fraction of the overall sources of information. With monitoring and auditing, policy development was even harder: the business use cases were more diverse and the threats not completely understood. Sure, we could return the ubiquitous who-what-when-where-to-from kind of stuff, but how did that translate to business need?

If you are evaluating products or interested in augmenting your policy set, where do you start? With vulnerability research, there are several resources that I like to use:

  • Vendor best practices – Almost every platform vendor, from Apache to SAP, offers security best practices documents. These guidelines on how to configure and operate their product form the basis for many programs. They cover operational issues that reduce risk, discuss common exploits, and reference specific security patches. These documents are updated during each major release cycle, so make sure you periodically review them for new additions and for how the vendor recommends new features be configured and deployed. What’s more, while the vendor may not be forthcoming with exploit details, they are the best source of information for remediation and patch data.
  • CERT/Mitre – Both have fairly comprehensive lists of vulnerabilities in specific products, and both provide a neutral description of what each threat is. Neither has great detail on the actual exploit, nor will they have complete remediation information. It is up to the development team to figure out the details.
  • Customer feedback/peer review – If you are a vendor of security products, customers have applied the policies and know what works for them. They may have modified the code you use to remediate a situation, and that may be a better solution than what your team implemented, and/or it may be too specific to their environment for use in a generalized product. If you are running your own IT department, what have your peers done? Next time you are at a conference or user group, ask. Either way, vendors learn from customers what works to address issues, and you can too.
  • 3rd party relationships (consultants, academia, auditors) – When it comes to developing policies related to GLBA or SOX, which are outside the expertise of most security vendors, it is particularly valuable to leverage third party consultative relationships to augment policies with their deeper understanding of how best to approach the problem. In the past I have used relationships with major consulting firms to help analyze the policies and reports we provided. This was helpful, as they really did tell us when some of our policies were flat out bull$(#!, what would work, and how things could work better. If you have these relationships already in place, carve out a few hours so they can help review and analyze policies.
  • Research & Experience – Most security vendors have dedicated research teams, and this is something you should look for: they do this every day and they get really good at it. If your vendor has a recognized expert in the field on staff, that’s great too. That person may be quite helpful to the overall research and discovery process for threats and problems with the platforms and products you are protecting, although in reality they are more likely on the road speaking to customers, press, and analysts than actually doing the research. It is good that your vendor has a dedicated team, but their experience is just one part of the big picture.
  • User groups – With many of the platforms, especially Oracle, I learned a lot from regional DBAs who supported databases within specific companies or specific verticals. In many cases they did not have or use a third party product; rather they had a bunch of scripts that they had built up over many years, modified, and shared with others. They shared tips on not only what


Inherent Role Conflicts In National Cybersecurity

I spent a lot of time debating with myself whether I should wade into this topic. Early in my analyst career I loved to talk about national cybersecurity issues, but I eventually realized that, as an outsider, all I was doing was expending ink and oxygen without actually contributing anything. That’s why you’ve probably noticed we spend more time on this blog talking about pragmatic security issues and dispensing practical advice than waxing poetic about who should get the Presidential CISO job or dispensing advice to President Obama (who, we hate to admit, probably doesn’t read the blog). Unless or until I, or someone I know, gets “the job”, I harbor no illusions that what I write and say reaches the right ears. But as a student of history, I’m fascinated by the transition we, of all nations, face due to our continuing reliance on the Internet to run everything from our social lives, to the global economy, to national defense. Rather than laying out my 5 Point Plan for Solving Global Cyber-Hunger and Protecting Our Children, I’m going to talk about some more generic issues that I personally find compelling.

One of the more interesting problems, and one that all nations face, is the inherent conflict between the traditional roles of those who safeguard society. Most nations rely on two institutions to protect them- the military and the police. The military serves two roles: to protect the institution of the nation state from force, and to project power (protecting national assets, including lines of commerce, that extend outside national boundaries). Militaries are typically externally focused, even in fascist states, and turn inward only when domestic institutions don’t have the capacity to manage a situation- though the domestic role varies even among the most liberal democratic societies. The police also hold dual roles: to enforce the law, and to ensure public safety. Of course the law and public safety overlap to different degrees in different political systems. Seems simple enough, and fundamentally these institutions have existed since nearly the dawn of society. Even when the institutions appear to be one and the same, that’s typically in name only, since the skill sets involved don’t completely overlap, especially in the past few hundred years. Cops deal with crime, soldiers with war.

The Internet is blasting through those barriers, and we have yet to figure out how to structure the roles and responsibilities to deal with Internet-based threats. The Internet doesn’t respect physical boundaries, and its anonymity disguises actors. The exact same attack by the exact same threat actor could be either a crime or an act of war, depending on the perspective. One of the core problems we face in cybersecurity today is structuring the roles and responsibilities of the institutions that defend and protect us. With no easy lines, we see ongoing turf battles and uncoordinated action. The offensive role is still relatively well defined- it’s a responsibility of the military, it should be coordinated with physical power projection capacity, and the key issue is which specific department has responsibility. There’s a clear turf battle over offensive cyber operations here in the U.S., but that’s normal (explaining why every service branch has its own air force, for example). I do hope we get our *%$& together at some point, but that’s mere politics. The defensive role is a mess.

Under normal circumstances the military protects us from external threats, and law enforcement from internal threats (yes, I know there are grey areas, but roll with me here). Many, if not most, cyberattacks are criminal acts, but the same criminal act may also be a national security threat. We can usually classify a threat by action, intent, and actor. Is the intent financial gain? Odds are it’s a crime. Is the actor a nation state? Odds are it’s a national security issue. Does the action involve tanks or planes crossing a border? It’s usually war. (Terrorism is one of the grey areas- some say it’s war, others crime, and others a bit of both depending on who is involved.) But a cyberattack? Even if it’s from China it might not be China acting. Even if it’s theft of intellectual property, it might not be a mere crime. And just who the heck is responsible for protecting us? Throughout history the military has responded through use of force, but you don’t need me to point out how sticky a situation that is when we’re talking cyberspace. Law enforcement’s job is to catch the bad guys, but they aren’t really designed to protect national borders, never mind non-existent national borders. Intelligence services? It isn’t like they are any better aligned. And through all this I’m again shirking the issue of which agencies/branches/departments should have which responsibilities.

Thus we need to start thinking a little differently, and we may find that we need to develop new roles and responsibilities as we drive deeper into the information age. Cybersecurity isn’t only a national security problem or a law enforcement problem- it’s both. We need some means to protect ourselves from external attacks of different degrees at the national level, since just telling every business to follow best practices isn’t exactly working out. We need a means of projecting power that’s short of war, since playing only defense is a sure way to lose. And right now, most countries can’t figure out who should be in charge or what they should be doing. I highly suspect we’ll see new roles develop, especially in the area of counterintelligence-style activity to disrupt offensive operations, ranging from taking out botnets, to disrupting cybercrime economies, to counterespionage issues relating to private business.

As I said in the beginning, this is a fascinating problem, and one I wish I were in a position to contribute towards, but Phoenix is a bit outside the Beltway, and no one will give me the President’s new BlackBerry address. Even after I promised to stop sending all those LOLCatz forwards.


The Network Security Podcast, Episode 136

I managed to constrain my rants this week, staying focused on the issues as Martin and I covered our usual range of material. I think we were in top form in the first part of the show, where we focused on the economics of breaches and discussed loss numbers vs. breach notification statistics. Here are the show notes, and as usual the episode is here: Network Security Podcast, Episode 136, January 27, 2009 (Time: 27:43).

Show Notes:

  • Maine surveys banks to determine some of the losses associated with major data breaches. It isn’t a small number.
  • Monster.com loses some data. They don’t tell us whose data they lost, or how or why, but they definitely lost some stuff.
  • The White House homeland security agenda. There’s a cyber section. Which is cool, because someone can at least spell cyber.
  • Phishers change URLs. We’re not sure why this is news, but we use it as an excuse to talk about other, more important things.
  • A man buys a used MP3 player in New Zealand, with personal info on US soldiers in Iraq. WTF? Maybe it was a Zune?

Tonight’s Music: Mexicolas with Big in Japan


The Business Justification For Data Security: Data Valuation

Man, nothing feels better than finishing off a few major projects. Yesterday we finalized the first draft of the Business Justification paper this series is based on, and I also squeezed out my presentation for IT Security World (in March), where I’m talking about major enterprise software security. Ah, the thrills and spills of SAP R/3 vs. NetWeaver security! In our first post we provided an overview of the model. Today we’re going to dig into the first step- data valuation. For the record, we’re skipping huge chunks of the paper in these posts to focus on the meat of the model- and our invitation for reviewers is still open (official release date should be within 2 weeks).

We know our data has value, but we can’t assign a definitive or fixed monetary value to it. We want to use the value to justify spending on security, but trying to tie it to purely quantitative models for investment justification is impossible. We can use educated guesses, but they’re still guesses, and if we pretend they are solid metrics we’re likely to make bad risk decisions. Rather than focusing on difficult (or impossible) to measure quantitative value, let’s start our business justification framework with qualitative assessments. Keep in mind that just because we aren’t quantifying the value of the data doesn’t mean we won’t use other quantifiable metrics later in the model. Just because you cannot completely quantify the value of data doesn’t mean you should throw all metrics out the window.

To keep things practical, let’s select a data type and assign an arbitrary value to it. To keep things simple you might use a range of numbers from 1 to 3, or “Low”, “Medium”, and “High”, to represent the value of the data. For our system we will use a range of 1-5 to give us more granularity, with 1 being a low value and 5 being a high value. Two additional metrics help account for business context in our valuation: frequency of use and audience. The more often the data is used, the higher its value (generally). The audience may be a handful of people at the company, or may include partners and customers as well as internal staff. More use by more people often indicates higher value, as well as higher exposure to risk. These factors are important not only for understanding the value of information, but also the threats and risks associated with it – and thus our justification for expenditures. These two items will not be used as primary indicators of value, but will modify an “intrinsic” value we will discuss more thoroughly below. As before, we will assign each metric a number from 1 to 5, and we suggest you at least loosely define the scope of those ranges. Finally, for the audience metric we will examine three groups that use the data: employees, customers, and partners; and derive a 1-5 score. The value of some data changes based on time or context, and for those cases we suggest you define and rate it differently for the different contexts. For example, product information before product release is more sensitive than the same information after release.

As an example, consider student records at a university. The value of these records is considered high, so we would assign a value of five. While the value of this data is “High”, as it affects students financially, the frequency of use may be moderate because these records are accessed and updated mostly during a predictable window – at the beginning and end of each semester.
The number of audiences for this data is two, as the records are used by various university staff (financial services and the registrar’s office) and by the student (customer). Our tabular representation looks like this:

Data              Value   Frequency   Audience
Student Record    5       2           2

In our next post (later today) we’ll give you more examples of how this works.
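For readers who think in code, here is a minimal Python sketch of how a data type might be represented with these three scores. The composite() weighting is purely illustrative- it is our assumption, not part of the model, which only says that frequency and audience modify the intrinsic value rather than act as primary indicators.

```python
from dataclasses import dataclass

@dataclass
class DataValuation:
    """One data type scored on the 1-5 scales described above."""
    name: str
    value: int      # intrinsic business value, 1 (low) to 5 (high)
    frequency: int  # how often the data is used, 1-5
    audience: int   # breadth of audience (employees, customers, partners), 1-5

    def composite(self) -> float:
        # Illustrative weighting only: frequency and audience act as modifiers
        # on the intrinsic value, not as primary indicators of value.
        modifier = 1 + 0.1 * (self.frequency - 3) + 0.1 * (self.audience - 3)
        return round(self.value * modifier, 2)

student_record = DataValuation("Student Record", value=5, frequency=2, audience=2)
print(student_record.composite())  # 4.0 with this illustrative weighting
```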


The Business Justification for Data Security: Information Valuation Examples

In our last post, we mentioned that we’d be giving a few examples for data valuation. This is the part of the post where I try and say something pithy, but I’m totally distracted by the White House press briefing on MSNBC, so I’ll cut to the chase: as a basic exercise, let’s take a look at several common data types, discuss how they are used, and qualify their value to the organization. Several of these clearly have a high value to the organization, but others vary. Frequency of use and audience are different for every company. Before you start deriving values, you need to sit down with executives and business unit managers to find out what information you rely on in the first place, then use these valuation scenarios to help rank the information, and then feed the rest of the justification model.

Credit card numbers
Holding credit card data is essential for many organizations – it is a common requirement for dispute resolution, and because most merchants sell products on the Internet, card data is subject to PCI DSS requirements. In addition to serving this primary function, customer support and marketing derive value from the data. This information is used by employees and customers, but not shared with partners.

Data                 Value   Frequency   Audience
Credit Card Number   4       2           3

Healthcare information (financial)
Personally Identifiable Information is a common target for attackers, and a key element for fraud since it often contains financial or identifying information. For organizations such as hospitals, this information is necessary and used widely for treatment. While the access frequency may be moderate (or low, when a patient isn’t under active treatment), it is used by patients, hospital staff, and third parties such as clinicians and insurance personnel.

Data             Value   Frequency   Audience
Healthcare PII   5       3           4

Intellectual property
Intellectual property can take many forms, from patents to source code, so the values associated with this type of data vary from company to company. In the case of a publicly traded company, this may be project-related or investment information that could be used for insider trading. The value would be moderate for the employees who use this information, but high near the end of the quarter and other disclosure periods, when it’s also exposed to a wider audience.

Data                               Value   Frequency   Audience
Financial IP (normal)              3       2           1
Financial IP (disclosure period)   5       2           2

Trade secrets
Trade secrets are another data type to consider. While the audience may be limited to a select few individuals within the company, with low frequency of use, the business value may be extraordinarily high.

Data            Value   Frequency   Audience
Trade Secrets   5       1           1

Sales data
The value of sales data for completed transactions varies widely by company. Pricing, customer lists, and contact information are used widely throughout and between companies. In the hands of a competitor, this information could pose a serious threat to sales and revenue.

Data         Value   Frequency   Audience
Sales Data   2       5           4

Customer metrics
The value of customer metrics varies radically from company to company. Credit card issuers, for example, may rate this data as having moderate value, as it is used for fraud detection as well as sold to merchants and marketers. The information is used by employees and third party purchasers, and provided to customers to review spending.

Data               Value   Frequency   Audience
Customer Metrics   4       2           3

You can create more categories, and even bracket dollar value ranges, if you find them helpful in assigning relative value to each data type in your organization. But we want to emphasize that these are qualitative, not quantitative, assessments, and they are relative within your organization rather than absolute. The point is to show that your business uses many forms of information. Each type is used for different business functions and has its own value to the organization, even if it is not expressed in dollars.
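If it helps to see the relative ranking in one place, here is a small Python sketch that sorts the example valuations above by intrinsic value, using frequency and audience only as tiebreakers. The ordering logic is our own assumption for illustration; the model treats these as qualitative, relative scores rather than inputs to a formula.

```python
# Example valuations from the tables above: (name, value, frequency, audience)
examples = [
    ("Credit Card Number",               4, 2, 3),
    ("Healthcare PII",                   5, 3, 4),
    ("Financial IP (normal)",            3, 2, 1),
    ("Financial IP (disclosure period)", 5, 2, 2),
    ("Trade Secrets",                    5, 1, 1),
    ("Sales Data",                       2, 5, 4),
    ("Customer Metrics",                 4, 2, 3),
]

# Rank by intrinsic value first, using frequency and audience only as tiebreakers.
for name, value, freq, aud in sorted(examples, key=lambda r: (r[1], r[2], r[3]), reverse=True):
    print(f"{name:35} value={value} frequency={freq} audience={aud}")
```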


Credit Card (Paper) Security Fail

I’m consistently impressed with the stupidity of certain financial institutions. Take credit card companies and the issuing banks. We’re in the middle of a financial meltdown driven by failures in the credit system and easy credit, yet you still can’t check out at Target (or nearly anyplace else) without the annoying offer of a 10% discount if you just apply for a card on the spot. I also hate the “checks” they are always mailing me to transfer balances or otherwise use credit for something I might use cash for. Any fraudster getting his or her hands on them can have a field day. That’s why I’m highly amused by the latest offer to my wife. The envelope arrived with her name and address on the outside, and someone else’s pre-printed checks on the inside. I guess the sorting machine mixed things up, and hopefully her checks went to someone trustworthy.


How Much Security Will You Tolerate?

I have found a unique way to keep anyone from using my iMac. While family & friends love the display, they do not use my machine. Many are awed that they can run Windows in parallel with the Mac OS, and the sleek appearance and minimal footprint have created many believers- but after a few seconds they step away from the keyboard. Why? Because they cannot browse the Internet. My copy of Firefox has NoScript, Flashblock, cookie acknowledgement, and a couple of other security-related add-ons. But having to click the Flash logo, or to acknowledge a cookie, is enough to make them leave the room. “I was going to read email, but I think I will wait until I fly home”.

I have been doing this so long I never even notice. I never stopped to think that every web page requires a couple of extra mouse clicks to use, but I always accepted that it was worth it. The advantages to me in terms of security are clear. And I always get that warm glow when I find myself on a site for the first time and see 25 Flash icons littering the screen and a dozen cookie requests from places I have never heard of. But I recognize that I am in the minority. The added work so thoroughly ruins the experience that it completely turns them off the Internet. My wife even refused to use my machine, and while I think the authors of NoScript deserve special election into the Web Security Hall of Fame (which, given the lack of funding, currently resides in Rich’s server closet), the common user thinks of NoScript as a curse. And for the first time I think I fully understand their perspective, which is the motivation for this post. I too have discovered my tolerance limit.

I was reading rsnake’s post on the RequestPolicy Firefox extension. This looks like a really great idea, but acts like a major work inhibitor. For those not fully aware, I will simply say that most web sites make requests for content from more than just one site. In a nutshell, you implicitly trust not just the web site you are currently visiting, but whoever provides content on the page. The plugin’s approach is a good one, but it pushed me over the limit of what I am willing to accept. For every page I display I am now examining cookies, Flash, and site requests. I know that web security is one of the major issues we face, but the per-page analysis now takes more time than I spend on many pages looking for the content itself. Given that I do a large percentage of my research on the web, visiting 50-100 sites a day, this is over the top for me. If you are doing any form of risky browsing, I recommend you use it selectively. Hopefully we will see a streamlined version, as it is a really good idea. I guess the question in my mind is: how much security will we tolerate? Even security professionals are subject to the convenience factor.


Friday Summary- January 23, 2009

Warning- today’s introduction includes my political views.

History

Whatever your political persuasion, there’s no denying the magnitude of this week. While we are far from eliminating racism and bias in this country, or the world at large, we passed an incredibly significant milestone in civil rights. My (pregnant) wife and I were sitting on the couch, watching a replay of President Obama’s speech, when she turned to me and said, “you know, our child will never know a world where we didn’t have a black president”.

Change

One thing I think we here in the US forget is just how much we change with the transition to each new administration, especially when control changes hands between parties. We see it as the usual continuity of progress, but it’s very different to the outside world. In my travels to other countries I’m amazed at their amazement at just how quickly we, as a nation, flip and flop. In the matter of a day our approach to foreign policy completely changes- never mind domestic affairs. We have an ability to completely remake ourselves to the world. It’s a hell of a strategic advantage, when you really think about it. In a matter of 3 days we’re seeing some of the most material change since the days of Nixon. Our government is reopening, restoring ethical boundaries, and reintroducing itself to the world.

Faith

When Bush was elected in 2000 I was fairly depressed. He seemed so lacking in capacity I couldn’t understand his victory. Then, after 9/11, I felt like I was living in a different country. An angry country, one that no longer respected diversity of belief or tolerance. A country where abuse of power and disdain for facts and transparency became the rule of our executive branch, if not (immediately) the rule of law. I was in Moscow during the election and was elated when Obama won, despite the almost surreal experience of being in a rival nation. When I watched the inauguration I felt, for the first time in many years, that I again lived in the country I thought I grew up in- my faith restored. Talking with my friends of all political persuasions, it’s clear that this is also a transition of values. Transparency is back; something sorely lacking from both the public and private sector for far longer than Bush was in office. Accountability and sacrifice are creeping their heads over the wall. And lurking along the edges of the dark clouds above us are self-sacrifice and unity of purpose. I’m excited. I’m excited more about what this means to our daily and professional lives than just our governance. Will my hopes be dashed by reality? Probably, but I’d rather plunge in head first than cower at home, shopping off Amazon.

Oh- and there was like this really huge security breach this week, some worm is running rampant and taking over all our computers, and some idiots keep downloading pirated software with a Mac trojan. Here is the week’s security summary:

Webcasts, Podcasts, Outside Writing, and Conferences:

  • Martin and I talk a bit about all sorts of things, including Obama’s tech agenda, on The Network Security Podcast. I seem to run off on 3 separate rants.
  • I wrote up the Heartland data breach for Dark Reading.
  • I did a few interviews on the breach, including the MIT Technology Review, SearchSecurity, and SC Magazine.

Favorite Securosis Posts:

  • Rich: My Heartland post, because it got Slashdotted.
  • Adrian: Perhaps it is the contrarian in me, but my favorite post is The Business Justification for Data Security. There is a lot of information here.

Favorite Outside Posts:

  • Adrian: Hoff’s ruminations on Cloud security of Core services. The series of posts has been interesting. I follow many of these blog posts made on dozens of different web sites, but only for the occasionally humorous debate- not because I care about the nuts and bolts of how Cloud computing will work, how we define it, or where it is going. The CIO in me loves the thought of minimal risk for trying & adopting software and services. I am interested in the flexibility of adoption. I do not need to perform rigorous evaluations of hardware, software, and environmental considerations- just determine how it meets my business needs, how easy it is to use, and whether the pricing model works for me. After a while, if I don’t like it, I switch. Stickiness is no longer an investment issue, but a contract issue. And I am only afraid of these services not being in my core if I run out of choices in the vendor community. I know there are a lot more things I do need to consider, and I cannot assume 100% divestiture of responsibilities for compliance and whatnot, but wow, the perceived risk of platform selection drops so much that I am likely to jump forward without a full understanding of the other risks I may inherit because of these perceived benefits. Not that it’s ideal, but it is likely.
  • Rich: Sharon on Will the Real PII Stand Up? He raises a great issue: there are a bunch of definitions of PII in different contexts, and an increasingly complex regulatory environment with multiple standards.

Top News and Posts:

  • Barack Obama’s inauguration stopped all activity at Securosis as Adrian came over to watch for a couple hours. His speech is worth a reread even if you watched it live.
  • A lot of trusted websites are serving malware.
  • The NSA spied on everyone. Except you, of course- you’re too boring.
  • Conficker worm bad. I thought you Windows users figured out that patching thing? Actually, I highly suspect the infection numbers are inflated.

Blog Comment of the Week:

We didn’t post much, but the comments were great this week. Merchantgrl on the Heartland Breach post: They were breached a while ago and they just happened to pick that day to finally announce it? Several people have brought up the Trustwave audit of


The Business Justification For Data Security

You’ve probably noticed that we’ve been a little quieter than usual here on the blog. After blasting out our series on Building a Web Application Security Program, we haven’t been putting up much original content. That’s because we’ve been working on one of our tougher projects over the past 2 weeks. Adrian and I have both been involved with data (information-centric) security since long before we met. I was the first analyst to cover it over at Gartner, and Adrian spent many years as VP of Development and CTO in data security startups. A while back we started talking about models for justifying data security investments. Many of our clients struggle with the business case for data security, even though they know its intrinsic value. All too often they are asked to use ROI or other inappropriate models. A few months ago one of our vendor clients asked if we were planning any research in this area. We initially thought they wanted yet another ROI model, but once we explained our position they asked to sign up and license the content. Thus, in the very near future, we will be releasing a report (also distributed by SANS) on The Business Justification for Data Security. (For the record, I like the term information-centric better, but we have to acknowledge the reality that “data security” is more commonly used.)

Normally we prefer to develop our content live on the blog, as with the application security series, but this was complex enough that we felt we needed to form a first draft of the complete model, then release it for public review. Starting today, we’re going to release the core content of the report for public review as a series of posts. Rather than making you read the exhaustive report, we’re reformatting and condensing the content (the report itself will be available for free, as always, in the near future). Even after we release the PDF we’re open to input and intend to continuously revise the content over time.

The Business Justification Model

Today I’m just going to outline the core concepts and structure of the model. Our principal position is that you can’t fully quantify the value of information; it changes too often, and doesn’t always correlate to a measurable monetary amount. Sure, it’s theoretically possible, but practically speaking we assume the first person to fully and accurately quantify the value of information will win the Nobel Prize. Our model is built on the foundation that you quantify what you can, qualify the rest, and use a structured approach to combine those results into an overall business justification. We purposely designed this as a business justification model, not a risk/loss model. Yes, we talk about risk, valuation, and loss, but only in the context of justifying security investments. That’s very different from a full risk assessment/management model. Our model follows four steps:

  • Data Valuation: In this step you quantify and qualify the value of the data, accounting for changing business context (when you can). It’s also where you rank the importance of data, so you know whether you are investing in protecting the right things in the right order.
  • Risk Estimation: We provide a model to combine qualitative and quantitative risk estimates. Again, since this is a business justification model, we show you how to do this in a pragmatic way designed to meet that goal, rather than bogging you down in near-impossible, endless assessment cycles. We provide a starting list of data-security-specific risk categories to focus on.
  • Potential Loss Assessment: While it may seem counterintuitive, we break potential losses out from the risk estimate, since a single kind of loss may map to multiple risk categories. Again, you’ll see we combine the quantitative and the qualitative. As with the risk categories, we also provide you with a starting list.
  • Positive Benefits Evaluation: Many data security investments also provide positive benefits beyond just reducing risk/losses. Reduced TCO and lower audit costs are just two examples.

After walking through these steps we show how to match the potential security investment to these assessments and evaluate the potential benefits, which is the core of the business justification. A summarized result might look like:

– Investing in DLP content discovery (data at rest scanning) will reduce our PCI related audit costs by 15% by providing detailed, current reports of the location of all PCI data. This translates to $xx per annual audit.
– Last year we lost 43 laptops, 27 of which contained sensitive information. Laptop full drive encryption for all mobile workers effectively eliminates this risk. Since Y tool also integrates with our systems management console and tells us exactly which systems are encrypted, this reduces our risk of an unencrypted laptop slipping through the gaps by 90%.
– Our SOX auditor requires us to implement full monitoring of database administrators of financial applications within 2 fiscal quarters. We estimate this will cost us $X using native auditing, but the administrators will be able to modify the logs, and we will need Y man-hours per audit cycle to analyze logs and create the reports. Database Activity Monitoring costs $Y, which is more than native auditing, but by correlating the logs and providing the compliance reports it reduces the risk of a DBA modifying a log by Z%, and reduces our audit costs by 10%, which translates to a net potential gain of $ZZ.
– Installation of DLP reduces the chance of protected data being placed on a USB drive by 60%, the chance of it being emailed outside the organization by 80%, and the chance an employee will upload it to a personal webmail account by 70%.

We’ll be detailing more of the sections in the coming days, and releasing the full report early next month. But please let us know what you think of the overall structure. Also, if you want to take a look at a draft (and we know you), drop us a line… We’re really excited to get this out
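To make the four-step structure a bit more tangible, here is a minimal Python sketch of what a single justification record might look like. The field names, example entries, and summary() output are our own hypothetical placeholders, not terminology from the forthcoming report.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Justification:
    """Skeleton of the four steps: valuation, risks, potential losses, benefits.
    All names here are illustrative placeholders."""
    data_type: str
    data_value: int                                              # Step 1: qualitative value, 1-5
    risks: List[str] = field(default_factory=list)               # Step 2: risk categories
    potential_losses: List[str] = field(default_factory=list)    # Step 3: loss types
    positive_benefits: List[str] = field(default_factory=list)   # Step 4: e.g. lower audit cost

    def summary(self) -> str:
        return (f"{self.data_type} (value {self.data_value}/5): "
                f"{len(self.risks)} risks, {len(self.potential_losses)} loss types, "
                f"{len(self.positive_benefits)} positive benefits")

pci = Justification(
    data_type="PCI cardholder data",
    data_value=4,
    risks=["exposure via lost laptop", "unauthorized DBA access"],
    potential_losses=["breach notification costs", "card brand fines"],
    positive_benefits=["reduced PCI audit scope and cost"],
)
print(pci.summary())
```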


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.