Securosis

Research

Database Security for DBAs

I think I’ve discovered the perfect weight loss technique: a stomach virus. In 48 hours I managed to lose 2 lbs, which isn’t too shabby. Of course I’m already at something like 10% body fat, so I’m not sure how needed the loss was, but I figure if I just write a book about this and hawk it in some infomercial I can probably retire. My wife, who suffered through 3 months of so-called “morning” sickness, wasn’t all that sympathetic for some strange reason. On that note, it’s time to shift gears and talk about database security. Or, to be more accurate, to talk about talking about database security. Tomorrow (Thursday, Feb 5th) I will be giving a webcast on Database Security for Database Professionals. This is the companion piece to the webinar I recently presented on Database Security for Security Professionals. This time I flip the presentation around and focus on what the DBA needs to know, presenting from their point of view. It’s sponsored by Oracle, presented by NetworkWorld, and you can sign up here. I’ll be posting the slides after the webinar, but not for a couple of months, as we reorganize the site a bit to better handle static content. Feel free to email me if you want a PDF copy.


Friday Summary: February 6, 2009

Here it is Friday again, and it feels like just a few minutes ago that I was writing the last Friday summary. This week has been incredibly busy for both of us. Rich has been out for the count most of this week with a stomach virus, wandering his own house like a deranged zombie. This was not really a hack; they were just warning Rich’s neighborhood. As the county cordoned off his house with yellow tape and flagged him as a temporary bio-hazard, I thought it best to forgo this week’s face-to-face Friday staff meeting and get back on track with our blogging. Between the business justification white paper that we launched this week and being on the road for client meetings, we’re way behind. A few items of interest: It appears that data security is really starting to enter the consciousness of the common consumer. Or at least it is being marketed to them. There were even more advertisements in the San Jose airport this week than ever: the ever-present McAfee & Cisco ads were joined by Symantec and Compuware. The supermarket has identity theft protection pamphlets from not one but two vendors. The cherry on top of this security sundae was the picture of John Walsh in the in-flight magazine, hawking “CA Internet Security Suite Plus 2009”. I was shocked. Not because CA has a consumer security product, or because it is being marketed alongside Harry Potter commemorative wands, holiday meat platters, and low-quality electronics. No, it was John Walsh’s smiling face that surprised me. Because John Walsh “Trusts CA Security Products to keep your family safe”. WTF? Part of me is highly offended. The other part of me thinks this is absolutely brilliant marketing. We have moved way beyond a features and technology discussion, and into JV-celebrities telling us what security products we should buy. If it were only that easy.
Myopia Alert: I am sure there are others in the security community who have gone off on this rant as well, but as I did not find a reference anywhere else, I thought this topic was worth posting. A word of caution for you PR types out there who are raising customer and brand awareness through security webinars and online security fora: you might want to have some empathy for your audience. If your event requires a form to be filled out, you are going to lose a big chunk of your audience, because people who care about security care about their privacy as well. The audience member will bail, or your outbound call center will be dialing 50 guys named Jack Mayhoff. Further, if that entry form requires JavaScript and a half dozen cookies, you are going to lose a bigger percentage of your audience, because JavaScript is a feature and a security threat rolled into one. Finally, if the third-party vendor you use to host the event does not support Firefox or Safari, you will lose the majority of your audience. I am not trying to be negative, but want to point out that while Firefox, Safari, and Opera may only constitute 25% of the browser market, they are used by 90% of the people who care about security. Final item I wanted to talk about: Our resident wordsmith and all-around good guy Chris Pepper forwarded Rich and me a Slashdot link about how free Monty Python material on YouTube has caused their DVD sales to skyrocket. Both Rich and I have raised similar points here in the past, and we even referred to this phenomenon in the Business Justification for Security Spending paper, about why it can be hard to understand damages. While organizations like the RIAA feel this is counter-intuitive, it makes perfect sense to me and anyone else who has ever tried guerrilla marketing, or seen the effects of viral marketing. Does anyone know if the free South Park episodes did the same for South Park DVD sales? I would be interested.
Oh, and Chris also forwarded Le Wrath di Kahn, which was both seriously funny and really works as opera (the art form; I didn’t test it in the browser). On to the week in review: Webcasts, Podcasts, Outside Writing, and Conferences: Rich did a webcast with Roxana Bradescu of Oracle on Information Security for Database Professionals. Here is the sign-up link, and I will post a replay link later when I get one from Oracle. Favorite Securosis Posts: Rich: Launch of the Business Justification for Security Spending white paper. Whew! Adrian: The Business Justification for Data Security post on Risk Estimation. I knew this was going to cause some of the risk guys to go apoplectic, but we were not building a full-blown risk management model, and frankly, risk calculations made every model so complex no one could use it as a tool. Favorite Outside Posts: Adrian: Informative post by Robert Graham on shellcode in software development. Write once, run anywhere malware? Anyone? Rich: XKCD was a riot. [What my friend John Kelsey used to call “Lead Pipe Cryptanalysis”] Top News and Posts: Nine million, in cold hard cash, stolen from ATMs around the world. Wow. I will be blogging more on this in the future: Symantec and Ask.com joint effort. Marketing hype or real consumer value? Very informative piece on how assumptions about what should be secured and what we can ignore are often the places where we fail. Addicted to insecurity. At least 600k US jobs lost in January. Google thought everyone was serving malware. This is an atrocious practice: the EULA tells you you can’t use your firewall, and they can take all your bandwidth. RBS breach was massive, and fast. Blog Comment of the Week: From Chris Hayes on the Risk Estimation for Business Justification for Data Security post: Up to this point in the series, this “business justification” security investment model appears to be nothing more then a glorified cost benefit analysis wrapped up in risk/business


The Business Justification for Data Security- Version 1.0

We’ve been teasing you with previews, but rather than handing out more bits and pieces, we are excited to release the complete version of the Business Justification for Data Security. This is version 1.0 of the report, and we expect it to continue to evolve as we get more public feedback. Based on some of that initial feedback, we’d like to emphasize something before you dig in. Keep in mind that this is a business justification tool, designed to help you align potential data security investments with business needs, and to document the justification to make a case with those holding the purse strings. It’s not meant to be a complete risk assessment model, although it does share many traits with risk management tools. We’ve also designed this to be both pragmatic and flexible: you shouldn’t need to spend months with consultants to build your business justification. For some projects, you might complete it in an hour. For others, maybe a few days or weeks as you wrangle business unit heads together to force them to help value different types of information. For those of you who don’t want to read a 38-page paper, we’re going to continue to post the guts of the model as blog posts, and we also plan on blogging additional content, such as more examples and use cases. We’d like to especially thank our exclusive sponsor, McAfee, who also set up a landing page here with some of their own additional whitepapers and content. As usual, we developed the content completely independently, and it’s only thanks to our sponsors that we can release it for free (and still feed our families). This paper is also released in cooperation with the SANS Institute, will be available in the SANS Reading Room, and we will be delivering a SANS webcast on the topic on March 17th. This was one of our toughest projects, and we’re excited to finally get it out there.
Please post your feedback in the comments, and we will be crediting reviewers who advance the model when we release the next version. And once again, thanks to McAfee, SANS, and (as usual) Chris Pepper, our fearless editor.


The Business Justification for Data Security: Risk Estimation

This is the third part of our Business Justification for Data Security series (Part 1, Part 2), and if you have been following the series this far, you know that Rich and I have complained about how difficult this paper was to write. Our biggest problem was fitting risk into the model. In fact, we experimented with and ultimately rejected a couple of models, because the reduction in risk from any given security investment was non-linear. And there were many threats and many different responses, few of which were quantifiable, making the whole effort ‘guesstimate’ soup. In the end, risk became our ‘witching rod’: a guide to how we balance value vs. loss, but just one of the tools we use to examine investment decisions. Measuring and understanding the risks to information: If data security were a profit center, we could shift our business justification discussion from the value of information right into assessing its potential for profit. But since that isn’t the case, we are forced to examine potential reductions in value as a guide to whether action is warranted. The approach we need to take is to understand the risks that directly threaten the value of data, and the security safeguards that counter those risks. There’s no question our data is at risk; from malicious attackers and nefarious insiders to random accidents and user errors, we read about breaches and loss nearly every day. But while we have an intuitive sense that data security is a major issue, we have trouble getting a handle on the real risks to data in a quantitative sense. The number of possible threats and ways to steal information is staggering, but when it comes to quantifying risks, we lack much of the information needed for an accurate understanding of how these risks impact us.
Combining quantitative and qualitative risk estimates: We’ll take a different approach to looking at risk; we will focus on quantifying the things we can, qualifying the things we can’t, and combining them in a consistent framework. While we can measure some risks, such as the odds of losing a laptop, it’s nearly impossible to measure others, such as a database breach via a web application due to a new vulnerability. If we limit ourselves only to what we can precisely measure, we won’t be able to account for many real risks to our information. Including quantitative assessments where possible, since they are a powerful tool to understand risk and influence decisions, helps validate the overall model. For our business justification model, we deliberately simplify the risk assessment process to give us just what we need to understand the need for data security investments. We start by listing the pertinent risk categories, then the likelihood or annual rate of occurrence for each risk, followed by severity ratings broken out for confidentiality, integrity, and availability. For risk events we can predict with reasonable accuracy, such as lost laptops with sensitive information, we can use real numbers. In the example below, we know the Annualized Rate of Occurrence (ARO), so we plug that value in. For less predictable risks, we just rate them from “low” to “high”. We then mark off our currently estimated (or measured) levels in each category. For qualitative measures we will use a 1-5 scale, but this is arbitrary, and you should use whatever scale provides a level of granularity that assists understanding. Risk Estimation: Credit Card Data (Sample):

                                        Impact
Risk                    Likelihood/ARO   C   I   A   Total
Lost Laptop                         43   4   1   3      51
Database Breach (Ext)                2   5   3   2      12

This is the simplified risk scorecard for the business justification model.
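The scorecard arithmetic is simple enough to sketch in a few lines of Python, using the sample numbers above. This is a minimal illustration only; the class and field names are our own invention, not part of the model:

```python
# Minimal sketch of the simplified risk scorecard described above.
# Ratings come from the sample table; names are illustrative only.
from dataclasses import dataclass

@dataclass
class RiskEntry:
    name: str
    likelihood: int  # ARO where measurable, else a 1-5 qualitative rating
    c: int           # confidentiality impact (1-5)
    i: int           # integrity impact (1-5)
    a: int           # availability impact (1-5)

    @property
    def total(self) -> int:
        # Totals aren't for ranking categories against each other; they
        # provide a baseline for showing estimated reductions from a
        # proposed security investment.
        return self.likelihood + self.c + self.i + self.a

scorecard = [
    RiskEntry("Lost Laptop", likelihood=43, c=4, i=1, a=3),
    RiskEntry("Database Breach (Ext)", likelihood=2, c=5, i=3, a=2),
]

for entry in scorecard:
    print(f"{entry.name}: {entry.total}")
```

Rerunning the same calculation after estimating the post-investment ratings gives the "potential reduction" figure the justification needs.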
The totals aren’t meant to compare one risk category to another, but to derive estimated totals we will use in our business justification to show potential reductions from the evaluated investment. While different organizations face different risk categories, we’ve included the most common data security risks here, and in Section 6 we show how this integrates into the overall model. Common data security risks: The following is an outline of the major categories for information loss. Any time you read about a data breach, one or more of these events occurred. This list isn’t intended to be comprehensive, but rather to provide a good overview of common data security risk categories, to give you a jump start on implementing the model. Rather than discuss each and every threat vector, we will present logical groups, to illustrate that the risks and potential solutions tend to be very similar within each specific category. The following are the principal categories to consider:

Lost Media: This category describes data at rest, residing on some form of media, that has been lost or stolen. Media includes disk drives, tape, USB/memory sticks, laptops, and other devices. This category encompasses the majority of cases of data loss. Typical security measures for this class include media encryption, media “sanitizing”, and in some cases endpoint Data Loss Prevention technology.
- Lost disks/backup tapes
- Lost/stolen laptops
- Information leaked through decommissioned servers/drives
- Lost memory sticks/flash drives
- Stolen servers/workstations

Inadvertent Disclosure: This category includes data being accidentally exposed in some way that leads to unwanted disclosure. Examples include email to unwanted recipients, posting confidential data to web sites, unsecured Internet transmissions, lack of access controls, and the like. Safeguards include email & web security platforms, DLP, and access control systems. Each is effective, but only against certain threat types. Process and workflow controls are also needed to help catch human error.
- Data accidentally leaked through email (sniffed, wrong address, un-purged document metadata)
- Data leaked by inadvertent exposure (posted to the web, open file shares, unprotected FTP, or otherwise placed in an insecure location)
- Data leaked over an unsecured connection
- Data leaked through file sharing (file sharing programs are used to move large files efficiently, and possibly illegally)

External Attack/Breach: This category describes instances of data theft where company systems and applications are compromised by a malicious attacker, affecting confidentiality


Friday Summary – Jan 30, 2009

A couple of people forwarded me this interview, and if you have not read it, it is really worth your time. It’s an amazing interview with Matt Knox, a developer with Direct Revenue who authored adware during his employment there. For me this is important, as it highlights stuff I figured was going on but really could not prove. It also exposes much of the thought process of the developers at Microsoft, and it completely altered my behavior for ‘sanitizing’ my PCs. For me, this all started a few years ago (2005?) when my Windows laptop was infected with this stuff. I discovered something was going on because there was ongoing background activity while the machine was idle, which started to affect responsiveness. The mysterious performance degradation was difficult to track down, as I could not locate a specific application responsible, and the process monitors provided with Windows are wholly inadequate. I found processes running in the background unassociated with any application, and unassociated with Windows. I did find files associated with these processes, and it was clear they did not belong on the machine. When I went to delete them, they popped up again within minutes, with new names! I was able to find multiple registry entries, and the behavior suggested that multiple pieces of code monitored each other for health and availability, and fixed each other if one was deleted. Even if I booted in safe mode, I had no confidence that I could completely remove this … whatever it was … from the machine. At that point I knew I needed to start over. How this type of software could have gotten into the registry and installed itself in such a manner was perplexing to me. Being a former OS developer, I started digging, and that’s when I got mad. Mr. Knox uses the word ‘promiscuous’ to describe the OS calls, and that is exactly what they were.
There were API calls to do pretty much anything you wanted, all without so much as a question being asked of the user or the installing party. You get a clear picture of the mentality of the developers who wrote the IE and Windows OS code back then: there were all sorts of nifty ways to ‘do stuff’ for anyone who wanted to, and not a shred of a hint of security. All of these ‘features’ were for someone else’s benefit! They could use my resources at will, as if they had the keys to my house, and while I was out they were throwing a giant party at my expense. What really fried me was that, while I could see these processes and registry entries, none of the anti-virus or anti-malware tools would detect them. So if I wanted to secure my machine, it was up to me to do it. I said this changed my behavior; here’s how:
- Formatted the disk and reinstalled the OS.
- Switched to Firefox full time. A few months later I discovered Flashblock and NoScript.
- Stopped paying for desktop anti-virus and used free stuff or nothing at all. It didn’t work for the desktop, and email AV addressed my real concern.
- Found a process monitor that gave me detailed information on what was running and what resources were being used.
- Cataloged every process on the Windows machine, and kept a file describing each process’s function so I could cross-check and remove anything that was not supposed to be there.
- Began manually starting everything non-core through the services panel, only when I needed it. Not only did this help me detect things that should not be running, it reduced the risks associated with poorly secured applications that leave services sitting wide open on a port.
- Uninstalled WebEx, RealPlayer, and several other suspects after each use.
- Kept all of my original software nearby and planned to re-install fresh, from CD or DVD, every year. Until I got VMware.
- Used a virtual partition for risky browsing whenever possible.
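The process-cataloging habit described above boils down to a baseline diff: keep a record of every vetted process, then flag anything running that isn't in it. A hypothetical sketch, with helper names and the baseline file convention invented for illustration:

```python
# Hypothetical sketch of cataloging processes against a known-good baseline.
# The file format (one process name per line) and names are illustrative.

def load_baseline(path: str) -> set[str]:
    """Read the catalog of vetted process names; blank lines ignored."""
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

def find_unknown(running: set[str], baseline: set[str]) -> set[str]:
    """Return processes running now that are not in the catalog."""
    return running - baseline

# Example: suppose the catalog holds processes we've already vetted...
baseline = {"explorer.exe", "svchost.exe", "winlogon.exe"}
running = {"explorer.exe", "svchost.exe", "winlogon.exe", "xyzupd32.exe"}
print(find_unknown(running, baseline))  # flags the suspicious newcomer
```

The hard part, as the post makes clear, isn't the diff; it's building and maintaining an accurate catalog in the first place.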
I now use a Mac, and run my old licensed copies of Windows in Parallels. Surprised? Here is the week’s security summary: Webcasts, Podcasts, Outside Writing, and Conferences: Martin, Rich, and I talk about the White House homeland security agenda, phishing, and the monster.com security breach on the Network Security Podcast #136. Don’t forget to submit any hacks or exploits for Black Hat 2009 consideration. Favorite Securosis Posts: Rich: Inherent Role Conflicts in National Cyber-security post. Adrian: The post on Policies and Security Products: something you need to consider in any security product investment. Favorite Outside Posts: Adrian: Rafal’s post on network security: not ready to give up, but surely need to switch the focus. Rich: Like Adrian said, the philosecurity interview with Matt Knox is a really interesting piece. Top News and Posts: Very interesting piece from Hackademics on IE’s “clickjacking protection”. Additional worries about upcoming Conficker worm payloads. Can’t be all security: This is simply astounding: Exxon achieves $45 billion in 2008. Not in revenue, in profit. The disk drive companies are marketing built-in encryption. While I get a little bristly when it’s marketed as protecting the consumer even though it’s going into server arrays, this is a very good idea, and will eventually end up in consumer drives. Yea! More on DarkMarket and the undercover side of the operation. Police still after culprits on Heartland breach. Again? Monster.com has another breach. They have a long way to go before they catch LexisNexis, but they’re trying. The Red Herring site has been down all week … wondering if they have succumbed to market conditions. Blog Comment of the Week: Good comment from Jack Pepper on the “PCI isn’t meant to protect cardholder …” post: “Why is this surprising? the PCI standard was developed by the card industry to be a “bare minimum” standard for card processing. If anyone in the biz thinks PCI is more that “the bare


The Most Powerful Evidence That PCI Isn’t Meant To Protect Cardholders, Merchants, Or Banks

I just read a great article on the Heartland breach, which I’ll talk more about later. There is one quote in there that really stands out: End-to-end encryption is far from a new approach. But the flaw in today’s payment networks is that the card brands insist on dealing with card data in an unencrypted state, forcing transmission to be done over secure connections rather than the lower-cost Internet. This approach avoids forcing the card brands to have to decrypt the data when it arrives. While I no longer think PCI is useless, I still stand by the assertion that its goal is to reduce the risks of the card companies first, and only peripherally reduce the real risk of fraud. Thus cardholders, merchants, and banks carry both the bulk of the costs and the risks. And here’s more evidence of its fundamental flaws. Let’s fix the system instead of just gluing on more layers that are more costly in the end. Heck, let’s bring back SET!


Heartland Payment Systems Attempts To Hide Largest Data Breach In History Behind Inauguration

Brian Krebs of the Washington Post dropped me a line this morning on a new article he posted. Heartland Payment Systems, a credit card processor, announced today, January 20th, that up to 100 million credit cards may have been disclosed in what is likely the largest data breach in history. From Brian’s article: Baldwin said 40 percent of transactions the company processes are from small to mid-sized restaurants across the country. He declined to name any well-known establishments or retail clients that may have been affected by the breach. Heartland called the U.S. Secret Service and hired two breach forensics teams to investigate. But Baldwin said it wasn’t until last week that investigators uncovered the source of the breach: a piece of malicious software planted on the company’s payment processing network that recorded payment card data as it was being sent for processing to Heartland by thousands of the company’s retail clients. … “The transactional data crossing our platform, in terms of magnitude… is about 100 million transactions a month,” Baldwin said. “At this point, though, we don’t know the magnitude of what was grabbed.” I want you to roll that number around on your tongue a little bit. 100 million transactions per month. I suppose I’d try to hide behind one of the most historic events in the last 50 years if I were in their shoes. “Due to legal reviews, discussions with some of the players involved, we couldn’t get it together and signed off on until today,” Baldwin said. “We considered holding back another day, but felt in the interests of transparency we wanted to get this information out to cardholders as soon as possible, recognizing of course that this is not an ideal day from the perspective of visibility.” In a short IM conversation Brian mentioned he called the Secret Service today for a comment, and was informed they were a little busy.
We’ll talk more once we know more details, but this is becoming a more common vector for attack, and by our estimates is the most common vector in massive breaches. TJX, Hannaford, and CardSystems, three of the largest previous breaches, all involved malicious software installed on internal networks to sniff cardholder data and export it. This was also another case discovered by first detecting fraud in the system and tracing it back to the origin, rather than through the company’s own internal security controls.


Submit A Top Ten Web Hacking Technique

Last week Jeremiah Grossman asked if I’d be willing to be a judge to help select the Top Ten Web Hacking Techniques for 2008, along with Chris Hoff (not sure who that is), H D Moore, and Jeff Forristal. Willing? Heck, I’m totally, humbly, honored. This year’s winner will receive a free pass to Black Hat 2009, which isn’t too shabby. We are up to nearly 70 submissions, so keep ‘em coming.


Policies and Security Products

Where do the policies in your security product come from? With the myriad of tools and security products on the market, who decides which pre-built policies are appropriate? I am not speaking of AV in this post; rather I am looking at IDS, VA, DAM, DLP, WAF, pen testing, SIEM, and many other products that use a set of policies to address security and compliance problems. In every sales engagement, customer meeting, and analyst meeting I have ever participated in for security products, this was a question. This post is intended more for IT professionals who are considering security products, so I am gearing it for that audience. When drafting the web application security program series last month, a key topic that kept coming up from security practitioners was: “How can you recommend XYZ security solution when you know that the customer is going to have to invest a lot in the product, but also a significant amount in developing their own policy set?” This is both an accurate observation and the right question to be asking. While we stand by our recommendations for the reasons stated in the original series, it would be a disservice to our IT readers if we did not discuss this in greater detail. The answer is an important consideration for anyone selecting a security tool or suite. When I used to develop database security products, policy development was one of the tougher issues for us to address on the vendor side. Once aware of a threat, it took on average 2.5 ‘man-days’ to develop a policy with a test case and complete remediation information, prior to QA. This becomes expensive when you have hundreds of policies being developed for different problem sets. Policy coverage and how policies were generated was a common competitive topic, and a basic function of the product, so almost every vendor invests heavily in this area.
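To make that 2.5 man-day unit of work concrete, here is a hypothetical sketch of what a single pre-built policy record might bundle: the check itself, plus the remediation guidance returned on failure. All field names, the `ORA-LSNR-001` identifier, and the sample listener check are illustrative assumptions, not any vendor's actual schema:

```python
# Hypothetical shape of a pre-built policy record in an assessment product.
# Every name and value here is invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Policy:
    policy_id: str
    platform: str           # platform/version the check applies to
    description: str        # what the threat or misconfiguration is
    check: str              # the rule or query the product evaluates
    remediation: str        # steps or patch reference returned on failure
    references: list[str] = field(default_factory=list)

weak_listener = Policy(
    policy_id="ORA-LSNR-001",
    platform="Oracle 10g",
    description="TNS listener allows unauthenticated remote administration",
    check="listener status reports Security OFF",
    remediation="Set a listener password and enable admin restrictions",
    references=["Oracle listener security best practices document"],
)
print(weak_listener.policy_id, "->", weak_listener.platform)
```

Multiply this by hundreds of policies, each needing a validated test case and vetted remediation text, and the vendor-side cost the post describes becomes obvious.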
Moreover, most vendors market their security ‘research teams’ that find exploits, develop test code, and provide remediation steps. This domain expertise is one of the areas where vendors provide value in the products they deliver, but when it comes down to it, vendor insight is only a fraction of the overall source of information. With monitoring and auditing, policy development was even harder: the business use cases were more diverse, and the threats not completely understood. Sure, we could return the ubiquitous who-what-when-where-to-from kind of stuff, but how did that translate to business need? If you are evaluating products or interested in augmenting your policy set, where do you start? With vulnerability research, there are several resources I like to use:

- Vendor best practices: Almost every platform vendor, from Apache to SAP, offers security best practices documents. These guidelines on how to configure and operate their products form the basis for many programs. They cover operational issues that reduce risk, discuss common exploits, and reference specific security patches. These documents are updated with each major release cycle, so make sure you periodically review them for new additions, and for how the vendor recommends new features be configured and deployed. What’s more, while the vendor may not be forthcoming with exploit details, they are the best source of information for remediation and patch data.

- CERT/MITRE: Both have fairly comprehensive lists of vulnerabilities in specific products, and both provide a neutral description of each threat. Neither has great detail on the actual exploit, nor complete remediation information; it is up to the development team to figure out the details.

- Customer feedback/peer review: If you are a vendor of security products, customers have applied the policies and know what works for them. They may have modified the code you use to remediate a situation, and that may be a better solution than what your team implemented, and/or it may be too specific to their environment for use in a generalized product. If you are running your own IT department, what have your peers done? Next time you are at a conference or user group, ask. Either way, vendors learn from customers what works to address issues, and you can too.

- 3rd party relationships (consultants, academia, auditors): When it comes to developing policies related to GLBA or SOX, which are outside the expertise of most security vendors, it’s particularly valuable to leverage third-party consultative relationships to augment policies with their deep understanding of how best to approach the problem. In the past I have used relationships with major consulting firms to help analyze the policies and reports we provided. This was helpful, as they really did tell us when some of our policies were flat out bull$(#!, what would work, and how things could work better. If you have these relationships already in place, carve out a few hours so they can help review and analyze policies.

- Research & experience: Most companies have dedicated research teams, and this is something you should look for. They do this every day and they get really good at it. If your vendor has a recognized expert in the field on staff, that’s great too; that person may be quite helpful to the overall research and discovery process for threats and problems with the platforms and products you are protecting. In reality, though, they are more likely on the road speaking to customers, press, and analysts than really doing the research. It is good that your vendor has a dedicated team, but their experience is just one part of the big picture.

- User groups: With many of the platforms, especially Oracle, I learned a lot from regional DBAs who supported databases within specific companies or specific verticals. In many cases they did not have or use a third party product; rather they had a bunch of scripts that they had built up over many years, modified, and shared with others. They shared tips on not only what


Inherent Role Conflicts In National Cybersecurity

I spent a lot of time debating with myself whether I should wade into this topic. Early in my analyst career I loved to talk about national cybersecurity issues, but I eventually realized that, as an outsider, all I was doing was expending ink and oxygen, and I wasn’t actually contributing anything. That’s why you’ve probably noticed we spend more time on this blog talking about pragmatic security issues and dispensing practical advice than waxing poetic about who should get the Presidential CISO job or dispensing advice to President Obama (who, we hate to admit, probably doesn’t read the blog). Unless or until I, or someone I know, gets “the job”, I harbor no illusions that what I write and say reaches the right ears. But as a student of history, I’m fascinated by the transition we, of all nations, face due to our continuing reliance on the Internet to run everything from our social lives, to the global economy, to national defense. Rather than laying out my 5 Point Plan for Solving Global Cyber-Hunger and Protecting Our Children, I’m going to talk about some more generic issues that I personally find compelling. One of the more interesting problems, and one that all nations face, is the inherent conflict between the traditional roles of those who safeguard society. Most nations rely on two institutions to protect them: the military and the police. The military serves two roles: protecting the institution of the nation state from force, and projecting power (protecting national assets, including lines of commerce, that extend outside national boundaries). Militaries are typically focused externally, even in fascist states, and play a variable domestic role even in the most liberal democratic societies, turning inward only when domestic institutions don’t have the capacity to manage a situation. The police also hold dual roles: enforcing the law, and ensuring public safety.
Of course the law and public safety overlap to different degrees in different political systems. This seems simple enough, and fundamentally these institutions have existed since nearly the dawn of society. Even when the institutions appear to be one and the same, that's typically in name only, since the skill sets involved don't completely overlap, especially over the past few hundred years. Cops deal with crime, soldiers with war.

The Internet is blasting through those barriers, and we have yet to figure out how to structure the roles and responsibilities to deal with Internet-based threats. The Internet doesn't respect physical boundaries, and its anonymity disguises actors. The exact same attack by the exact same threat actor could be either a crime or an act of war, depending on the perspective. One of the core problems we face in cybersecurity today is structuring the roles and responsibilities of the institutions that defend and protect us. With no easy lines, we see ongoing turf battles and uncoordinated actions.

The offensive role is still relatively well defined: it's a responsibility of the military, it should be coordinated with physical power projection capacity, and the key issue is which specific department has responsibility. There's a clear turf battle over offensive cyber operations here in the U.S., but that's normal (it explains why every service branch has its own Air Force, for example). I do hope we get our *%$& together at some point, but that's mere politics.

The defensive role is a mess. Under normal circumstances the military protects us from external threats, and law enforcement from internal threats (yes, I know there are grey areas, but roll with me here). Many, perhaps most, cyberattacks are criminal acts, but that same criminal act may also be a national security threat. We can usually classify a threat by action, intent, and actor. Is the intent financial gain? Odds are it's a crime. Is the actor a nation state?
Odds are it's a national security issue. Does the action involve tanks or planes crossing a border? It's usually war. (Terrorism is one of the grey areas: some say it's war, others crime, and others a bit of both, depending on who is involved.) But a cyberattack? Even if it comes from China, it might not be China acting. Even if it's theft of intellectual property, it might not be a mere crime. And just who the heck is responsible for protecting us?

Throughout history the military has responded through use of force, but you don't need me to point out how sticky a situation that is when we're talking about cyberspace. Law enforcement's job is to catch the bad guys, but they aren't really designed to protect national borders, never mind non-existent national borders. Intelligence services? It isn't as if they are any better aligned. And through all this I'm again shirking the issue of which agencies/branches/departments should have which responsibilities.

This is where we need to start thinking a little differently, and we may find that we need to develop new roles and responsibilities as we drive deeper into the information age. Cybersecurity isn't only a national security problem or a law enforcement problem; it's both. We need some means of protecting ourselves from external attacks of different degrees at the national level, since just telling every business to follow best practices isn't exactly working out. We need a means of projecting power that's short of war, since playing only defense is a sure way to lose. And right now, most countries can't figure out who should be in charge or what they should be doing. I strongly suspect we'll see new roles develop, especially in the area of counterintelligence-style activity to disrupt offensive operations, ranging from taking out botnets, to disrupting cybercrime economies, to counterespionage issues relating to private business.
As I said in the beginning, this is a fascinating problem, and one I wish I were in a position to contribute to, but Phoenix is a bit outside the Beltway, and no one will give me the President's new BlackBerry address. Even after I promised to stop sending all those LOLCatz forwards.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
  • Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.