
Friday Summary: March 26, 2010

It’s been a bit of a busy week. We finished up two major projects and I made a quick out-of-town run to do a little client work. As a result, you probably noticed we were a bit light on the posting. For some silly reason we thought things might slow down after RSA. I’m writing this up on my USAirways flight but I won’t get to post it until I get back home. Despite charging the same as the other airlines, there’s no WiFi. Heck, they even stopped showing movies, and the AirMall catalogs are getting a bit stale. With USAirways I feel lucky when we have little perks, like two wings and a pilot. You know you’re doing something wrong when you provide worse service at the same price as your competitors. On the upside, they now provide free beer and wine in the lounge. Assuming you can find it. In the basement. Without stairs. With the lights out. And the “Beware of Tiger” sign. Maybe Apple should start an airline. What the hell, Hooters pulled it off. All the flight attendants and pilots can wear those nice color-coded t-shirts and jeans. The planes will be “magical” and they’ll be upgraded every 12 months so YOU HAVE TO FLY ON ONE! The security lines won’t be any shorter, but they’ll hand out water and walk around with little models of the planes to show you how wonderful they all are. Er… maybe I should just get on with the summary.

And I’m sorry I missed CanSecWest and the Pwn2Own contest. I didn’t really expect someone to reveal an IE8 on Windows 7 exploit, considering its value on the unofficial market. Pretty awesome work. Since I have to write up the rest of the Summary when I get home it will be a little lighter this week, but I promise Adrian will make up for it next week. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Effort Will Measure Costs Of Monitoring, Managing Network Security.
  • Database Security Metrics for the Community at Large.
  • Security Optimism.

Favorite Securosis Posts

  • David Mortman: FireStarter: There is No Market for Security Innovation.
  • Mike Rothman: FireStarter: There is No Market for Security Innovation. Rich nails it. Read the comments. Great discussion.
  • Rich: Announcing NetSec Ops Quant: Network Security Metrics Suck. Let’s Fix Them. I never thought Quant would grow like this – we’re now on our third project, with two of them running concurrently.

Other Securosis Posts

  • Hello World. Meet Pwn2Own.
  • Some DLP Metrics.
  • Bonus Incite 3/19/2010: Don’t be LHF.

Favorite Outside Posts

  • David Mortman: Side-Channel Leaks in Web Applications.
  • Mike Rothman: Time and Cost to Defend the Town. Security is about trade-offs. Bejtlich strikes again by presenting the discussion we have to have with senior management.
  • Rich: Securing Your Facebook. Threatpost with a nice place to send your friends and family for some easy-to-understand advice.

Project Quant Posts

  • Project Quant: Database Security – Patch.

Top News and Posts

  • Hacker exploits IE8 on Windows 7 to Win Pwn2Own.
  • Website Security Seals Smackdown.
  • Google releases “Skipfish”, a free web application security scanner.
  • Busting CyberFUD.
  • Fired CISO says his comments never put Penn’s data at risk. Sorry, if you don’t have permission, and you want to keep your job, you don’t talk. I wish it were otherwise, but that’s how the world works.
  • Mozilla Acknowledges Critical Zero Day Flaw in Firefox.
  • TJX Hacker Gets 20-Year Jail Sentence.
  • Researchers Finding New Ways to Bypass Exploit Mitigations.

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity.
This week’s best comment goes to Jim Ivers, in response to FireStarter: There is No Market for Security Innovation.

Great post and good observations. The security market is a very interesting and complex ecosystem, and even companies that have an innovation that directly addresses a generally accepted problem have a difficult road. The reactive nature of security and the evolving nature of the problems to which the market responds is one level of complexity. The sheer number of vendors in the space, and the confusing noise created by those numbers, is another. Innovation is further dampened by the large established vendors that move to protect market share by assuring their customer base that they have known problems covered, even when there is evidence to the contrary. Ultimately revenue becomes the gating factor in sustaining a growing company. But buyers have a habit of taking a path of risk avoidance by placing bets on established suites of products rather than staking professional reputation on unproven innovative ideas. Last I checked, Gartner had over 20 analysts dedicated to IT security in one niche or another, which speaks to how complex the task of evaluating and selecting IT security products can be for any organization. The odds of even the most innovative companies being heard over the noise are small, which is a shame for all concerned, as innovation serves both the customers and the vendors.


Security Innovation Redux: Missing the Forest for the Trees

There was a great level of discourse around Rich’s FireStarter on Monday: There is No Market for Security Innovation. Check out the comments to get a good feel for the polarization of folks on both sides of the discussion. A number of folks also posted their own perspectives, ranging from Will Gragido at Cassandra Security and Adam Shostack on the New School blog to the hardest working man in showbiz, Alex Hutton at Verizon Business. All these folks made a number of great points. But part of me thinks we are missing the forest for the trees here. The FireStarter was really about new markets and the fact that it’s very, very hard for innovative technology to cross the chasm unless it’s explicitly mandated by a compliance regulation. I strongly believe that, and we’ve seen numerous examples over the past few years. But part of Alex’s post dragged me back to my Pragmatic philosophy, when he started talking about how “innovation” isn’t really just constrained to a new shiny widget that goes into a 19” rack (or a hypervisor). It can be new uses for stuff you already have. Or working the politics of the system a bit better internally by getting face time with business leaders. I don’t really call these tactics innovation, but I’m splitting hairs here. My point, which I tweeted, is: “Regardless of innovation in security, most of the world doesn’t use the stuff they already have. IMO that is the real problem.” Again, within this echo chamber most of us have our act together, certainly relative to the rest of the world. And we are passionate about this stuff, like Charlie Miller fuzzing all sorts of stuff to find 0-day attacks while his kids are surfing on the Macs. So we get all excited about Pwn2Own and other very advanced stuff, which may or may not ever become weaponized. We forget the rest of the world is security Neanderthal man. So part of this entire discussion about innovation seems kind of silly to me, since most of the world can’t use the tools they already have.


Hello World. Meet Pwn2Own.

I’m currently out on a client engagement, but early results over Twitter say that Internet Explorer 8 on Windows 7, Firefox on Windows 7, Safari on Mac OS X, and Safari on iPhone were all exploited within seconds in the Pwn2Own contest at the CanSecWest conference. While these exploits took the developers weeks or months to complete, that’s still a clean sweep. There is a very simple lesson in these results: If your security program relies on preventing or eliminating vulnerabilities and exploits, it is not a security program.


FireStarter: There is No Market for Security Innovation

I often hear that there is no innovation left in security. That’s complete bullshit. There is plenty of innovation in security – but more often than not there’s no market for that innovation. For anything innovative to survive (at least in terms of physical goods and software) it needs to have a market. Sometimes, as with the motion controllers of the Nintendo Wii, it disrupts an existing market by creating new value. In other cases, the innovation taps into unknown needs or desires and succeeds by creating a new market. Security is a bit of a tougher nut. As I’ve discussed before, both on this blog and in the Disruptive Innovation talk I give with Chris Hoff, security is reactive by nature. We are constantly responding to changes in the underlying processes/organizations we protect, as well as to threats evolving to find new pathways through our defenses. With very few exceptions, we rarely invest in security to reduce risks we aren’t currently observing. If it isn’t a clear, present, and noisy danger, it usually finds itself on the back burner. Innovations like firewalls and antivirus really only succeeded when the environment created conditions that showed off the value of these tools. Typically that value is in stopping pain, and not every injury causes pain. Even when we are proactive, there’s only a market for the reactive. The pain must pass a threshold to justify investment, and an innovator can only survive for so long without customer investment. Innovation is by definition almost always ahead of the market, and must create its own market to some degree. This is tough enough for cool things like iPads and TiVos, but nearly impossible for something less sexy like security. I love my TiVo, but I only appreciate my firewall. As an example, let’s take DLP. By bringing content analysis into the game, DLP became one of the most innovative, if not the most innovative, data security technologies we’ve seen. Yet 5+ years in, after multiple acquisitions by major vendors, we’re still only talking about a $150M market. Why? DLP didn’t keep your website up, didn’t keep the CEO browsing ESPN during March Madness, and didn’t keep email spam-free. It addresses a problem most people couldn’t even see without a DLP tool! Only when it started assisting with compliance (not that it was required) did the market start growing. Another example? How many of you encrypted laptops before you had to start reporting lost laptops as a data breach? On the vendor side, real innovation is a pain in the ass. It’s your pot of gold, but only after years of slogging it out (usually). Sometimes you get the timing right and experience a quick exit, but more often than not you either have to glom onto an existing market (where you’re fighting for your life against competitors that really shouldn’t be your competitors), or you find patient investors who will give you the years you need to build a new market. Everyone else dies. Some examples? PureWire wasn’t the first to market (ScanSafe was) and didn’t get the biggest buyout (ScanSafe again), but they timed it right and were in and out before they had to slog. Fidelis is forced to compete in the DLP market, although the bulk of their value is in managing a different (but related) threat. 7+ years in and they are just now starting to break out of that bubble. Core Security has spent 7 years building a market – something only possible with patient investors.
Rumor is Palo Alto has some serious firewall and IPS capabilities, but rather than battling Cisco/Check Point, they are creating an ancillary market (application control) and then working on the cross-sell. Most of you don’t buy innovative security products. After paying off your maintenance and license renewals, and picking up a few widgets to help with compliance, there isn’t a lot of budget left. You tend to only look for innovation when your existing tools are failing so badly that you can’t keep the business running. That’s why it looks like there’s no security innovation – it’s simply ahead of market demand, and without a market it’s hard to survive. Unless we put together a charity fund or those academics get off their asses and work on something practical, we lack the necessary incubators to keep innovation alive until you’re ready to buy it. So the question is… how can we inspire and sustain innovation when there’s no market for it? Or should we? When does innovation make sense? What innovation are we willing to spend on when there’s no market? When and how should we become early adopters?


Some DLP Metrics

One of our readers, Jon Damratoski, is putting together a DLP program and asked me for some ideas on metrics to track the effectiveness of his deployment. By ‘ask’, I mean he sent me a great list of starting metrics that I completely failed to improve on. Jon is looking for some feedback and suggestions, and agreed to let me post these. Here’s his list:

  • Number of people/business groups contacted about incidents – tie in somehow with user awareness training.
  • Remediation metrics to show trend results in reducing incidents – at start of DLP we had X events, after talking to people for 30 days about incidents we now have Y events.
  • Trend analysis over 3, 6, & 9 month periods to show how the number of events has reduced as remediation efforts kick in.
  • Reduction in the average severity of an event per user, business group, etc.
  • Trend: number of broken business policies.
  • Trend: number of incidents related to automated business practices (automated emails).
  • Trend: number of incidents that generated automatic email.
  • Trend: number of incidents that were generated from service accounts (emails, batch files, etc.).

I thought this was a great start, and I’ve seen similar metrics on the dashboards of many of the DLP products. The only one I have to add to Jon’s list is: average number of incidents per user. Anyone have other suggestions?
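None of these metrics needs anything fancier than the incident export most DLP consoles already provide. As a purely illustrative sketch (the field names and the 1–5 severity scale are my assumptions, not drawn from Jon’s program or any particular product), here is how a few of the trends above might be computed from a flat incident log:

    from collections import defaultdict
    from datetime import date
    from statistics import mean

    # Hypothetical incident records exported from a DLP console.
    # Field names and the 1-5 severity scale are illustrative assumptions.
    incidents = [
        {"user": "jsmith", "group": "Finance", "severity": 4, "date": date(2010, 1, 12)},
        {"user": "jsmith", "group": "Finance", "severity": 2, "date": date(2010, 2, 20)},
        {"user": "adoe",   "group": "HR",      "severity": 3, "date": date(2010, 3, 5)},
    ]

    def incidents_per_user(records):
        """Average number of incidents per distinct user."""
        counts = defaultdict(int)
        for r in records:
            counts[r["user"]] += 1
        return mean(counts.values())

    def avg_severity_by_group(records):
        """Average severity per business group, to track reduction over time."""
        by_group = defaultdict(list)
        for r in records:
            by_group[r["group"]].append(r["severity"])
        return {g: mean(sevs) for g, sevs in by_group.items()}

    def monthly_trend(records):
        """Incident counts bucketed by month, feeding the 3/6/9-month trend lines."""
        buckets = defaultdict(int)
        for r in records:
            buckets[(r["date"].year, r["date"].month)] += 1
        return dict(sorted(buckets.items()))

    print(incidents_per_user(incidents))
    print(avg_severity_by_group(incidents))
    print(monthly_trend(incidents))

The point of the sketch is simply that every metric on the list reduces to counting and averaging incident records sliced by user, group, or time window – nothing a spreadsheet or a short script can’t handle.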


Announcing NetSec Ops Quant: Network Security Metrics Suck. Let’s Fix Them.

The lack of credible and relevant network security metrics has been a thorn in my side for years. We don’t know how to define success. We don’t know how to communicate value. And ultimately, we don’t even know what we should be tracking operationally to show improvement (or failure) in our network security activities. But we in the echo chamber seem to be happier bitching about this, or flaming each other on mailing lists, than focusing on finding a solution. Some folks have tried to drive towards a set of metrics that make sense, but most of the attempts are way too academic and cost too much to collect to be usable in everyday practice. Not to mention that most of our daily activities aren’t even included in the models. Not to pick on them too much, but I think these issues are highlighted in the way the Center for Internet Security has scoped out network security metrics. Basically, they didn’t. They have metrics on Incident Management, Vulnerability Management, Patch Management, Configuration Change Management, Application Security, and Financial Metrics. So the guy managing the network security devices doesn’t count? Again, I know CIS is working towards a lot of other stuff, but the reality is that the majority of security spending is targeted at the network and endpoint domains, and there are no good metrics for those. So let’s fix it.

Today we are kicking off the next in our series of Quant projects. This one is called Network Security Operations Quant, and we aim to build a process map and underlying cost model for how organizations manage their network security devices. The project’s formal objective is:

The objective of Network Security Operations Quant is to develop a cost model for monitoring and managing network security devices that accurately reflects the associated financial and resource costs.

Secondarily, we also want to:

  • Build the model in a manner that supports use as an operational efficiency model, to help organizations optimize their network security monitoring and management processes and compare costs of different options.
  • Heavily engage the community and produce an open model with wide support and credibility, using the Totally Transparent Research process.
  • Advance the state of IT metrics, particularly operational security metrics.

We are grateful to our friends at SecureWorks, who are funding this primary research effort. As with all our Quant projects, our methodology is:

  • Establish the high-level process map via our own research.
  • Use a broad survey to validate and identify gaps in the process map.
  • Define a set of subprocesses for each high-level process.
  • Build metrics for each subprocess.
  • Assemble the metrics into a model which can be used to track operational improvement.

From a scoping standpoint, we are going to deal with 5 different network security processes:

  • Monitoring firewalls
  • Monitoring IDS/IPS
  • Monitoring server devices
  • Managing firewalls
  • Managing IDS/IPS

Yes, we know network security is bigger than just these 5 functions, but we can’t boil the ocean. There is a lot of other stuff we’ll model out using the Quant process over the next year, but this should be a good start.

Put up or shut up

We can’t do this alone, so we are asking for your help. First off, we are going to put together a “panel” of organizations to serve as the basis for our initial primary research. That means we’ll be either doing site visits or detailed phone interviews to understand how you undertake network security processes.
We’ll also need the folks on the panel to shoot holes in our process maps before they are posted for public feedback. We are looking for about a dozen organizations from a number of different verticals and company sizes (large enterprise to mid-market). As with all our research, there will be no direct attribution to your organization. We are happy to sign NDAs and the like. If you are interested in participating, please send me an email directly at mrothman (at) securosis . com. Once the initial process maps are posted, we will post a survey to find out whether you actually do the steps we identify. We’ll also want your feedback on the process via posts that describe each step in the process. Everyone has an opportunity to participate, and we hope you will take us up on it. This is possibly the coolest research project I’ve personally been involved with, and I’m really excited to get moving on it. We look forward to your participation, so we can finally get on the same page and figure out how to measure how we “network security plumbers” do our business.
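To give a feel for what a “cost model for monitoring and managing network security devices” might look like once subprocesses and metrics are defined, here is a minimal sketch. The process names come from the scope above; the subprocess breakdown, hours, and rates are invented placeholders, not project findings:

    from dataclasses import dataclass, field

    @dataclass
    class Subprocess:
        name: str
        hours_per_month: float   # time spent on this step
        hourly_rate: float       # loaded cost of the people doing it

        def monthly_cost(self) -> float:
            return self.hours_per_month * self.hourly_rate

    @dataclass
    class Process:
        name: str
        subprocesses: list = field(default_factory=list)

        def monthly_cost(self) -> float:
            return sum(s.monthly_cost() for s in self.subprocesses)

    # Process names from the project scope; subprocesses and numbers are
    # placeholders for illustration only.
    monitor_firewalls = Process("Monitoring firewalls", [
        Subprocess("Collect and aggregate logs", hours_per_month=20, hourly_rate=75),
        Subprocess("Analyze and alert", hours_per_month=35, hourly_rate=75),
        Subprocess("Validate and escalate incidents", hours_per_month=10, hourly_rate=90),
    ])

    manage_firewalls = Process("Managing firewalls", [
        Subprocess("Policy review and change requests", hours_per_month=15, hourly_rate=90),
        Subprocess("Test and deploy rule changes", hours_per_month=25, hourly_rate=75),
    ])

    for p in (monitor_firewalls, manage_firewalls):
        print(f"{p.name}: ${p.monthly_cost():,.2f}/month")

The real model will obviously have far more subprocesses and metric types than hours and rates, but the structure – processes built from subprocesses, each carrying its own measurable cost – is the shape the project is driving toward.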


Bonus Incite 3/19/2010: Don’t be LHF

I got a little motivated this AM (it might have something to do with blowing off this afternoon to watch NCAA tourney games) and decided to double up on the Incite this week. I read Adrian’s Friday Summary intro and it kind of bothered me. Mostly because I don’t know the answers either, and I find questions I can’t answer cause me stress and angst. Maybe it’s because I like to be a know-it-all, and it sucks when your own limitations smack you upside the head. Anyhow, what do we do about this whole information sharing culture we’ve created – and more importantly, how do we make sure the next generation is protected from the new age scam artists who prey on over-sharers? I came across this coverage from RSA of Hugh Thompson’s interviews with Craigslist’s Craig Newmark and the Woz. Both Newmark and Wozniak believe education is the answer. Truth be told, I have mixed feelings. I know the futility of widespread education, because you can’t possibly keep up with the attackers, not within a mass market context. Yet my plan is still to use education as one of a few tactics to keep my kids (and the Boss) safe online. The reality is that because my kids will be trained on how to recognize fraud and what not to do online, they will be ahead of 95% of the other folks out there. And remember, most attackers prey on the lowest hanging fruit. As long as my kids aren’t that, I think things will work out OK. But I also maintain pretty tight controls on the machines they use and the network they connect to. As they get more sophisticated, so will the defenses. I’ll implement a kids’ browsing network, and segment out my business machines and sensitive data. I already lock down their devices so they can’t install software (unless I know about it). At some point they’ll get their own machines, and I’ll centralize the file storage (both for backup and oversight) so I can easily rebuild their machines every couple months. And we’ve got a lot of controls to protect our finances as well. We check the credit cards frequently (to ensure unauthorized transactions get caught quickly) and have a home incident response plan in the event one of my devices does get pwned. Of course, that doesn’t answer the question of how to solve the macro problem, but honestly I’m not sure we can. Fraud has been happening since the beginning of time, and it’s a bit crazy to think we could stop it entirely. But I can work my ass off to minimize the impact of the bad guys on my own situation, which is a pretty good objective – both at home and at work. Have a great weekend. – Mike.

Photo credit: “that low-hanging fruit they keep talking about in meetings” originally uploaded by travelskerricks

Bonus Incite 4 U

Getting screwed by the back channel – I read a recent post from the security career counselors (Mike Murray and Lee Kushner) and it got my goat a bit. The post was about how to deal with negative references, and I’m sensitive to this. I’ve been in a situation where a former boss sent a torpedo through my engine room just as I had a new job lined up and closed. It was during a back channel conversation, so I had no recourse (even though there was a non-disparagement clause in my exit agreement). Mike and Lee suggest first assembling a list of positive references that can offset a negative reference, as well as being candid with your prospective employer about the issues. This is great advice, since that’s exactly how I dealt with the situation.
I did my own backchannel work and got folks inside the company to talk about me (on deep background), as well as confronting the situation head on. It worked out for me, but everyone needs to have contingency plans for everything, and a negative reference is certainly one of them. – MR

Isn’t UTM a hopping market? – From all the market share projections and growth numbers, the UTM (unified threat management) market is growing like gangbusters. Yet you see companies like Symantec (a few years ago) and McAfee (who recently shut down their SnapGear offering) getting out of the business. The reality is there are multiple market segments in network security, and they require different solutions. UTM can be applicable to large enterprises, but they don’t buy combined solutions. They evaluate the products on a function-by-function basis. So they will compare the UTM-based IPS to the stand-alone IPS, and so on, before they decide whether to embrace an integrated solution. Whereas the mid-market wants a toaster to make their problems go away. So hats off to McAfee for deciding they didn’t have a competitive offering or leveraged path to market, and getting out of the business. One of the hardest things to do is kill a product, no matter how competitive it is. Strong companies need to kill things, or they become overpopulated and operate sub-optimally. – MR

Stupid is as stupid does – I recently watched Forrest Gump again, and it’s a treasure trove of little sayings that really apply to our daily existence. We are security professionals, which means we should understand risks and act accordingly. How can you tell your internal users to do something if you don’t do it yourself? I guess you can, but come back into the shop after having your own machine pwned and see how much credibility you have left. So when I see the inevitable reports from security conferences about how stupid our own professionals are, it makes me nuts. At the RSA show, Motorola AirDefense found all sorts of wireless stupidity from the attendees, and it’s really nutty. If you don’t have a 3G card, just make do without connecting for a few hours while you are at the show. You have a mobile device, and if it’s that important, go back to your hotel. At a security show they


Network Security Fundamentals: Egress Filtering

As we wrap up our initial wave of Network Security Fundamentals, we’ve already discussed Default Deny, Monitoring Everything, Correlation, and Looking for Not Normal. Now it’s time to see if we can actually get in the way of some of these nasty attacks. So what are we trying to block? Basically, a lot of the issues we find by looking for not normal. The general idea involves applying a positive security model not just to inbound traffic (default deny), but to outbound traffic as well. This is called egress filtering, and in practice it basically means turning your perimeter device inside out and applying policies to outbound traffic. This defensive tactic ensures that non-standard ports and protocols don’t make their way out of your network. Filtering can also block reconnaissance tactics, network enumeration techniques, outbound spam bots, and those pesky employees running Internet businesses from within your corporate network. Amazingly enough this still happens, and too many organizations are none the wiser.

Defining Egress Filtering Policies

Your best bet is to start with recent incidents and their root causes. Define the outbound ports and protocols which allowed the data to be exfiltrated from your network. Yes, this is obvious, but it’s a start, and you don’t want to block everything. Not unless you enjoy being ritually flayed by your users. Next, leverage the initial steps in the Fundamentals series and analyze correlated data to determine what is normal. Armed with this information, turn to the recent high-profile attacks getting a lot of airtime. Think Aurora, and learn how that attack exfiltrates data (a custom encrypted protocol on port 443). For such higher-probability attacks, define another set of egress filtering rules to make sure you block (or are at least notified) when you have outbound traffic on the ports used during the attacks. You can also use tighter location-based filtering policies, like not allowing traffic to countries where you don’t do business. This won’t work for mega-corporations doing business in every country in the world, but for the other 99.99% of you, it’s an option. Or you could enforce RFC standards on ports 80 and 443, to make sure no custom protocol is hiding anything in a standard HTTP stream. Again, there are lots of different ways to set up your egress filtering rules. Most can help, depending on the nature of your network traffic, but none is a panacea. Whichever you decide to implement, make sure you test the rules in non-blocking mode first to confirm nothing breaks.

Blocking or Alerting

As you can imagine, it’s a dicey proposition to start blocking traffic that may break legitimate applications. So take care when defining these rules, or take the easy way out and just send alerts when one of your egress policies is violated. Of course, the alerting approach can (and probably will) result in plenty of false positives, but as you tune the policies you’ll be able to minimize those. Which brings up the hard truth of playing around with these policies: there are no shortcuts. Vendors who talk about self-defending anything, or learning systems, or anything else that doesn’t involve the brutal work of defining policies and tuning them over time until they work in your environment, basically don’t spend enough time in the real world. ‘nuff said. To finish our discussion of blocking, again think about these rules in terms of your IPS. You block the stuff you know is bad, and you alert on the stuff you aren’t sure about.
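To make the block-versus-alert distinction concrete, here is a minimal sketch of how outbound flows might be evaluated against an egress policy. It is purely illustrative – the allowed ports, placeholder country codes, and flow field names are my assumptions, not recommendations from this post:

    # Illustrative egress policy checker: default-deny outbound, with an
    # allowlist of ports and a blocklist of destination countries.
    ALLOWED_PORTS = {80, 443, 53, 25}     # assumed "normal" outbound traffic
    BLOCKED_COUNTRIES = {"XX", "YY"}      # placeholder country codes
    ENFORCE = False                       # start in alert-only mode, flip to blocking later

    def evaluate_flow(flow):
        """Return 'allow', 'alert', or 'block' for an outbound flow record."""
        violations = []
        if flow["dst_port"] not in ALLOWED_PORTS:
            violations.append(f"non-standard port {flow['dst_port']}")
        if flow.get("dst_country") in BLOCKED_COUNTRIES:
            violations.append(f"destination country {flow['dst_country']}")
        if not violations:
            return "allow"
        # Alert first; only block once the rules have been tuned in non-blocking mode.
        return "block" if ENFORCE else "alert"

    # Example flow records (field names are assumptions)
    flows = [
        {"src": "10.1.2.3", "dst": "203.0.113.7",  "dst_port": 443,  "dst_country": "US"},
        {"src": "10.1.2.9", "dst": "198.51.100.4", "dst_port": 6667, "dst_country": "XX"},
    ]
    for f in flows:
        print(f["src"], "->", f["dst"], evaluate_flow(f))

In a real deployment this logic lives in your perimeter device rather than a script, but the workflow is the same: run the rules in alert-only mode, tune out the false positives, then turn on enforcement for the violations you are confident about.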
Let’s hope you aren’t so buried under alerts that something important gets by, but that’s life in the big city.

No Magic Bullets

Yes, we believe egress filtering is a key control in your security arsenal, but as with everything else, it’s not a panacea. There are lots of attacks which will skate by undetected, including those that send traffic over standard ports. So once again, it’s important to look at other controls to provide additional layers of defense. These may include outbound content filtering, application-aware perimeter devices, deep packet inspection, and others.

More Network Security Fundamentals

I’m going to switch gears a bit and start documenting Endpoint Security Fundamentals next week, but I’ll be back to networks soon enough, getting into wireless security, network pen testing, perimeter change control, and outsourced perimeter monitoring. Stay tuned.


Friday Summary: March 19, 2010

Your Facebook account gets compromised. Your browser flags your favorite sports site as a malware distributor. Your Twitter account is hacked through a phishing scam. You get AV pop-ups on your machine, but cannot tell which are real and which are scareware. Your identity gets stolen. You try to repair the damage and make sure it doesn’t happen again, only to get ripped off by the credit agency (you know who I am talking about). Exasperated, you just want to go home, relax, and catch up on March Madness. But it turns out the bracket email from your friend was probably another phishing attempt, and your alma mater suspends a star player while it investigates derogatory public comments – which it eventually discovers were forged. Man, it sucks to be Generation Y. There has been an incredible cacophony over the last couple weeks across the mainstream media about social networks being manipulated for fun, personal satisfaction, and profit. Even the people in my semi-rural area are discussing how it has affected them and their children, so I know it is getting national attention. What I can’t figure out is how their behavior will change – if at all. RSnake discussed a Microsoft paper recently, expanding on its discussion of why training users on the dangers of unsafe browsing often does not make economic sense. Even if it were viable, people don’t want to learn all that stuff, as it makes web browsing more work than fun. So what gives? I believe that our increasing use of and dependency on the Internet, and the corresponding increases in fraud and misuse, require change. But will people feel differently, and will this drive them to actually behave differently? Will the changes be technological, legal, or social? We could see tighter or looser privacy rules on websites, or legal precedents or new laws – we have already seen dramatic shifts in what younger people consider private and are willing to publicize online. The paper asserts that “The wisdom of the crowd discerns that ignoring some threats brings little actual harm …” which I totally agree with, and describes Twitter phishing and Facebook hacks. Bank accounts being drained and cars being shut down are a whole different level of problem, though. I really don’t have an answer – or even an inkling – of what happens next. I do think the problem has gotten sufficiently mainstream that we will see mainstream impacts and reactions, though. Interesting times! On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian on Database Dangers in the Cloud on Dark Reading.
  • Video interview with Rich on endpoint security, agents, and best-of-breed technologies.
  • Project Quant in Database Security Metrics Project Needs Community Input.

Favorite Securosis Posts

  • Rich: FireStarter: IP Breach Disclosure, No-Way, No-How. I’m surprised this generated so little debate for a FireStarter. When I explain this verbally to people, it never fails to generate a vigorous response.
  • David Mortman: FireStarter: IP Breach Disclosure, No-Way, No-How.
  • Mike Rothman: Mogull’s Law. If Rich has the stones to name a law after himself, then I’m in. Not sure how proportional the causation is, but clearly users do whatever hurts the least.
  • Adrian Lane: Network Security Fundamentals: Egress Filtering.

Other Securosis Posts

  • LHF: Quick Wins with DLP – the Conclusion.
  • Incite 3/17/2010: Seeing the Enemy.
  • Database Activity Analysis Survey.
  • LHF: Quick Wins in DLP, Part 2.

Favorite Outside Posts

  • Rich: Conversations With a Blackhat.
    The best takeaway from RSnake’s summary of talking with some bad guys is that at least some of what we are doing on the security side is actually working. So much for the “security is failing” meme…
  • David Mortman: Three Steps to a Rational Security Budget.
  • Mike Rothman: Why I’m Skeptical of “Due Diligence” Based Security. I have no idea what Alex is talking about, but he has a picture of Anakin, Obi-Wan, and Yoda with the glowing ghosts of John Lennon and George Harrison. So it’s my favorite of the week.
  • Adrian Lane: Walkthrough: Click at Your Own Risk. Analysis of privacy and the manipulation of public impressions through social media. An excellent piece of analysis from … a football statistics site. Long but very informative, and a perspective I don’t think a lot of people appreciate.

Project Quant Posts

  • Project Quant: Database Security – Configuration Management.

Top News and Posts

  • What I thought was the biggest news of the week: HD Moore’s post on The Latest Adobe Exploit and Session Upgrading. – AL
  • Penetrating Intranets through Adobe Flex Applications.
  • A study highlights efforts to take down ISPs that allow malicious activity. This is a boon to reputation-based filtering. To be honest, I used to be skeptical of the idea but I’m slowly becoming a convert. – RM
  • Zeus Trojan Now Has Hardware Licensing Scheme.
  • Microsoft, security vendor clash over Virtual PC bug.
  • Hacker Disables Over 100 Cars Remotely. Former employee using someone else’s login. Now where have we heard that before?
  • Emerging Identity Theft Market. The $10 million number seems high to me, but the trends are not surprising.
  • Facebook Password Scam.
  • We have been talking about the Internet subsuming television for years. Google’s Set-top Box is one to watch closely, because television is all about advertising, which is Google’s strong suit (although they have not been a TV player to date), and this would enable a new kind of advertising. Should be interesting!

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Andy Jaquith, in response to RSA Tomfoolery: APT is the Fastest Way to Identify Fools and Liars. When a comment makes me laugh out loud, it usually gets my vote!

I’ve been using the phrase “Advanced Persistent Chinese” lately. It sounds good, it’s more accurate, and it’s funny. What’s not to like? I completely agree that the displays of vendor idiocy around APT are far too widespread. You can’t have a carnival without the barker, apparently. Good seeing you, by the way, Any – albeit far too briefly.


Mogull’s Law

I’m about to commit the single most egotistical act of my blogging/analyst career. I’m going to make up my own law and name it after myself. Hopefully I’m almost as smart as everyone says I think I am. I’ve been talking a lot, and writing a bit, about the intersection of psychology and security. One example is my post on the anonymization of losses, and another is the one on noisy vs. quiet security threats. Today I read a post by RSnake on the effectiveness of user training and security products, which was inspired by a great paper from Microsoft: So Long, And No Thanks for the Externalities: The Rational Rejection of Security Advice by Users. I think we can combine these thoughts into a simple ‘law’:

The rate of user compliance with a security control is directly proportional to the pain of the control vs. the pain of non-compliance.

We need some supporting definitions:

  • Rate of compliance equals the probability the user will follow a required security control, as opposed to ignoring or actively circumventing said control.
  • The pain of the control is the time added to an established process, and/or the time to learn and implement a new process.
  • The pain of non-compliance includes the consequences (financial, professional, or personal) and the probability of experiencing said consequences. Consequences exist on a spectrum – with financial as the most impactful, and social as the least.
  • The pain of non-compliance must be tied to the security control, so the user understands the cause/effect relationship.

I could write it out as an equation, but then we’d all make up magical numbers instead of understanding the implications. Psychology tells us people only care about things which personally affect them, and fuzzy principles like “the good of the company” are low on the importance scale. It also tells us that immediate risks hold our attention far more than long-term risks, and that we rapidly de-prioritize both high-impact low-frequency events and high-frequency low-impact events. Economics teaches us how to evaluate these factors and use external influences to guide widescale behavior. Here’s an example: currently most security incidents are managed out of a central response budget, as opposed to business units paying the response costs. Economics tells us that we can likely increase the rate of compliance with security initiatives if business units have to pay for the response costs they incur, thus forcing them to directly experience the pain of a security incident. I suspect this is one of those posts that’s going to be edited and updated a bunch based on feedback…
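Rich deliberately declines to write the law as a formula, so treat the following purely as an illustrative sketch of the relationship’s shape – the symbols are invented here and should not be assigned real numbers:

    % Illustrative only; symbols are invented for this sketch, not Rich's notation.
    %   C   = rate of compliance with the control
    %   P_c = pain of the control (time added, time to learn a new process)
    %   P_n = pain of non-compliance (severity of consequences)
    %   q   = probability of actually experiencing those consequences
    C \;\propto\; \frac{q \, P_n}{P_c}

Read it only as: compliance rises as the expected pain of non-compliance grows relative to the pain of the control itself, which is consistent with the definitions above.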


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.