Always Be Looking

You really should read Lee Kushner and Mike Murray’s Information Security Leaders blog. Besides being good guys, they usually post good perspectives on career management each week. Like this post on Rats and Ships, where they address how to know your company is in trouble and when to start looking for what’s next. Obviously if the company is in turmoil and you don’t have your head in the sand, the writing will be on the wall.

I learned in the school of hard knocks that you always have to be looking for what’s next. I know that sounds very cynical. I know it represents a lack of loyalty to whoever you are working for now. But things can change in an instant. Your company can lose a big deal. You could be the fall guy (or gal) for a significant breach (remember blame != responsibility). Or you could have a difference of opinion with your boss. There are an infinite number of scenarios that result in the same thing: you, out on your ass, with no job. Usually you expect it, but not always.

The absolute worst day of my career was having to lay off a good friend who had absolutely no idea it was coming. Because I couldn’t give him a heads-up that we were trying to sell the company, he was blindsided. When we closed the deal, almost everyone had to go. Some situations you can see coming, some you can’t. But either way you need to be prepared.

If you are in security, you are trained to think defensively. You look at a situation and need to figure out how you can get pwned, screwed, or killed. It’s no different managing your career. Always be aware of how you can get screwed. Hopefully you won’t, and you’ll have a long, prosperous career wherever you are, if that’s what you choose. But that doesn’t get you off the hook for being prepared. You should always be out there networking, meeting people, getting involved in your community, and paying it forward. Read Harvey Mackay’s book “Dig Your Well Before You’re Thirsty.” It’s the best book I’ve read about why you need to do something you likely despise – networking.

And let’s not forget that opportunity cuts both ways. You need to be ready to pull the rip cord when things come unglued, but sticking around can be worthwhile too. For one, fewer people around means more opportunity for you, especially if you are pretty junior. You may end up with far more responsibility than your title, salary, and/or experience would otherwise warrant. And if you can see it through to the recovery (to the degree there is a recovery), you are positioned to be an up-and-comer in your organization.

I guess the bigger message is to be aware of what’s going on, and to actively manage your career progression. Don’t let your career manage you, to the degree you can control that. If you are really a glutton for punishment, start your own company. Then you can stop looking, because you’ll know where to find all the problems.

Photo credit: “Virtual Defensive Driving” originally uploaded by Kristin Brenemen


Incite 4/6/2011: Do Work

We spent last weekend up north visiting friends and family while the kids are on Spring Break. We decided to surprise them on Sunday by going to a baseball game. It was opening weekend and our home team was in town. We got cheap seats in the upper deck, but throughout the game we kept moving downwards, and by the 9th inning we were literally in the front row on the dugout. The Boss turned to me and asked if the kids had any idea how lucky they are. Yeah, right.

And that’s a huge problem for me. Given a lot of luck and a little talent, I make a pretty good living, which means my kids can do things that weren’t possible for me growing up. But where do you draw the line? You want the kids to have great experiences, but you also want them to understand the work involved to provide those experiences. The best answer I have right now is to do work. I think I saw Chris Nickerson say that on Twitter one day and it resonated with me. It’s basically leading by example. I get up every morning and do work, even though most of the time what I do all day doesn’t feel like work. The kids know that I work hard, and I’m good about reminding them when they get a little uppity.

One of the best parts of the weekend was seeing our twin nephews. They are 3 months old and a lot of fun. But each time I got my hands on one of them, I’d start working them out. You know, getting them to start supporting their weight, both sitting and standing. I also had them doing some tummy time, which brought back plenty of memories from when my kids were babies. Just like I remembered, newborns don’t like to do work. They like to eat and sleep and crap their pants. And when they would bark at me I’d just look them in the eye and say “stop bitching and do work!” Though maybe it is a bit early to push them out of their comfort zone. Although they do have to get into that fancy pre-school, after all…

Yes, I know kids need to be kids too. They need to play and have fun, because lord knows once they get out of school it’s not as much fun. But they can work at having fun. They can work on their ball skills, being a good friend, or even Angry Birds. If you want to be good, you need to work at it. That’s right. Do work!

Working at home creates some challenges, because every so often one of the kids will want to play during the work day. I politely (or sometimes not so politely) decline and remind them that Dad is doing work. Then I make sure they did work before letting them go do their own thing. You see, working hard is a habit. I know that sometimes I can be a bit relentless with them, but if they don’t learn a good work ethic now, life will be pretty tough.

So I’ll assume that reading my drivel is work for you, so you can feel good about spending 10 minutes with us each day. And no, I won’t reimburse you for those 10 minutes you’ll never get back. Now get back to work! That’s what I’m going to do.

-Mike

Photo credits: “Do work, son!” originally uploaded by Lee Huynh

Incite 4 U

Bully? I’m good with that: We haven’t spoken about Stuxnet recently, so let me point to an interesting post from VC David Cowan (the first money into VeriSign, among others), who talks about how the guy who deconstructed and published all the gory details of Stuxnet is misguided in calling the US a cyber-bully. You see, whether Ralph Langner wants to admit it or not, a nuclear-capable Iran isn’t in anyone’s best interests. Regardless of your politics, it’s hard to make a case otherwise. So presumably the US (and other partners) came up with a way to avoid bombing the crap out of somewhere while meeting their requirements. That’s innovation, folks. And innovation can’t be stopped. Remember the Manhattan Project? How long was it before the USSR had their own nuclear weapons? Once Pandora’s box is open, it’s open. And I’m glad the US got to open this one. – MR

Advanced Persistent Service Providers: Ever hear of Epsilon? Not the Greek letter – the email marketing company. Me neither, until the breach notifications started rolling in. I bet the Secret Service never heard of them either. Evidently they are a pretty successful company, and that made them a target. As our emails and names start circulating through the botnets, one interesting point is emerging. If you read one email sent to the DataBreaches.net folks, you realize that the lost data included not only folks who opted out, but leftover data from prior corporate customers. That’s right, they kept everything. Forever. This provides a new perspective on the idea of persistence, eh? Perhaps it’s time to check your contracts with your service providers, so you aren’t exposed by their mistakes after you switch to their competitor. – RM

Consumerization FTW: ZDNet discussed an interesting use case for Pano Logic virtual client terminals at public libraries. I am a big fan of desktop virtualization, both for security, because it’s easier to patch and implement policy centrally, and because it makes your virtual session available regardless of your location or device. This is not an endorsement of any product – just of this type of technology in general. The use case makes sense, particularly for schools, which need controlled environments. At the same time I realize this will probably never catch on – for the same reason phone booths are gone – cell phones made them obsolete. The organizations with the most to gain from this service model are least likely to be able to afford it. In the long run schools and public libraries will likely require people to


Security Benchmarking, Going Beyond Metrics: Sharing Data Safely

The best definition of a security benchmarking effort I am aware of is in Chapter 11 of my book, The Pragmatic CSO, which provides a good perspective on why benchmarking is important. Since it is very hard to have objective, defendable measures of security effectiveness, impact, etc., a technique that can yield very interesting insight into the performance of your security program is to compare it to others. If you can get a sample set of standard questions, you can get a feel for whether you are lagging on some activities and out ahead on others.

Benchmarking has been in use in other IT disciplines for decades. Whether it was data center performance or network utilization, companies have always felt compelled to compare themselves to others. It’s part of the competitive, win-at-all-costs mentality that pervades business. So one of the best ways to figure out how good your security is, and get a feel for various other operational aspects of your security program, is to figure out how you compare to someone else.

The objective is not to come up with a “security number” or “risk score”, but to present information in the context of other companies that face the same kinds of attacks. This provides management with what they always want: a perspective on the level of risk they are willing to take. If you are behind a reasonable peer group, they can decide to invest more or to accept the risks of a less effective security program. If you are ahead, maybe they will opt to maintain or even accelerate investment (in the unlikely event they can differentiate on security). Or, yes, they might decide to scale back on security ‘overhead’. Either way, it’s a win for you as the practitioner, because you know where you stand and the decision makers are actually making informed decisions with data. How novel!

But before we can start thinking about comparing all the metrics we’ve decided are important and are now collecting systematically, we need some kind of infrastructure and mechanism to share this data, safely and securely. A few years ago I did a lot of research into building a security benchmark, and customers clearly agreed that any sharing mechanism would need to ensure:

  • Anonymity: First and foremost, these customers wanted to make sure the data wasn’t attributed back to them. No way, no how. Of all the things I discussed with these customers, this was non-negotiable. There could be no way for another customer to identify source data or derive which company provided any of the data.
  • Integrity: The next issue was making sure the data was meaningful. That means it must be objectively and consistently gathered. Obviously there would need to be some level of agreement on what to count and how to count it, and that would likely be the purview of a third party.
  • Security: This goes hand in hand with anonymity, but it’s different in that potential customers need to understand how the data would be protected (at a granular level) before they’d be comfortable sharing.

Given all that, is it any wonder that security benchmarking remains in its infancy? When talking to any potential community aggregator or commercial benchmark offering, be sure to dig very deeply into how the data is both secured and aggregated to calculate the benchmarks. You need to ensure proper data encryption and segregation to make sure your data doesn’t get mixed with others’, and that even if it somehow does, it wouldn’t be accessible.

Additionally, you’ll want to make sure any device uploading data (this must be systematic and automated, remember) is mutually authenticated and authorized, so no one can game the benchmark. From an infrastructure protection standpoint, make sure all the proper controls are in place: things like strong identity management, egress filtering, and HIPS (if not whitelisting) on the devices with access to the data, as well as significant monitoring of the network and database. Given some recent high-profile breaches, it’s not unreasonable to expect network full packet capture as well. Ultimately you need to be comfortable with how your data is protected, so ask as many questions as you need to.

From an application standpoint it’s also reasonable to expect the code to be built using some kind of secure development methodology. So learn about the threat models the vendor (or community) used to design the protection, as well as to what degree automated and non-automated testing mechanisms were used to scrutinize the application at all points during the development process. Learn about audits and pen tests, and basically crawl into very dark places in the provider’s infrastructure to get comfortable.

This is a tall order, and it adds substantially to the due diligence required to get comfortable participating in a security benchmark. We understand this will be too high a hurdle for some. But keep your eyes on the prize: making security decisions based on actual data, within the context of your peer group, as opposed to relying on gut feel, politics, or prayer. Once you clear this intellectual hurdle it’s time to define your peer groups for comparison and how to analyze the data. That’s next.
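To make those requirements a little more concrete, here is a minimal sketch (in Python, assuming the common requests library) of what a pseudonymous, mutually authenticated metric upload could look like. The endpoint URL, field names, certificate files, and keyed-hash pseudonym scheme are all illustrative assumptions, not any particular vendor’s or aggregator’s API.

```python
import hashlib
import hmac
import json

import requests  # third-party HTTP library, assumed installed

# Secret known only to this organization -- never shared with the aggregator.
ORG_SECRET = b"known-only-to-this-organization"
# Hypothetical aggregator endpoint; not a real service.
BENCHMARK_URL = "https://benchmark.example.com/v1/submit"


def pseudonym(org_name: str) -> str:
    """Stable keyed-hash identifier: lets the aggregator group one organization's
    submissions over time without being able to recover the company name."""
    return hmac.new(ORG_SECRET, org_name.encode(), hashlib.sha256).hexdigest()


def submit_metrics(org_name: str, period: str, metrics: dict) -> None:
    payload = {
        "participant": pseudonym(org_name),
        "period": period,
        "metrics": metrics,  # aggregates only -- no hostnames, no raw events
    }
    # Mutual TLS: the client certificate authenticates the uploading device so
    # nobody can game the benchmark; the CA bundle pins the aggregator's identity.
    resp = requests.post(
        BENCHMARK_URL,
        data=json.dumps(payload),
        headers={"Content-Type": "application/json"},
        cert=("client.crt", "client.key"),
        verify="aggregator-ca.pem",
        timeout=30,
    )
    resp.raise_for_status()


if __name__ == "__main__":
    submit_metrics("Acme Corp", "2011-Q1", {
        "mean_time_to_patch_days": 11.5,
        "incidents": 3,
        "patch_policy_compliance_pct": 92.0,
    })
```

The design point of the sketch is that the organization, not the aggregator, holds the secret that generates its pseudonym, so submissions can be trended over time without ever being attributable back to the company.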


Security Benchmarking, Going Beyond Metrics: Collecting Data Systematically

Once you have figured out what you want to count (security metrics), the next question is how to collect the data. Remember, we look for metrics that are a) consistently and objectively measurable, and b) cheap to gather. That means some things we want to count may not be feasible. So let’s go through each bucket of metrics and list the places we can get that data.

Quantitative Metrics

These metrics are pretty straightforward to collect (under the huge assumption that you are already using some management tool to handle the function). That means some kind of console for things like patching, vulnerabilities, configurations, and change management. Without one, aggregating metrics (and benchmarking relative to other companies) is likely too advanced and too much effort. Walk before you run, and automate/manage these key functions before you worry about counting.

  • Incident Management: These metrics tend to be generated as part of the post-mortem/quality assurance step after closing the incident. Any post-mortem should be performed by a team, with the results communicated up the management stack, so you should have consensus/buy-in on metrics like incident cost, time to discover, and time to recover. We are looking for numbers with defined units (like any good metric).
  • Vulnerability, Patch, Configuration, and Change Management: These kinds of metrics should be stored by whatever tool you use for the specific function. The respective consoles should provide reports that can be exported (usually in XML or CSV format). Unless you use a metrics/benchmarking system that integrates with your tool, you’ll need to map its output into a format you can normalize, and use for reporting and comparing to peers. But make sure each console gets a full view of the entire process, including remediation. Be sure that every change, scan, and patch is logged in the system, so you can track the (mean) time to perform each function.
  • Application Security: The metrics for application security tend to be a little more subjective than we’d prefer (like % of critical applications), but ultimately things like security test coverage can be derived from whatever tools are used to implement the application security process. This is especially true for web application security scanning, QA, and other processes that tend to be tool driven – as opposed to more amorphous functions such as threat modeling and code review.
  • Financial: Hopefully you have a good relationship with your CFO and finance team, because they will have metrics on what you spend. You can gather direct costs such as software and personnel, but indirect costs are more challenging. Depending on the sophistication of your internal cost allocation, you may have very detailed information on how to allocate shared overhead, but more likely you will need to work with the finance team to estimate. Remember that precision is less important than consistency. As long as you estimate the allocations consistently, you can get valid trend data; if you’re comparing to peers you’ll need to be a bit more careful about your definitions.

For the other areas we mentioned, including identity, network security, and endpoint protection, this data will be stored in the respective management consoles. As a rule of thumb, the more mature the product (think endpoint protection and firewalls), the more comprehensive the data. And most vendors have already fielded requests to export data, or have built more sophisticated management reporting/dashboards, for large-scale deployments.
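As a rough illustration of that mapping and normalization step, here is a minimal sketch (Python standard library only) that parses a hypothetical CSV export from a patching console into a common record format and derives mean time to patch. The file name and column names are assumptions; substitute whatever your console actually produces.

```python
import csv
from datetime import datetime
from statistics import mean

DATE_FMT = "%Y-%m-%d %H:%M"


def load_patch_records(path):
    """Read the raw export and normalize each row into asset/detected/patched."""
    records = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            records.append({
                "asset": row["asset"],
                "detected": datetime.strptime(row["detected"], DATE_FMT),
                "patched": datetime.strptime(row["patched"], DATE_FMT),
            })
    return records


def mean_time_to_patch_days(records):
    """Mean time to patch, in days, across all normalized records."""
    return mean((r["patched"] - r["detected"]).total_seconds() / 86400 for r in records)


if __name__ == "__main__":
    recs = load_patch_records("patch_console_export.csv")
    print(f"Mean time to patch: {mean_time_to_patch_days(recs):.1f} days")
```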
Of course, easy export isn’t universal – some consoles make it harder than others to get data out to different analysis tools. These management consoles – especially the big IT management stacks – are all about aggregating information from lots of places, not necessarily integrating with other analysis tools. That means as your metrics/benchmarking efforts mature, a key selection criterion will be the presence of an open interface to get data both in and out.

Qualitative Metrics

As discussed in the last post, qualitative metrics are squishy by definition and cannot meet the definition of a “good” metric. The numbers on awareness metrics should reside somewhere, probably in HR, but it’s not clear they are aggregated. And the percentage of incidents due to employee error is clearly subjective; it should be assessed as part of the incident response process and stored for later collection. We recommend including that judgement as part of the general incident reporting process.

Attitude is much squishier – basically you ask your users what they think of your organization. The best way to do that is an online survey tool. Tons of companies offer online services for that (we use SurveyMonkey, but there are plenty). Odds are your marketing folks already have one you can piggyback on, but they aren’t expensive. You’ll want to survey your employees at least a couple times a year and track the trends. The good news is they all make it very easy to get the data out.

Systematic Collection

This is the point in the series where we remind you that gathering metrics and benchmarking are not one-time activities. They are an ongoing adventure. So you need to scope out the effort as a repeatable process, and make sure you’ve got the necessary resources and automation to ensure you can collect this data over time. Collecting metrics on an ad hoc basis defeats their purpose, unless you are just looking for a binary (yes/no) answer. You need to collect data consistently and systematically to get real value from them.

Without getting overly specific about data repository designs and the like, you’ll need a central place to store the information. That could be as simple as a spreadsheet or database, a more sophisticated business intelligence/analysis tool, or even an online service designed to collect metrics and present data. Obviously the more specific a tool is to security metrics, the less customization you’ll need to generate the dashboards and reports needed to use these metrics as a management tool.

Now that you have a system in place for metrics collection we get to the meat of the series: benchmarking your metrics to a peer group. Over the next couple posts we’ll dig into exactly what that means, including how to


Fool us once… EMC/RSA Buys NetWitness

To no one’s surprise (after NetworkWorld spilled the beans two weeks ago), RSA/EMC formalized its acquisition of NetWitness. I guess they don’t want to get fooled again the next time an APT comes to visit. Kidding aside, we have long been big fans of full packet capture, and believe it’s a critical technology moving forward. On that basis alone, this deal looks good for RSA/EMC.

Deal Rationale

APT, of course. Isn’t that the rationale for everything nowadays? Yes, that’s a bit tongue in cheek (okay, a lot), but for a long time we have been saying that you can’t stop a determined attacker, so you need to focus on reacting faster and better. The reality remains that the faster you figure out what happened and remediate (as much as you can), the more effectively you contain the damage. NetWitness gear helps organizations do that.

We should also tip our collective hats to Amit Yoran and the rest of the NetWitness team for a big economic win, though we don’t know for sure how big a win. NetWitness was early into this market and did pretty much all the heavy lifting to establish the need, stand up an enterprise-class solution, and show the value within a real attack context. They also showed that having a llama at a conference party can work for lead generation. We can’t minimize the effect that will have on trade shows moving forward.

So how does this help EMC/RSA? First of all, full packet capture solves a serious problem for obvious targets of determined attackers. Regardless of whether the attack was a targeted phish/Adobe 0-day or a Stuxnet type, you need to be able to figure out what happened, and having the actual network traffic helps the forensics guys put the pieces together. Large enterprises and governments have figured this out, and we expect them to buy more of this gear this year than last. Probably a lot more. So EMC/RSA is buying into a rapidly growing market early.

But that’s not all. There is a decent amount of synergy with the rest of RSA’s security management offerings. Though you may hear some SIEM vendors pounding their chests as a result of this deal, NetWitness is not SIEM. Full packet capture may do some of the same things (including alerting on possible attacks), but its analysis is based on what’s in the network traffic – not logs and events. More to the point, the technologies are complementary – most customers pump NetWitness alerts into a SIEM for deeper correlation with other data sources. Additionally, some of NetWitness’ new visualization and malware analysis capabilities supplement the analysis you can do with SIEM. Not coincidentally, this is how RSA positioned the deal in the release, with NetWitness and EnVision data being sent over to Archer for GRC (whatever that means).

Speaking of EnVision, this deal may take some of the pressure off that debacle. Customers now have a new shiny object to look at, while maybe focusing a little less on moving off the RSA log aggregation platform. It’s no secret that RSA is working on the next generation of the technology, and being able to offer NetWitness to unhappy EnVision customers may stop the bleeding until the next version ships.

A side benefit is that the sheer amount of network traffic to store will drive some back-end storage sales as well. For now, NetWitness is a stand-alone platform, but it wouldn’t be too much of a stretch to see some storage/archival integration with EMC products. EMC wouldn’t buy technology like NetWitness just to drive more storage demand, but it won’t hurt.
Too Little, Too Late (to Stop the Breach)

Lots of folks drew the wrong conclusion: that RSA bought NetWitness because of their recent breach. But these deals don’t happen overnight, so this acquisition has been in the works for quite a while. Still, what could better justify buying a technology than helping to detect a major breach? I’m sure EMC is pretty happy to control that technology.

The trolls and haters focus on the fact that the breach still happened, so the technology couldn’t work that well, right? Actually, the bigger issue is that EMC didn’t have enough NetWitness throughout their environment. They might have caught the breach earlier if they had the technology more widely deployed. Then again, maybe not – you never know how effective any control will be at any given time against any particular attack. But EMC/RSA can definitely make the case that they could have reacted faster if they had NetWitness everywhere. And now they likely will.

Competitive Impact

The full packet capture market is still very young. There are only a handful of direct competitors to NetWitness, all of whom should see their valuations skyrocket as a result of this deal. Folks like Solera Networks are likely grinning from ear to ear today. We also expect a number of folks in adjacent businesses (such as SIEM) to start dipping their toes into this water.

Speaking of SIEM, NetWitness did have partnerships with the major SIEM providers to send them data, and this deal is unlikely to change much in the short term. But we expect to see a lot more integration down the road between NetWitness, EnVision Next, and Archer, which could create a competitive wedge for RSA/EMC in large enterprises. So we expect the big SIEM players to either buy or build this capability over the next 18 months to keep pace. Not that they aren’t all over the APT marketing already.

Bottom Line

This is a good deal for RSA/EMC – acquiring NetWitness provides a strong, differentiated technology in what we believe will be an important emerging market. With RSA’s mixed results in leveraging acquired technology, it’s not clear that they will remain the leader in two years. But if they provide some level of real integration in that timeframe, they will have a very compelling set of products for security/compliance management. This is also a good


White Paper: Network Security in the Age of *Any* Computing

We all know about the challenges for security professionals posed by mobile devices, and by the need to connect to anything from anywhere. We have done some research on how to start securing those mobile devices, and have broadened that research with a network-centric perspective on these issues. Let’s set the stage for this paper:

Everyone loves their iDevices and Androids. The computing power that millions now carry in their pockets would have required a raised floor and a large room full of big iron just 25 years ago. But that’s not the only impact we see from this wave of consumerization: the influx of consumer devices requiring access to corporate networks. Whatever control you thought you had over the devices in the IT environment is gone. End users pick their devices and demand access to critical information within the enterprise. Whether you like it or not.

And that’s not all. We also have demands for unfettered access from anywhere in the world, at any time of day. And though smartphones are the most visible devices, there are more. We have the ongoing tablet computing invasion (iPad for the win!), and a new generation of workers who demand the ability to choose their computers, mobile devices, and applications. Even better, you aren’t in a position to dictate much of anything moving forward. It’s a great time to be a security professional, right?

In this paper we focus on the network architectures and technologies that can help you protect critical corporate data, given your requirements to provide users with access to critical and sensitive information on any device, from anywhere, at any time.

A special thanks to ForeScout for sponsoring the research. Find it in the research library or download the PDF directly: Network Security in the Age of Any Computing: Risks and Options to Control Mobile, Wireless, and Endpoint Devices.


PROREALITY: Security is rarely a differentiator

I’ve been in this business a long time – longer than most, though not as long as some. That longevity provides perspective, and has allowed me to observe the pendulum swinging back and forth more than once. This particular pendulum is the “security as an enabler” concept – you know, positioning security not as an overhead function but as a revenue driver (either direct or indirect). Jeremiah’s post earlier this week, PROTIP: Security as a Differentiator, brought back that periodic (and ultimately fruitless) discussion. His general contention is that security can differentiate an offering, ultimately leading to security being a vehicle that drives revenue. So before we start down this path again, let me squash it like the cockroach it is.

First we examine one of Jeremiah’s contentions: “When security is made visible (i.e. help customers be and feel safe), the customer may be more inclined to do business with those who clearly take the matter seriously over others who don’t.”

That’s not entirely false. But the situations (or in marketing speak, segments) where that is true are very limited. Banks have been telling me for years that churn increases after a breach is publicized, and that the ones which say they are secure gain customers. I still don’t buy it, mostly because the data always seems to come from some vendor pushing their product to protect bank customer data.

The reality is that behavior rarely matches words when it comes to security. Whether you sit on the vendor side or the user side, you know this. When you ask someone if they are worried about security, of course they say yes. Every single time. But when you ask them to change their behavior – or more specifically not do something they want to because it’s a security risk – you see the reality. The vast majority of people don’t care about security enough to do (or not do) anything.

Jeremiah is dreaming – if he were describing reality, everyone related to the security business would benefit. Unfortunately it’s more of a PRODREAM than a PROTIP. Or maybe even a PROHALLUCINATION. He’s not high on peyote or anything – Jer is high on the echo chamber. When you hang around all day with people who care about security, you tend to think the echo chamber reflects the mass market. It doesn’t – not by a long shot.

So spending a crapload of money on really being secure is a good thing to do. To be clear, I would like you to do that. But don’t do it to win more business – you won’t, and you’ll be disappointed, or your bosses will be disappointed in you for failing to deliver. Invest in security because it’s the right thing to do. For your customers and for the sustainability of your business. You may not get a lot of revenue upside from being secure, but you can avoid revenue downside.

I believe this to be true for most businesses, but not all. Cloud service providers absolutely can differentiate based on security. That will matter to some customers and possibly ease their migration to the cloud. There are other examples of this as well, but not many. I really wish Jeremiah was right. It would be great for everyone. But I’d be irresponsible if I didn’t point out the cold, hard reality.

Photo credit: “3 1 10 Bearman Cartoon Cannabis Skunk Hallucinations” originally uploaded by Bearman2007


Incite 3/30/2011: The Silent Clipper

I’m very fortunate to have inherited Rothman hair, which is gray but plentiful and grows fast. Like fungus. Given my schedule, I tend to wait until things get lost in my hair before I get it cut. Like birds, or yard debris, or Nintendo DS games. A few weeks back the Boss told me to get it cut when I lost my iPhone in my hair. So I arranged a day to hit the barber I have frequented for years.

I usually go on Mondays when I can, because his partner is off. These guys have a pretty sophisticated queuing system, honed over 40+ years. Basically you wait until your guy is open. That works fine unless the partner is open and your guy is backed up. Then the partner gives me the evil eye as he listens to his country music. But I have to stay with my guy because he has a vacuum hooked up to his clipper. Yes, I wait for my guy because he uses a professional Flowbee.

But when I pulled up, the shop was closed. I’ve been going there for 7 years and the shop has never been closed on Monday. Then I looked at the sign, which shows hours only for the partner – my guy’s hours aren’t listed. Rut roh, I got a bad feeling. But I was busy, so I figured I’d go back later in the week and see what happened. I went in Thursday, and my guy wasn’t there. Better yet, the partner was backed up, but I had just lost one of the kids in my hair, so I really needed a cut. I’m quick on the uptake, so I figured something was funky, but all my guy’s stuff was still there – including pictures of his grandkids. It’s like the place that time forgot. But you can’t escape time. It catches everyone. Finally the situation was clarified when a customer came in to pay his respects to the partner. My fears were confirmed: my guy was gone, his trusty clippers silenced. The Google found his obituary.

Logically I know death completes the circle of life, and no one can escape. Not even my barber. Truth be told, I was kind of sad. But I probably shouldn’t be. Barber-man lived a good life. He cut hair for decades and enjoyed it. He did real estate as well. He got a new truck every few years, so the shop must have provided OK. He’d talk about his farm, which kept him busy. I can’t say I knew him well, but I’m going to miss him.

So out of respect I waited, and then sat in the partner’s chair. Interestingly enough he gave me a great cut, even though I was covered in hair without the Flowbee. I was thinking I’d have to find a new guy, but maybe I’ll stick with partner-man. Guess there is a new barber-man in town. Godspeed Richard. Enjoy the next leg of your journey.

-Mike

Photo credits: “Barber Shop” originally uploaded by David Smith

Incite 4 U

Can I call you Dr. Hacker?: Very interesting analysis here by Ed Moyle about whether security should be visionary. Personally I don’t know what that means, because our job is to make sure visionary business leaders can do visionary things without having critical IP or private data show up on BitTorrent. But the end of the post, on whether security will be innovation-driven (like product development); standards-driven, innovation-averse (like accounting); or standards-driven, innovation-accepting (like medicine), got me thinking. We’d like to think we’ll be innovation-driven, but ultimately I suspect we’ll end up like medicine. Everyone still gets sick (because the viruses adapt to our defenses), costs continue to skyrocket, and the government eventually steps in to make everything better. Kill me now, Dr. Hacker. – MR

Learn clarity from the (PHP)Fog: One of the things that fascinates me about breaches (and most crisis events) is how the affected react. As I wrote about last week, most people do almost exactly the wrong thing. But as we face two major breaches within our industry, at RSA (“everyone pretend you don’t know what’s going on even though it’s glaringly obvious”) and Comodo (“we were the victim of a state-sponsored attack from Iran, not a teenager, we swear”), perhaps we should learn some lessons from PHPFog (“How We Got Owned by a Few Teenagers (and Why It Will Never Happen Again)”). Honesty is, by far, the best way to maintain the trust of your customers and the public. Especially when you use phrases like, “This was really naive and irresponsible of me.” Treat your customers and the public like adults, not like my 2-year-old. Especially when maintaining secrecy doesn’t increase their security. – RM

MySQL PwNaGe: For the past few days, the news that mysql.com has both a SQL injection vulnerability and a Cross-Site Scripting (XSS) vulnerability has been making the rounds. The vulnerabilities are not in the MySQL database engine, but in the site itself. Detailed information from the hacked site was posted on Full Disclosure last Sunday as proof. Apparently the MySQL team was alerted to the issue in January, and this looks like a case of “timely disclosure” – the attackers could have taken the hack further if they wanted. Not much in the way of takeaways here, other than that SQL injection is still a leading attack vector, and you should have quality passwords to help survive dictionary attacks in the aftermath of a breach. Still no word from Oracle, and there is no acknowledgement of the attack on mysql.com. I wonder if they will deploy a database firewall? – AL

APT: The FUD goes on and on and on and on: I applaud Chris Eng’s plea for the industry to stop pushing the APT FUD at all times. He nails the fact that vendors continue to offer solutions to the APT because they don’t want to miss out when the “stop APT project” gets funded. The nebulous definition of APT helps vendors obfuscate the truth, and as Chris points out it frustrates many of us. Yes, we should call out vendors for


Security Benchmarking, Going Beyond Metrics: Security Metrics (from 40,000 feet)

In our introduction to Security Benchmarking, Going Beyond Metrics, we spent some time defining metrics and pointing out that they have multiple consumers, which means we need to package and present the data for these different constituencies. As you’ll see, there is no lack of things to count. But in reality, just because you can count something doesn’t mean you should. So let’s dig a bit into what you can count.

One disclaimer: we can only go so deep in a blog series. If you are intent on building a metrics program, you must read Andy Jaquith’s seminal work Security Metrics: Replacing Fear, Uncertainty and Doubt. The book goes into great detail about how to build a security metrics program. The first significant takeaway is how to define a good security metric in the first place:

  • Expressed as numbers
  • Have one or more units of measure
  • Measured in a consistent and objective way
  • Can be gathered cheaply
  • Have contextual relevance

Contextual relevance tends to be the hard thing. As Andy says in his March 2010 security metrics article in Information Security magazine: “the metrics must help someone – usually the boss – make a decision about an important security or business issue.” That’s where most security folks tend to fall down, focusing on things that don’t matter, or drawing suspect conclusions from operational data. For example, generating a security posture rating from AV coverage won’t work well.

Consensus Metrics

We also need to tip our hats to the folks at the Center for Internet Security, who have published a good set of starter security metrics, built via their consensus approach. Also take a look at their QuickStart guide, which does a good job of identifying the process to implement a metrics program. Yes, consensus involves lowest common denominators, and their metrics are no different. But keep things in context: the CIS document provides a place to start, not the definitive list of what you should count. Taking a look at the CIS consensus metrics:

  • Incident Management: Cost of incidents, Mean cost of incidents, Mean incident recovery cost, Mean time to incident discovery, Number of incidents, Mean time between security incidents, Mean time to incident recovery
  • Vulnerability Management: Vulnerability scanning coverage, % of systems with no severe vulnerabilities, Mean time to mitigate vulnerabilities, Number of known vulnerabilities, Mean cost to mitigate vulnerabilities
  • Patch Management: Patch policy compliance, Patch management coverage, Mean time to patch, Mean cost to patch
  • Configuration Management: % of configuration compliance, Configuration management coverage, Current anti-malware compliance
  • Change Management: Mean time to complete changes, % of changes with security review, % of changes with security exceptions
  • Application Security: Number of applications, % of critical applications, Application risk assessment coverage, Application security testing coverage
  • Financial: IT security spending as % of IT budget, IT security budget allocation

Obviously there are many other types of information you can collect – particularly from your identity, firewall/IPS, and endpoint management consoles. Depending on your environment these other metrics may be important for operations. We just want to provide a rough sense of the kinds of metrics you can start with.
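To show what “numbers with units, measured consistently” looks like in practice, here is a minimal sketch (Python standard library only) that computes two of the CIS-style incident metrics listed above from simple incident records. The record fields and sample dates are purely illustrative assumptions, not a CIS-defined schema.

```python
from datetime import datetime
from statistics import mean

# Illustrative incident records: when each incident occurred and when it was discovered.
incidents = [
    {"occurred": datetime(2011, 1, 4, 9, 0), "discovered": datetime(2011, 1, 6, 14, 0)},
    {"occurred": datetime(2011, 2, 11, 22, 0), "discovered": datetime(2011, 2, 12, 8, 30)},
    {"occurred": datetime(2011, 3, 20, 3, 15), "discovered": datetime(2011, 3, 21, 10, 0)},
]


def mean_time_to_discovery_hours(records):
    """Average gap between when an incident occurred and when it was discovered."""
    return mean((r["discovered"] - r["occurred"]).total_seconds() / 3600 for r in records)


def mean_time_between_incidents_days(records):
    """Average gap between consecutive incidents, ordered by occurrence."""
    ordered = sorted(r["occurred"] for r in records)
    gaps = [(b - a).total_seconds() / 86400 for a, b in zip(ordered, ordered[1:])]
    return mean(gaps)


print(f"Mean time to incident discovery: {mean_time_to_discovery_hours(incidents):.1f} hours")
print(f"Mean time between security incidents: {mean_time_between_incidents_days(incidents):.1f} days")
```

Note that each function name carries its unit (hours, days), which keeps the measurement consistent no matter who runs the calculation.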
For those gluttons for punishment who really want to dig in, we have built Securosis Quant models that document extremely granular process maps and the associated metrics for Patch Management, Network Security Operations (monitoring/managing firewalls and IDS/IPS), and Database Security. We won’t claim all these metrics are perfect. They aren’t even supposed to be – nor are they all relevant to all organizations. But they are a place to start. And most folks don’t know where to start, so this is a good thing.

Qualitative ‘Metrics’

I’m very respectful of Andy’s work and his (correct) position regarding the need for any metric to be a number with units of measure. That said, there are some things that aren’t metrics (strictly speaking) but which can still be useful to track, and to benchmark yourself against other companies. We’ll call these “qualitative metrics,” even though that’s really an oxymoron. Keep in mind that the actual numbers you get for these qualitative assessments aren’t terribly meaningful, but the trend lines are. We’ll discuss how to leverage these ‘metrics’/benchmarks later. But some context on your organization’s awareness of and attitudes around security is critical.

  • Awareness: % of employees signing acceptable use policies, % of employees taking security training, % of trained employees passing a security test, % of incidents due to employee error
  • Attitude: % of employees who know there is a security group, % of employees who believe they understand threats to private data, % of employees who believe security hinders their job activities

We know what you are thinking. What a load of bunk. And for gauging effectiveness you aren’t wrong. But any security program is about more than just the technical controls – a lot more. So qualitatively understanding the perception, knowledge, and awareness of security among employees is important. Not as important as incident metrics, so we suggest focusing on the technical controls first. But you ignore personnel and attitudes at your own risk. More than a few security folks have been shot down because they failed to pay attention to how they were perceived internally.

Again, entire books have been written about security metrics. Our goal is to provide some ideas (and references) to help you understand what you can count, but ultimately what you do count depends on your security program and business imperatives. Next we will focus on how to collect these metrics systematically. Because without your own data, you can’t compare anything.
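As a tiny illustration of the point above that the trend lines matter more than the absolute numbers, here is a short sketch that tracks one attitude metric across survey rounds. The survey figures and quarter labels are made up for illustration.

```python
# Track the trend of a qualitative 'metric': the % of surveyed employees who
# believe security hinders their job activities. All figures are illustrative.
survey_rounds = [
    ("2010-Q3", 41.0),
    ("2010-Q4", 38.5),
    ("2011-Q1", 33.0),
]

# The absolute numbers matter less than the direction of the trend.
for (prev_label, prev_pct), (label, pct) in zip(survey_rounds, survey_rounds[1:]):
    delta = pct - prev_pct
    direction = "improving" if delta < 0 else "worsening"
    print(f"{label}: {pct:.1f}% ({delta:+.1f} points vs {prev_label}, {direction})")
```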


Security Benchmarking, Going Beyond Metrics: Introduction

At Securosis we tend to be passionate about security. We have the luxury of time (and lack of wingnuts yelling at us all day) to think about how security should work, and make suggestions for how to get there. We also have our own pet projects – areas of research that get us excited. We usually focus on ‘hot’ topics, because they pay the bills. We rarely get to step back and think outside the box about a security process that really needs to change.

That’s why I’m very excited to be starting a new research project called Security Benchmarking, Going Beyond Metrics – interestingly enough, on security metrics and benchmarking. This topic is near and dear to my heart. I have been writing about metrics for years, and I broached the subject of benchmarking in my security methodology book (The Pragmatic CSO) back in 2007. To be candid, talking about security metrics – and more specifically security benchmarking – was way ahead of the market. Four years later, we still struggle to decide what we should count. Forget about comparing our numbers to other organizations to understand relative performance – which is how we would define a benchmark. It has been like trying to teach a toddler quantum physics. But we believe this idea’s time has come.

In this series and the resulting white paper, I will revisit many of the ideas in The Pragmatic CSO, including updates based on industry progress since 2007. Ultimately, at Securosis we focus on practical (even pragmatic) application of research, so there won’t be any fluff or pie-in-the-sky handwaving. Just things you can start thinking about right now, with some actionable information to both rejuvenate your security metrics program and start comparing yourself against your peers. Before we jump in, thanks to our friends at nCircle for sponsoring this research. The rest of this series will appear on the complete (‘heavy’) side of our site and our heavy RSS feed.

Introduction: Security Metrics

As long as we have been doing security, we have been trying to count different aspects of our work. The industry has had very limited success so far (yes – we are being very kind), so we need a better way to answer the question: “How effective are you at security?” The fundamental problem is that security is a nebulous topic, and at the end of the day the only important question is whether you are compromised or not – that is the ultimate measure of your effectiveness. But that doesn’t help communicate value to senior management or increase operational efficiency.

The problem is further complicated by the literally infinite number of things to count. You can count emails and track which ones are bad – that’s one metric. So is the number of network flows, compared to how many of them are ‘bad’. If you can count it, it’s a metric. It may not be a good metric, but it is a metric. You can spend as much time as you like modeling, and counting, and correlating, and trying to figure out your “coverage” percentage, comparing the controls (always finite) to every conceivable attack (always infinite). But ultimately we have found that most security professionals do best keeping two sets of books. No, not like WorldCom did in the good old days, but two distinct sets of metrics:

  • Important to senior management: Folks like the CIO, CFO, and CEO want to know whether you are ‘secure’ and how effective the security team is. They want to hear about the number of ‘incidents’, how much money you spend, and whether you hit the service levels you committed to. They tend to focus on those for ‘overhead’ functions – and whether you like it or not, security is overhead.
  • Important to running your business: Distinct from business-centric numbers, you also need to measure the efficiency of your security processes. These are the numbers that make senior management’s eyes glaze over. Things like AV updates, time to re-image a machine or deploy a patch, number of firewall rule changes, and a host of other metrics that track what your folks are doing every day. The point of these numbers isn’t to gauge security quality overall, but to figure out how you can do your work faster and better.

Of course, it’s almost impossible to improve things you don’t control. So we will focus on activities that can be directly impacted by the CSO and/or the security team. As we work through this series we will look at logical groupings of metrics that can be used for both operational and benchmarking purposes. But before we get ahead of ourselves, let’s define security benchmarking at a high level.

Security Benchmarking

Given our general failure to define and collect a set of objective, defendable measures of security effectiveness, impact, etc., a technique that can yield very interesting insight into your security environment is to compare your numbers to others. If you can get a fairly broad set of consistent data (both quantitative and qualitative), then compare your numbers to the dataset, you can get a feel for relative performance. This is what we mean by security benchmarking.

Benchmarks have been used in other IT disciplines for decades. Whether for data center performance or network utilization, companies have always felt compelled to compare themselves to others. This hasn’t happened in security to date, mostly because we haven’t been sure what to count. If we can build some consensus on that, and figure out a way to collect and share that data safely, then benchmarking becomes much more feasible. Let’s discuss some metrics and why they would be interesting to compare to others:

  • Number of incidents: Are you overly targeted? Or less effective at stopping attacks? The number of incidents doesn’t tell the entire story, but knowing how you fare relative to others is certainly interesting.
  • Downtime for security issues: How effective are you at stopping attacks? And how severe is their impact? The downtime metric doesn’t capture everything, but it does get at the most visible impact of an attack.
  • Number of activities: By tracking activity at a high level, you


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.