New Release: Our Insanely Comprehensive Database Security Framework and Metrics

Some projects take us a few days. Others? More like 18 months. Back before Mike even joined us, Adrian and I started a ‘quick’ project to develop a basic set of metrics for database security programs. As with most of our Project Quant efforts, we quickly realized there wasn’t even a starting framework out there, never mind any metrics. We needed to create a process for every database security task before we could define where people spent their time and money. Over the next year and a half we posted, reposted, designed, redesigned, and finally produced a framework we are pretty darn proud of. To our knowledge this is the most comprehensive database security program framework out there. From developing policies, to patch management, to security assessments, to activity monitoring, we cover all the major database security activities. We structured this as a modular set of processes and subprocesses, with metrics to measure key costs at each step. The combination of process framework and metrics should give you some good ideas for structuring, improving, and optimizing your own program. Here’s the permanent home for the report, where you can post feedback and which will include update notices: Measuring and Optimizing Database Security Operations (DBQuant). We broke this into an Executive Summary that focuses on the process, and a full report with everything: Executive Summary (PDF) and The Full Report (PDF). Special thanks to Application Security Inc. for sponsoring the report, and for sticking with us as we pretended to be PhD candidates and dragged this puppy out.


Database Trends

This is a non-security post, in case that matters to you. A few days ago I was reading about a failed telecom firm ‘refocusing’ its business and technology to become a cloud database provider. I’m thinking that’s the last frackin’ thing we need. Some opportunistic serial start-up-tard can’t wait to fail the first time, and wants to skip over onto not one but two hot trends. Smells like 1999. Of course they landed an additional $4M; couple ‘cloud’ with a modular database and it’s a no-lose situation – at least for landing venture funding. So why do we need vendor #22 jumping onto the database-in-the-cloud bandwagon?

I visited the Xeround site, and after looking at their cloud database architecture … damn, it appears solid. Think of a more modular MySQL. Or better yet, Amazon Dynamo with a less myopic focus on search and content delivery. Modular back-end storage options, multiple access nodes disassociated from the query engines, and multiple API handlers. The ability to mix and match components to form a database engine depending upon the task at hand makes more sense than the “everything all the time” model we have with relational vendors. I don’t see anything novel here, just a solid assemblage of features. To fully take advantage of an elastic, multi-zone, multi-tenant, pay-as-you-go cloud service, a modular, dynamic database is more appropriate. Notice that I did not say ‘requirement’ – you can run Oracle as an AMI on Amazon too, but that’s neither modular nor nimble in my view.

The main point I want to make is that the next generation of databases is going to look more like this and less like Oracle and IBM DB2. The core architecture described embodies a “use just what you need” approach, and allows you to tailor the database to fit the application service model. And don’t mistake me for yet another analyst claiming that relational database platforms are dead. I have taken criticism in the past because people felt I was indicating relational platforms had run their course, but that’s not the case. It’s more like the way RISC concepts appeared in CISC processors to make them better, but did not supersede the original as promised. NoSQL concepts are pushing the definition of what ‘database’ means. And we see all these variants because the relational platforms are not a good fit for either the application model or cloud service delivery models. Expect many of the good NoSQL ideas to show up in relational platforms as the next evolutionary step. For now, the upstarts are pointing the way.

Note that this is not an endorsement of the Xeround technology. Frankly I am too busy to load up an AMI and try their database to see if it works as advertised. And their feature comparison is kinda BS. But conceptually I think this model is on track. That’s why we will see many new database solutions on the market, as many firms struggle to find the right mix of features and platform options to meet the requirements of application developers and cloud computing customers.
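To make the mix-and-match idea concrete, here is a minimal sketch of the composition model described above – pluggable storage, a decoupled query engine, and swappable API handlers. This is purely illustrative: the class names are hypothetical, and it is not Xeround’s actual implementation.

```python
# Hypothetical sketch of the "mix and match" database model described above --
# not any vendor's real architecture, just the composition idea.

class KVStorage:
    """Pluggable back-end storage: could be local disk, S3, or in-memory."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data.get(key)

class QueryEngine:
    """Query engine decoupled from storage; swap in SQL, key-value, or search."""
    def __init__(self, storage):
        self.storage = storage
    def execute(self, key):
        return self.storage.get(key)

class RestHandler:
    """One of several possible API handlers (REST, MySQL wire protocol, etc.)."""
    def __init__(self, engine):
        self.engine = engine
    def handle(self, request_key):
        return {"result": self.engine.execute(request_key)}

# Compose only the pieces the application needs:
db = RestHandler(QueryEngine(KVStorage()))
db.engine.storage.put("user:42", {"name": "alice"})
print(db.handle("user:42"))
```

The point is the seams: because each layer only knows the interface below it, you could swap an in-memory store for a distributed one, or add a second API handler beside the REST one, without touching the rest of the engine – which is exactly what the “everything all the time” relational model makes hard.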


Software vs. Appliance: Understanding DAM Deployment Tradeoffs

One thing I don’t miss from my vendor days in the Database Activity Monitoring market is the competitive infighting. Sure, I loved doing the competitive analyses to see how each vendor viewed itself, and how they were all trying to differentiate their products. I did not enjoy going into a customer shop after a competitor “poisoned the well” with misleading statements, evangelical pitches touting the right way to tackle a problem, or flat-out lies. Being second into a customer account meant having to deal with the dozen land mines left in their minds, and explaining those issues just to get back to even. The common land mines were about performance, lack of impact on IT systems, and platform support. The next vendor in line countered with architectures that did not scale, difficulties in deployment, inability to collect important events, and the management complexity of every other product on the market. The customer often cannot determine who’s lying until after they purchase something and see whether it does what the vendor claimed, so this game continues until the market reaches a certain level of maturity.

With Database Activity Monitoring, the appliance vs. software debate is still raging. It’s not front and center in most product marketing materials, and it’s not core to solving most security challenges. It is positioned as an advantage behind the scenes, especially during bake-offs between vendors, to undermine competitors. The criticism is based not on the way events are processed, the UI, or event storage – but simply on the deployment model. Hardware is better than software. Software is better than hardware. This virtual hardware appliance is just as good as software. And so on. This is an area where I can help customers understand the tradeoffs of the different models.

Today I am kicking off a short series to discuss the tradeoffs between appliance, software, and virtual appliance implementations of Database Activity Monitoring systems. I’ll research the current state of the DAM market and highlight the areas you need to focus on to determine which is right for you. I’ll also share some personal experiences that illustrate the difference between the theoretical and the practical. The series will be broken into four parts:

• Hardware: Discussion of hardware appliances dedicated to Database Activity Monitoring. I’ll cover the system architecture, common deployment models, and setup. Then we’ll delve into the major benefits and constraints of appliances, including performance, scalability, architecture, and disaster recovery.
• Software: Contrasting DAM appliances with software architecture and deployment models, then covering pros and cons including installation and configuration, flexibility, scalability, and performance.
• Virtual Appliances: Virtualization and cloud models demand adaptation for many security technologies, and DAM is no different. Here I will discuss why virtual appliances are necessary – contrasting them with hardware-based appliances – and cover the practical issues that crop up.
• Data Collection and Management: A brief discussion of how data collection and management affect DAM. I will focus on areas that come up in competitive situations and tend to confuse buying decisions.

I have been an active participant in these discussions over the last decade, and I worked for a DAM software provider. As a result I need to acknowledge, up front, my historical bias in favor of software.
I have publicly stated my preference for software in the past, based upon my experiences as a CIO and author of DAM technology. As an analyst, however, I have come to recognize that there is no single ‘best’ technology. My own experiences sometimes differ from customer reality, and I understand that every customer has its own preferred way of doing things. But make no mistake – the deployment model matters! That said, there is no single ‘best’ model. Hardware, software, and virtual appliance each have advantages and disadvantages. What works for each customer depends on its specific needs. And just like vendors, customers will have their own biases. What’s important is what is ‘better’ for the consumer.

I will provide a list of pros and cons to help you decide what will work best. I will point out my own preferences (bias), and as always you are welcome to call ‘BS’ on anything in this series you don’t accept. Perhaps more than any other series I have ever written at Securosis, I want to encourage feedback from the security and IT practitioner community. Why? Because I have witnessed too many software solutions that don’t scale as advertised. I am aware of several hardware deployments that cost the customer almost 4X the original bid. I am aware of software – my own firm was guilty – so inflexible we were booted from the customer site. I know these issues still occur, so my goal is to help wade through the competitive puffery. I encourage you to share what you have seen, what you prefer, and why – it helps the community.


Always Be Looking

You really should read Lee Kushner and Mike Murray’s Information Security Leaders blog. Besides being good guys, they usually post good perspectives on career management each week – like this post on Rats and Ships, where they address how to know your company is in trouble and when to start looking for what’s next. Obviously if the company is in turmoil and you don’t have your head in the sand, the writing will be on the wall.

I learned in the school of hard knocks that you always have to be looking for what’s next. I know that sounds very cynical. I know it represents a lack of loyalty to whoever you are working for now. You see, things can change in an instant. Your company can lose a big deal. You could be the fall guy (or gal) for a significant breach (remember: blame != responsibility). Or you could have a difference of opinion with your boss. There are an infinite number of scenarios that result in the same thing: you, out on your ass, with no job. Usually you expect it, but not always. The absolute worst day of my career was having to lay off a good friend who had absolutely no idea it was coming. Because I couldn’t give him a heads-up that we were trying to sell the company, he was blindsided. When we closed the deal, almost everyone had to go. Some situations you can see coming; some you can’t. But either way you need to be prepared.

If you are in security, you are trained to think defensively. You look at a situation and need to figure out how you can get pwned, screwed, or killed. It’s no different managing your career. Always be aware of how you can get screwed. Hopefully you won’t, and you’ll have a long, prosperous career wherever you are, if that’s what you choose. But that doesn’t get you off the hook for being prepared. You should always be out there networking, meeting people, getting involved in your community, and paying it forward. Read Harvey Mackay’s book “Dig Your Well Before You’re Thirsty.” It’s the best book I’ve read about why you need to do something you likely despise – networking.

And let’s not forget that opportunity cuts both ways. You need to be ready to pull the rip cord when things come unglued, but sticking around can be worthwhile too. For one, fewer people around means more opportunity for you, especially if you are pretty junior. You may end up with far more responsibility than your title, salary, and/or experience would otherwise warrant. And if you can see it through to the recovery (to the degree there is a recovery), you are positioned to be an up-and-comer in your organization. I guess the bigger message is to be aware of what’s going on, and to actively manage your career progression – to the degree you want to – rather than letting your career manage you. If you are really a glutton for punishment, start your own company. Then you can stop looking, because you’ll know where to find all the problems.

Photo credit: “Virtual Defensive Driving” originally uploaded by Kristin Brenemen


Friday Summary: April 8, 2011

I was almost phished this week – not by some Nigerian scammer or Russian botnet, but by my own bank. Bundled with both my checking and mortgage statements – with the bank’s name, logos, and phone number – was the warning: “Notice: Credit Report Review Re: Suspicious activity detection”. The letter made it appear that there was ongoing suspicious activity reported by the credit agency, and that I needed to take immediate action. I thought “Crud, now I have to deal with this.” Enclosed was a signature sheet that looked like they wanted permission to investigate and take action. But wait a minute – when does my bank ask for permission? My suspicion awoke. I looked at the second page of the letter, under an electron microscope to read the 10^-6 point fine print, and it turned out suspicious activity was only implied. They were using fear of not acting to scare me into signing the sheet. The letter was a ruse to get me to buy credit monitoring ‘services’ from some dubious partner firm that has been repeatedly fined millions by various state agencies for deceptive business practices.

Now my bank – First Usury Depository – is known for new ‘products’ that are actually financial IEDs. Of the 30 fantastic new FUD offerings mailed in the last three years, not one could have saved me money. All would have resulted in higher fees, and all contained traps to hike interest rates or incur hidden charges. But the traps were hidden in the financial terms – they had not stooped to fear before, instead using the lure of financial independence and assurances that I was being very smart.

Alan Shimel is right that we need to be doubly vigilant for phishing scams, just for the wrong reasons. Both phishers and bank executives are looking to make a quick buck by fooling people. They both use social engineering tactics – official-looking scary communications that trigger fear, to prompt rushed and careless action. And they both face very low probabilities of jail time. I can’t remember who tweeted “Legitimate breach notification is indistinguishable from phishing”, but it’s true on a number of levels. Phished or FUDded, you’re !@#$ed either way. I have to give First Usury some credit – their attack is harder to detect. I am trained to look at email headers and HTML content, but not so adept at deciphering credit reports and calculating loan-to-value ratios. If I am phished out of my credit card number, I am only liable for the first $50. If I am FUDded into a new service by my bank, it’s $20 every month. Hey, it has worked for AOL for decades… On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

• Mike quoted on metrics in Dark Reading.
• Adrian’s DAM and Intrusion Defense lesson.
• Rich on Threatpost talking about RSA and Epsilon breaches.
• Adrian’s Securing Databases In The Cloud: Part 4 at Dark Reading.

Favorite Securosis Posts

• Rich: Less Innovation Please. We don’t need more crap. We need more crap that works. That we use properly.
• Mike Rothman: Less Innovation Please. Adrian kills it with this post. Exactly right. “We need to use what we have.” Bravo.
• Adrian Lane: FireStarter: Now What?

Other Securosis Posts

• Always Be Looking.
• Incite 4/6/2011: Do Work.
• Fool us once… EMC/RSA Buys NetWitness.
• Security Benchmarking, Going Beyond Metrics: Collecting Data Systematically.
• Security Benchmarking, Going Beyond Metrics: Sharing Data Safely.
• Quick Wins with DLP Light: Technologies and Architectures.
• Quick Wins with DLP Light: The Process.

Favorite Outside Posts

• Rich: IEEE’s cloud portability project: A fool’s errand? Seriously, do you really think interoperability is in a cloud provider’s best interest? They’ll all push this off as long as possible. What will really happen is that smaller cloud vendors will adopt API and functional compatibility with the big boys, hoping you will move to them.
• Mike Rothman: Jeremiah Grossman Reveals His Process for Security Research. Good interview with the big White Hat. Also other links to interviews with Joanna Rutkowska, HD Moore, Charlie Miller, and some loudmouth named Rothman.
• Pepper: Creepy really is. You can build a remarkable activity picture / geotrack / slime trail from public photo geolocation tags.
• Adrian Lane: Incomplete Thought: Cloudbursting Your Bubble – I call Bullshit….

Project Quant Posts

• DB Quant: Index.
• NSO Quant: Index of Posts.
• NSO Quant: Health Metrics–Device Health.
• NSO Quant: Manage Metrics–Monitor Issues/Tune IDS/IPS.
• NSO Quant: Manage Metrics–Deploy and Audit/Validate.
• NSO Quant: Manage Metrics–Process Change Request and Test/Approve.
• NSO Quant: Manage Metrics–Signature Management.

Research Reports and Presentations

• Network Security in the Age of Any Computing.
• The Securosis 2010 Data Security Survey.
• Monitoring up the Stack: Adding Value to SIEM.
• Network Security Operations Quant Metrics Model.
• Network Security Operations Quant Report.
• Understanding and Selecting a DLP Solution.
• White Paper: Understanding and Selecting an Enterprise Firewall.
• Understanding and Selecting a Tokenization Solution.

Top News and Posts

• Conde Nast $8M Spear Phishing Scam was mostly buried in the news, but a big deal!
• Something about email addresses being hacked. You may have heard about it from 50 or so of your closest vendors.
• Albert Gonzalez’s surprise appeal.
• IBM to battle Amazon in the public cloud.
• Cyberwars Should Not Be Defined in Military Terms, Experts Warn.
• Net giants challenge French data law.
• EMC Acquires NetWitness Corporation.

Blog Comment of the Week

Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Lubinski, in response to Incite: Do Work.

“They seem to forget we are all supposed to be on the same team” I work with a few people like this. It makes me wonder if they don’t really think about it and just go on doing what they have been doing for X number of years and consider that good enough. RSA can get pwned as easily as the rest of the world; it’s not like they have users that carry around magical anti-hacker unicorns. I see a new buzzword coming on, StuxAPT. 🙂 No?


Incite 4/6/2011: Do Work

We spent last weekend up north visiting friends and family while the kids are on Spring Break. We decided to surprise them on Sunday by going to a baseball game. It was opening weekend and our home team was in town. We got cheap seats in the upper deck, but throughout the game we kept moving down, and by the 9th inning we were literally in the front row by the dugout. The Boss turned to me and asked if the kids had any idea how lucky they are. Yeah, right. And that’s a huge problem for me. Given a lot of luck and a little talent, I make a pretty good living, which means my kids can do things that weren’t possible for me growing up. But where do you draw the line? You want the kids to have great experiences, but you also want them to understand the work involved in providing those experiences.

The best answer I have right now is to do work. I think I saw Chris Nickerson say that on Twitter one day, and it resonated with me. It’s basically leading by example. I get up every morning and do work – even though most of the time what I do all day doesn’t feel like work. The kids know that I work hard, and I’m good about reminding them when they get a little uppity.

One of the best parts of the weekend was seeing our twin nephews. They are 3 months old and a lot of fun. But each time I got my hands on one of them, I’d start working them out. You know, getting them to start supporting their weight – both sitting and standing. I also had them doing some tummy time, which brought back plenty of memories from when my kids were babies. Just like I remembered, newborns don’t like to do work. They like to eat and sleep and crap their pants. And when they would bark at me I’d just look them in the eye and say “stop bitching and do work!” Though maybe it is a bit early to push them out of their comfort zone. Although they do have to get into that fancy pre-school, after all…

Yes, I know kids need to be kids too. They need to play and have fun, because lord knows once they get out of school it’s not as much fun. But they can work at having fun. They can work on their ball skills, being a good friend, or even Angry Birds. If you want to be good, you need to work at it. That’s right. Do work!

Working at home creates some challenges, because every so often one of the kids will want to play during the work day. I politely (or sometimes not so politely) decline and remind them that Dad is doing work. Then I make sure they did work before letting them go do their own thing. You see, working hard is a habit, and I know I can be a bit relentless with them, but if they don’t learn a good work ethic now, life will be pretty tough.

So I’ll assume that reading my drivel is work for you, and you can feel good about spending 10 minutes with us each day. And no, I won’t reimburse you for those 10 minutes you’ll never get back. Now get back to work! That’s what I’m going to do.

-Mike

Photo credits: “Do work, son!” originally uploaded by Lee Huynh

Incite 4 U

Bully? I’m good with that: We haven’t spoken about Stuxnet recently, so let me point to an interesting post from VC David Cowan (the first money into VeriSign, among others), who argues that the guy who decomposed and published all the gory details of Stuxnet is misguided in calling the US a cyber-bully. You see, whether Ralph Langner wants to admit it or not, a nuclear-capable Iran isn’t in anyone’s best interests. Regardless of your politics, it’s hard to make a case otherwise. So presumably the US (and other partners) came up with a way to meet their requirements while avoiding bombing the crap out of somewhere. That’s innovation, folks. And innovation can’t be stopped. Remember the Manhattan Project? How long was it before the USSR had its own nuclear weapons? Once Pandora’s box is open, it’s open. And I’m glad the US got to open this one. – MR

Advanced Persistent Service Providers: Ever hear of Epsilon? Not the Greek letter – the email marketing company. Me neither, until the breach notifications started rolling in. I bet the Secret Service never heard of them either. Evidently they are a pretty successful company, and that made them a target. As our emails and names start circulating the botnets, one interesting point is emerging. If you read one email sent to the DataBreaches.net folks, you realize that the lost data included not only folks who opted out, but leftover data from prior corporate customers. That’s right: they kept everything. Forever. This provides a new perspective on the idea of persistence, eh? Perhaps it’s time to check your contracts with your service providers, so you aren’t exposed by their mistakes after you switch to their competitor. – RM

Consumerization FTW: ZDNet discussed an interesting use case for Pano Logic virtual client terminals at public libraries. I am a big fan of desktop virtualization, both for security – it’s easier to patch and implement policy centrally – and because it makes your virtual session available regardless of your location or device. This is not an endorsement of any product – just of this type of technology in general. The use case makes sense, particularly for schools, which need controlled environments. At the same time I realize this will probably never catch on – for the same reason phone booths are gone: cell phones made them obsolete. The organizations with the most to gain from this service model are the least likely to be able to afford it. In the long run schools and public libraries will likely require people to


Security Benchmarking, Going Beyond Metrics: Sharing Data Safely

The best definition of a security benchmarking effort I am aware of is in Chapter 11 of my book, The Pragmatic CSO, which provides a good perspective on why benchmarking is important. Since it is very hard to find objective, defensible measures of security effectiveness, impact, etc., a technique that can yield very interesting insight into the performance of your security program is to compare it to others. If you can get a sample set of standard questions, you can get a feel for whether you are off the reservation on some activities and out ahead on others.

Benchmarking has been in use in other IT disciplines for decades. Whether it was data center performance or network utilization, companies have always felt compelled to compare themselves to others. It’s part of the competitive, win-at-all-costs mentality that pervades business. So one of the best ways to figure out how good your security is, and get a feel for various other operational aspects of your security program, is to figure out how you compare to someone else. The objective is not to come up with a “security number” or “risk score”, but to present information in the context of other companies that face the same kinds of attacks. This provides management with what they always want: a perspective on the level of risk they are willing to take. If you are behind a reasonable peer group, they can decide to invest more or to accept the risks of a less effective security program. If you are ahead, maybe they will opt to maintain or even accelerate investment (in the unlikely event they can differentiate on security). Or, yes, they might decide to scale back on security ‘overhead’. Either way, it’s a win for you as the practitioner, because you know where you stand, and the decision makers are actually making informed decisions with data. How novel!

But before we can start comparing all the metrics we’ve decided are important and are now collecting systematically, we need some kind of infrastructure and mechanism to share this data safely and securely. A few years ago I did a lot of research into building a security benchmark, and customers clearly agreed that any sharing mechanism would need to ensure:

• Anonymity: First and foremost, these customers wanted to make sure the data wasn’t attributed back to them. No way, no how. Of all the things I discussed with these customers, this was non-negotiable. There could be no way for another customer to identify source data or derive which company provided any of the data.
• Integrity: The next issue was making sure the data was meaningful. That means it must be objectively and consistently gathered. Obviously there would need to be some level of agreement on what to count and how to count it, and that would likely be the purview of a third party.
• Security: This goes hand in hand with anonymity, but it’s different in that potential customers need to understand how the data would be protected (at a granular level) before they’d be comfortable sharing.

Given all that, is it any wonder that security benchmarking remains in its infancy? When talking to any potential community aggregator or commercial benchmark offering, be sure to dig very deeply into how the data is both secured and aggregated to calculate the benchmarks. You need to ensure proper data encryption and segregation, so your data doesn’t get mixed with others’ – and so that even if it somehow does, it isn’t accessible.
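To make those requirements concrete, here is a minimal sketch of what a submission client might do before any metrics leave your network, assuming a hypothetical aggregator that issues each participant a client certificate and a shared salt. The URL, file names, and metric fields are all made up for illustration – this is not a real benchmarking service or product.

```python
import hashlib
import json
import requests  # pip install requests

# All names below are hypothetical placeholders, not a real benchmarking service.
BENCHMARK_URL = "https://benchmark.example.com/v1/submit"
ORG_ID = "acme-corp"
SHARED_SALT = "issued-by-aggregator"  # rotated periodically by the aggregator

def pseudonym(org_id: str, salt: str) -> str:
    """One-way pseudonym so the aggregator can group a company's submissions
    over time without storing anything that reverses to a company name."""
    return hashlib.sha256((salt + org_id).encode()).hexdigest()

metrics = {
    "period": "2011-Q1",
    "mean_time_to_patch_days": 18.5,
    "incidents_closed": 7,
}

payload = {"org": pseudonym(ORG_ID, SHARED_SALT), "metrics": metrics}

# Mutual TLS: the client presents its own certificate (only enrolled
# participants can submit) and verifies the aggregator's CA (so data is
# never sent to an impostor).
resp = requests.post(
    BENCHMARK_URL,
    data=json.dumps(payload),
    headers={"Content-Type": "application/json"},
    cert=("client.crt", "client.key"),  # client-side certificate pair
    verify="aggregator-ca.pem",         # pin the aggregator's CA
    timeout=30,
)
resp.raise_for_status()
```

Note the trade-off in this sketch: the salted one-way pseudonym addresses the anonymity requirement while still allowing trend lines across quarters, and the client certificate addresses the authorization point discussed next – only enrolled participants can submit, so outsiders cannot game the benchmark.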
Additionally, you’ll want to make sure any device uploading data (this must be systematic and automated, remember) is mutually authenticated and authorized, so no one can game the benchmark. From an infrastructure protection standpoint, make sure all the proper controls are in place: strong identity management, egress filtering, and HIPS (if not whitelisting) on the devices with access to the data, as well as significant monitoring of the network and database. Given some recent high-profile breaches, it’s not unreasonable to expect network full packet capture as well. Ultimately you need to be comfortable with how your data is protected, so ask as many questions as you need to.

From an application standpoint it’s also reasonable to expect the code to be built using some kind of secure development methodology. So learn about the threat models the vendor (or community) used to design the protection, as well as the degree to which automated and manual testing mechanisms were used to scrutinize the application at all points during the development process. Learn about audits and pen tests, and basically crawl into the very dark places in the provider’s infrastructure until you get comfortable.

This is a tall order, and it adds substantially to the due diligence required to get comfortable participating in a security benchmark. We understand this will be too high a hurdle for some. But keep your eyes on the prize: making security decisions based on actual data, within the context of your peer group – as opposed to going on gut feel, politics, or prayer. Once you clear this intellectual hurdle, it’s time to define your peer groups for comparison and figure out how to analyze the data. That’s next.


Less Innovation Please

It happens every time we have a series of breaches: the ‘innovators’ get press coverage with some brand-new idea for how to stop hackers and catch malicious employees trying to steal data. We are seeing yet another cycle right now, which Rich discussed yesterday in FireStarter: Now What? The sheer idiocy of Wired Magazine’s Paranoia Meter made me laugh out loud. Not that monitoring shouldn’t be done, but monitoring users’ physical traits to identify bad behavior takes far more effort and is error-prone. Looking at posture, mouse movements, and keystrokes to judge state of mind, then using that to predict data theft? Who could believe in that? It baffles me. User behavior in the IT realm does not need to be measured in terms of eye movement, typing speed, or shifting in one’s seat – if it did, we would need to round up all the 3rd graders in the world, because we’d have a serious problem. Worse, the pitch is clearly a marketing attempt to capitalize on WikiLeaks and HBGary – the whole thing reminds me more than a little of South Park’s ‘It’.

Behavioral analysis of resource usage is quite feasible without spy cameras and shoving probes where they don’t belong. We can collect just about every action a user takes on the network – and, if we choose, from endpoints and applications as well – all of which is simpler, more reliable, and cheaper than adding physical sensors and interpreting their output. It’s completely feasible to analyze actual (electronic) user actions – rather than vague traits with unclear meaning – to identify behavioral patterns indicating known attacks and misuse. Today we mostly see attribute-based analysis (time, location, document type, etc.), but behavioral profiles can be derived to use as templates for identifying good or bad acts, and used to validate current activity. How well this all works depends more on your requirements and available time than on the capabilities of particular tools.

What angers me here is the complete lack of discussion of SIEM, File Activity Monitoring, Data Loss Prevention, or Database Activity Monitoring – all four technologies exist today and don’t rely upon bizarre techniques to collect data or pseudoscience to predict crime. Four technologies with flexible analysis capabilities built on tangible metrics. Four technologies that have been proven to detect misuse in different ways. We don’t really need more ‘innovative’ security technologies, as Wired suggests. We need to use what we have. Often we need it to be easier to use, but we already have good capabilities for solving these problems. Many of these tools have been demonstrated to work. The impediments are cost and effort – not lack of capabilities.
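To show how simple attribute-based analysis can be – no cameras or posture sensors required – here is a minimal sketch of profiling a user’s normal behavior from an access log and flagging departures. The field names, sample data, and threshold are hypothetical; real SIEM/DAM products use far richer models, but the principle is the same.

```python
from collections import Counter, defaultdict

# Hypothetical sketch of attribute-based misuse detection over access logs:
# build a per-user baseline from observed attributes, then flag departures.

ATTRS = ("hour", "location", "doc_type")

history = [
    {"user": "bob", "hour": 10, "location": "HQ", "doc_type": "report"},
    {"user": "bob", "hour": 11, "location": "HQ", "doc_type": "report"},
    {"user": "bob", "hour": 14, "location": "HQ", "doc_type": "spreadsheet"},
]

def build_baseline(events):
    """Count how often each attribute value has been seen, per user."""
    baseline = defaultdict(lambda: defaultdict(Counter))
    for e in events:
        for attr in ATTRS:
            baseline[e["user"]][attr][e[attr]] += 1
    return baseline

def is_anomalous(event, baseline, min_seen=1):
    """Flag events whose attributes were never (or rarely) seen for this user."""
    profile = baseline.get(event["user"])
    if profile is None:
        return True  # unknown user: escalate
    return any(profile[a][event[a]] < min_seen for a in ATTRS)

baseline = build_baseline(history)
suspect = {"user": "bob", "hour": 3, "location": "remote", "doc_type": "source_code"}
print(is_anomalous(suspect, baseline))  # True: 3am remote source-code access
```

In practice the interesting work is in choosing the attributes and thresholds for your environment – which is a requirements problem, not a technology gap, exactly as argued above.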


Security Benchmarking, Going Beyond Metrics: Collecting Data Systematically

Once you have figured out what you want to count (security metrics), the next question is how to collect the data. Remember, we look for metrics that are (a) consistently and objectively measurable, and (b) cheap to gather. That means some things we want to count may not be feasible. So let’s go through each bucket of metrics and list the places we can get that data.

Quantitative Metrics

These metrics are pretty straightforward to collect – under the huge assumption that you are already using some management tool to handle the function. That means some kind of console for things like patching, vulnerabilities, configurations, and change management. Without one, aggregating metrics (and benchmarking relative to other companies) is likely too advanced and too much effort. Walk before you run, and automate/manage these key functions before you worry about counting.

• Incident Management: These metrics tend to be generated as part of the post-mortem/quality assurance step after closing the incident. Any post-mortem should be performed by a team, with the results communicated up the management stack, so you should have consensus/buy-in on metrics like incident cost, time to discover, and time to recover. We are looking for numbers with official units (like any good metric).
• Vulnerability, Patch, Configuration, and Change Management: These kinds of metrics should be stored by whatever tool you use for the specific function. The respective consoles should provide reports that can be exported (usually in XML or CSV format). Unless you use a metrics/benchmarking system that integrates with your tool, you’ll need to map its output into a format you can normalize and use for reporting and comparing to peers. Make sure each console gets a full view of the entire process, including remediation, and that every change, scan, and patch is logged in the system, so you can track the (mean) time to perform each function.
• Application Security: The metrics for application security tend to be a little more subjective than we’d prefer (like % of critical applications), but ultimately things like security test coverage can be derived from whatever tools are used to implement the application security process. This is especially true for web application security scanning, QA, and other processes that tend to be tool-driven – as opposed to more amorphous functions such as threat modeling and code review.
• Financial: Hopefully you have a good relationship with your CFO and finance team, because they will have metrics on what you spend. You can gather direct costs such as software and personnel, but indirect costs are more challenging. Depending on the sophistication of your internal cost allocation, you may have very detailed information on how to allocate shared overhead, but more likely you will need to work with the finance team to estimate. Remember that precision is less important than consistency. As long as you estimate the allocations consistently, you can get valid trend data; if you’re comparing to peers you’ll need to be a bit more careful about your definitions.

For the other areas we mentioned, including identity, network security, and endpoint protection, this data will be stored in the respective management consoles. As a rule of thumb, the more mature the product (think endpoint protection and firewalls), the more comprehensive the data. And most vendors have already had requests to export data, or have built in more sophisticated management reporting/dashboards, for large-scale deployments.
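As a sketch of that normalization step, here is what mapping one console’s CSV export into consistent records might look like. The column names are placeholders – every vendor labels its exports differently, which is exactly why the mapping layer exists.

```python
import csv
from datetime import datetime

# Hypothetical sketch: these column names are placeholders for whatever your
# patch/vulnerability console actually exports.
FIELD_MAP = {
    "Ticket Opened": "start",
    "Patch Deployed": "end",
    "Asset Group": "scope",
}

def normalize(path):
    """Map one console's CSV export into consistent records with a computed
    time-to-patch, so exports from different tools line up for comparison."""
    records = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            rec = {norm: row[raw] for raw, norm in FIELD_MAP.items()}
            opened = datetime.fromisoformat(rec["start"])
            deployed = datetime.fromisoformat(rec["end"])
            rec["time_to_patch_days"] = (deployed - opened).days
            records.append(rec)
    return records

# Mean time to patch -- the kind of consistent, objective number the
# benchmark needs -- falls out of the normalized records:
# rows = normalize("patch_console_export.csv")
# mttp = sum(r["time_to_patch_days"] for r in rows) / len(rows)
```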
But that’s not always the case – some consoles make it harder than others to export data to different analysis tools. These management consoles – especially the big IT management stacks – are all about aggregating information from lots of places, not necessarily integrating with other analysis tools. That means as your metrics/benchmarking efforts mature, a key selection criterion will be the presence of an open interface to get data both in and out.

Qualitative Metrics

As discussed in the last post, qualitative metrics are squishy by definition and cannot meet the definition of a “good” metric. The numbers on awareness metrics should reside somewhere, probably in HR, but it’s not clear they are aggregated. And the percentage of incidents due to employee error is clearly subjective; it needs to be assessed as part of the incident response process and stored for later collection. We recommend including that judgement as part of the general incident reporting process. Attitude is much squishier – basically you ask your users what they think of your organization. The best way to do that is an online survey tool. Tons of companies offer online services for that (we use SurveyMonkey, but there are plenty). Odds are your marketing folks already have one you can piggyback on, but they aren’t expensive. You’ll want to survey your employees at least a couple times a year and track the trends. The good news is they all make it very easy to get the data out.

Systematic Collection

This is the point in the series where we remind you that gathering metrics and benchmarking are not one-time activities – they are an ongoing adventure. So you need to scope out the effort as a repeatable process, and make sure you’ve got the necessary resources and automation to ensure you can collect this data over time. Collecting metrics on an ad hoc basis defeats their purpose, unless you are just looking for a binary (yes/no) answer. You need to collect data consistently and systematically to get real value from them. Without getting overly specific about data repository designs and the like, you’ll need a central place to store the information. That could be as simple as a spreadsheet or database, a more sophisticated business intelligence/analysis tool, or even an online service designed to collect metrics and present data. Obviously the more specific a tool is to security metrics, the less customization you’ll need to generate the dashboards and reports needed to use these metrics as a management tool. Now that you have a system in place for metrics collection, we get to the meat of the series: benchmarking your metrics against a peer group. Over the next couple posts we’ll dig into exactly what that means, including how to


FireStarter: Now What?

I have always believed that security – both physical and digital – is a self-correcting system. No one wants to invest any more in security than they need to. Locks, passwords, firewalls, well-armed ninjas – they all take money, time, and effort we’d rather spend getting our jobs done, with our families, or on personal pursuits. Only the security geeks and the paranoid actually enjoy spending on security. So the world only invests the minimum needed to keep things (mostly) humming. Then, when things get really bad, the balance shifts and security moves back up the list. Not forever, not necessarily in the right order, and not usually to the top, but far enough that the system corrects itself enough to get back to business as usual. Or, far more frequently, until people perceive that the system has corrected itself – even if the cancer at the center merely moves or hides. Security never wins or loses – it merely moves up or down relative to an arbitrary line we call ‘acceptable’. Usually just below, and sometimes far below. We never fail as a whole – but sometimes we don’t succeed as well as we should in that moment.

Over the past year we have gotten increasing visibility into a rash of breaches and incidents that have actually been going on for at least 5 years. From RSA and Comodo, to Epsilon, Nasdaq, and WikiLeaks. Everyone – from major governments, to trading platforms, to banks, to security companies, to grandma – has made the press. Google, Facebook, NASA, and HBGary Federal. We are besieged by China, Eastern Europe, and Anonymous – mid-life men pretending to be teenage girls on 4chan. So we need to ask ourselves: Now what?

The essential question we as security professionals need to ask is: is the quantum dot on the wave function of security deviating far enough from acceptable that we can institute the next round of changes? We know we can do more, and security professionals always believe we should do more, but does the world want us to do more? Will they let us? Because this is not a decision we ever get to make ourselves.

The first big wave in modern IT security hit with LOVELETTER, Code Red, and Slammer. Forget the occasional website defacement – it was mass malware, and the resulting large-scale email and web outages, that drove our multi-billion-dollar addiction to firewalls and antivirus. And so the up-and-down ride started. The last time we were in a similar position was right around the time many of the current trends originated. Thanks to California SB1386, ChoicePoint became the first company to disclose a major breach back in 2005. This was followed by a rash of organizations suddenly losing laptops and backup tapes, and the occasional major breach credited to Albert Gonzalez. PCI deadlines hit, HIPAA made a big splash (in vendor presentations), and the defense industry started quietly realizing it might be in a wee bit of trouble, as those in the know noticed things like plans for top-secret weapons and components leaking out. And there were the annual predictions that this year we’d see the big SCADA hack.

The combined result was a more than incremental improvement in security. And a more than incremental increase in the capabilities of the bad guys. Never underestimate the work ethic of someone too lazy to get a legitimate job. In the midst of the current public rash of incidents, we have also seen far more than an incremental increase in the cost and complexity of the tools we use – not that they necessarily deliver commensurate value.
And everyone still rotates user passwords every 90 days, without one iota of proof that any of the current breaches would have been stymied if someone had added another ! to the end of their kid’s birthday 89 days ago. Are we deep into the next valley? Have things swung so far from acceptable that it will shift the market and our focus? My gut suspicion is that we are close, but the present is unevenly distributed – never mind the future.
