Use THEIR data to tell YOUR story

I’m in the air (literally) on the way to Metricon 6; so I’m thinking a lot about metrics, quantification, and the like. Of course most of the discussion at Metricon will focus on how practitioners can build metrics programs to make their security programs more efficient, maybe more effective, and certainly more substantiated (with data, as opposed to faith). Justifiably so – to mature the practice of security we need to quantify it better. But I can’t pass up the opportunity to poke a bit at the type of quantification that comes from the vendor community: surveys and analyses that always end up building a business case for security products and services.

The latest masterpiece from the king of vendor-sponsored quantification, Larry Ponemon, is the 2nd annual cost of cyber-crime survey – sponsored by HP/ArcSight. To be clear, I’m not picking (too much) on Dr. Larry, but I wanted to put the data he presents in the report (PDF) in the proper context and talk briefly about how a typical end user should use reports like this.

First of all, Ponemon interviewed 50 end users to derive his data. It’s been a long time since I’ve done the math to determine statistical significance, but I can’t imagine that a sample size of 50 qualifies. When you look at some of the results, his findings are all over the map. The high-level sound bites include a median annualized cost of $5.9 million from “cyber crime,” whatever that means. The range of annualized losses goes from $1.5 to $36.5 million. That’s a pretty wide range, eh? His numbers are up fairly dramatically from last year, which plays into the story that things are bad and getting worse. Unsurprisingly, that’s good for generating FUD (Fear, Uncertainty, and Doubt). And that’s what we need to keep in mind about these surveys. Being right is less important than telling a good story, but we’ll get to that.

Let’s contrast that against Verizon Business’s 2011 DBIR, which used 761 data points from their own data, data from the US Secret Service, and additional data from Dutch law enforcement as a supplement. 761 vs. 50. I’m no mathematician, but which data set sounds more robust and representative of the overall population to you?

Even better is one of Larry’s other findings, which I include in its entirety because it must be seen to be believed: “The most costly cyber crimes are those caused by malicious code, denial of service, stolen or hijacked devices and malicious insiders. These account for more than 90 percent of all cyber crime costs per organization on an annual basis. Mitigation of such attacks requires enabling technologies such as SIEM and enterprise GRC solutions.”

Really? Mitigation of malicious code attacks requires SIEM and GRC? Maybe I’m splitting hairs here, but this kind of absolute statement makes me nuts. The words matter. I understand the game. Ponemon needs to create some urgency for ArcSight’s prospects to justify the report, so throw a little love at SIEM and GRC. Rock on. Yeah, the cynic is in the house. This statement is then justified by some data that says surveyed customers using SIEM lost on average 25% less than those without SIEM. Those folks with SIEM were able to detect faster and contain more effectively. Which is true in my experience – but only if the company makes a significant and ongoing investment. Right – to the tune of millions of dollars. I wonder if any of those 50 companies had, let’s say, a failed SIEM implementation? Were they counted in the SIEM bucket?
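To put the sample-size complaint in concrete terms, here’s a minimal sketch – the loss distribution and dollar figures are invented for illustration, not taken from either report – of how much a survey average bounces around at 50 respondents versus 761 when the underlying losses are as skewed as breach costs tend to be:

```python
import math
import random

random.seed(42)

def simulate_mean_spread(n, trials=2000):
    """Draw `trials` hypothetical surveys of size n from a skewed (lognormal)
    loss distribution and report how much the survey-to-survey mean varies."""
    means = []
    for _ in range(trials):
        sample = [random.lognormvariate(1.0, 1.2) for _ in range(n)]  # losses in $M (made up)
        means.append(sum(sample) / n)
    mu = sum(means) / trials
    sd = math.sqrt(sum((m - mu) ** 2 for m in means) / trials)
    return mu, sd

for n in (50, 761):
    mu, sd = simulate_mean_spread(n)
    print(f"n={n:4d}: mean loss estimate ~${mu:.1f}M, survey-to-survey spread ~${sd:.1f}M")
```

The point isn’t the specific numbers; it’s that a heavily skewed distribution sampled only 50 times produces an estimate wobbly enough to swing a headline figure by millions.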
Again, let’s not confuse correctness of the data with the story you need to tell to do your job. That’s the value of these reports. They provide data that is not your own, allowing you to tell a story internally. Lord knows our organizations want to see hard costs, showing real losses, to justify continued spending on security. This is the same message I deliver with our Data Breaches presentation. The data doesn’t matter – the story does.

A key skill for any management position is the ability to tell a story. In the security business, our stories must paint a picture of what can happen if the organization takes its eyes off the ball – if the money is spent elsewhere and the flanks are left unprotected. Understand that your VP of Sales is telling his/her story about how further investment in sales is important. VPs of manufacturing tell stories about the need to upgrade equipment in the factories, and so on and so forth. So your story needs to be good.

Not all of us are graced with a breach to create instant urgency for continued security investment – though if you believe Ponemon’s data, fewer and fewer escape unscathed each year. So you need to create your own story – preferably leveraging another organization’s pain rather than your own. In this case, the empirical correctness of the data isn’t important. It’s how the data allows you to make the points you need.


Fact-Based Network Security: Defining ‘Risk’

As we mentioned when introducing this series on fact-based network security, we increasingly need to use data to determine our priorities. This enables us to focus on activities that will have the greatest business impact. But that raises the question: how do you determine what’s important? The place to start is with your organization’s assets. Truth be told, importance and beauty are both in the eye of the beholder, so this process challenges even the most clued-in security professionals. You will need to deal with subjectivity and the misery of building consensus (about what’s important), and ultimately the answer will continue to evolve in light of the dynamic nature of business. But you still need to do it. You can’t spend a bunch of time protecting devices no one cares about. And it’s always good to start conversations with a good idea of the answer, so we recommend you start by defining relative asset value.

We have long held that estimating (value = purchase price + some number you make up – depreciation) is ridiculous. That hasn’t stopped many folks from doing it, but we’ll just say there isn’t a lot of precision in that approach, and leave it at that. So what to do? Let’s get back to the concept of relative, which is the key. A reasonable approach would be to categorize assets into a handful of buckets (think 3-4) by their importance to the business. For argument’s sake we’ll call them: critical, important, and not so important. Then spend time looking through the assets and sorting them into those categories. You can use a quick and dirty method of defining relative value which I first proposed in the Pragmatic CSO. Ask a few simple questions of both yourself and business leadership about the assets:

  • What does it cost us if this system goes down? This is the key question, and it’s very hard to get a precise answer, but try. Whether it’s lost revenue, or brand impact, or customer satisfaction, or whatever – push executives to really help you understand what happens to the business if that system is not available.
  • Who uses this system? This is linked to the first question, but can yield different and interesting perspectives. If five people in Accounting use the system, that’s one thing. If every employee on the shop floor does, that’s another. And if every customer you have uses the system, that would be a much different thing. So a feel for the user community can give you an idea of the system’s criticality.
  • How easy are the assets to replace? Of course, having a system fail is a bad thing, but how bad depends on replacement cost. If your CRM system goes down, you can go online to something like Salesforce.com and be up and running in an hour or two. Obviously that doesn’t include data migration, etc. But some systems are literally irreplaceable – or would require so much customization as to be effectively irreplaceable – and you need to know which are which.

Understand you will need to abstract assets into something bigger. Your business leadership doesn’t have an opinion about server #3254 in the data center. But if you discuss things like the order management system or the logistics system, they’ll be able to help you figure out (or at least confirm) the relative importance of assets. With answers to those questions, you should be able to dump each group of assets into an importance bucket.

The next step involves evaluating the ease of attacking these critical assets. We do this to understand the negative side of the equation – asset value to the business is the positive.
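Before moving on, here’s a minimal sketch of the bucketing exercise described above. The system names, 1-5 scoring scale, and thresholds are all made up for illustration – the point is simply to force each business-level system into a relative bucket that leadership can sanity-check.

```python
from dataclasses import dataclass

@dataclass
class BusinessSystem:
    name: str
    downtime_impact: int   # 1 = shrug, 5 = revenue stops (question 1)
    user_breadth: int      # 1 = a few staff, 5 = every customer (question 2)
    replaceability: int    # 1 = swap it in an hour, 5 = effectively irreplaceable (question 3)

def value_bucket(system: BusinessSystem) -> str:
    """Turn the three answers into one of the relative-value buckets."""
    score = system.downtime_impact + system.user_breadth + system.replaceability
    if score >= 12:
        return "critical"
    if score >= 8:
        return "important"
    return "not so important"

for s in [
    BusinessSystem("order management system", 5, 5, 4),
    BusinessSystem("intranet wiki", 2, 3, 2),
    BusinessSystem("logistics system", 4, 3, 4),
]:
    print(f"{s.name:24s} -> {value_bucket(s)}")
```

The exact weights don’t matter much – what matters is producing a relative ranking the business can argue with and eventually confirm.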
If the asset has few security controls or resides in an area that is easy to get to (such as Internet-facing servers), the criticality of its issues increases. So when we prioritize efforts, we can factor in not just the value to the business, but also the likelihood of something bad happening if you don’t address an issue. By the way, try to keep delusion out of this calculation. It’s no secret that some parts of your infrastructure receive a lot of attention and protection and some don’t. Be brutally honest about that, because it will enable you to focus on brittle areas as needed. As on the asset side, focus on relative ease of attack and the associated threat models. You can use categories like: Swiss cheese, home safe, bank vault, and Fort Knox. And yes, we are joking about the category names.

You should be left with a basic understanding of your ‘risk’. But don’t confuse this idea of risk with an economic quantification, which is how most organizations define risk. Instead this understanding provides an idea of where to find the biggest steaming pile of security FAIL. This is helpful as you weigh the inflow of events, alerts, and change requests in terms of their importance to your organization.

And keep in mind that these mostly subjective assessments of value and ease of attack change – frequently. That’s why it’s so important to keep things simple. If you need to go back and revisit the priorities list every time you install a new server, the list won’t be useful for more than a day. So keep it high level, and plan to revisit these ratings every month or so.

At this point, we need to start thinking about the operational metrics we can/should gather to guide operations, based on outcomes important to your business. That’s the subject of our next post.
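To make that combination concrete, here’s a minimal sketch of how the two relative ratings – business value and ease of attack – might be multiplied together to order the work queue. The weights and the backlog entries are placeholders, not recommendations.

```python
# Relative weights for the value buckets and the (joke) exposure categories.
VALUE_WEIGHT = {"critical": 3, "important": 2, "not so important": 1}
EXPOSURE_WEIGHT = {"swiss cheese": 4, "home safe": 3, "bank vault": 2, "fort knox": 1}

def priority(asset_value: str, exposure: str) -> int:
    """Crude relative priority: business value times ease of attack."""
    return VALUE_WEIGHT[asset_value] * EXPOSURE_WEIGHT[exposure]

backlog = [
    ("order management system", "critical", "home safe"),
    ("marketing microsite", "not so important", "swiss cheese"),
    ("logistics system", "important", "swiss cheese"),
]

for name, value, exposure in sorted(backlog, key=lambda item: -priority(item[1], item[2])):
    print(f"{priority(value, exposure):2d}  {name} ({value} / {exposure})")
```

A crude product like this is obviously not an economic risk number, but it’s usually enough to show the squeaky wheel where their request actually sits.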


Incite 8/3/2011: The Kids Are Our Future

The Boss and I have been getting into Falling Skies lately. Yeah, it’s another sci-fi show with aliens trying to take down the human race and loot our planet for our resources. They’d better hurry up, since there may not be much left when the real aliens show up, but that’s another story. In the last episode we saw, the main guy (Noah Wyle of ER) made the point that our kids are our future, and we need to keep them safe. That thought resonates with me, and thankfully I’m not dealing with aliens trying to make them into drugged-out slaves. We are dealing with a lot of bad stuff that can happen online.

The severity of the issue became very apparent to me in the spring, when XX1 made a comment about playing some online card game. My spidey sense started tingling, and I went into full interrogation mode. Turns out she clicked on some ad on one of her approved websites, which then took her to some kind of card game. So she clicked on that and started playing. And so on, and so on. Instantly I checked out the machine. Thankfully it’s a Mac and she can’t install software. I did a full cleaning of the stuff that could be problematic and then had to have that talk about why it’s bad to click ads on the Internet. We then talked a bit about Google searches, checking out images, and the like. But in reality, I didn’t have much clue of where to start and what to teach her. So I asked a few friends what they’ve done to prepare their kids for the online world. Yep – I got the same quizzical stare I saw in the mirror.

That’s why I’m getting involved in the HacKid conference. Chris Hoff (yes, @beaker himself) started the conference in Boston last October, and there will be conferences in San Jose (Sept 17/18) and Atlanta (Oct 1/2) this year. HacKid is not just about security, by the way. It’s about getting our kids (ages 5-17) excited about technology, with lots of intro material on things like programming and robotics and soldering and a bunch of other stuff. Truth be told, orchestrating HacKid is a huge amount of work. Thankfully we’ve got a great board of advisors in ATL to help out, and I know it will be time well spent. I’m confident all the kids will gain some appreciation for technology, beyond the latest game for them to play on the iPad. I also have no doubt they’ll learn about how to protect themselves online, which is near and dear to my heart.

But most of all, I can’t wait to see that look of wonder. You know, when you think you’ve just seen the coolest, most amazing thing in the world. Hoff said there was a lot of that look in Boston, and I can’t wait to see it in Atlanta. Remember, the kids are our future, and this is a great place to start teaching them about the role technology will play in it. Registration is open for the Atlanta conference, so check it out, bring your kids, get involved, and reap the benefits. See you there.

-Mike

Photo credits: “Play, kids, learn, Mill Park Library, Yarra Plenty Library service” originally uploaded by Kathyrn Greenhill

Incite 4 U

Shopping list next: I can imagine it now. I’ll get the grocery list via text from the Boss, and then the follow-up. “Don’t forget the DDoS, that neighbor is pissing me off again.” According to Krebs, it’s getting easier to buy all sorts of cyber attacks – even down to a kit to build your own bot army. Can you imagine the horse trading that will happen on the playground with our kids? It’ll be like real-life Risk, with the kids trading 10,000 bots in India for 300 credit card numbers.
Law enforcement seems to be getting better at finding and stopping these perps, but it’s still amazing how rapidly the cybercrime ecosystem evolves. – MR

Don’t call it a comeback. Call it Back to FUD: Stuxnet is making a comeback? Seriously, Mr. McGurk? Does this mean we need to disconnect our uranium centrifuges from that Windows 98 machine I use to fuel my personal reactor? So if I see you at Black Hat, don’t hesitate to tell me I’m glowing. Does this mean we patch our OS and update our AV signatures? Or are you predicting 4 new 0-days we need to prepare for? Does it mean pissed-off US government employees – er, foreign governments – are going to attack the US infrastructure? Or are you asking for all public infrastructure to be rearchitected and redeployed? Oh, wait, it’s budget time – we need to get our FUD on. – AL

Do they offer gardening in the big house? Looks like the good guys bagged one of Anonymous/LulzSec’s top dogs, Topiary. This 18-year-old plant was hiding out in his folks’ basement in rural Scotland. Of course the spin unit of Anon has jumped into gear and is talking about the inability to arrest an idea. That’s true, but a few more high-profile arrests (and they are coming) and we’ll see how willing these cloistered kids will be to give up their freedom. Rich tweeted what a lot of us think: these are a bunch of angry kids, who probably got bullied in school and are now turning the tables. But they barked up the wrong tree by antagonizing governments and law enforcement. We’ll see how well they do in jail, where the bullies are much different. – MR

Who’s afraid of the big, bad (cloud security) wolf?: Vivek Kundra is saying that cloud security fears are overblown and that the US government is not afraid of public cloud infrastructure. From our research I believe both these statements are absolutely correct! Cloud infrastructure is neither more nor less secure than traditional IT infrastructure – it all depends upon how you deploy,


Words matter: You stop attacks, not breaches

Every so often, the way security marketeers manipulate words to mislead customers makes me cringe. I’m not going into specifics because that isn’t the point. I just want to clear up some terminology that many security companies misuse, which really makes them look silly. For example, security companies (who will remain nameless) have talked about how they could have stopped the RSA breach, if only you used their widget, device, god-box and/or holy grail. But this seems to require violation of the space/time continuum. Either that or Dr. Brown is at it again and the DeLorean hit 88 mph.

Breaches happen only when data is actually lost. At least that’s how I define a breach. If the attack is not successful, it’s not a breach. It’s just an attack. Yes, I’m splitting hairs, and maybe these are my own definitions. Maybe we can come up with a standard definition for the term. A breach involves data loss, not the potential for data loss, right? The words matter. I’m a writer, and a big part of the Securosis value proposition is cutting through the crap and telling you what’s real and important. We pride ourselves on vilifying marketing buffoonery, mostly because we all deserve better.

Come to think of it, I also object to the idea that any technology is going to “render the APT useless.” Yes, I took that right off a vendor’s invitation to a webcast. I have to wonder how they do that, given that persistent attackers are, well, persistent. Maybe the vendor in question could have stopped the specific attack launched against RSA. But I assure you they cannot stop every attack. Therefore, they are not rendering much of anything useless – except maybe their own credibility.

Having spent quite a while in a VP Marketing role, I understand the game. The vendors need to rise above the noise and create a reason for a prospect to engage. So they manipulate words and don’t say anything that is provably incorrect, but the words sure are misleading. They count on the great unwashed not understanding the difference, and cash the check long before the customer has a chance to realize they just installed modern-day snake oil in their networks, on their endpoints, and in their data centers. We deserve better. Where is the Straight Talk Security Express when you need it? Oh yeah, that didn’t work out too well for Senator McCain either, did it?

Yes, I know. I’m tilting at windmills again. Dreaming the impossible dream. Sancho just gave me that “you’re an idiot” look again, because this won’t change anything. The marketers will make their technology seem much bigger than it is. The sales folks will promise users that their products will actually solve whatever problem you have today. The customers will smile, write more checks, and wonder why their customer database keeps showing up on grey market sites in Estonia. It’s the game. I get it. But some days it’s harder to accept than others. This is one of those days. Guess it’s time to get back on my meds.

Photo credit: [Don Quixote and Sancho Panza] originally uploaded by M Kuhn


Cloud Security Training: August 16-18, Washington DC

Hey everyone, just a quick announcement that we are holding another CCSK training class in a few weeks. This one is in the DC area (Falls Church) and includes the Basic, Plus, and Train the Trainer options. The Basic class is a day of lecture covering the CSA Guidance and cloud security basics. The second day is all hands-on: you’ll launch instances, build an application stack, encrypt stuff, and get really confused by federated identity. Registration is open – hope to see you there.


Security has always been a BigData problem

It seems like BigData is all the rage. With things like NoSQL and Hadoop getting all the database wonks hot under the collar, smart forward-thinking folks like Amrit and Hoff increasingly point out the applicability of these techniques to security, and they’re right. I certainly agree that many of these new technologies will have a huge impact on our ability to figure out what’s happening in our environments. And not a moment too soon. Hoff wrote a couple of recent posts discussing the coming renaissance of Big Data and Security (InfoSec Fail: The Problem with BigData is Little Data and More on Security and BigData…Where Data Analytics and Security Collide), and Amrit followed up with BigData, Hadoop, and the Impending Informationpocalypse, making great points about the fragility of any (relatively) new technology, as well as the need to really know what we are looking for.

That’s the biggest fly in this BigData/security ointment. We need proper context to draw useful conclusions about anything. More data does not provide more context. If anything, it provides less, because these analysis tools are only as good as the rules they use to alert us to stuff. It’s non-trivial to get this right. Even with the best infrastructure, monitoring everything all the time, you still need to know what to look for. And it won’t get any easier. Knowing what to look for will get much more complicated. The volume of data promises to mushroom over the next few years, as full packet capture starts to hit the mainstream and more folks start seriously monitoring databases and applications. This will ripple through the entire monitoring ecosystem. Now any company claiming the ability to do security management/analysis will need not only a security ninja on staff (to know what to look for), but also some legitimate BigData qualifications.

This isn’t a new direction for the SIEM players. More than one vendor calls what they do security intelligence, modeled after the business intelligence market, which entails a BigData approach to business analysis. To get there, the SIEM vendors have built their own BigData platforms. This means they each have a purpose-built data store that can provide the kind of analysis and correlation required to find the proverbial needle in a stack of haystacks. They invested not because they wanted to build their own data stores, but because no commercial or open source technology could satisfy their requirements.

Do Hadoop and these other technologies change that? Maybe. As Amrit points out, new technologies can be brittle, so it will be a while before tools (or services) based on these latest technologies are ready for prime time. But the writing is on the wall. Security is a BigData problem, and it’s not a stretch to think that some enterprising souls will apply BigData technologies to the security intelligence problem. Which is a great thing – we certainly have not solved the problem. OMG, maybe we will see some innovation in security soon. But I’m not holding my breath.
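The “you still need to know what to look for” point is worth making concrete. Here’s a minimal sketch – the event format, threshold, and window are invented for illustration – of the kind of rule someone has to write (and tune) before any data store, big or small, can turn raw events into an alert:

```python
from collections import defaultdict

WINDOW_SECS = 600      # look-back window for failures
FAIL_THRESHOLD = 5     # how many failures before a success looks suspicious

def detect_suspicious_logins(events):
    """events: iterable of (timestamp_secs, account, result) tuples, sorted by time.
    Yields (account, failure_count) when a success follows a burst of failures."""
    recent_fails = defaultdict(list)
    for ts, account, result in events:
        if result == "fail":
            # keep only failures inside the window, then record this one
            recent_fails[account] = [t for t in recent_fails[account] if ts - t <= WINDOW_SECS]
            recent_fails[account].append(ts)
        elif result == "success" and len(recent_fails[account]) >= FAIL_THRESHOLD:
            yield account, len(recent_fails[account])
            recent_fails[account].clear()

# Tiny synthetic log: six failures in five minutes, then a success.
sample = [(i, "svc-backup", "fail") for i in range(0, 300, 50)] + [(320, "svc-backup", "success")]
for account, attempts in detect_suspicious_logins(sample):
    print(f"alert: {account} succeeded after {attempts} recent failures")
```

Scale the store all you want; without rules like this, and people who understand what they should say, all that data just sits there.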


Friday Summary: July 29, 2011

It’s that time of year again. It’s time for me and most of the Securosis crew to travel to cooler climes and enjoy the refreshing breeze of the Nevada desert. Well, it’s cooler than Phoenix, anyway. Yes, I am talking about going to the Black Hat and Def Con security conferences in Las Vegas this August 1-7. Every year I see something amazing – from shipping iPhones loaded with malware to hack whatever passes by, to wicked database attacks. Always educational and usually a bit of fun too. It is Las Vegas, after all!

We’ll be participating in a couple of talks this year at Black Hat. James Arlen is presenting on Security when Nano-seconds count. I have heard the backstory and seen the preview, so I can tell you the presentation is much more interesting than the published outline. What I knew about these networks only scratched the surface of what is going on, so I think you will be surprised by Jamie’s perspective on this topic. I have spoken to many vendors over the last couple of months who claim they can secure these networks – to which I respond “Not!” You’ll understand why Thursday, August 4th, at 1:45 in the Augustus V + VI room(s). Highly recommended.

I will be on the “Securing Applications at Scale” panel with Jeremiah Grossman, Brad Arkin, Alex Hutton, and John Johnson. We have been talking about the sheer scale of the insecure application problem for a number of years, but things are getting worse, not better. Many verticals (looking at you, retail) are just beginning to understand how big the problem is and looking at what appears to be the insurmountable task of fixing their insecure code. We’ll be talking about the threats and our panelists’ recommendations for dealing with insecure code at scale. The session is Thursday, August 4th, at 10:00am in Augustus V + VI – just after the keynote. Come and check it out and bring your questions!

I plan to attend Bryan Sullivan’s talk on Server-side JavaScript Injection, Dino Dai Zovi’s Apple iOS Security Evaluation, and David Litchfield’s Forensicating Oracle. That means I will miss a few other highlights, but you have to make sacrifices somewhere. The rest of Wednesday and Thursday I’ll be running around trying to catch up with friends, so ping me if you want to meet up.

Oh, and if you are new to these conferences, CGI Security has a good pre-conference checklist for how to keep your computers and phones from being hacked. There will be real hackers wandering around, and they will hack your stuff! My phone got hit two years ago. Just about everything with electricity has been hit at one time or another – including the advertising kiosks in the halls and elevators. Take this stuff seriously. And if you must use wireless, I recommend you look at setting up Tunnelblick before you go. Oh, I almost forgot Buzzword Bingo! See you there!

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
  • James Arlen’s presentation covered in eWeek.
  • Adrian quoted on tokenization.
  • Rich’s Palisades DLP Webinar.
  • The business-security disconnect that won’t die. Mike pontificates on understanding the business at Network World.

Favorite Securosis Posts
  • Adrian Lane: The Scarlet (Security) Letter.
  • Mike Rothman: How can you not understand the business? Yes, it’s lame to favorite your own piece, but I think this one is important. It’s about knowing how to get things done in your business, which means you have to understand your business.
  • James Arlen: Donate Your Bone Marrow. You could save a life. Do it now.
Other Securosis Posts
  • Accept Apathy – Save Users from Themselves and You from Yourself.
  • Incite 7/27/11: Negotiating in front of the crowd.
  • Question for Oracle Database Users.
  • FireStarter: The Time for Corporate Password Managers.
  • Hacking Spikes and the Real Time Media.
  • Friday Summary: July 22, 2011.
  • Rise of the Security Monkeys.

Favorite Outside Posts
  • Adrian Lane: Big Data…Where Data Analytics and Security Collide. Chris does a nice job of explaining the issue – this is what some security vendors are scrambling to deal with behind the scenes. Especially with federated data sources.
  • Mike Rothman: Risk Analysis is a Voyage. Jay Jacobs sums up a lot of what I’ve been saying for a long time. No model is perfect. Most are bad. But at some point you have to start somewhere. So do that. Just get started. Adapt and improve as you learn.
  • James Arlen: Automated stock trading poses fraud risk.

Project Quant Posts
  • DB Quant: Index.
  • NSO Quant: Index of Posts.
  • NSO Quant: Health Metrics–Device Health.
  • NSO Quant: Manage Metrics–Monitor Issues/Tune IDS/IPS.
  • NSO Quant: Manage Metrics–Deploy and Audit/Validate.
  • NSO Quant: Manage Metrics–Process Change Request and Test/Approve.

Research Reports and Presentations
  • Security Benchmarking: Going Beyond Metrics.
  • Understanding and Selecting a File Activity Monitoring Solution.
  • Database Activity Monitoring: Software vs. Appliance.
  • React Faster and Better: New Approaches for Advanced Incident Response.
  • Measuring and Optimizing Database Security Operations (DBQuant).
  • Network Security in the Age of Any Computing.
  • The Securosis 2010 Data Security Survey.
  • Monitoring up the Stack: Adding Value to SIEM.

Top News and Posts
  • Feds Bust MIT Student. In the current climate the Feds are so desperate to get any success against hackers they sometimes go too far. They want 35 years in prison for a crime that demands 5 hours of community service. What a waste of time.
  • Windows Malware Tricks Victims into Transferring Bank Funds.
  • Cisco’s “unmitigated gall”.
  • Police arrest ‘Topiary’.
  • Sniffer hijacks secure traffic from unpatched iPhones.
  • Korean Mega-hack.
  • Earnings call transcript: Symantec.
  • Earnings call transcript: Citrix Systems.
  • Earnings call transcript: Fortinet.
  • Apple Laptop Batteries Can Be Bricked.
  • House panel approves data breach notification bill.
  • Anti-Sec is not a cause, it’s an excuse.
  • Azeri Banks Corner Fake AV, Pharma Market via Krebs.
  • SIEM Montage. Gotta be a Montage!
  • Anonymous Declares War on .mil.
  • Apple Patches iOS PDF Exploit.
  • Microsoft Patches Bluetooth Hole in July’s Patch Tuesday.
  • Intego Releases iPhone Malware Scanner. Jury’s still out.

Blog Comment of the Week
Remember, for every


New Blog Series: Fact-Based Network Security: Metrics and the Pursuit of Prioritization

As you can tell from our activity on the blog, we’ve been in the (relatively) slower summer season. Well, that’s over. Today we start one blog series, and another is hot on its heels (probably starting within 2 weeks). With our research pipeline, I suspect all three of us will be pretty busy through the fall. I’m pretty excited about the new series, which has the working title Fact-Based Network Security: Metrics and the Pursuit of Prioritization, because it’s the next step in fleshing out many of our thoughts on network security. Over the past 18 months we have talked about the evolution of the enterprise firewall, quantifying the network security operations process, and benchmarking your efforts. These are key aspects of an increasingly mature network security program.

Why is this important? The challenges of trying to protect our environments are no secret. The attackers only have to get it right once, and some of them are doing it more for Lulz than financial gain. We are also dealing with state-sponsored adversaries, which means they have virtually unlimited resources – and you don’t. So you need to choose your activities wisely and optimize every bit of your resources, just to stay in the same place.

Unfortunately we haven’t been choosing wisely. You see, most folks treat network security as a game of Whack-a-Mole. Each time a mole pops above the surface, you try to smack it down. Usually it’s the mole that squeals loudest that gets smacked, regardless of its actual importance. But we all know we’re spending a chunk of our time trying to satisfy certain people, hoping we can get them to stop calling – and unfortunately that’s much more about annoyance and persistence than the actual importance of their demands. Responding to the internal squeaky wheels clearly isn’t working. Neither is the crystal ball, hocus pocus, or any other unscientific method. Clearly there must be a better way.

Let’s imagine a day when you could look at your list and know which activities and tasks would cause the greatest risk reduction. How much would your blood pressure drop if you could tell the squeaky wheel that his top priority project was just not that much of a priority? And have the data to back it up? That’s what Fact-based Security is all about. Lots of folks have metrics, but are they chosen and collected with an eye toward specific outcomes that matter to your business? Gather metrics that guide and substantiate the decisions you need to make every day. Which change on which device is most important? Which attack path presents the biggest risk, and what’s required to fix it? The data for this analysis exists, but most organizations don’t use it.

In this series we will investigate these issues and propose a philosophy to guide data-driven decisions. Of course, we aren’t talking about using SkyNet to determine what your security droids do on a daily basis. But your activities need to be weighed in terms of outcomes relevant to the business, which requires first understanding the risks you face – and more importantly assessing the relative values of what you need to protect. Then we’ll talk about what these reasonable outcomes should be and the operational metrics to get there. Only once we have a handle on those issues can we talk about an operational process to underlie everything done with these metrics. With outcomes as a backdrop, using that data to make decisions can have a huge impact on both the effectiveness and efficiency of any security organization.
We all know that having metrics and using them are totally different things. We’ll also dig into the compliance benefits of fact-based security, but suffice it to say that assessors love to see data – especially data relevant to good security outcomes. We’ll wrap up the series by walking through a scenario where we actually apply these practices to a simple environment. That should give you the ammo you need to get started and to make a difference in your operational program(s). So strap in and get ready to roll.

Let me remind everyone that our research process depends on critical feedback from you, our readers. If we are off-base, let us know in the comments. Between the last blog post and packaging up the research as a paper, we evolve the paper based on your comments. We really do. I’ll also mention that the rest of this series will show up in our Heavy Feed and on the email list, so make sure you subscribe if you want to see how the research sausage is made.

Before we dive in, we should thank the sponsor of this research, RedSeal Systems. We are building the paper through our Totally Transparent Research process, so it’s all objective research – but don’t forget that it’s through the generosity of our sponsors that you get to leverage our research for a pretty OK price.


Accept Apathy—Save Users from Themselves and You from Yourself

We’ve gone round and round on the challenges of doing security. As Shack says, your users just don’t give a f***. Actually, you need to read Dave’s post. It lays out a lot of the issues we face every day. I’ll rephrase Dave’s point a little differently: apathy rules, and always will. Your employees are not paid to worry about security. They are paid to do their jobs, and more often than not security gets in the way of their actual responsibilities. Remember – the cold, hard truth is that security necessarily restricts access to some degree, because there is no other way to protect information.

As with most things Dave does, there is some collateral damage – namely security awareness training – but I don’t entirely buy his recommendation to just stop trying and discard it. First of all, how can we expect users to understand what the hell they are supposed to do and not do, if we do not tell them? For a portion (dare I say majority), it’s not useful. But the training will resonate with some. Every organization has to evaluate whether the investment pays off. Yet clearly a big issue is the crappy training we subject employees to. Forcing employees to sit through an hour of water-torture awareness training via slides and policies wastes everyone’s time. I also believe training users to survive on the Internet is as much a life skill as a work skill, and diligent organizations should be teaching their employees these skills because it’s the right thing to do. But that’s a different story for a different day.

What I really liked about Dave’s post is his focus on taking many of the decisions out of the user’s hands, stopping them from doing stupid things. Basically protecting them from themselves. As we’ve been saying for years, this involves locking down devices and adopting a default deny stance wherever you can. Tactics like whitelisting and NAC can help make sure folks don’t install bad things and get to the wrong areas of the network. That’s all good. And it’s similar to my Positivity concepts. But it’s a bumpy road, mostly because users don’t want to be saved. They want to do what they want to do, when they want to do it. Don’t tell them they can’t use Skype. It saves the company money, right? Don’t tell them they can’t share credentials. They are saving time, because IT is so responsive to those provisioning requests. And don’t tell them they can’t roll out that new application to a few million users. That new app will change everything and drive all sorts of new revenue streams. Along with apathy about your charter to protect information, expect tremendous resistance to changing user experiences or adding hoops to any process, regardless of the security/information protection benefits. Remember, users don’t give a f***.

But let’s get back to the idea of Building Security In, which is another of Dave’s tactics to address the fact that users couldn’t give less of a crap about anything security. The challenge is to get developers to change their behavior. You know, to do the pretty straightforward stuff that eliminates the easy application attacks. I know we have to continue fighting the good fight on application security, because crappy, insecure code is a huge part of the macro problem we face in protecting information. I’ve looked at this issue up, down, left, right, and sideways. I don’t see another option, besides increasing the corporate loss provision and devoting most of our resources to cleaning up the messes. Things are going to get worse before they get better.
I should say: if they get better. We can also address the issues at the application layer. Building Security In continues to be a goal of many organizations. There are plenty of issues with making this happen, but none more acute than the skills gap. Even if organizations want to do the right thing, they probably don’t have the expertise and resources to do anything. Details, details. Adrian is on a panel at Black Hat next week with some really smart folks, including Jeremiah Grossman, Alex Hutton, and Brad Arkin, talking about doing application security at scale. Maybe they’ll have some answers.

Given this backdrop, it’s easy to be despondent about doing security. With good reason. Which is why acceptance needs to become your favorite word. Your sanity literally depends on it. There is only so much you can do. Really. Sometimes it’s a technology issue. Sometimes it’s a political obstacle. Often it’s a business decision to accept a certain amount of risk. All these things can make you crazy. But only if you let them. That’s a key aspect of my Happyness presentation. You can’t own the responsibility to make your organization secure. You can only do what you can do.

I know, easier said than done. It’s hard to come into work every day and feel like your contributions don’t matter. I assure you they do. Imagine the anarchy that would prevail if you didn’t keep fighting. So do what you can, and then go home. Seriously. Go home and accept that your users don’t give a f***. When you aren’t able to do that, you know it’s time to find something else to do.


Incite 7/27/11: Negotiating in front of the crowd

The NFL lockout is over. Hallelujah! I know nothing substantial was really lost, besides the Hall of Fame game, but the folly of billionaires bickering with millionaires annoyed pretty much everyone. I believe more folks were hanging on this negotiation than on the crap going on in Washington over the debt ceiling. It seemed like a tug of war gone wild, with both sides digging in – until they finally reached a critical point, when real money was at stake, and amazingly the deal got done.

What’s interesting is how the negotiations played out in real time. With a small armada of folks (from NFL Network and ESPN) staking out the negotiations for months, there was always a real-time flow of information, rumor, innuendo, and positioning via Twitter. In fact, I’m pretty well convinced a bunch of disinformation and PR tactics were employed to manipulate public perception. That’s new, and it highlights Twitter’s proliferation – at least in the circles I follow. Back in 1987 (the last time the NFL lost games due to labor strife) there was no Twitter. I doubt there were folks staking out the negotiations, mostly because they happened in a room between the NFLPA head (the legendary Gene Upshaw) and Commissioner Pete Rozelle. There was no minute-by-minute reporting of the ebbs and flows of negotiations. If anything, we should all now know that we probably don’t want to be privy to the ins and outs of a multi-billion dollar negotiation. I was getting seasick trying to follow all the ups and downs. Although I probably should come clean and admit that even if there were daily updates and twists and turns, I’d have been mostly oblivious in 1987. I was far more interested in following the Bud Man most nights of the week.

So all’s well that ends well, at least in the NFL. But there are clearly lessons to be learned for those in public positions. The real-time generation is upon us. We are all privy to the roller coaster that is life – to whatever degree you want to pay attention, that is. The next election cycle is going to be very interesting.

Let me also mention one other topic related to the lockout. It seems a positive ball got rolling once the lawyers left the room, and the owners and players started negotiating directly – when they started building personal relationships between the parties. Besides reinforcing all those positive stereotypes about lawyers, it gets back to something I mentioned in yesterday’s post, How can you not understand the business?. Most important stuff happens person to person. Not via social media. Not by text. And not via a Terminal window. So for those folks hoping to climb the corporate ladder as social misfits, sorry to burst your bubbles. That’s why I no longer worry about a corporate ladder…

-Mike

Photo credits: “Tug of War” originally uploaded by toffehoff

Incite 4 U

And you thought your health insurer was bad: I hate health insurance companies. Their processes are built to break you down and get you to stop trying to collect on declined claims. The Boss spends way too much time fighting about claims. Too bad I can’t bill those shysters for her time, but I digress. Every time someone asks me about cyber-insurance, I kind of chuckle. Without a lot of precedents for attacks, losses, liability, and the like, there are basically no rules. And when there is a loss, the dance begins. Interestingly enough, Zurich is proactively going after Sony, suing to avoid actually paying a claim under a general liability policy. Now they may have a case; they may not.
The point is that companies pay crazy insurance premiums to protect against attacks, and then the finger pointing starts. Which insurance (if any) is liable? Guess the courts will need to figure that out. And they should be prepared to pay crazy legal fees to maybe, eventually, collect. Sounds about right. Maybe Sony will give up and decide not to collect, which is all part of their evil plan. – MR

Google+ -XSS: Feels like we are always calling out forms for having crap security, so we should occasionally call out when someone does something good. It looks like Google+ is taking browser security seriously – according to the Barracuda blog. Securing cookies and building in some frame-busting breaks many basic attacks that plagued Twitter and Facebook. Security folks aren’t likely to get very excited by minor advancements such as this, but a large site such as Google setting a positive security example is good news. Or think about it this way: companies like eTrade and many of the brokerage/retail sites I have visited recently did not have these header flags set. So give Google the nod for doing the right thing! – AL

Don’t hold your breath for an authoritative web identity source: In the “we’ve seen this movie before” files, evidently Mozilla thinks it can be the authoritative source for web identity. Microsoft, VeriSign, Google, Facebook, and countless others have already tried this, haven’t they? Sure, establish a protocol and get everyone to buy into it. Then maybe they will still have a reason to exist as the browser war finishes mutating from Netscape vs. IE, to IE vs. Firefox, to the latest iteration: a Chrome vs. IE battle royale. Yeah, not so much. Like all the others, this effort will get a handful of sites supporting it, and then it will falter. Now if only these folks would devote their energy to a standard (OAuth, anyone?). – MR

That’s a lot of Moon River: Yes, that is a veiled homage to the proctologist scene in Fletch. But old movie nostalgia aside, our friends at Imperva have posted a very interesting analysis. Basically the web sites they monitored were probed once every two minutes. That frequency probably requires a case of KY. The most prevalent attacks were directory traversal, XSS, SQLi, and Remote File Inclusion. Surprise? Nope. But there is a


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.