Securosis Research

Security has always been a BigData problem

It seems like BigData is all the rage. With things like NoSQL and Hadoop getting all the database wonks hot under the collar, smart forward-thinking folks like Amrit and Hoff increasingly point out the applicability of these techniques to security, and they’re right. I certainly agree that many of these new technologies will have a huge impact on our ability to figure out what’s happening in our environments. And not a moment too soon. Hoff wrote a couple of recent posts discussing the coming renaissance of Big Data and Security (InfoSec Fail: The Problem with BigData is Little Data and More on Security and BigData…Where Data Analytics and Security Collide), and Amrit followed up with BigData, Hadoop, and the Impending Informationpocalypse, making great points about the fragility of any (relatively) new technology, as well as the need to really know what we are looking for.

That’s the biggest fly in this BigData/security ointment. We need proper context to draw useful conclusions about anything. More data does not provide more context. If anything, it provides less, because these analysis tools are only as good as the rules they use to alert us to stuff. It’s non-trivial to get this right. Even with the best infrastructure, monitoring everything all the time, you still need to know what to look for. And it won’t get any easier. Knowing what to look for will get much more complicated. The volume of data promises to mushroom over the next few years, as full packet capture starts to hit the mainstream and more folks start seriously monitoring databases and applications.

This will ripple through the entire monitoring ecosystem. Now any company claiming the ability to do security management/analysis will need not only some security ninja on staff (to know what to look for), but also some legitimate BigData qualifications. This isn’t a new direction for the SIEM players.
More than one vendor calls what they do security intelligence, modeled after the business intelligence market. That entails a BigData approach to business analysis. To get there, the SIEM vendors have built their own BigData platforms. This means they each have a purpose-built data store that can provide the kind of analysis and correlation required to find the proverbial needle in a stack of haystacks. They invested not because they wanted to build their own data stores, but because no commercial or open source technology could satisfy their requirements. Do Hadoop and these other technologies change that? Maybe. As Amrit points out, new technologies can be brittle, so it will be a while before tools (or services) based on these latest technologies are ready for prime time. But the writing is on the wall. Security is a BigData problem, and it’s not a stretch to think that some enterprising souls will apply BigData technologies to the security intelligence problem. Which is a great thing – we certainly have not solved the problem. OMG, maybe we will see some innovation in security soon. But I’m not holding my breath.
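To make the “only as good as the rules” point concrete, here’s a toy correlation rule of the kind a SIEM runs against its data store. Everything here is invented for illustration – the event format, field names, and thresholds are not from any actual product – but it shows why a mountain of data is useless until someone encodes what to look for:

```python
from collections import defaultdict

# Hypothetical normalized log events; "ts" is seconds since some epoch.
EVENTS = [
    {"src": "10.0.0.5", "type": "login_failure", "ts": 100},
    {"src": "10.0.0.5", "type": "login_failure", "ts": 130},
    {"src": "10.0.0.5", "type": "login_failure", "ts": 150},
    {"src": "10.0.0.5", "type": "login_success", "ts": 160},
    {"src": "10.0.0.9", "type": "login_failure", "ts": 200},
]

def brute_force_alerts(events, threshold=3, window=300):
    """Flag sources with >= threshold login failures inside a sliding
    window, followed by a success -- a classic correlation rule."""
    failures = defaultdict(list)
    alerts = set()
    for ev in sorted(events, key=lambda e: e["ts"]):
        src = ev["src"]
        if ev["type"] == "login_failure":
            failures[src].append(ev["ts"])
            # Keep only failures still inside the window.
            failures[src] = [t for t in failures[src] if ev["ts"] - t <= window]
        elif ev["type"] == "login_success":
            if len(failures[src]) >= threshold:
                alerts.add(src)
    return alerts
```

The hard part isn’t running this over a billion events – Hadoop-style platforms handle that. The hard part is knowing that this pattern, with these thresholds, is the one worth writing.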


New Blog Series: Fact-Based Network Security: Metrics and the Pursuit of Prioritization

As you can tell from our activity on the blog, we’ve been in the (relatively) slower summer season. Well, that’s over. Today we start one blog series, and another is hot on its heels (probably starting within 2 weeks). With our research pipeline, I suspect all three of us will be pretty busy through the fall. I’m pretty excited about the new series, which has the working title Fact-based Network Security: Metrics and the Pursuit of Prioritization, because it’s the next step in fleshing out many of our thoughts on network security. Over the past 18 months we have talked about the evolution of the enterprise firewall, quantifying the network security operations process, and benchmarking your efforts. These are key aspects of an increasingly mature network security program.

Why is this important? The challenges of trying to protect our environments are no secret. The attackers only have to get it right once, and some of them are doing it more for Lulz than financial gain. We are also dealing with state-sponsored adversaries, which means they have virtually unlimited resources and you don’t. So you need to choose your activities wisely and optimize every bit of your resources, just to stay in the same place. Unfortunately we haven’t been choosing wisely. You see, most folks treat network security as a game of Whack-a-Mole. Each time a mole pops above the surface, you try to smack it down. Usually the mole that squeals loudest gets smacked first, regardless of its actual importance. But we all know we’re spending a chunk of our time trying to satisfy certain people, hoping we can get them to stop calling – and unfortunately that’s much more about annoyance and persistence than the actual importance of their demands. Responding to the internal squeaky wheels clearly isn’t working. Neither is the crystal ball, hocus pocus, or any other unscientific method. Clearly there must be a better way.
Let’s imagine a day when you could look at your list and know which activities and tasks would cause the greatest risk reduction. How much would your blood pressure drop if you could tell the squeaky wheel that his top priority project was just not that much of a priority? And have the data to back it up? That’s what Fact-based Security is all about. Lots of folks have metrics, but are they chosen and collected with an eye toward specific outcomes that matter to your business? Gather metrics that guide and substantiate the decisions you need to make every day. Which change on which device is most important? Which attack path presents the biggest risk, and what’s required to fix it? The data for this analysis exists, but most organizations don’t use it.

In this series we will investigate these issues and propose a philosophy to guide data-driven decisions. Of course, we aren’t talking about using SkyNet to determine what your security droids do on a daily basis. But your activities need to be weighed in terms of outcomes relevant to the business, which requires first understanding the risks you face – and more importantly assessing the relative values of what you need to protect. Then we’ll talk about what these reasonable outcomes should be, and the operational metrics to get there. Only once we have a handle on those issues can we talk about an operational process to underlie everything done with these metrics. We all know that having metrics and actually using them are totally different things, but with outcomes as a backdrop, using that data to make decisions can have a huge impact on both the effectiveness and efficiency of any security organization. Then we’ll dig into the compliance benefits of fact-based security; suffice it to say that assessors love to see data – especially data relevant to good security outcomes. We’ll wrap the series by walking through a scenario where we actually apply these practices to a simple environment.
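The decision at the heart of this can be sketched in a few lines. The tasks, scores, and the simple reduction-per-effort ratio below are all hypothetical – a real program would plug in its own risk model – but this is the shape of a fact-based answer to “what do we work on first?”:

```python
# Hypothetical task list with made-up scores; "risk_reduction" and
# "effort" would come from your own risk model and ops data.
TASKS = [
    {"name": "patch internet-facing VPN", "risk_reduction": 9, "effort": 2},
    {"name": "rotate internal wiki password", "risk_reduction": 2, "effort": 1},
    {"name": "segment the cardholder network", "risk_reduction": 8, "effort": 8},
]

def prioritize(tasks):
    """Order tasks by risk reduction per unit of effort -- the 'fact'
    that lets you tell the squeaky wheel his project can wait."""
    return sorted(tasks,
                  key=lambda t: t["risk_reduction"] / t["effort"],
                  reverse=True)
```

The numbers are debatable; that’s the point. Arguing about the inputs to a model beats arguing about who yelled loudest.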
That should give you the ammo you need to get started and to make a difference in your operational program(s). So strap in and get ready to roll. Let me remind everyone that our research process depends on critical feedback from you, our readers. If we are off-base, let us know in the comments. Between the last blog post and packaging up the research as a paper, we evolve the paper based on your comments. We really do. I’ll also mention that the rest of this series will show up in our Heavy Feed and on the email list, so make sure you subscribe if you want to see how the research sausage is made. Before we dive in, we should thank the sponsor of this research, RedSeal Systems. We are building the paper through our Totally Transparent Research process, so it’s all objective research, but don’t forget it’s through the generosity of our sponsors that you get to leverage our research for a pretty OK price.


Accept Apathy—Save Users from Themselves and You from Yourself

We’ve gone round and round on the challenges of doing security. As Shack says, your users just don’t give a f***. Actually you need to read Dave’s post. It lays out a lot of the issues we face every day. I’ll rephrase Dave’s point a little differently: apathy rules, and always will. Your employees are not paid to worry about security. They are paid to do their jobs, and more often than not security gets in the way of their actual responsibilities. Remember – the cold, hard truth is that security necessarily restricts access to some degree, because there is no other way to protect information. As with most things Dave does, there is some collateral damage: namely security awareness training. But I don’t entirely buy his recommendation to just stop trying and discard it. First of all, how can we expect users to understand what the hell they are supposed to do and not do, if we do not tell them? For a portion (dare I say a majority), it’s not useful. But the training will resonate with some. Every organization has to evaluate whether the investment pays off. Yet clearly a big issue is the crappy training we subject employees to. Forcing employees to sit through an hour of water torture awareness training via slides and policies wastes everyone’s time. I also believe training users to survive on the Internet is as much a life skill as a work skill, and diligent organizations should be teaching their employees these skills because it’s the right thing to do. But that’s a different story for a different day. What I really liked about Dave’s post is his focus on taking many of the decisions out of the user’s hands, stopping them from doing stupid things. Basically protecting them from themselves. As we’ve been saying for years, this involves locking down devices and adopting a default deny stance wherever you can. Tactics like whitelisting and NAC can help make sure folks don’t install bad things or get to the wrong areas of the network. That’s all good.
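The default deny idea behind whitelisting boils down to very little code. This sketch uses made-up binaries and an invented allowlist – real products key on signed hashes, certificates, and publisher reputation – but the logic is the same:

```python
import hashlib

def fingerprint(binary: bytes) -> str:
    """Identify a binary by its SHA-256 hash."""
    return hashlib.sha256(binary).hexdigest()

# Hypothetical allowlist: only binaries IT has explicitly approved.
ALLOWED = {
    fingerprint(b"approved-mail-client-v2"),
    fingerprint(b"approved-browser-v9"),
}

def may_execute(binary: bytes) -> bool:
    """Default deny: if it isn't on the list, it doesn't run."""
    return fingerprint(binary) in ALLOWED
```

Note what’s missing: there is no “block known bad” list to maintain. Everything not explicitly approved is denied, which is exactly why users push back so hard on it.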
And it’s similar to my Positivity concepts. But it’s a bumpy road. Mostly because users don’t want to be saved. They want to do what they want to do, when they want to do it. Don’t tell them they can’t use Skype. It saves the company money, right? Don’t tell them they can’t share credentials. They are saving time, because IT is so responsive to those provisioning requests. And don’t tell them they can’t roll out that new application to a few million users. That new app will change everything and drive all sorts of new revenue streams. Along with apathy about your charter to protect information, expect tremendous resistance to changing user experiences or adding hoops to any process. Regardless of the security/information protection benefits. Remember, users don’t give a f***.

But let’s get back to the idea of Building Security In, which is another of Dave’s tactics to address the fact that users couldn’t give less of a crap about security anything. The challenge is to get developers to change their behavior. You know, to do the pretty straightforward stuff that eliminates the easy application attacks. I know we have to continue fighting the good fight about application security, because crappy, insecure code is a huge part of the macro problem we face in protecting information. I’ve looked at this issue up, down, left, right, and sideways. I don’t see another option, besides increasing the corporate loss provision and devoting most of our resources to cleaning up the messes. Things are going to get worse before they get better. I should say: if they get better. We can also address the issues at the application layer. Building Security In continues to be a goal of many organizations. There are plenty of issues with making this happen, but none more acute than the skills gap. Even if organizations want to do the right thing, they probably don’t have the expertise and resources to do anything about it. Details, details.
Adrian is on a panel at Black Hat next week with some really smart folks, including Jeremiah Grossman, Alex Hutton, and Brad Arkin, talking about doing application security at scale. Maybe they’ll have some answers. Given this backdrop, it’s easy to be despondent about doing security. With good reason. Which is why acceptance needs to become your favorite word. Your sanity literally depends on it. There is only so much you can do. Really. Sometimes it’s a technology issue. Sometimes it’s a political obstacle. Often it’s a business decision to accept a certain amount of risk. All these things can make you crazy. But only if you let them. That’s a key aspect of my Happyness presentation. You can’t own the responsibility to make your organization secure. You can only do what you can do. I know, easier said than done. It’s hard to come into work every day and feel like your contributions don’t matter. I assure you they do. Imagine the anarchy that would prevail if you didn’t keep fighting. So do what you can, and then go home. Seriously. Go home and accept that your users don’t give a f***. When you aren’t able to do that, you know it’s time to find something else to do.


Incite 7/27/11: Negotiating in front of the crowd

The NFL lockout is over. Hallelujah! I know nothing substantial was really lost, besides the Hall of Fame game, but the folly of billionaires bickering with millionaires annoyed pretty much everyone. I believe more folks were hanging on this negotiation than the crap going on in Washington over the debt ceiling. It seemed like a tug of war gone wild, with both sides digging in. Until they finally reached a critical point, when real money was at stake, and amazingly the deal got done. What’s interesting is how the negotiations played out in real time. With a small armada of folks (from NFL Network and ESPN) staking out the negotiations for months, there was always a real-time flow of information, rumor, innuendo, and positioning via Twitter. In fact, I’m pretty well convinced a bunch of disinformation and PR tactics were employed to manipulate public perception. That’s new, and it highlights Twitter’s proliferation. At least in the circles I follow. Back in 1987 (the last time the NFL lost games due to labor strife) there was no Twitter. I doubt there were folks staking out the negotiations, mostly because they happened in a room between the NFLPA head (the legendary Gene Upshaw) and Commissioner Paul Tagliabue. There was no minute by minute reporting of the ebbs and flows of negotiations. If anything, we should all now know that we probably don’t want to be privy to the ins and outs of a multi-billion dollar negotiation. I was getting seasick trying to follow all the ups and downs. Although I probably should come clean and admit that even if there were daily updates and twists and turns, I’d have been mostly oblivious in 1987. I was far more interested in following the Bud Man most nights of the week. So all’s well that ends well, at least in the NFL. But there are clearly lessons to be learned for those in public positions. The real-time generation is upon us. We are all privy to the roller coaster that is life. 
To whatever degree that you want to pay attention, that is. The next election cycle is going to be very interesting. Let me also mention one other topic related to the lockout. It seems a positive ball got rolling once the lawyers left the room, and the owners and players started negotiating directly. When they started building personal relationships between the parties. Besides reinforcing all those positive stereotypes about lawyers, it gets back to something I mentioned in yesterday’s post How can you not understand the business?. Most important stuff happens person to person. Not via social media. Not by text. And not via a Terminal window. So for those folks hoping to climb the corporate ladder as social misfits, sorry to burst your bubbles. That’s why I no longer worry about a corporate ladder…

-Mike

Photo credits: “Tug of War” originally uploaded by toffehoff

Incite 4 U

And you thought your health insurer was bad: I hate health insurance companies. Their processes are built to break you down and get you to stop trying to collect on declined claims. The Boss spends way too much time fighting about claims. Too bad I can’t bill those shysters for her time, but I digress. Every time someone asks me about cyber-insurance, I kind of chuckle. Without a lot of precedents for attacks, losses, liability, and the like, there are basically no rules. And when there is a loss the dance begins. Interestingly enough, Zurich is proactively going after Sony, suing to avoid actually paying a claim under a general liability policy. Now they may have a case; they may not. The point is that companies pay crazy insurance premiums to protect against attacks, and then the finger pointing starts. Which insurance (if any) is liable? Guess the courts will need to figure that out. And companies should be prepared to pay crazy legal fees to maybe even collect. Sounds about right. Maybe Sony will give up and decide not to collect, which is all part of their evil plan. – MR

Google+ -XSS: Feels like we are always calling out forms for having crap security, so we should occasionally call out when someone does something good. It looks like Google+ is taking browser security seriously – according to the Barracuda blog. Securing cookies and building in some frame-busting breaks many basic attacks that plagued Twitter and Facebook. Security folks aren’t likely to get very excited by minor advancements such as this, but a large site such as Google setting a positive security example is good news. Or think about it this way: companies like eTrade and many of the brokerage/retail sites I have visited recently did not have these header flags set. So give Google the nod for doing the right thing! – AL

Don’t hold your breath for an authoritative web identity source: In the “we’ve seen this movie before” files, evidently Mozilla thinks it can be the authoritative source for web identity. Microsoft, VeriSign, Google, Facebook, and countless others have already tried this, haven’t they? Sure, establish a protocol and get everyone to buy into it. Then maybe they will still have a reason to exist as the browser war finishes mutating from Netscape vs. IE, to IE vs. Firefox, to the latest iteration: a Chrome vs. IE battle royale. Yeah, not so much. Like all the others, this effort will get a handful of sites supporting it, and then it will falter. Now if only these folks would devote their energy to a standard (OAuth, anyone?). – MR

That’s a lot of Moon River: Yes, that is a veiled homage to the proctologist scene in Fletch. But old movie nostalgia aside, our friends at Imperva have posted a very interesting analysis. Basically, the web sites they monitored were probed once every two minutes. That frequency probably requires a case of KY. The most prevalent attacks were directory traversal, XSS, SQLi, and Remote File Inclusion. Surprise? Nope. But there is a


Incomplete Thought: The Scarlet (Security) Letter

I know we all have compliance fatigue. Some worse than others, but we all rue the day security became more about compliance and getting the rubber stamp than actually protecting something. The pragmatist in me continues to accept our lot in life and try to be somewhat optimistic about it. But at the end of the day, we (as an industry) pretty much suck at protecting things, and there are no real catalysts to change that. Out of the other side of my mouth, I can talk about how compliance (PCI specifically) has set a low bar for the practice of security. And in the absence of that (admittedly) low bar, lord knows what the situation would be. But that’s not the point. It’s about making sure organizations consistently do the right thing. And that customers know that’s the case. I’m intrigued by a concept put forth by Lenny Zeltser, talking about a Letter Grade for Information Security. The idea is modeled after how NYC inspects its restaurants. Basically folks who get the highest grade only get assessed annually. Those sucking need to be assessed more often. Best of all, they all need to post their grades in public where their customers can see them. Can you imagine if a big retailer failed an assessment and had to post on their high-traffic website that they had issues? Kind of like making them wear the proverbial Scarlet Letter. That would be cool, and would also create a real disincentive to screw up an assessment. And maybe that would be the catalyst to start doing security right. Of course, this assumes a bunch of things:

The bar is high enough: We consider PCI the bar, mostly because it’s the most detailed. But we need to figure out how much security is enough. And what set of guidelines best reflects that level – which is likely to change based on the organization’s size and transaction volume.

A set of objective ratings: What is a “C” when evaluating a restaurant? No rats feasting in the pantry? I’m sure there is a long checklist and associated rating system. As Lenny points out, right now PCI is binary – you either pass or fail. I don’t suggest a FISMA style rating scale – that works so well – but we do need some means of measuring success and providing a grade.

The assessment isn’t a joke: We’ve all heard about the unholy alliances between QSAs, their firms which provide all sorts of other services, and customers. Feels a lot like the old days when a public audit firm sold a crapload of consulting to customers they audited. Amazingly enough, the late Arthur Andersen gave firms like Enron a thumbs-up because they’d lose out on millions in other billings if they didn’t. Today a QSA is not prohibited from selling other products/services to companies they assess. We need true objectivity for this to work.

Mass market coverage: Assessing Tier 1 and even Tier 2 merchants is a no-brainer. There are thousands of Tier 3 merchants and millions of Tier 4. How do you address the mass market? Self-assessment? See the previous bullet about the assessment being a joke. But since much of today’s fraud targets these small fry (as the big folks get incrementally better at protecting themselves), this large swath of territory must be factored in.

Truth in Advertising: What happens when someone fails a PCI assessment? They argue about it, which pushes back the date when their situation would cost them money. In Lenny’s example, NYC makes them post either the current grade or a sign saying the grade is pending. That’s kind of interesting. We need to make sure companies come clean about porous data protection policies. Kind of like an extension of today’s disclosure laws. So customers are notified when organizations holding their personal information fail an assessment, whether there is data loss or not.

Oversight with teeth: When did separation of duties take off? Basically when Sarbanes-Oxley made it clear a senior exec would go to jail if they screwed it up. We need similar oversight for security. Yes, this would need to be legislated, and I’m fully aware of the ramifications. But how else can you create enough urgency to get something going?

Or we could just continue on with the status quo. Since that’s so great. I’m not saying any of this is practical, and it’s kind of half-baked on my part. But parts of it may be workable. Like Lenny, I understand that this discussion brings up more questions than answers. But I am (like you) pretty frustrated some days about what we call success in security nowadays. And thanks to Lenny Z for once again providing great food for thought.

Photo credit: “Hester Prynne” originally uploaded by Bill H-D


How can you *not* understand the business?

I usually agree with Jack Daniel. You know, we curmudgeons need to stick together. But one of the requirements of membership in the Curmudgeons Association is to call crap when we see it. And much as it pains me to say it, Jack’s latest rant on InfoSec’s misunderstanding of business is crap. Actually his conclusion is right on the money: In order to improve security in your organization, you need to understand how your organization works, not how it should work. [emphasis mine] I couldn’t agree more. The problem is how Jack reaches that conclusion. Basically by saying that understanding business is a waste of time. Instead, he suggests you understand greed and fear, then you’ll understand the motivations of the decision makers, and then you’ll be able to do your job. Right? Not so much. Mostly because I don’t understand how anyone understands how things get done in their organization without both understanding the business and also understanding the people. In my experience, you can’t separate the two. No way, no how. I totally agree that everyone (except maybe a monk) is driven by greed and fear. Sometimes those aspects are driven by the business. Maybe they want to make the quarter (and keep their BMW) or perhaps they need to move a key business process to the cloud to reduce headcount. Those are all motivations to do security, or not. How can you understand how to sell a project internally if you don’t understand what’s going on in the business? Your decision makers may also have some personal issues that color their decisions. Could be an expensive divorce. Could be a sick parent. It could be anything, but any of those factors could get in the way of your project. Ignore the people aspect of the job at your own risk – which is really my point. A senior security position is not a technical job. It’s a job of persuasion. It’s a job of sales. And both those disciplines require a full understanding of all the factors that can work for or against you.
One of the key trends I saw a few years ago involved senior security folks coming from the business, not from the ranks of the security team. These folks were basically tasked to fix security, which meant they had to know how to get things done in the organization. These folks could just as well be dealing with operational problems in Latin America as with cyberattacks. To Jack’s point, they do understand greed and fear. They may have pictures of senior execs in a vault somewhere, and then inexplicably get the funding they need for key projects. And they also understand the business.


Hacking Spikes and the Real Time Media

The Freakonomics blog assembled an interesting quorum on security. Industry heavyweights like Schneier weighed in on the following question: Why has there been such a spike in hacking recently? Or is it merely a function of us paying closer attention and of institutions being more open about reporting security breaches? Aside from Bruce there were opinions from folks at Imperva, IronKey, Aite Group, and BAE Systems – most of it decent. Some contradictory points, but get a bunch of folks to weigh in and that’s bound to happen. In something targeted to a mass market readership, some of these folks threw in the APT and PCI terminology. Seriously. Which really underscored to me how most security folks have no fracking clue how to talk to a non-security audience. But that’s a story for another day. Since I wasn’t invited into the quorum (sad panda), I figured I’d rant a bit on the question. So if the kind folks at Freakonomics invited me to participate (hint, hint), here’s what I’d say. In general I have to agree with Bruce Schneier. There hasn’t been a huge spike in hacking. Sure, the number of data breaches is up, but the number of stolen identities is way down. The real change is the increased reporting on hacking. That’s right – security has finally come into your living room. And it’s a scary place for most folks. For instance, a few months back the Anonymous hacker collective broke into the website of Westboro Baptist Church – on live TV. Unless you’ve been to the Black Hat conference or a similar technical forum, you probably haven’t seen a lot of computer attacks happen live. That was cool. It was newsworthy. So the media picked up on this hacking stuff. Combined with the disclosure of previously off-limits information on sites like WikiLeaks and Pastebin, now you have real news. When the contact information of undercover Arizona police officers is posted on the Internet, or the tactics of The News of the World come to light, it’s going to make news.
And it has. We do have more visible attacks as well. When hackers take down Sony’s PlayStation Network for weeks, that’s newsworthy. Steal some plans for the Joint Strike Fighter, which happened a few years ago, and it barely makes news. Take down a multi-player game and all hell breaks loose. This is the world we live in. We can talk about the increasing sophistication of the hackers (as a number of them did), but that’s crap. Most of these attacks have not been sophisticated at all. We can also talk about the laws requiring data breach disclosure, but that’s also crap. Disclosures have been happening for years, and this mainstreaming of hacking is much more recent. Compounding the issue is the real-time media cycle. With anyone with a computer tweeting whatever they want, and dimwit media outlets running with it without proper fact checking (or, often, even understanding what they’re saying), you have a perfect way to game the system. We see it every day with the NFL labor negotiations. Some player – perhaps clued in but just as likely not – tweets something, and everyone thinks it’s gospel. Within seconds it’s broadcast on ESPN and NFL Network. It’s on TV so it must be right, right? It’s not gospel. It’s not anything besides what’s always been happening. Now it’s in plain sight, and that’s uncomfortable for most folks. Especially the ones who find their corporate and personal secrets on public web sites.


Incite 7/19/2011: The Case of the Disappearing Letters

Something didn’t add up. We got a call from the girls’ camp literally 3 days after they got there saying XX2 needed more stationery. We hoped this meant she was a prolific writer, and we’d be getting a couple updates a week. Almost 3 weeks later, we got 1 postcard. That’s it. A few of her friends got letters, but not nearly enough to have depleted her stash of letters/postcards. And the longer we went without a letter, the more ornery The Boss got. Mostly because she spent a bunch of time buying, stamping, addressing, and return labeling the additional letters. So to not get any mail was really adding insult to injury. Luckily we were going to see the girls on Visiting Day, so we’d get to the bottom of the situation. Maybe there were mail gremlins in the Post Office, getting their kicks by reading (almost) 8-year-old chicken scratch. Maybe the small-town post office was just overwhelmed. Or maybe XX2 had screwed up a bunch of letters and just thrown them out, as opposed to trying to fix them. It could be anything, and we were determined to get to the bottom of it. When we got to the camp, we spent a few minutes with XX1, including meeting her counselors and seeing her bunk. It’s far from roughing it, but they still get a somewhat rustic experience. Then we made our way over to XX2’s bunk to do a similar assessment. With me as the bull in a china shop, I (of course) just blurted out what I know The Boss was thinking. “I’m so happy you are having a great time at camp, but what the hell? Who did you write to with all your stationery? It certainly wasn’t us!” XX2 looked very confused. She reiterated that she did write letters, and she wrote 3-4 to us. It looked like it might be a job for the late Columbo, who could solve this posthumously. Then we asked the key question: “When did you mail the letters?” She again looked at us quizzically. That’s when all the pieces fit together. “I need to mail one letter every three days to get into dinner.
So I give them one letter.” Looks like we found the smoking gun. I then asked XX2 to show us her stationary box, and sure enough there were 6 letters and 3 postcards ready to go. I forget she is not even 8 years old yet. She took the instructions literally. She needed one letter to fulfill the requirement, and didn’t realize she could mail more than one letter at a time, or even on an off day. We got the characteristic, “oh well” shrug from her and then we all just busted out laughing. To be clear, I’m not sure we’d do anything different next time. I refuse to be one of those crazy, guilt-slinging parents who browbeat their kids about writing. If they aren’t writing, odds are they are having fun. And we may even save a few bucks in postage. That’s a win-win in my book. -Mike Photo credits: “Nobody Loves Me” originally uploaded by Robert Hruzek Incite 4 U The next wave of consumer security: Following (and participating in) the SIEM space, one of the biggest jokes was fraud detection. You know, you’d set your SIEM to look at transaction records and it would find fraud. It’s just data, right? Fraud is just another pattern, right? Not so much, but it’s still a magic chart requirement to have a solution in this space, even though the financial folks use purpose-built offerings to do it for real. But that doesn’t mean that reputation and pattern matching for fraud detection has no place in security. Actually, it does, and with a tip of the hat to Fred Wilson, I can point you to a new service called BillGuard that monitors your credit card transaction streams and can alert you to things that might be funky. Remember, consumers don’t care about security for its own sake. But they care about losing money to fraud and other nuisances, and this kind of offering should just kill it. Disclaimer: I haven’t used BillGuard, nor have I checked out their security. But the idea is right on the (proverbial) money. – MR Agile is the word: Uh-oh. 
The US Government is taking cyber-security lessons from businesses. Are things that bad? Actually, while the title of this post filled me with visions of Sony and other enterprises, the actual document is worth a review. The government is effectively advocating an Agile process – its basic tenets read more like secure code development ideals than network deployment guidance. Most security experts urge building security into the products we deploy rather than bolting it on afterwards. And this encourages working with smaller (read: more innovative) security technology providers. Their guidance is a good fit with our own enterprise guidance. – AL A sign of the times: About a hundred years ago, I co-founded a company focused on driving broader adoption of PKI. We focused on application integration to add capabilities such as encryption and digital signatures. But it never took off, mostly because no one was willing to trade inconvenience for security. By the way, not much has changed. If security works, it’s behind the scenes, embedded within the user experience so users don’t need to know about it. Adobe is clearly taking another run at digital signatures with their EchoSign buy. I’m not sure the outcome will be different this time around. EchoSign got some lift because it wasn’t about technology – it was about a seamless business process to eliminate paper from contract signing. We’ll see if Adobe learns from that, or just tries to add another option to the product – you know how the latter scenario ends. – MR Skype pwnage: It will likely be patched by the time you read this, but there is a cross-site scripting vulnerability in Skype. In


Rise of the Security Monkeys

As far back as I can remember, I have been a fan of testing your defenses. Some people call it pen testing, others refer to it as an assurance process, but the point is the same either way. The bad folks test your defenses every day, and if you aren’t using the same tactics to find out what they can get, you’re going to have a bad day. Maybe not today, maybe not even tomorrow. But the clock is ticking. Truly understanding your security posture gets even harder when you start thinking about the cloud and the complexities of architecting a totally new infrastructure. We have a zillion dollars’ worth of systems management installed to monitor and manage our data centers, although I reserve judgment on how suck-tastic that investment has been. Now that we are moving many things to the cloud (whatever that means), it’s time to revisit how we test our infrastructure. The existing systems management (and security) vendors are falling all over themselves to position their existing products as appropriate for managing cloud operations, but most of their solutions are heavy on slide decks and virtual appliances (same stuff, different wrapper), and lighter on actual management technology. In fairness, it’s still early, so we shouldn’t totally count out the systems management incumbents, right? I mean, those are some innovative organizations, [sarcasm]no?[/sarcasm] Yet this cloud thing will force us to totally rethink how we run operations, and thus how we test our environments. The good news is that many of the cloud services leaders are more than happy to share what they are doing, so you can learn what works and what doesn’t, avoiding the school of hard knocks. I mean, when before has a company basically shared its data center architecture? Thanks, Facebook. And now Netflix is sharing some of their management approaches. Netflix’s concept is to use a Simian Army (not literal monkeys, but automated testing processes) to put their infrastructure through the wringer.
To see where it breaks. To pinpoint performance issues. And to do it continuously, on an ongoing basis. They even have a Chaos Gorilla, which takes entire availability zones out of play, so they can see how their infrastructure reacts. The same discipline applies to security. You need to build a set of hacking simians to try to break your stuff. No, it won’t be easy, and you’ll need to do a lot of manual scripting and integration to build a security monkey. Although there are some offerings (like Core’s new Insight product) focused on running continuous testing processes, it’s still early in this market. So you’ll need to do a lot of the work. But the alternative is having your dirty undergarments posted on Pastebin. And don’t forget my standard caveat: when you test using live ammo, be careful! Given the economics of cloudy things, you should have a test environment that looks an awful lot like your production environment. And let the monkeys loose on your test environment early and often. But some of these monkeys can/should be used on the production stuff. Although you can make the test environment look the same, it’s not – we learn that hard lesson over and over again. In the post, Netflix talks about shutting down production instances (with a lot of oversight, obviously), just to see what happens. This reminds me a bit of the kanban process in manufacturing, in that you mess with the working system to find the breaking points, to see where you can make it more efficient. The assumption that everything is working fine has never held water. The question is whether you search for what’s broken, or wait for it to find you. But most of all, I love both the metaphor and the message of Netflix’s approach. These guys test their stuff, so when half of Amazon AWS goes down they stay up. Obviously this isn’t a panacea (as their recent outage showed), but clearly there is something going on over at Netflix.
So jump on the monkey bandwagon – they are taking over the world anyway.
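Conceptually, a minimal security monkey is just a loop that picks targets at random and checks them against policy. Here's a rough Python sketch of that idea; the hostnames, ports, and role baselines are entirely made up for illustration, and a real monkey would feed in live scan results from your test environment rather than a hard-coded dictionary:

```python
import random

# Hypothetical inventory: instance name -> ports observed listening.
# Simulated here; in practice this would come from a scanner run
# against the TEST environment.
observed = {
    "web-01": {22, 80, 443},
    "web-02": {22, 80, 443, 8080},   # unexpected debug port
    "db-01":  {22, 5432},
}

# Policy baseline: ports each role is allowed to expose.
baseline = {
    "web": {22, 80, 443},
    "db":  {22, 5432},
}

def security_monkey(inventory, policy, sample_size=2, seed=None):
    """Pick random instances and flag any ports outside the role baseline."""
    rng = random.Random(seed)
    targets = rng.sample(sorted(inventory), k=min(sample_size, len(inventory)))
    findings = {}
    for host in targets:
        role = host.split("-")[0]              # derive role from naming convention
        extra = inventory[host] - policy.get(role, set())
        if extra:
            findings[host] = sorted(extra)
    return findings

# Sampling all three hosts flags only web-02's stray port.
print(security_monkey(observed, baseline, sample_size=3, seed=1))
# → {'web-02': [8080]}
```

Run on a schedule (cron, or a loop in a long-lived process), this is the "continuously, on an ongoing basis" part; the hard work the post describes is wiring real scanners and real inventories into the `observed` side.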


Security Marketing FAIL: Claims of Risk Reduction

Every time I see the phrase “reduce your risk by X%,” I break out in hives. I agree that it is critical to think about risk (which to me is really about economic loss), but everyone has a different definition of risk. And to say anyone can reduce risk by a certain percentage triggers my bullcrap filter. Secunia recently did a study of their vulnerability database, which posits that if customers patched only the 37 most popular Windows apps or their 12 most risky programs, they could reduce risk by 80%. There is that pesky word ‘risk’ again, because these numbers are questionable at best. They define risk as a sum of the number of vulnerabilities, weighted by the criticality of each vulnerability. Huh? What about exploitability? Or the ability to exfiltrate data, as required by the Data Breach Triangle? How can you not factor in any other controls in place to mitigate and/or work around those ‘risks’? In fact, patching some of those apps is irrelevant because they pose no real risk to corporate assets. We are still fans of patching. In fact, it’s one of the anchor tenets of the Endpoint Security Fundamentals and a critical aspect of data center ops. I agree that most customers cannot patch everything within their typical maintenance windows, so some prioritization is necessary. But I’m not about to claim that patching will reduce anyone’s risk by an arbitrary percentage (or one calculated from an arbitrary formula). Any risk calculation needs to factor in the value of the data residing on the vulnerable device, not just the criticality of the vulnerability. For instance, what if a device has 50 critical vulnerabilities, but holds no corporate data? Is that a huge risk? I guess an attacker getting remote shell on the device could use it to stage further into the network, so that’s not good, but by itself that device doesn’t represent a real risk to the organization.
Is it a bigger risk than a non-critical vulnerability on the server operating the business’ main transactional system? What would you patch first? Your patching process must include prioritization. If you are wondering how that works, Rich has done a ton of work on decomposing the granular processes of patching for our Patch Quant research – check it out. But we have beaten this horse enough. Let’s deal with the bigger issue: marketers’ efforts to quantify risk reduction. I’ve been there. The sales force needs some kind of catalyst to get customers to buy something. You figure if you do a little math, however wacky the assumptions, that will be good enough for customers to make a case to buy your stuff. You are wrong. It’s foolish to make blanket statements about risk reduction. Each organization’s perception of risk and its willingness to spend money to address or defer it is unique. But that won’t stop folks from trying. Despite my understanding, I still get annoyed by the attempts of security marketers to make bold statements with little real-world basis; and by the trade press biting hook, line, and sinker on pretty much anything described in percentages. But maybe that’s just me.
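To make the critique concrete, here is a toy comparison in Python of a count-vulnerabilities-and-weight-by-criticality score against one that also weights by the value of the data on the asset. The weights and asset numbers are invented for illustration and are not Secunia's actual formula:

```python
# Two toy risk scores for the same assets, showing why vulnerability counts
# alone mislead. All names, weights, and values are hypothetical.
assets = [
    # (name, critical_vulns, low_vulns, data_value on a 0-10 scale)
    ("kiosk-pc",   50, 10,  0),   # riddled with flaws, holds no corporate data
    ("txn-server",  0,  3, 10),   # nearly clean, but runs the main transactional system
]

CRIT_WEIGHT, LOW_WEIGHT = 5, 1

def vuln_only_score(crit, low, _value):
    # Count-based: vulnerabilities weighted by criticality, nothing else.
    return crit * CRIT_WEIGHT + low * LOW_WEIGHT

def value_weighted_score(crit, low, value):
    # Same sum, scaled by the value of the data on the device.
    return (crit * CRIT_WEIGHT + low * LOW_WEIGHT) * value

for name, crit, low, value in assets:
    print(name,
          vuln_only_score(crit, low, value),      # kiosk-pc: 260, txn-server: 3
          value_weighted_score(crit, low, value)) # kiosk-pc: 0,   txn-server: 30
```

Under the count-only score the data-free kiosk dwarfs the transaction server (260 vs. 3); once data value enters the formula the ordering flips (0 vs. 30), which is exactly the prioritization point about the 50-vulnerability device with no corporate data.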

Share:
Read Post

Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.