Securosis Research

Friday Summary: September 9, 2011

I suppose that, all things considered, I’m a pretty nice guy. I tip well, stop my car so people can cross the street, and always put my laptop bag under the seat in front of me instead of taking up valuable overhead luggage space. While I have had plenty of jobs that required the use of physical force over the years, I always made sure to keep my professional detachment and use the minimum amount necessary. (Okay, that’s to keep my ass out of jail as much as anything else, but still…) And animals? I’m a total sucker for them. I don’t mean in an inappropriate way, but I think they are just so darn cute. We even donate a bunch to local shelters and the Phoenix Zoo. Heck, all our cats are basically rescues… one of which randomly showed up in a relative’s yard during a BBQ, severely injured, and which we nursed back to health and kept.

Which is why my current murderous rampage against the birds crapping on our patio is completely out of character. We like birds. We even used to fill a bird feeder in the yard. Then all our trees grew out, and it seems we have the best shade in the neighborhood. On any given day, once the temperature tops 100 or so, our back patio is covered with dozens of birds doing nothing more than standing in the shade and crapping. And you know what birds eat, don’t you? Berries. Lots and lots of berries. Think they digest it all? Think again. Our patio is stained so badly we will never be able to get it clean. How do I know? I paid someone to power spray and hand scrub it with the kinds of chemicals banned from Fukushima – all to no avail. Not even with the special stuff I smuggled across the border from Mexico. They’ve even hit my grill. The bastards.

I’ve tried all sorts of things to keep them away, but I suspect I’ll need to build out something using an Arduino and a chainsaw by next summer. This year is a loss – 2 weeks after the big cleaning, even with me spraying it down every few days, our patio is unusable. I haven’t killed them yet. To be honest I don’t think that would work – more likely it would just land me on the local news. But I do grill a lot more chicken and turkey out there. Oh yeah, smell the sweet smell of superior birds roasting in agony.

Hey… did you hear some dudes named DigiNotar got hacked? On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
• Adrian’s DR article on DAM.
• Adrian quoted on dangers to law enforcement from the recent hack. My Spanish is good, no?
• Adrian’s DR article on Fraud Detection and DAM.

Favorite Securosis Posts
• Adrian Lane: Security Management 2.0: Vendor Evaluation. Mike’s pushing the envelope here, but this is the only way to figure out how the product really works.
• Mike Rothman & David Mortman: Data Security Lifecycle 2.0. With this cloud stuff, our underlying computing foundation is changing. This post assembles a lot of the latest and greatest about how to protect the data.

Other Securosis Posts
• Speaking at OWASP: September 22 and 23.
• Incite 9/7/2011: Decisions, Decisions.
• Security Management 2.0: Vendor Evaluation – Culling the Short List.
• The New Path of Least Resistance.
• Making Bets.

Favorite Outside Posts
• Gunnar: Do we know how to make software?
• David Mortman: Quick Blip: Hoff In The Cube at VMworld 2011: On VMware Security.
• Mike Rothman: The Good, Bad, and Ugly of Technical Acquisitions. Not sure what Amrit is doing now, besides writing great summaries of what happens when Big Company X buys small start-up Y.
• Adrian Lane: Don’t Hate The ‘Playas’ – Hate The Game. My fav this week is Mike’s Dark Reading post – it gets to the heart of the issue.
• Pepper: Protecting a Laptop from Simple and Sophisticated Attacks. Mike clearly thought hard about risks, and took some very unusual steps to protect against them as well as he could manage.
• Rich: OS X won’t let you properly remove bad DigiNotar certificates. I know I need to write this up, but being sick has gotten in the way. Apple really needs to address this – for PR reasons as much as for user security.

Research Reports and Presentations
• Tokenization vs. Encryption: Options for Compliance.
• Security Benchmarking: Going Beyond Metrics.
• Understanding and Selecting a File Activity Monitoring Solution.
• Database Activity Monitoring: Software vs. Appliance.
• React Faster and Better: New Approaches for Advanced Incident Response.
• Measuring and Optimizing Database Security Operations (DBQuant).
• Network Security in the Age of Any Computing.
• The Securosis 2010 Data Security Survey.

Top News and Posts
• Copyright Troll Righthaven Goes on Life Support. Die, troll, die!
• Star Wars Fans Get Pwned.
• Fraudulent Google credential found in the wild.
• Evidence of Infected SCADA Systems Washes Up in Support Forums.
• VMware: The Console Blog: VMware Acquires PacketMotion.
• Don Norman: Google doesn’t get people, it sells them.

Blog Comment of the Week
Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Russ, in response to Incite 9/7/2011: Decisions, Decisions.

Re: Please Stop! Dear Adrian, While I believe one of the useful roles Securosis can play in the industry is to help turn down the hype on overblown issues, in this particular case I’m not sure I agree with your conclusion. I spent a career in aviation safety, and found that what the average line pilot was talking about every day had nowhere near the amount of aviation safety content we as aviation safety advocates thought to be adequate (an example would be the extraneous cockpit conversation prior to the Colgan Air Flight 3407 crash in Buffalo). Could it be that the fact that APTs are not brought up in your daily conversations with firms could be an indication of how far we have to go in creating a


Security Management 2.0: Vendor Evaluation – Driving the PoC

As we discussed in the last post, when considering new security management platforms, it’s critical to cull your short list based on your requirements, and then move into the next step of the evaluation process – the Proof of Concept (PoC). Our PoC process is somewhat controversial – mostly because vendors hate it. Why? Because it’s about you and your needs, not them and their product. But you are the buyer, right? Always remember that.

Most SIEM vendors want to push you through a 3-5 day eval of their technology on their terms, with their guy driving. You already have a product in place, so you know the drill. You defined a few use cases important to you, and then the vendor (and their SE) stood the product up and ran through those use cases. They brought in a defined set of activities for each day, and you ended the test with a good idea of how their technology works, right? Actually, wrong. The vendor PoC process is built to highlight the product’s strengths and hide its weaknesses. We know this from first-hand experience – we have built them for vendors in our past roles. Your objective must be to put each product through your paces, not theirs – to find the warts now, not when you are responding to an incident. It’s wacky that some vendors get scared by a more open PoC process, but their goal is to win the deal, and they put a lot of sweat into scripting their process so it goes smoothly for everyone involved. We hate to say it, but smooth sailing is not the point! The vendor will always say “We can do that!” – it’s your job to find out how well – or how awkwardly.

So set up evaluation criteria based on your requirements and use cases. Your criteria don’t need to be complicated. Your requirements should spell out the key capabilities you need, and then plan to further evaluate each challenger based on intangibles such as set-up/configuration, change management, customization, user experience/ease of use, etc. Before you start, have your team assess your current platform as a basis for comparison.

As you start the PoC, we recommend you invest in screen capture technology. It’s hard to remember later what these tools did and how they did it – especially after you’ve seen a few of them work through the same procedures. So capture as much video as you can of the user experience – it will come in very handy when you need to make a decision. We’ll discuss that in the next post. Without further ado, let’s jump into the PoC.

Stand it up, for reals

One of the advantages of testing security management products is that you can actually monitor production systems without worrying about blowing them up, taking them down, or adversely impacting anything. So we recommend you do just that. Plan to pull data from your firewalls, your IDS/IPS systems, and your key servers. Not all devices, of course, but enough to get a feel for how you need to set up the collectors. You will also want to configure a custom data source or two and integrate with your directory store to see how that works. Actually do the configuration and bootstrap the system in your environment. Keep in mind that the PoC is a great time to get some professional services help – gratis. This is part of the sales process for the vendors, so if you want to model out a targeted attack and then enumerate the rules in the system, have the SE teach you how to do it yourself. Then model out another attack and build the rules yourself, without help.
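
To make the ‘custom data source’ point concrete, here is a minimal sketch – Python, with an entirely hypothetical log format and field names – of the kind of parsing glue you (or the vendor’s SE) end up writing when an application’s logs don’t match the product’s built-in connectors. It is illustrative only, not any vendor’s actual connector API.

```python
import re
from datetime import datetime

# Hypothetical line from an in-house app log -- not any vendor's format:
# 2011-09-08 14:02:17 app=billing user=jsmith action=export rows=12000 src=10.1.4.22
LINE_RE = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) "
    r"app=(?P<app>\S+) user=(?P<user>\S+) "
    r"action=(?P<action>\S+) rows=(?P<rows>\d+) src=(?P<src>\S+)"
)

def parse_event(line):
    """Turn one raw log line into the normalized fields a collector needs."""
    m = LINE_RE.match(line)
    if not m:
        return None  # unparsed lines should be counted, not silently dropped
    event = m.groupdict()
    event["ts"] = datetime.strptime(event["ts"], "%Y-%m-%d %H:%M:%S")
    event["rows"] = int(event["rows"])
    event["raw"] = line  # keep the original line for forensics
    return event

if __name__ == "__main__":
    sample = "2011-09-08 14:02:17 app=billing user=jsmith action=export rows=12000 src=10.1.4.22"
    print(parse_event(sample))
```

The point isn’t the regex – it’s that the PoC is where you find out how much of this glue the platform expects you to write and maintain yourself.
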
The key is to learn how to run the system and get comfortable – if you do switch, you will be living with your choice for a long time. Focus on visualization – your view into the system. Configure some dashboards and see the results. Mess around with the reports a bit. Tighten the alert thresholds. Does the notification system work? Will the alerts be survivable at production levels for years? Is the information useful? These are all things you need to do as part of kicking each challenger’s tires.

If compliance is your key requirement, use PCI as an example. Start pulling data from your protected network segment and pump it through the PCI reporting process. Is the data correct and useful for everybody with an interest? Are the reports comprehensive? Will you need to customize the reports for any reason? You need to answer these kinds of questions during the PoC.

Run a Red Team

Run a simulated attack against yourself. We know actually attacking production systems would make you very unpopular with the ops folks, so set up a lab environment. But otherwise you want this situation to be as realistic as possible. Have attackers breach test systems with attack tools. Have your defenders try to figure out what is going on, as it’s happening. Does the system alert as it should? Will you need to heavily customize the rule set? Can you identify the nature of the attack quickly? Does their super-duper forensic drill-down give you the view you need? The clock is ticking, so how easy is it to use the system to search for clues? Obviously this isn’t a real incident, so you’ll take some editorial liberties, and that’s fine. You want a feel for how the system performs in near-real-time. If an attacker is in your systems, will you find them? In time to stop or catch them? Once you know they are there, can you tell what they are doing? A Red Team PoC will help you determine that.

Do a Post-Mortem

Once you are done with the Red Team exercise, you should have a bunch of data that will make for a nice forensic investigation of what the attack team did, and perhaps what the defense team


Speaking at OWASP: September 22 and 23

Gunnar Peterson and I will be presenting at OWASP, September 20-23rd. OWASP AppSec USA will be at the Minneapolis Convention Center in – you guessed it – Minneapolis, Minnesota. This year’s theme is “Your life is in the cloud”, so there are plenty of talks on mobile app security and how to weave security into your cloud environment. Gunnar is presenting on Mobile Web Services, discussing mobile application vulnerabilities in the web services layer. I’ll be presenting CloudSec 12-Step, a look at foundational security precautions developers need to consider when building and deploying cloud applications. They have scheduled many other great talks as well. And personally, I am willing to bet autumn weather in Minnesota will be awesome! Okay, perhaps my perspective is skewed – Arizona just set a record for the hottest August in history, with some 33 days this summer over 110 degrees – but regardless, Minnesota should be very nice. Come by and check out the presentations. As always, we look forward to seeing friends – shoot us an email if you want to meet up that week.


Incite 9/7/2011: Decisions, Decisions

Making decisions is very hard for most people. Not for me. The Boss and I constantly discuss a single issue over and over again as she debates all aspects of a big decision. I try to be patient, but patience is, uh, not my forte. I know it’s her process, and rushing it usually lands me a spot in the doghouse, but it’s still hard to understand. Decisions are easy for me. I do the work, look at the upside and downside, and make the call. Next. I don’t look back either. When I make a decision, I’m pretty confident it’s the right thing to do at that point in time. That’s the key. Any decision any of us makes at any time is presumably the best decision right then. 10 minutes or 10 years from now things will have changed. Things always change. The question is how much.

Sometimes you’ll find your decisions are wrong. Actually, often your decisions are wrong. Yeah, it’s that human thing. I’ve been known to weigh intuition higher than data in some decisions – especially my career choices. If it felt right, whatever that means, I would go for it. And I’ve been wrong in those choices, a lot. But I guess I come from the school that says it’s better to do stuff and screw up than to not do anything – stuck in a cycle of analysis paralysis. I’m sure I’ll have regrets at some point, but it won’t be because I couldn’t make a decision.

It’s worth mentioning that I’m not opposed to revisiting a decision, but only if something has changed that affects my underlying assumptions. Lots of folks stew over a decision, poring over the same data over and over again, in an endless cycle of angst and second guessing. If the data doesn’t change, neither should the decision. But these folks figure that if they question themselves constantly for long enough, the decision will become easy. Often, they never achieve peace of mind. Gosh, that has to be hard.

I pay a lot more attention to the downside of any decision. In most cases, the worst case scenario is that you upset someone or waste time and/or money. Obviously I want to avoid those outcomes where possible, but those are manageable downsides for me. So I don’t obsess over decisions. I make the decision and I move on. Second guessing isn’t productive. Part of life is taking risks and adapting as needed. And cleaning up the inevitable mess when you are wrong. I’m okay with that.

-Mike

Photo credit: “Lose your sleep before your decision, not after it” originally uploaded by Scott McLeod

Incite 4 U

Liar, liar, pants on fire: Any time I catch my kids telling me less than the truth, I break into the “Liar, liar” refrain over and over again. Yes, I look stupid, but they hate it even more, so it’s worth doing. One of the (former) Anonymous folks pretty much pinpoints the fundamental skill set of social engineering – lying. Okay, there is grey around lies, but ultimately that’s what it is. Does that make the ability to defend against lies any less important? Of course not. Nor am I judging folks who practice social engineering daily and professionally. But if it walks and quacks like a duck, you might as well call it a duck. – MR

Misplaced confidence: There will be a lot written over the next weeks and months about the hack of the Certificate Authority DigiNotar, including a post I’m working on. But if you want to quickly learn a key lesson, check out these highlights from the investigation report – thanks to Ira Victor and the SANS forensics blog. No logging. Flat network. Unpatched Internet-facing systems. Total security fundamentals FAIL. Even better, they kept the breach hidden for a month, and the breach probably happened many months earlier than their claimed date. Keep in mind this was a security infrastructure company. You know, the folks who are supposed to be providing a measure of trust on the Internet, and helping others secure themselves. Talk about making most of the mistakes in the book! And BTW – as I’ve said before, I know for a fact other security companies have been breached in recent years and failed to disclose. How’s that for boosting consumer confidence? – RM

They stole what?: When it comes to breach notification laws, California has been at the forefront for more than a decade. Now California has updated its breach disclosure law to require disclosure of additional incident data. Most firms adhering to breach notification laws include so little information that the recipients of a breach notification have no clue what it means to them, nor what steps they need to take to protect themselves. Credit monitoring services are more of a red herring – and occasionally a devious revenue opportunity for breached companies to offset notification costs. So California Senate Bill 24 (SB-24) requires companies to include additional information on what happened, and to explicitly state what type of data was leaked. Will it help? As usual, it depends on what the company decides to put in the letter, but I don’t have high hopes. Will security vendors be pitching monitoring software to aid companies in identifying what was stolen? Absolutely, but many firms’ legal teams will not be eager to have that data hanging around because it’s often a smoking gun, and they will choose ignorance over security to reduce liability. As they always do. – AL

Ethics, hypocrisy, and certifications: You have to hand it to Jericho, one of the drivers of attrition.org. He puts in the time to build somewhat airtight cases, usually turning folks’ words against them in interesting ways. I wouldn’t want to take him on in a debate, that’s for sure. His recent post at Infosec Island, clearly pointing out the hypocrisy of the CISSP folks, is a hoot. As usual, you can find all


Data Security Lifecycle 2.0

We reference this content a lot, so I decided to compile it all into a single post. This is the original content, including internal links, and it has not been re-edited.

Introduction

Four years ago I wrote the initial Data Security Lifecycle and a series of posts covering the constituent technologies. In 2009 I updated it to better fit cloud computing, and it was incorporated into the Cloud Security Alliance Guidance, but I have never been happy with that work. It was rushed and didn’t address cloud specifics nearly sufficiently. Adrian and I just spent a bunch of time updating the cycle, and it is now a much better representation of the real world. Keep in mind that this is a high-level model to help guide your decisions, but we think this time around we were able to identify places where it can more specifically guide your data security endeavors. (As a side note, you might notice I use “data security” and “information-centric security” interchangeably. I think infocentric is more accurate, but data security is more recognized, so that’s what I tend to use.)

If you are familiar with the previous model, you will immediately notice that this one is much more complex. We hope it’s also much more useful. The old model really only listed controls for data in different phases of the lifecycle – it didn’t account for location, ownership, access methods, and other factors. This update should better reflect the more complex environments and use cases we tend to see these days. Due to its complexity, we need to break the new Lifecycle into a series of posts. In this first post we will revisit the basic lifecycle, and in the next post we will add locations and access.

The lifecycle includes six phases, from creation to destruction. Although we show it as a linear progression, once created, data can bounce between phases without restriction, and may not pass through all stages (for example, not all data is eventually destroyed).

• Create: This is probably better named Create/Update because it applies to creating or changing a data/content element, not just a document or database. Creation is the generation of new digital content, or the alteration/updating of existing content.
• Store: Storing is the act of committing digital data to some sort of storage repository, and typically occurs nearly simultaneously with creation.
• Use: Data is viewed, processed, or otherwise used in some sort of activity.
• Share: Data is exchanged between users, customers, and partners.
• Archive: Data leaves active use and enters long-term storage.
• Destroy: Data is permanently destroyed using physical or digital means (e.g., cryptoshredding).

These high-level activities describe the major phases of a datum’s life, and in a future post we will cover security controls for each phase. But before we discuss controls we need to incorporate two additional aspects: locations and access devices.

Locations and Access

In our last post we reviewed the Data Security Lifecycle, but other than some minor wording changes (and a prettier graphic thanks to PowerPoint SmartArt) it was the same as our four-year-old original version. But as we mentioned, quite a bit has changed since then, exemplified by the emergence and adoption of cloud computing and increased mobility. Although the Lifecycle itself still applies to basic, traditional infrastructure, we will focus on these more complex use cases, which better reflect what most of you are dealing with on a day to day basis.
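
Before we get to locations, here is a minimal sketch of the model so far – Python used purely as notation, with illustrative (not prescriptive) control names. It captures the six phases, the idea that controls attach to phases, and – anticipating the next section – the fact that the same lifecycle repeats in more than one location, with data bouncing between phases rather than marching through them linearly.

```python
from enum import Enum

class Phase(Enum):
    CREATE = "create"    # create or update a data/content element
    STORE = "store"      # commit to a storage repository
    USE = "use"          # view or process the data
    SHARE = "share"      # exchange with users, customers, partners
    ARCHIVE = "archive"  # long-term storage
    DESTROY = "destroy"  # physical or digital destruction (e.g. cryptoshredding)

# Illustrative control categories per phase -- not an exhaustive or official mapping.
CONTROLS = {
    Phase.CREATE:  ["classification", "rights assignment"],
    Phase.STORE:   ["access controls", "encryption at rest"],
    Phase.USE:     ["activity monitoring", "DLP"],
    Phase.SHARE:   ["DLP", "encryption in transit"],
    Phase.ARCHIVE: ["encryption at rest", "asset tracking"],
    Phase.DESTROY: ["cryptoshredding", "secure deletion"],
}

# The same lifecycle repeats per location, and a single data element
# rarely passes through every phase in every location.
locations = {
    "internal_datacenter": [Phase.CREATE, Phase.STORE, Phase.USE, Phase.ARCHIVE],
    "saas_provider":       [Phase.STORE, Phase.USE, Phase.SHARE],
    "backup_service":      [Phase.STORE, Phase.DESTROY],
}

for loc, phases in locations.items():
    needed = sorted({control for p in phases for control in CONTROLS[p]})
    print(f"{loc}: {needed}")
```
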
Locations

One gap in the original Lifecycle was that it failed to adequately address movement of data between repositories, environments, and organizations. A large amount of enterprise data now transitions between a variety of storage locations, applications, and operating environments. Even data created in a locked-down application may find itself backed up someplace else, replicated to alternative standby environments, or exported for processing by other applications. And all of this can happen at any phase of the Lifecycle.

We can illustrate this by thinking of the Lifecycle not as a single, linear operation, but as a series of smaller lifecycles running in different operating environments. At nearly any phase, data can move into, out of, and between these environments – the key for data security is identifying these movements and applying the right controls at the right security boundaries. As with cloud deployment models, these locations may be internal, external, public, private, hybrid, and so on. Some may be cloud providers, others traditional outsourcers, or perhaps multiple locations within a single data center.

For data security, at this point there are four things to understand:
• Where are the potential locations for my data?
• What are the lifecycles and controls in each of those locations?
• Where in each lifecycle can data move between locations?
• How does data move between locations (via what channel)?

Access

Now that we know where our data lives and how it moves, we need to know who is accessing it and how. There are two factors here:
• Who accesses the data?
• How can they access it (device & channel)?

Data today is accessed from all sorts of different devices. The days of employees only accessing data through restrictive applications on locked-down desktops are quickly coming to an end (with a few exceptions). These devices have different security characteristics and may use different applications, especially the applications we’ve moved to SaaS providers – who often build custom applications for mobile devices, which offer different functionality than PCs.

Later in the model we will deal with who, but the diagram below shows how complex this can be – with a variety of data locations (and application environments), each with its own data lifecycle, all accessed by a variety of devices in different locations. Some data lives entirely within a single location, while other data moves in and out of various locations… and sometimes directly between external providers. This completes our “topographic map” of the Lifecycle. In our next post we will dig into mapping data flow and controls. In the next few posts we will finish covering background material, and


Security Management 2.0: Vendor Evaluation – Culling the Short List

So far we have discussed a bit about how security management platforms have evolved, how your requirements have changed since you first deployed the platform, and how you need to evaluate your current platform (Part 1, Part 2) in light of both. Now it’s time to get into the meat of the decision process by defining your selection criteria for your Security Management 2.0 platform.

Much of defining your evaluation criteria is wading objectively through vendor hyperbole. As technology markets mature (and SIEM is pretty mature), the capabilities of each offering tend to get pretty close. The messaging is very similar, and it’s increasingly hard to differentiate one platform from another. Given your unhappiness with your current platform (or you wouldn’t be reading this, right?), it’s important to distill what a platform does and doesn’t do, as early in the process as you can.

We will look at the vendor evaluation process in two phases. In this post we’ll help you define a short list of potential replacements. Maybe you use a formal RFP/RFI to cull the 25 companies in the space down to 3-5, maybe you don’t. You’ll see soon enough why you can’t run 10 vendors through even the first stage of this process. At the conclusion of the short list exercise, you’ll need to test one or two new platforms during a Proof of Concept, which we’ll detail in the next post. We don’t recommend you skip directly to the test, by the way. Each platform has strengths and weaknesses, and just because a vendor happens to be in the right portion of a magical chart doesn’t mean it’s the right choice for you. Do your homework. All of it. Even if you don’t feel like it.

Defining the Short List

A few aspects of the selection criteria should be evaluated with a broader group of challengers. Think 3-5 at this point. You need to prioritize each of these areas based on your requirements. That’s why you spent so much time earlier defining and gaining consensus on what’s important for replacing your platform.

Your main tool in this stage of the process is what we kindly call the dog and pony show. That’s when the vendor brings in their sales folks and sales engineers (SEs) to tell you how their product is awesome and will solve every problem you have. Of course, what they won’t be ready for (unless they read this post as well) is the ‘intensity’ of your KGB-style interrogation techniques. Basically, you know what’s important to you, and you need confidence that any vendor passing through this gauntlet (and moving on to the PoC) will be able to meet your requirements. Let’s talk a bit about tactics to get the answers you need, based on the areas where your existing product is lacking (from the platform evaluation). You need detailed answers during these meetings. This meeting is not a 30-slide PowerPoint and a generic demo. Make sure the challenger understands those expectations ahead of the meeting, so they have the right folks in the room. If they bring the wrong people, cross them off the short list. It’s as simple as that – it’s not like you have a lot of time to waste, right?

Security: We recommend you put together a scenario as a case study for each challenger. You want to understand how they’d detect an attack based on the information sources they gather, and how they configure their rule sets and alerts. Make it detailed, but not totally ridiculous. So basically, dumb down your existing environment a bit and run them through an attack scenario you’ve seen recently.
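
One way to keep that conversation concrete is to hand the SE the scenario as structured data rather than a slide. The sketch below uses entirely hypothetical systems, sources, and evidence – swap in an attack you actually saw – and lists, for each stage, the data source that should have seen it and the alert you expect the platform to raise.

```python
# A dumbed-down version of a real incident, expressed as the evidence trail the
# challenger's platform should be able to stitch together. All names are hypothetical.
SCENARIO = [
    {"stage": "initial access", "source": "email gateway logs",
     "evidence": "phishing message with attachment sent to 3 users",
     "expected_alert": None},  # probably noise on its own
    {"stage": "compromise", "source": "endpoint AV / proxy logs",
     "evidence": "workstation beacons to an unknown external host",
     "expected_alert": "suspicious outbound connection"},
    {"stage": "lateral movement", "source": "Windows security events",
     "evidence": "same account authenticating to 12 servers in 10 minutes",
     "expected_alert": "authentication anomaly"},
    {"stage": "exfiltration", "source": "firewall / netflow",
     "evidence": "2 GB transfer to an external host over port 443 at 2am",
     "expected_alert": "data exfiltration"},
]

for step in SCENARIO:
    ask = step["expected_alert"] or "show how this is kept as context for correlation"
    print(f"{step['stage']}: have the SE build the rule that turns "
          f"'{step['evidence']}' ({step['source']}) into: {ask}")
```

If the SE can’t walk from that evidence trail to working rules in front of you, you’ve learned what you needed to learn.
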
This will be a good exercise for seeing how the data they collect is used to solve a major security management use case: detecting an emerging attack quickly. Have the SE walk you through setting up or customizing a rule. Use your own scenario to reduce the likelihood of the SE having a pre-built rule. You want to really understand how the rules work, because you will spend a lot of time configuring your rules.

Compliance: Next, you need to understand what level of automation exists for compliance purposes. Ask the SE to show you the process of preparing for an audit. And no, showing you a list of 2,000 reports, most called PCI X.X, is not sufficient. Ask them to produce samples for a handful of critical reports you rely upon to see how closely they hit the mark – you can see the difference between reports developed by an engineer and those created by an auditor. You need to understand where the data is coming from, and hopefully they will have a demo data set to show you a populated report. The last thing you want to learn – two days before an audit – is that their reports don’t pull from the right data sources.

Integration: In this part of the discussion, delve into how the product integrates with your existing IT stack. How does the platform pull data from your identity management system? CMDB? What about data collection? Are the connectors pre-built and maintained by the vendor? What about custom connectors? Is there an SDK available, or does it require a bunch of professional services?

Forensics: Vendors throw around the term root cause analysis frequently, while rarely substantiating how their tool is used to work through an incident. Have the SE literally walk you through an investigation based on their sample data set. Yes, you’ll test this yourself later, but get a feel for what tools they have built in and how they can be used by the SE, who should really know how to use the system.

Scalability: If your biggest issue is a requirement for more power, then you’ll want to know (at a very granular level) how each challenger solves the problem. Dive into their data model and their deployment architectures, and have them tell stories about their biggest implementations. If scalability is a


The New Path of Least Resistance

It’s hard to believe it has been 10 years since the 9/11 terrorist attacks on the US. I remember that day like it was yesterday – I actually flew into the Boston airport that morning. In hindsight, those attacks opened our eyes to a previously overlooked attack vector: using a passenger jet as a missile. The folks running national security for the US had all sorts of scenarios for how we could be attacked on our own soil, but I’m not sure that vector was on their lists. It seems we security folks have to start thinking in a similarly orthogonal pattern. Since we started hearing some details of the EMC/RSA breach, and of the attacks on the Comodo and DigiNotar CAs, it has become clear the attackers have been rethinking their paths of least resistance.

Let me back up a bit. Attackers will follow the path of least resistance to their intended target – they always have. Over the past few years, the path of least resistance has clearly involved exploiting both application and user weakness, rather than breaking technical security measures in network infrastructure. Why break down a door if the nincompoop on the other side will just let you in, and key Internet-facing apps don’t even have locks? That’s what we are seeing in practice. If an attacker is trying to breach a soft target, the user and application attack vectors remain the path of least resistance for the foreseeable future. The skills gap between the ends is pretty ugly, and not getting better. That’s why we spend so much time focusing on Reacting Faster and Better – it’s pretty much the only way to survive in an age of inevitable compromise.

But what if the target is not soft? By that I mean a well-fortified environment, without the typical user and/or application holes we typically see exploited. A well-segmented and heavily-monitored infrastructure without the standard attack vectors. For example, one of the big defense contractors, who protect the national secrets of the defense/industrial base. Breaking down the doors here is very hard, and in many cases not worth the effort. So the attackers have identified a new low-resistance path – the security infrastructure protecting those hard targets. That was very clear with the RSA attack, which was all about gaining access to the token seeds and using them to compromise the real targets: US defense contractors. Even if RSA was as well-protected as a defense contractor, breaking into RSA once provided a leg up on all the defense contractors using RSA tokens. It’s not as clear with the Comodo or DigiNotar attacks. Those seem to be more politically motivated, but they still represent an interesting redefinition of the man-in-the-middle attack: compromising the certificate trust chain that identifies legitimate websites.

So what? What impact does this have on day to day operations? Frankly, not much – so many of us are so far behind on basic blocking and tackling for the stuff we already know about. But for those hard targets out there, it’s time to expand your threat models to look at the technology that enforces your security controls. I remember attending a Black Hat session a few years back by Tom Ptacek of Matasano, where he discussed his research into compromising pretty well known IT management technology. That’s the kind of analysis we need looking forward. Push vendors to provide information about how they attack their own products and what they find. But don’t expect much. Vendors do not, as a rule, proactively try to poke holes in their own stuff. And if they do, they aren’t going to admit weakness by disclosing what they find. So be prepared to do (and fund) much of this work yourself.

The point is this: it’s time to start thinking that the new path of least resistance may be your security technology. It’s a challenge to the folks who build security products, as well as to those of you who protect hard targets. Who will rise to this challenge?

Photo credit: “Path of Least Resistance” originally uploaded by Billtacular


Friday Summary: September 2, 2011

I was reading Martin McKeay’s post Fighting a Bad Habit. Martin makes a dozen or so points in the post – and shares some career angst – but there is a key theme that really resonates with me. Most technology lifers I know have their sense of self-worth tied up in what they are able to contribute professionally. Without the feeling of building, contributing, or making things better, the job is not satisfying. In college a close friend taught me what his father taught him – any successful career should include three facets:

• You should do research to stay ahead in your field.
• You should practice your craft to keep current.
• You should teach those around you what you know.

I find these things make me happy and make me feel productive. I have been fortunate that over my career there has been balance in these three areas. The struggle is that the balance is seldom entirely within a single job. Usually any job role is dominated by one of the three, and then I choose another role or job that allows me to move on to the next leg of the stool. I do know I am happiest when I get to do all three, but the windows of time when they are in balance are vanishingly small.

Another point of interest for me in Martin’s post was the recurring theme that – as security experts – we need to get outside the ‘security echo chamber’: the 6,000 or so dedicated security practitioners around the world who know their stuff and struggle in futility, trying to raise the awareness of those around them to security issues. And the 600 or so experts among them are seldom interested in the mundane – only the cutting edge – which keeps them even further from the realm of the average IT practitioner. It has become clear to me over the last year that this is a self-generated problem, and a byproduct of being in an industry few have noticed until recently. We are simply tired of having the same conversations. For example, I have been talking about information-centric security since 1997. I have been actively writing about database security for a decade. On those subjects it feels as if every sentence I write, I have written before. Every thought is a variation on a long-running theme. Frankly, it’s tiring.

It’s even worse when I watch Rich as he struggles with waning passion for DLP. I won’t mince words – I’ll come out and say it: Rich knows more about DLP than anyone I have ever met. Even the CTOs of the vendor companies – while they have a little more technical depth on their particular products – lack Rich’s understanding of all the available products, deployment models, market conditions, buying centers, and business problems DLP can realistically solve. And we have a heck of a time getting him to talk about it, because he has been talking about it for 8 years, over and over again.

The problem is that what is old hat for us – DAM, DLP, and other security technologies – is just becoming mainstream. So when Martin or Rich or I complain about having the same conversations over and over, well, tough. Suck it up. There are a lot of people out there who are not part of the security echo chamber, who want to learn and understand. It’s not sexy and it ain’t getting you a speaking slot at DefCon, but it’s beneficial to a much larger IT audience. I guess this is that third facet of a successful career: teach. It’s the education leg of our jobs, and it needs to be done. With this blog – and Martin’s – we have the ability to teach others what we know, to a depth not possible with Twitter and Facebook. Learning you have an impact on a larger audience is – and should be – a reward in and of itself.

On to the Summary:

Favorite Securosis Posts
• Adrian Lane: Detecting and Preventing Data Migrations to the Cloud.
• Mike Rothman: Fact-Based Network Security: Compliance Benefits. Theory is good. Applying theory to practice is better. That’s why I like this series – application of many of the metrics concepts we’ve been talking about for years. Check out all the posts.
• Rich: Since our posting is a bit low this week, I dug into the archives for The Data Breach Triangle. Mostly since Ed Bellis cursed me out for it earlier this week and I never learned why.

Other Securosis Posts
• Security Management 2.0: Platform Evaluation, Part 1.
• Incite 8/31/2011: The Glamorous Life.
• Detecting and Preventing Data Migrations to the Cloud.
• Fact-Based Network Security: Operationalizing the Facts.
• The Mobile App Sec Triathlon.
• Friday Summary (Not Too Morbid Edition): August 26, 2011.

Favorite Outside Posts
• Mike Rothman: Preparing to Fire an Executive. Ultra-VC Ben Horowitz provides a guide to getting rid of a bad fit. These principles apply whether you’ve got to take out the CEO or a security admin. If you manage people, read this post. Now.
• Adrian and Gunnar: Those Who Can’t Do, Audit. Mary Ann calls out SASO – er, OK, she called out Veracode. And Chris Wysopal fired back with Musings on Custer’s Last Stand. Not taking a side here, as both are about 80% right in what they are saying, but this back and forth is a fascinating read. On a different note, check out MAD’s book recommendations – they rock!
• Rich: Veracode defends themselves from an Oracle war of words. I’m with Chris on this one… Oracle has yet to build the track record to support this sort of statement. Other companies have.

Project Quant Posts
• DB Quant: Index.
• NSO Quant: Index of Posts.
• NSO Quant: Health Metrics–Device Health.
• NSO Quant: Manage Metrics–Monitor Issues/Tune IDS/IPS.

Research Reports and Presentations
• Tokenization vs. Encryption: Options for Compliance.
• Security Benchmarking: Going Beyond Metrics.
• Understanding and Selecting a File Activity Monitoring Solution.
• Database Activity Monitoring: Software vs. Appliance.

Top News and Posts
• Mac


Making Bets

Being knee-deep in a bunch of research projects doesn’t give me enough time to comment on the variety of interesting posts I see each week. Of course we try to highlight them both in the Incite (with some commentary) and in the Friday Summary. But some posts deserve a better, more detailed treatment. We haven’t done an analysis, but I’d guess we find a pretty high percentage of what Richard Bejtlich writes interesting. Here’s a little hint: it’s because he’s a big-brained dude. Early this week he posted a Security Effectiveness Model to document some of his ideas on threat-centric vs. vulnerability-centric security. I’d post the chart here, but without Richard’s explanations it wouldn’t make much sense. So check out the post. I’ll wait.

When I took a step back, Richard’s labels didn’t mean much to me. But there is an important realization in that Venn diagram. Richard presents a taxonomy to understand the impact of the bets we make every day. No, I’m not talking about heading off to Vegas on a bender that leaves you… well, I digress. But the reality is that security people make bets every day. Lots of them. We bet on what’s interesting to the attackers. We bet on what defenses will protect those interesting assets. We bet on how stupid our employees are (they remain the weakest link). We also bet on how little we can do to make the auditors go away, since they don’t understand what we are trying to do anyway. And you thought security was fundamentally different than trading on Wall Street?

Here’s the deal. A lot of those bets are wrong, and Richard’s chart shows why. With limited resources we have to make difficult choices. So we start by guessing what will be interesting to attackers (Richard’s Defensive Plan). Then we try to protect those things (Live Defenses). Ultimately we won’t know everything that’s interesting to attackers (Threat Actions). We do know we can’t protect everything, so some of the stuff we think is important will go unprotected. Oh well. Even better, we won’t be entirely right about what the attackers want, nor about which defenses will work. So some of the stuff we think is important isn’t, and some of our defenses protect things that aren’t important. As in advertising, a portion of our security spend is wasted – we just don’t know which portion. Oh well. We’ll also miss some of the things the attacker thinks are important. That makes it pretty easy for them, eh? Oh well. And what about when we are right? When we think something will be a target, the attackers actually want it, and we have it defended? Well, we can still lose – a persistent attacker will still get their way, regardless of what we do. Isn’t this fun?

But the reason I so closely agree with most of what Richard writes is pretty simple. We realize the ultimate end result, which he summed up pretty crisply on Twitter (there are some benefits to a 140-character limit):

“Managing risk,” “keeping the bad guys out,” “preventing compromise,” are all failed concepts. How fast can you detect and correct failures?

And http://twitter.com/taosecurity/status/108527362597060608:

The success of a security program then ultimately rests w/ the ability to detect & respond to failures as quickly & efficiently as possible.

React Faster and Better anyone?
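
For those who like the overlaps spelled out, here is a trivial sketch of those bets using made-up asset names – nothing more than set arithmetic, but it maps directly onto the three circles described above.

```python
# Made-up asset names purely to illustrate the three circles.
defensive_plan = {"crm_db", "file_server", "web_app", "hr_system"}   # what we bet matters
live_defenses  = {"crm_db", "web_app", "vpn"}                        # what we actually protect
threat_actions = {"crm_db", "source_code_repo", "vpn"}               # what attackers actually target

wasted_spend        = live_defenses - threat_actions   # defended, but nobody is attacking it
blind_spots         = threat_actions - defensive_plan  # attacked, and we never even planned for it
planned_unprotected = defensive_plan - live_defenses   # we knew it mattered, still left it open
contested           = live_defenses & threat_actions   # defended AND attacked -- we can still lose here

print("wasted spend:", wasted_spend)
print("blind spots:", blind_spots)
print("planned but unprotected:", planned_unprotected)
print("contested:", contested)
```

Every one of those sets is non-empty in real life, which is exactly the point – and why how quickly you detect and correct failures matters more than the bets themselves.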


Security Management 2.0: Platform Evaluation, Part 2

In the second half of Platform Evaluation for Security Management 2.0, we’ll cover evaluating other SIEM solutions. At this point in the process you have documented your requirements and rationally evaluated your current SIEM platform to determine what’s working and what’s not. This step is critical because a thorough understanding of your existing platform’s strengths and weaknesses is the yardstick against which all other options will be measured. As you evaluate new platforms, you can objectively figure out whether it’s time to move on and select another platform. Again, at this point no decision has been made. You are doing your homework – no more, no less. We’ll walk you through the process of evaluating other security management platforms, in the context of your key requirements and your incumbent’s deficiencies.

There are two major difficulties during this phase of the process. First, you need to get close to some of the other SIEM solutions in order to dig in and determine what the other SIEM providers legitimately deliver, and what is marketing fluff. Second, you’re not exactly comparing apples to apples. Some new platforms offer advantages because they use different data models and deployment options, which demands careful analysis of how a new tool can and should fit into your IT environment and corporate culture. Accept that some capabilities require you to push into new areas that are likely outside your comfort zone.

Let’s discuss the common user complaints – and associated solutions – which highlight differences in function, architecture, and deployment. The most common complaints we hear: the SIEM does not scale well enough, we need more and better data, the product needs to be easier to use while providing more value, and we need to react faster given the types of attacks happening today.

Scale: With the ever-growing number of events to monitor, it’s simply not enough to buy bigger and/or more boxes to handle the exponential growth in event processing. Some SIEM vendors tried segregating reporting and alerting from collection and storage to offload processing requirements, which enables tuning each server to its particular role. This was followed by deployment models where log management collected the data (to meet scalability needs) and delivered a heavily filtered event stream to the SIEM to reduce the load it needed to handle. But this is a stopgap. New platforms address many of the architectural scaling issues with purpose-built data stores that provide fully distributed processing. These platforms can flexibly divide event processing/correlation, reporting, and forensic analysis. For more information on SIEM scaling architectures, consult our Understanding and Selecting a SIEM/Log Management report.

Data: Most platforms continue to collect data from an increasing number of devices, but many fail in two areas. First, they have failed to climb out of the network and server realm to start monitoring applications in more depth. Second, many platforms suffer from over-normalization – literally normalizing the value right out of collected data. For many platforms, normalization is critical to address scalability concerns. This, coupled with poorly executed correlation and enrichment, produces data of limited value for analysis and reporting – which defeats the purpose.
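
Here is a toy illustration of that over-normalization problem, using a completely hypothetical event format: once normalization discards the raw query, there is nothing left for a rule – or an analyst – to inspect.

```python
import re

# Raw application event, including the actual SQL statement.
raw_event = {
    "time": "2011-09-07 03:12:44",
    "app": "storefront",
    "user": "webapp_svc",
    "query": "SELECT * FROM users WHERE name = '' OR '1'='1' --' AND password = 'x'",
}

# A typical over-aggressive normalization: keep the common fields, drop the payload.
normalized = {k: raw_event[k] for k in ("time", "app", "user")}
normalized["event_type"] = "db_query"   # looks perfectly ordinary

# A crude injection check a rule (or analyst) could run -- but only against the raw query.
INJECTION = re.compile(r"('\s*or\s*'1'\s*=\s*'1)|(--)", re.IGNORECASE)

print("normalized event:", normalized)                                      # nothing suspicious left
print("raw query suspicious?", bool(INJECTION.search(raw_event["query"])))  # True
```
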
For example, if you need detailed information for business analytics, you’ll need new agents on business systems – collecting application, file system, and database information that is not included in syslog. Oh, the horror: going beyond syslog is no longer optional. The format of this data is non-standard, and the important aspects of an application event or SQL query are not easily extracted and vary by application. At times you might feed these events through a typical data normalization routine and see nothing out of the ordinary, but if you examine the original transaction and dig into the actual query, you might find SQL injection. Better data means both broader data collection options and more effective processing of the collected data.

Easier: This encompasses several aspects: automation of common tasks, (real) centralized management, better visualization, and analytics. Rules that ship out of the box have traditionally been immature (and mostly useless), as they were written by tech companies with little understanding of your particular requirements. Automated reporting and alerting features got a black eye because they returned minimally useful information, requiring extensive human intervention to comb through thousands of false positives. The tool was supposed to help – not create even more work. Between better data collection, more advanced analytics engines, and easier policy customization, the automation capabilities of SIEM platforms have evolved quickly. Centralized management is not just a reporting dashboard across several products – we call that integration on the screen. To us, centralized management means both reporting events and the ability to distribute rules from a central policy manager and tune them on an enterprise basis. This is something most products cannot do, but it is very important in distributed environments where you want to push processing closer to the point of attack. Useful visualization – not just shiny pie charts, but real graphical representations of trends, meaningful to the business – can help make decisions easier.

Speed: Collection, moving the data to a central location, aggregation, normalization, correlation, and then processing is a somewhat antiquated SIEM model. Welcome to 2002. Newer SIEMs inspect events and perform some pre-processing prior to storage to ensure near-real-time analysis, as well as performing post-correlation analysis. These actions are computationally expensive, so recognize that these advancements are predicated on an advanced product architecture and an appropriate deployment model. As mentioned in the Data section, this requires SIEM processing (analysis, correlation, etc.) to be pushed closer to the collector nodes – and in some cases even into the data collection agent.

Between your requirements and the SIEM advances you need to focus on, you are ready to evaluate other platforms. Here’s a roadmap: Familiarize yourself with SIEM vendors: Compare and contrast their capabilities in the context of your requirements. There are a couple dozen vendors, but it’s fairly easy to eliminate many by reviewing their product marketing materials. You use a “magic


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.