Securosis Research

Security Management 2.0: Revisiting Requirements

Given the evolution of both the technology and the attacks, it's time to revisit your specific requirements and use cases – both current and evolving. You also need to be brutally honest about what your existing product or service does and does not do, as well as your team's ability to support and maintain it. This is essential – you need a fresh look at the environment to understand what you need today and tomorrow, and what kind of resources and expertise you can bring to bear, unconstrained by what you do today. Many of you have laundry lists of things you would like to be able to do with current systems, but can't. Those are a good place to start, but you also need to consider the trends for your industry and look at the security and business challenges that will emerge over the next couple of years. Capturing current and foreseeable needs is what our Security Management 2.0 process is all about.

Blank Slate

In order to figure out the best path forward for security management, start with the proverbial blank slate. That means revisiting why you need a security management platform with fresh eyes. It means taking a critical look at use cases and figuring out their relative importance. As we described in our Understanding and Selecting a SIEM/Log Management Platform paper, the main use cases for security management break down into three buckets: improving security, increasing efficiency, and automating compliance. When you think about it, security success in today's environment comes down to a handful of key imperatives.

First we need to improve the security of our environment. We are losing ground to the bad guys, and we need to make some inroads on figuring out what's being attacked more quickly, and protecting it. Unfortunately nobody's selling (working) crystal balls that tell you how and when you will be attacked, so the blank slate strategy entails monitoring more and making sure your detection and response systems react more quickly.

Next we need to do more with less. It does look like the global economy is improving, but we can't expect to get back to the halcyon days of spend first, ask questions later – ever. And while that may sound like "work smarter, not harder" management double-speak, there are specific automation and divide & conquer strategies that help reduce the burden. With more systems under management, we have more to worry about and less time to spend poring over reports, looking for the proverbial needle in the haystack. Given the number of new attacks – counted by any metric you like – we need to increase the efficiency of resource utilization.

Finally, auditors show up a few times a year, and they want their reports. Summary reports, detail reports, and reports that validate other reports. The entire auditor dance focuses on convincing the audit team that the proper security controls are implemented and effective. That involves a tremendous amount of data gathering, analysis, and reporting to set up – with continued tweaking required over time. It's basically a full-time job to get ready for the audit, dropped on folks who already have full-time jobs. So we must automate those compliance functions to the greatest degree possible.

Increasingly, technologies that monitor up the stack are helping in all three areas by collecting additional data types – identity, database activity, application, and configuration data – along with different ways of addressing the problems.
As attacks target these higher-level functions and require visibility beyond just the core infrastructure, the security management platform needs to detect attacks in the context of the business threat. Don't forget about the need for advanced forensics, given the folly of thinking you can block every attack. So a security management platform that helps you React Faster and Better within an incident response context may also be a key requirement moving forward.

You might also be looking for a more integrated user experience across a number of security functions. For example, you may have separate vendors for change detection, vulnerability management, firewall and IDS monitoring, and database activity monitoring. You may be wearing out your swivel chair switching between all those consoles, so simplification via vendor consolidation can be a key driver.

Understand that your general requirements may not have changed dramatically, although you may prioritize the use cases a little differently now. For example, perhaps you first implemented Log Management to crank out some compliance reports. It wouldn't be the first time we've seen that as the primary driver. But if you just finished cleaning up a messy security incident your existing SIEM missed, you probably now put a pretty high value on making sure correlation works better.

Once your team is clear on the requirements for a security management platform, start to discuss the topic with external influencers. Consult the ops teams, business users, and perhaps the general counsel about their requirements. Doing this confirms the priorities you already know, and sets the stage for their support if the decision involves moving to a new platform.

Critical Evaluation

Now it's time to check your ego at the door. Unless you weren't part of the original selection team – then you can blame the old regime. Okay, we're kidding. Either way, the key to this step is a brutally honest assessment of how your existing platform meets the needs that drove the initial implementation. This post-mortem analysis evaluates the platform against each of the main use cases (security, efficiency, compliance automation), as well as other aspects of real-world use. You'll also need to determine why the product/service isn't measuring up. Common reasons we see include:

  • Ease of use: Are there issues getting the product/service up and running? Did it require tons of professional services? Were you able to set up sufficiently granular rule sets


Spotting That DAM(n) Fake

I awoke at 2:30am to a 90-degree bedroom. Getting up to discover why the air conditioning was not working, I found that a dog had pooped on my couch. Neatly in the corner – perhaps hoping I would not notice. Depositing the aforementioned 'present' in the garbage can, I almost stepped on both a bark scorpion and a millipede – eyeing one another suspiciously – just outside the garage door. After a while, air conditioning on and couch thoroughly scrubbed, I returned to bed only to find my wife had laid claim to all the covers and pillows. Since I was up, what the heck – I made coffee, ran the laundry, and baked muffins while the sun came up.

I must admit I started work today with a jaundiced eye, and a strong desire to share some of my annoyance publicly. As part of some research work I am doing, I was looking at the breadth of functions from a couple different vendors in different security markets. In the process, I noticed many firms have decided Database Activity Monitoring (DAM) is sexy as hell, and are advertising that capability as a core part of their various value propositions. The only problem is that many of the vendors I reviewed don't actually offer DAM. I went back to my briefing notes and, sure enough, what's advertised does not match actual functionality. Imagine that! A vendor jumping on a hot market with some vapor. Today I thought at least someone should benefit from my sour mood, so I want to share my quick and dirty tips on how to spot fake DAM.

First, as a reminder, here is the definition of DAM that Rich came up with 5 years ago: Database Activity Monitors capture and record, at a minimum, all Structured Query Language (SQL) activity in real time or near real time, including database administrator activity, across multiple database platforms; and can generate alerts on policy violations.

So how do you spot a fake?

  • If the product does not have the option of a kernel agent, memory scanner, or some equivalent way to collect all SQL activity – either on the server or inside the database – the product is not DAM.
  • If the product does not store queries, along with their response codes, for a minimum of 30 days, the product is not DAM.
  • If the product is blocking activity without understanding the FROM clause, the WHERE clause, or several query- and metadata-specific attributes, the product is not DAM.
  • If the vendor claims 'deep packet inspection' is equivalent to DAM, they are wrong. That's not DAM either. Do us a favor and call them on it. They probably aren't even doing deep packet inspection, but that's a different problem.

IDS, IPS, DLP, NetFlow analysis, and other technologies can provide a subset of DAM's analysis capabilities, but they are not DAM. Use these four checks to see who is telling you the truth. Remember, we are just talking about the basics here – not the more advanced and esoteric features that real DAM vendors have added over the years. Now I am off to the DMV – I figure that's just the place for my current demeanor to fit right in.
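To make the third check a little more concrete – what it means to "understand" the FROM and WHERE clauses rather than just pattern-match traffic – here is a deliberately naive Python sketch. The table names and policy are invented for illustration, and a real DAM product parses the full SQL grammar (and the database's native protocol) rather than leaning on regular expressions.

```python
import re

# Hypothetical policy: tables we consider sensitive for this example only.
SENSITIVE_TABLES = {"customers", "card_numbers"}

FROM_RE = re.compile(r"\bFROM\s+([a-zA-Z_][\w\.]*)", re.IGNORECASE)
WHERE_RE = re.compile(r"\bWHERE\b", re.IGNORECASE)

def review_query(sql: str) -> str:
    """Toy query-aware review: look at what the statement touches, not just its bytes."""
    tables = {t.lower() for t in FROM_RE.findall(sql)}
    has_where = bool(WHERE_RE.search(sql))

    if tables & SENSITIVE_TABLES and not has_where:
        return "ALERT: unbounded read of a sensitive table"
    if tables & SENSITIVE_TABLES:
        return "LOG: sensitive table accessed with a predicate"
    return "OK"

if __name__ == "__main__":
    print(review_query("SELECT * FROM customers"))                   # ALERT
    print(review_query("SELECT name FROM customers WHERE id = 42"))  # LOG
    print(review_query("SELECT sku FROM catalog WHERE price < 10"))  # OK
```

A product that only sees packet payloads, with no notion of which table a statement touches or whether it is scoped by a predicate, cannot make even this crude distinction – which is exactly the point of the check.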


Incite 8/24/2011: Living Binary

The Boss constantly reminds me I have no middle ground. On/Off. Black/White. No dimmer. No gray (besides on my head). Moderation is non-existent, which is why I never tried hard drugs. I knew myself well enough (even at a young age) to know it wouldn't end well. Sure, I'd be the best presenter in the crack den, but that would have impeded my plans for world domination.

It's not just the mind-altering stuff where I don't do moderation. Let's talk food. I became a vegetarian about 3 years ago, mostly because I couldn't eat just five chicken wings. I'd eat 20 and then feel like crap. As much as my logical brain would say 'STOP', my monkey brain would plow through the tray of wings. I want to live to be 90, so my kids can change my diapers. So I needed a method to deal with this lack of control. I figured it would be easier to go cold turkey. No red meat, no chicken (or turkey), no pork. Done. I can shut it off. I just can't moderate.

A few weeks ago I needed to take some action. My weight was creeping up, mostly because injuries kept me from working out with the intensity that used to keep things under control. I don't eat terribly, but when we run out of veggies and fruit, I've been known to knock back some chips. OK, a bag of chips. Or a couple bowls of cereal. Or a few mini-bagels. It's that moderation thing again.

I've been hearing many of my friends talk about this Primal thing for a while. Stories of how they feel a lot better. They certainly look better. I'm used to eating a big ass salad most days, and a lot of fruit/veggies. It can't be that hard, right? Best of all, it plays into my binary nature. If I just stop eating bread and most starchy carbs, that can work. Now I don't have to worry about digging into the bag of chips or grabbing 3-4 mini-bagels. That switch is off. Binary.

It's actually gone pretty well. I haven't dropped a ton of weight, but I adjusted pretty well. No headaches, no severe hunger pains. I'm not as draconian as I am with the meat. I don't go nuts (no pun intended) if there are breaded do-dads on a salad. And I'll eat potatoes, just not frequently. Maybe twice a week. Mostly with an omelet when I'm on the road (instead of 3 bagels).

Living binary may not be for everyone, but it works for me. I know I have little control. Rather than trying to figure out how to gain control, I put myself in situations where I can be successful. Is this forever? Who knows? But it's OK for now, so I'll go with it.

-Mike

Photo credits: "Binary cupcakes" originally uploaded by alicetragedy

Incite 4 U

Slowing down your denial: I'm not sure where it came from, but I love the idea of slowing down to speed up. Many times when things feel out of control, if I just take a step back and focus, I start moving things forward. Seems the denial of service attackers take a similar approach. Kick-ass post here from Rybolov about slow denial of service (SDoS). Of course, our friend RSnake was one of the first (if not the first) to talk about slow HTTP attacks, so I'm glad he's on our side. The post tells you what you need to know about this attack, delving into its devastating nature, the challenges of detecting it, and how to defend against it. It's much harder to track than brute force DDoS, so it seems likely we'll see a lot more SDoS. Good thing Rybolov doesn't miss the opportunity to reiterate that throwing a bunch of servers and bandwidth at SDoS may be one of the only mitigations we have. And good thing Akamai has a lot of both, eh? – MR

Blood Donation: Having been to China a few times, I'm pretty sure they have some of my biometric information. Just like in the US, they take a photo and fingerprint on entry to the country. While I don't consider China evil by any means, they are definitely a bit more of a rival to most Western nations (and pretty much any democracy). So I'm amused at this project to collect DNA sequences for people with high intelligence. I think this is a real research project, but they do report to the government in the end. Is anything at risk? Probably not for any of us. Is it amusing, in light of everything else going on these days? Certainly! – RM

You get the check… Cellarix is creating a mobile payment system. All you have to do is provide Cellarix (or more likely their credit card processing partner) with your credit card number – the merchant's POS system essentially calls your phone to confirm payment. Think of it as a reverse Point-of-Sale system. I saw something almost identical to this demonstrated by Ericsson in 1997 – payment was handled simply by dialing the phone number on the front of a vending machine, in order to get train tickets or a pack of cigarettes. The idea was that you could leverage your phone provider's existing payment relationships – at the end of the month, your phone bill would include your purchases. The obvious vulnerability is the device itself. If you lose your phone, you could have your bank account or credit card drained almost instantly, which is awesome. The Cellarix model is not much different, with the merchant calling you for verification. But nowadays losing the phone is just one of many threats – MITM and rogue apps could just as easily fake authorization by controlling that second factor. Most people can't help leaking email credentials at Starbucks – is there any reason to believe your payment data would


Fact-based Network Security: Outcomes and Operational Data

In our first post on Fact-based Network Security, we talked about the need to make decisions based on data, as opposed to instinct. Then we went in search of the context to know what's important, because in order to prioritize effectively you need to know what presents the most value to your organization. Now let's dig a little deeper into the next step, which is determining the operational metrics on which to base decisions. But security metrics can be a slippery slope.

First let's draw a distinction between outcome-based metrics and operational metrics. Outcomes are the issues central to business performance, and as such are both visible and important to senior management. Examples may include uptime/availability, incidents, disclosures, etc. Basically, outcomes are the end results of your efforts – where you are trying to get to, or stay away from (for negative outcomes). We recommend you start by establishing goals for improving these outcomes. This gives you an idea of what you are trying to achieve and defines success.

To illustrate, we can examine availability as an outcome – it's never bad to improve availability of key business systems. Of course we are simplifying a bit – availability consists of more than just security. But we can think about availability in the context of security, and count issues and downtime due to security problems. Obviously many types of activities impact availability. Device configuration changes can cause downtime. So can vulnerabilities that result in successful attacks. Don't forget application problems that may cause performance anomalies. Traffic spikes (perhaps resulting from a DDoS) can also take down business systems. Even seemingly harmless changes to a routing table can open up an attack path from external networks. That's just scratching the surface.

The good news is that you can leverage operational data to isolate the root causes of these issues. What kinds of operational data do we need?

  • Configuration data: Tracking configurations of network and security devices can yield important information about attack paths through your network and/or exploitable services running on these devices.
  • Change information: Understanding when changes and/or patches take place helps isolate when devices need to be checked or scanned again to ensure new issues have not been introduced.
  • Vulnerabilities: Figuring out the soft spots of any device can yield valuable information about possible attacks.
  • Network traffic: Keeping track of who is communicating with whom can help baseline an environment, which is important for detecting anomalous traffic and deciding whether it requires investigation.

Obviously as you go deeper into the data center, applications, and even endpoints, there is much more operational data that can be gathered and analyzed. But remember the goal. You need to answer the core question of "what to do first," establishing priorities among an infinite number of possible activities. We want to focus efforts on the activities that will yield the biggest favorable impact on security posture. A simple structure for this comes from the Securosis Data Breach Triangle. In order to have a breach, you need data that someone wants, an exploit to expose that data, and an egress path to exfiltrate it. If you break any leg of the triangle, you prevent a successful breach.

Data (Attack Path)

If the attacker can't see the data, they can't steal it, right?
So we can focus some of our efforts on ensuring direct attack paths don't make it easy for an attacker to access the data they want. Since you know your most critical business systems and their associated assets, you can watch to make sure attack paths don't develop which expose this data. How? Start with proper network segmentation to separate important data from unauthorized people, systems, and applications. Then constantly monitor your network and security devices to ensure attack paths don't put your systems at risk. Operational data such as router and firewall configurations is a key source for this analysis. You can also leverage network maps and ongoing discovery activities to check for new paths.

Any time there is a change to a firewall setting or a network device, revisit your attack path analysis. That way you ensure there's no ripple effect from a change that opens an exposure. Think of it as regression testing for network changes. Given the complexity of most enterprise-class networks, this isn't something you can do manually, and it's most effective in a visual context. Yes, in this case a picture is worth a million log records. A class of analysis tools has emerged to address this. Some look at firewall and network configurations to build and display a topology of your network. These tools constantly discover new devices and keep the topology up to date. We also see the evolution of automated penetration testing tools, which focus on continuously trying to find attack paths to critical data, without requiring a human operator. There is no lack of technology to help model and track attack paths.

Regardless of the technology you select to analyze attack paths, this is key to understanding what to fix first. If a direct path to important data results from a configuration change, you know what to do (roll it back!). Likewise, if a rogue access point emerges on a critical network (with a direct path to important data), you need to get rid of it. These are the kinds of activities that make an impact and need to be prioritized.

Exploit

Even if an attack path exists, it may not be practical to exploit the target device. This is where server configuration, as well as patch and vulnerability monitoring, are very useful. Changes that happen outside of authorized maintenance windows tend to be suspicious, especially on devices either containing or providing access to important data. Likewise, the presence of an exploitable critical vulnerability should bubble to the top of the priority list. Again, if there is no attack path to the vulnerable device, the priority of fixing the issue is reduced. But overall you must track what needs to be fixed on
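Stepping back to the attack path discussion above, here is a minimal sketch of the "regression testing for network changes" idea: model zones and firewall-permitted flows as a directed graph, then after every change ask whether any path now exists from an untrusted zone to a zone holding critical data. The zone names and rules are hypothetical, and real topology analysis tools work from actual device configurations rather than a hand-maintained list.

```python
from collections import defaultdict, deque

def build_graph(allowed_flows):
    """Directed graph of zones: an edge means the firewall permits traffic src -> dst."""
    graph = defaultdict(set)
    for src, dst in allowed_flows:
        graph[src].add(dst)
    return graph

def path_exists(graph, start, target):
    """Breadth-first search: is there any permitted path from start to target?"""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == target:
            return True
        for nxt in graph[node] - seen:
            seen.add(nxt)
            queue.append(nxt)
    return False

# Hypothetical zones and firewall-permitted flows before a change.
flows = [("internet", "dmz_web"), ("dmz_web", "app_tier"), ("app_tier", "db_tier")]
print(path_exists(build_graph(flows), "internet", "cardholder_db"))  # False

# A "harmless" change adds a rule from the app tier to the cardholder database...
flows.append(("app_tier", "cardholder_db"))
print(path_exists(build_graph(flows), "internet", "cardholder_db"))  # True -> prioritize review/rollback
```

The point isn't the code – it's that the question "did this change open a path to important data?" can be answered automatically after every change, which is what lets you prioritize the rollback.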


Beware Anti-Malware Snake Oil

It's hard to believe, but over the past 24 hours I've had 3 separate briefings with companies innovating in the area of anti-malware. Just ask them. Each started the discussion with the self-evident point that the existing malware detection model is broken. Then each proceeded to describe (at a high level) how what they are doing isn't anti-virus per se, but something different. Something that detects the new malware we are seeing. They don't want to replace the anti-malware engine. They just think they address the areas where traditional anti-malware sucks. Yeah, that's a big job.

These vendors are not wrong. The existing approach of largely signature-based engines, recently leveraging a cloud extension, is broken. Clearly we need a new approach. True innovation, as opposed to marketing innovation. It's easy to shoot holes in AV, with its sub-50% detection rate. It's hard to actually do something sustainably different. We don't need to poke more holes in AV – we need something that works better.

Having been in this business for 20 years or so, I know this isn't the first time attacks have gotten ahead of detection. You could make the case that detection has never caught up. Each time, a new set of innovators emerges with new models and products and capabilities, seemingly built to address the latest and greatest attack. Right, solving yesterday's problems tomorrow. But that's nothing new. It's the security business as we know it. The problem is separating the wheat from the chaff.

One of the companies I spoke with seems to have a better mousetrap. Maybe it is. Maybe it isn't. The point is that it's not the same mousetrap. But it will be an uphill battle for these folks to get a hearing, because endpoint security vendors have been lying to customers for years, saying their products actually stop new attacks. Now customers are highly skeptical, and not very open to trying something different. Customers have heard it all before. This is just another cycle, compounded by the incumbents trying to sound different while remaining entirely focused on milking their cash cows. They will pay lip service to innovation – they always do. In reality they are more focused on reducing their agents' footprints and improving performance, because those are costing them deals – not on the fact that they can't detect an Eskimo in Alaska.

Another factor is the total farce of anti-malware testing labs. It seems like another pops up every week, commissioned to say one vendor performs better than the others. Awesome. Granted, I was born skeptical, but these guys are not helping me believe in anything.

So what to do? Same as it ever was. Endpoint protection is one of many tactics that can help identify and eventually contain malware. Layers are still good. Though we do expect innovation over the next year, so keep your eyes open. There is a pony in there somewhere – it's just not clear which one it is. The rest will go down in the annals of security history as snake oil. Same as it ever was.

There is very little benefit in being early with these new products/companies right now, spending your time figuring out what really works. In other words, if I have an incremental $10, I'm spending it on monitoring and incident response technologies. But you already knew that. Prevention has (mostly) failed us. You know that too. Until some new anti-malware widget is vetted as making a difference (by people you trust), spend your time figuring out what went wrong. There is no lack of material there.


Cloud Security Q&A from the Field: Questions and Answers from the DC CCSK Class

One of the great things about running around teaching classes is all the feedback and questions we get from people actively working on all sorts of different initiatives. With the CCSK (cloud security) class, we find that a ton of people are grappling with these issues – some in active projects, others in various stages of deep planning. We don't want to lose this info, so we will blog some of the more interesting questions and answers we get in the field. I'll skip general impressions and trends today to focus on some specific questions people in last week's class in Washington, DC, were grappling with:

We currently use XXX Database Activity Monitoring appliance – is there any way to keep using it in Amazon EC2?

This is a tough one because it depends completely on your vendor. With the exception of Oracle (last time I checked – this might have changed), all the major Database Activity Monitoring vendors support server agents as well as inline or passive appliances. Adrian covered most of the major issues between the two in his Database Activity Monitoring: Software vs. Appliance paper. The main question for cloud (especially public cloud) deployments is whether the agent will work in a virtual machine/instance. Most agents use special kernel hooks that need to be validated as compatible with your provider's virtual machine hypervisor. In other words: yes, you can do it, but I can't promise it will work with your current DAM product and cloud provider. If your cloud service supports multiple network interfaces per instance, you can also consider deploying a virtual DAM appliance to monitor traffic that way, but I'd be careful with this approach and don't generally recommend it. Finally, there are more options for internal/private cloud, where you can even route the traffic back to a dedicated appliance if necessary – but watch performance if you do.

How can we monitor users connecting to cloud services over SSL?

This is an easy problem to solve – you just need a web gateway with SSL decoding capabilities. In practice, this means the gateway essentially performs a man-in-the-middle attack against your users. To make it work, you install the gateway appliance's certificate as a trusted root on all your endpoints. This doesn't work for remote users who aren't going through your gateway. This is a fairly standard approach for both web content security and Data Loss Prevention, but those of you just using URL filtering may not be familiar with it.

Can I use identity management to keep users out of my cloud services if they aren't on the corporate network?

Absolutely. If you use federated identity (probably SAML), you can configure things so users can only log into the cloud service if they are logged into your network. For example, you can configure Active Directory to use SAML extensions, then require SAML-based authentication for your cloud service. The SAML token/assertion will only be issued when the user logs into the local network, so they can't log in from another location. You can screw up this configuration by allowing persistent assertions (I'm sure Gunnar will correct my probably-wrong IAM vernacular). This approach will also work for VPN access (don't forget to disable split tunnels if you want to monitor activity).

What's the CSA STAR project?

STAR (Security, Trust & Assurance Registry) is a Cloud Security Alliance program where cloud providers perform and submit self-assessments of their security practices.

How can we encrypt big data sets without changing our applications?
This isn't a cloud-specific problem, but it does come up a lot in the encryption section. First, I suggest you check out our paper on encryption: Understanding and Selecting a Database Encryption or Tokenization Solution. The best cloud option is usually volume encryption for IaaS. You may also be able to use some other form of transparent encryption, depending on the specifics of your database and application. Some proxy-based in-the-cloud encryption solutions are starting to appear.

That's it from this class… we had a ton of other questions, but these stood out. As we teach more classes we'll keep posting, and I should get input from other instructors as they start teaching their own classes.
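Circling back to the federated identity question: as a rough illustration of the "persistent assertions" pitfall, here is a toy relying-party check that rejects assertions that are stale or carry an overly generous validity window. The field names and the five-minute threshold are invented for the example; real SAML validation also covers signatures, audience restrictions, and replay protection.

```python
from datetime import datetime, timedelta, timezone

MAX_ASSERTION_AGE = timedelta(minutes=5)  # illustrative threshold, not a standard value

def assertion_is_fresh(issue_instant, not_on_or_after):
    """Toy relying-party check: reject stale or long-lived assertions so that
    'authenticated to the corporate network' means 'authenticated recently'."""
    now = datetime.now(timezone.utc)
    if now >= not_on_or_after:
        return False                                     # expired
    if now - issue_instant > MAX_ASSERTION_AGE:
        return False                                     # issued too long ago
    if not_on_or_after - issue_instant > MAX_ASSERTION_AGE:
        return False                                     # validity window too generous
    return True

now = datetime.now(timezone.utc)
print(assertion_is_fresh(now - timedelta(minutes=1), now + timedelta(minutes=3)))  # True
print(assertion_is_fresh(now - timedelta(hours=8), now + timedelta(hours=16)))     # False: persistent assertion
```

If the service provider accepts the second kind of assertion, a user who authenticated at the office this morning can keep logging in from anywhere all day – which defeats the point of gating access on network presence.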


Security Management 2.0: Platform Evolution

Our motivation for launching the Security Management 2.0 research project lies in the general dissatisfaction with SIEM implementations – which in some cases have not delivered the expected value. The issues typically result from failure to scale, poor ease of use, excessive effort for care and feeding, or just customer execution failure. Granted, some of the discontent is clearly navel-gazing – parsing and analyzing log files as part of your daily job is boring, mundane, and error-prone work you'd rather not do. But dissatisfaction with SIEM is largely legitimate and has gotten worse, as system load has grown and systems have been subjected to additional security requirements, driven by new and creative attack vectors. This all spotlights the fragility and poor architectural choices of some SIEM and Log Management platforms, especially early movers. Given that companies need to collect more – not less – data, review and management just get harder. Exponentially harder.

This post is not meant to focus on user complaints – that doesn't help solve problems. Instead let's focus on the changes in SIEM platforms driving users to revisit their platform decisions. There are over 20 SIEM and Log Management vendors in the market, most of which have been at it for 5-10 years. Each vendor has evolved its products (and services) to meet customer requirements, as well as to provide some degree of differentiation against the competition. We have seen new system architectures to maximize performance, increase scalability, leverage hybrid deployments, and broaden collection via CEF and universal collection format support. Usability enhancements include capabilities for data manipulation; addition of contextual data via log/event enrichment; and more powerful tools for management, reporting, and visualization. Data analysis enhancements include expanding supported data types to dozens of variants for monitoring, correlating/alerting, and reporting on change controls; configuration, application, and threat data; content analysis (poor man's DLP); and user activity monitoring.

With literally hundreds of new features to comb through, it's important to recognize that not all innovation is valuable to you, and you should keep irrelevancies out of your analysis of the benefits of moving to a new platform. Just because the shiny new object has lots of bells and whistles doesn't mean they are relevant to your decision. Our research shows the most impactful enhancements have been in scalability, along with reduced storage and management costs. Specific examples include mesh deployment models – where each device provides full logging and SIEM functionality – moving real-time processing closer to the event sources. As we described in Understanding and Selecting SIEM/Log Management, the right architecture can deliver the trifecta of fast analysis, comprehensive collection/normalization/correlation of events, and single-point administration – but this requires a significant overhaul of early SIEM architectures. Every vendor meets the basic collection and management requirements, but only a few platforms do well at modern scale and scope. These architectural changes to enhance scalability and extend data types are seriously disruptive for vendors – they typically require a proverbial "brain transplant": an extensive rebuild of the underlying data model and architecture.
But the cost in time, manpower, and disrupted reliability was too high for some early market leaders – as a result some opted instead to innovate with sexy new bells and whistles which were easier and faster to develop and show off, but left them behind the rest of the market on real functionality. This is why we all too often see a web console, some additional data sources (such as identity and database activity data), and a plethora of quasi-useful feature enhancements tacked onto a limited-scalability centralized server: that option cost less and carried less vendor risk. It sounds trite, but it is easy to be distracted from the most important SIEM advancements – those that deliver on the core values of analysis and management at scale.

Speaking of scalability issues, coinciding with the increased acceptance (and adoption) of managed security services, we are seeing many organizations look at outsourcing their SIEM. Given the increased scaling requirements of today's security management platforms, making compute and storage more of a service provider's problem is very attractive to some organizations. Combined with the commoditization of simple network security event analysis, this has made outsourcing SIEM all the more attractive. Moving to a managed SIEM service also allows customers to save face by addressing the shortcomings of their current product without needing to acknowledge a failed investment. In this model, the customer defines the reports and security controls, and the service provider deploys and manages SIEM functions. Of course, there are limitations to some managed SIEM offerings, so it all gets back to what problem you are trying to solve with your SIEM and/or Log Management deployment.

To make things even more complicated, we also see hybrid architectures in early use, where a service provider does the fairly straightforward network (and server) event log analysis/correlation/reporting, while an in-house security management platform handles higher-level analysis (identity, database, application logs, etc.) and deeper forensic analysis. We'll discuss these architectures in more depth later in this series. But this Security Management 2.0 process must start with the problem(s) you need to solve. Next we'll talk about how to revisit your security management requirements, ensuring that you take a fresh look at the issues to make the right decision for your organization moving forward.
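Since analysis and correlation at scale keep coming up as the core value, here is a toy example of the kind of rule a SIEM evaluates continuously: flag a source that generates failed logins against several distinct hosts within a short window. The event fields, thresholds, and sample data are invented for illustration – a real platform normalizes dozens of data types and runs many such rules against far larger event streams.

```python
from collections import defaultdict

WINDOW_SECONDS = 300   # illustrative: 5-minute correlation window
DISTINCT_HOSTS = 3     # illustrative: alert threshold

def correlate_failed_logins(events):
    """events: dicts with 'ts' (epoch seconds), 'src', 'dst', 'outcome'."""
    failures = defaultdict(list)   # src -> [(ts, dst), ...]
    alerts = []
    for e in sorted(events, key=lambda e: e["ts"]):
        if e["outcome"] != "failure":
            continue
        hist = failures[e["src"]]
        hist.append((e["ts"], e["dst"]))
        # keep only events inside the sliding window
        hist[:] = [(t, d) for t, d in hist if e["ts"] - t <= WINDOW_SECONDS]
        if len({d for _, d in hist}) >= DISTINCT_HOSTS:
            alerts.append((e["src"], e["ts"]))
    return alerts

events = [
    {"ts": 100, "src": "10.1.1.9", "dst": "web01", "outcome": "failure"},
    {"ts": 160, "src": "10.1.1.9", "dst": "db01",  "outcome": "failure"},
    {"ts": 220, "src": "10.1.1.9", "dst": "vpn01", "outcome": "failure"},
]
print(correlate_failed_logins(events))   # [('10.1.1.9', 220)]
```

Trivial at three events, but keeping this kind of state across thousands of sources and millions of events per day is exactly where the architectural "brain transplant" discussed above matters.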


Friday Summary: August 19, 2011

Here's to the neighbors. I live in a rural area with a pretty low population density and a 1.5-acre minimum lot size. My closest neighbor is 60 feet away – most are 300 feet or more. The area is really quiet. Usually all you can hear are birds. You can see the Milky Way at night. On any given day I may see javelina, coyotes, horny toads, road runners, vultures, hawks, barn owls, cottontails, jackrabbits, ground squirrels, mice, scorpions, one of a half-dozen varieties of snake, and a dozen varieties of birds. If you like nature, it's a neat place to live.

I am very fortunate that the house closest to mine is owned by the world's best neighbor. That's not some coffee mug slogan – he's just cool! He always has some incredible project going on – from welding custom turbo brackets onto his friend's drag bike, to machining a custom suspension for his truck from raw blocks of steel. His wife's cool. And he has the same hobbies I do. He listens to the same radio stations. He drinks the same beer I do. If you need something he's there to help. When the tree blew over between our properties, he was there to help me prop it back up. When my wife's car got a flat during my last business trip, he put down his dinner and fixed it as if it was the evening's entertainment. Every week I drop by with a six pack and we sit in his gi-normous garage/machine shop, and talk about whatever.

Living next to the world's best neighbor was offset by three of the five other residents within shouting range being asshats. Yeah, I hate to say it, but they were. Quiet, keep-to-themselves folks, but highly disagreeable and dysfunctional. My mean neighbor – mean until he got cancer, then he got really nice just before he left – was foreclosed on after 2 years struggling with the bank. My really mean neighbor – and I mean the North American pit viper, cheat-little-children-out-of-their-lunch-money variety – died. Since snakey left his money to his kids, his lovely new wife could no longer afford the house, and was forced to sell to make ends meet. The neighbor behind me – let's call him packrat, because he's never seen a pile of junk he did not want to hoard – was also foreclosed on. After rummaging in his own trash for several months looking for scrap metal to sell, packrat smashed up cars, trailers, camper shells, ATVs, and construction supplies with his backhoe. Selling trash and trashing valuables is a special kind of mental illness. He even did me a favor and knocked over my trees by the property line, so I have a full view of the debris field.

While I am never happy to see people lose their homes, especially given the banking shenanigans going on all around us, I am in some small way a beneficiary. The bad neighbors are gone and the new neighbors are really nice! The people who replaced snakey are very pleasant. The neighbors across the street are wonderful – I helped them move some of their belongings and ended up talking and drinking beer till the sun went down. And while I am going to have to look at the junk pile behind me for a long time – nobody will be moving into the rat's nest for some time – no neighbor is better than a deranged one. It dawned on me that, surrounded by nice people, I am now the un-cool neighbor. Oh, well. I plan to throw a party for the block to welcome them – and hopefully they will overlook my snarky personality.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
  • Rick and Martin on NetSec Podcast.
  • Adrian's DR post on Database Auditing.
  • Mike's DR post on DefCon Kids.
  • Adrian quoted on PCI Council's guidance on tokenization.
  • Adrian quoted on tokenization.

Favorite Securosis Posts
  • Mike Rothman: Hammers and Homomorphic Encryption. Something about saying "homomorphic" makes me laugh. Yes, I'm a two year old…
  • Adrian Lane: Proxies and the Cloud (Public and Private). Yes, you can encrypt SaaS via proxy. It works but it's clunky. And fragile.
  • Rich: Security Management 2.0: Time to Replace Your SIEM? (new series). This is going to be an awesome series.
  • David Mortman: New White Paper: Tokenization vs. Encryption.
  • Dave Lewis: Nothing Changed: Black Hat's Impact on ICS Security.

Other Securosis Posts
  • Incite 8/17/2011: Back to School.
  • Friday Summary: August 12, 2011.

Favorite Outside Posts
  • Mike Rothman: Stop Coddling the Super-Rich. Buffett is the man. No, Rich, Warren. But if we tax the super wealthy more, they may not donate as much to the campaign funds.
  • Dave Lewis: The Three Laws Of Security Professionals.
  • Adrian Lane: 15 Years of Software Security: Looking Back and Looking Forward. I met Adam around this time – and used to pass that guide out to my programming team.
  • David Mortman: Nymwars: Thoughts on Google+.

Project Quant Posts
  • DB Quant: Index.
  • NSO Quant: Index of Posts.
  • NSO Quant: Health Metrics–Device Health.
  • NSO Quant: Manage Metrics–Monitor Issues/Tune IDS/IPS.
  • NSO Quant: Manage Metrics–Deploy and Audit/Validate.
  • NSO Quant: Manage Metrics–Process Change Request and Test/Approve.

Research Reports and Presentations
  • Tokenization vs. Encryption: Options for Compliance.
  • Security Benchmarking: Going Beyond Metrics.
  • Understanding and Selecting a File Activity Monitoring Solution.
  • Database Activity Monitoring: Software vs. Appliance.
  • React Faster and Better: New Approaches for Advanced Incident Response.

Top News and Posts
  • Spyeye Source 'Leaked'.
  • BART Blocks Cell Services. Anonymous says 'F-U'.
  • Beware of Juice-Jacking. Funny – but how many people forget these are just USB connectors?
  • New Attack on AES. AES is hardly doomed by this, but any progress beating the most popular symmetric cipher is important.
  • Persistent Flash Cookies.
  • Microsoft Security Program & Vulnerability Data Now Available.
  • German hacker collective expels OpenLeaks founder.
  • On immigration, a step in the right direction.
  • IT Admin Hacker Caught By McDonald's Purchase. Don't just read the headline on this one – look at


Security Management 2.0: Time to Replace Your SIEM? (new series)

Mike and I are launching our next blog series today, one we know is pretty timely from the conversations we have with organizations almost every day. The reality is that many organizations have spent millions, and years, trying to get productivity out of their SIEM – with mediocre results. Combined with a number of the large players being acquired by mega IT companies and taking their eyes off the ball a bit, most customers need to start asking themselves some key questions. Is it time? Are you waving the white flag? Has your SIEM failed to perform to expectations despite your significant investment?

If you are questioning whether your existing product can get the job done, you are not alone. You likely have some battle scars from the difficulty of managing, scaling, and actually doing something useful with SIEM. Given the rapid evolution of SIEM/Log Management offerings – and the evolution of requirements with new application models and this cloud thing – you should be wondering whether there is a better, easier, and less expensive solution to meet your needs.

As market watchers, we don't have to be brain surgeons to notice the smaller SIEM and Log Management vendors innovating their way to relevance – with new deployment models, data storage practices, analysis techniques, and security features. Some vendors are actively evolving their platforms – adding new capabilities on top of what they have, evolving from SIEM features into broader security management suites. Yet there are others in the space basically "milking the cash cow" by collecting maintenance revenue, while performing the technical equivalent of "putting lipstick on a pig". (Yes, we just used two farm animal analogies in one sentence.) You may recognize this phenomenon as the unified dashboard approach for hiding obsolescence. Or maybe "Let's buy another start-up with a shiny object!" … to distract customers from what we haven't done with our core product strategy. Let's face it – public company shareholders love milking cash cows, while minimizing additional research and development costs. Security companies (especially those acquired by mega IT) have no problem with this model. Meanwhile customers tend to end up holding the bag, with few options for getting the innovation they need.

From our alarmingly consistent conversations with SIEM customers, we know it's time to focus on this dissatisfaction and open the SIEM replacement process up to public scrutiny. Don't be scared – in some ways SIEM replacement can be easier than the initial installation (yes, you can breathe a sigh of relief), but only if you leverage your hard-won knowledge and learn from your mistakes. Security Management 2.0: Time to Replace Your SIEM? will take a brutally candid look at triggers for considering a new security management platform, walk through each aspect of the decision, and present a process to migrate – if the benefits of moving outweigh the risks. In this series we will cover:

  • Platform Evolution: We will discuss what we see in terms of new features, platform advancements, and deployment models that improve scalability and performance. We'll also cover the rise of managed services as an outsourcing option, and hybrid deployments.
  • Requirements: We'll examine the evolution of customer requirements in the areas of security, compliance, and operations management. We will also cover some common customer complaints, but to avoid descending into a customer gripe session, we'll also go back and look at why some of you bought SIEM to begin with.
  • Platform Evaluation: We'll walk through an in-depth examination of your current environment and its effectiveness. This will be a candid examination of what you have today – considering both what works and an objective assessment of what you're unhappy about.
  • Decision Process: We'll help you re-evaluate your decisions by re-examining original requirements and helping remove bias from the process as you look at shiny new features.
  • Selection Process: This is an in-depth look at how to tell the difference between various vendors' capabilities, and at which areas are key for your selection. Every vendor will tell you they are "class leading" and "innovative", but most are not. We'll help you cut through the BS and determine what's what. We will also define a set of questions and evaluation criteria to help prioritize what's important and how to weigh your decision.
  • Negotiation: You will be dealing with an incumbent vendor, and possibly negotiating with a new provider. We'll help you factor in the reality that your incumbent vendor will try to save the business, and how to leverage that as you move to solidify a new platform.
  • Migration: If you are moving to something else, how do you get there? We'll provide a set of steps for migration, examining how to manage multiple products during the transition.

Don't assume that SIEM replacement is always the answer – that's simply not the case. In fact, after this analysis you may feel much better about your original SIEM purchase, with a much better idea (even a plan!) of how to increase usage and success. But we believe you owe it to yourself and your organization to ask the right questions and to do the work to get those answers. It's time to slay the sacred cow of your substantial SIEM investment, and figure out objectively what offers you the best fit moving forward. This research series is designed to help.


New White Paper: Tokenization vs. Encryption

I am proud to announce the availability of our newest white paper, Tokenization vs. Encryption: Options for Compliance. The paper was written to close some gaps in our existing tokenization research coverage. I believe it is particularly important for two reasons. First, I was unable to find a critical examination of tokenization's suitability for compliance. There are many possible applications of tokenization, but some of the claimed uses are not practical. Second, I wanted to dispel the myth that tokenization is a replacement technology for encryption, when in fact it's a complementary solution that – in some cases – makes regulatory compliance easier.

When I was writing the paper, the PCI Council had not yet officially accepted tokenization. As of August 12, 2011, the Council does offer guidance (if not full acceptance) on using tokenization as a suitable control for payment data. However, the guidance casts doubt on the suitability of hashing and format-preserving encryption as methods to improve security and reduce the scope of an audit – which is consistent with this paper. Please review the PCI official announcement (PDF) for additional information.

The paper discusses the use of tokenization for payment data, personal information, and health records. It was written to address questions regarding the business applicability of tokenization, and is thus less technical than most of our research papers. I hope you enjoy reading it as much as I enjoyed writing it. A special thanks to Prime Factors for sponsoring this research!

Download: Tokenization vs. Encryption: Options for Compliance (PDF)
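For those who haven't read the paper yet, a toy sketch of why tokenization and encryption are different animals: a token is a random surrogate whose only link to the original value is a lookup in a protected vault, while ciphertext can always be reversed by anyone holding the key. The vault structure and token format below are invented for illustration, not a description of any product.

```python
import secrets

class ToyTokenVault:
    """Illustrative token vault: random surrogates, with the mapping kept server-side."""
    def __init__(self):
        self._vault = {}   # token -> original value (collisions ignored in this toy)

    def tokenize(self, pan: str) -> str:
        # Random digits of the same length; no mathematical relationship to the PAN.
        token = "".join(secrets.choice("0123456789") for _ in pan)
        self._vault[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        # Only possible with access to the vault -- there is no key to steal.
        return self._vault[token]

vault = ToyTokenVault()
t = vault.tokenize("4111111111111111")
print(t)                    # e.g. '8302945718264930' -- useless to an attacker by itself
print(vault.detokenize(t))  # the original value, recoverable only via the vault
```

That difference – nothing to reverse outside the vault – is why tokenization can complement encryption rather than replace it, which is the paper's central argument.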


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.