Securosis

Research

Friday Summary: September 2, 2011

I was reading Martin McKeay’s post Fighting a Bad Habit. Martin makes a dozen or so points in the post – and shares some career angst – but there is a key theme that really resonates with me. Most technology lifers I know have their own sense of self-worth tied up in what they are able to contribute professionally. Without the feeling of building, contributing, or making things better, the job is not satisfying. In college a close friend taught me what his father taught him: any successful career should include three facets. You should do research to stay ahead in your field. You should practice your craft to keep current. You should teach those around you what you know. I find these things make me happy and make me feel productive. I have been fortunate that over my career there has been balance in these three areas. The struggle is that the balance is seldom entirely within a single job. Usually any job role is dominated by one of the three, so I choose another role or job that allows me to move on to the next leg of the stool. I do know I am happiest when I get to do all three, but the windows of time when they are in balance are vanishingly small.

Another point of interest for me in Martin’s post was the recurring theme that – as security experts – we need to get outside the ‘security echo chamber’: the 6,000 or so dedicated security practitioners around the world who know their stuff and struggle in futility to raise the awareness of those around them to security issues. And the 600 or so experts among them are seldom interested in the mundane – only the cutting edge – which keeps them even further from the realm of the average IT practitioner. It has become clear to me over the last year that this is a self-generated problem, and a byproduct of being in an industry few noticed until recently. We are simply tired of having the same conversations. For example, I have been talking about information-centric security since 1997.
I have been actively writing about database security for a decade. On those subjects it feels as if every sentence I write, I have written before. Every thought is a variation on a long-running theme. Frankly, it’s tiring. It’s even worse when I watch Rich as he struggles with waning passion for DLP. I won’t mince words – I’ll come out and say it: Rich knows more about DLP than anyone I have ever met. Even the CTOs of the vendor companies – while they have a little more technical depth on their particular products – lack Rich’s understanding of all the available products, deployment models, market conditions, buying centers, and business problems DLP can realistically solve. And we have a heck of a time getting him to talk about it because he has been talking about it for 8 years, over and over again.

The problem is that what is old hat for us is just becoming mainstream: DAM, DLP, and other security technologies. So when Martin or Rich or I complain about having the same conversations over and over, well, tough. Suck it up. There are a lot of people out there who are not part of the security echo chamber, who want to learn and understand. It’s not sexy and it ain’t getting you a speaking slot at DefCon, but it’s beneficial to a much larger IT audience. I guess this is that third facet of a successful career. Teach. It’s the education leg of our jobs and it needs to be done. With this blog – and Martin’s – we have the ability to teach others what we know, to a depth not possible with Twitter and Facebook. Learning you have an impact on a larger audience is – and should be – a reward in and of itself.

On to the Summary: Favorite Securosis Posts Adrian Lane: Detecting and Preventing Data Migrations to the Cloud. Mike Rothman: Fact-Based Network Security: Compliance Benefits. Theory is good. Applying theory to practice is better. That’s why I like this series. Application of many of the metrics concepts we’ve been talking about for years. Check out all the posts.
Rich: Since our posting is a bit low this week, I dug into the archives for The Data Breach Triangle. Mostly since Ed Bellis cursed me out for it earlier this week and I never learned why. Other Securosis Posts Security Management 2.0: Platform Evaluation, Part 1. Incite 8/31/2011: The Glamorous Life. Detecting and Preventing Data Migrations to the Cloud. Fact-Based Network Security: Operationalizing the Facts. The Mobile App Sec Triathlon. Friday Summary (Not Too Morbid Edition): August 26, 2011. Favorite Outside Posts Mike Rothman: Preparing to Fire an Executive. Ultra-VC Ben Horowitz provides a guide to getting rid of a bad fit. These principles apply whether you’ve got to take out the CEO or a security admin. If you manage people, read this post. Now. Adrian and Gunnar: Those Who Can’t Do, Audit. Mary Ann calls out SASO – er, OK, she called out Veracode. And Chris Wysopal fired back with Musings on Custer’s Last Stand. Not taking a side here as both are about 80% right in what they are saying, but this back and forth is a fascinating read. On a different note, check out MAD’s book recommendations – they rock! Rich: Veracode defends themselves from an Oracle war of words. I’m with Chris on this one… Oracle has yet to build the track record to support this sort of statement. Other companies have. Project Quant Posts DB Quant: Index. NSO Quant: Index of Posts. NSO Quant: Health Metrics–Device Health. NSO Quant: Manage Metrics–Monitor Issues/Tune IDS/IPS. Research Reports and Presentations Tokenization vs. Encryption: Options for Compliance. Security Benchmarking: Going Beyond Metrics. Understanding and Selecting a File Activity Monitoring Solution. Database Activity Monitoring: Software vs. Appliance. Top News and Posts Mac


Security Management 2.0: Platform Evaluation, Part 2

In the second half of Platform Evaluation for Security Management 2.0, we’ll cover evaluating other SIEM solutions. At this point in the process you have documented your requirements, and rationally evaluated your current SIEM platform to determine what’s working and what’s not. This step is critical because a thorough understanding of your existing platform’s strengths and weaknesses is the yardstick against which all other options will be measured. As you evaluate new platforms, you can objectively figure out whether it’s time to move on and select another platform. Again, at this point no decision has been made. You are doing your homework – no more, no less.

We’ll walk you through the process of evaluating other security management platforms, in the context of your key requirements and your incumbent’s deficiencies. There are two major difficulties during this phase of the process. First, you need to get close to some of the other SIEM solutions in order to dig in and determine what the other SIEM providers legitimately deliver, and what is marketing fluff. Second, you’re not exactly comparing apples to apples. Some new platforms offer advantages because they use different data models and deployment options, which demands careful analysis of how a new tool can and should fit into your IT environment and corporate culture. Accept that some capabilities require you to push into new areas that are likely outside your comfort zone.

Let’s discuss the common user complaints – and associated solutions – which highlight differences in function, architecture, and deployment. The most common complaints we hear include: the SIEM does not scale well enough, we need more and better data, the product needs to be easier to use while providing more value, and we need to react faster given the types of attacks happening today.
Scale: With the ever-growing number of events to monitor, it’s simply not enough to buy bigger and/or more boxes to handle the exponential growth in event processing. Some SIEM vendors tried segregating reporting and alerting from collection and storage to offload processing requirements, which enables tuning each server to its particular role. This was followed by tiered deployment models, where log management collected the data (to meet scalability needs) and delivered a heavily filtered event stream to the SIEM to reduce the load it needed to handle. But this is a stopgap. New platforms address many of the architectural scaling issues with purpose-built data stores that provide fully distributed processing. These platforms can flexibly divide event processing/correlation, reporting, and forensic analysis. For more information on SIEM scaling architectures, consult our Understanding and Selecting a SIEM/Log Management report.

Data: Most platforms continue to collect data from an increasing number of devices, but many fail in two areas. First, they have failed to climb out of the network and server realm to start monitoring applications in more depth. Second, many platforms suffer from over-normalization – literally normalizing the value right out of collected data. For many platforms, normalization is critical to address scalability concerns. This, coupled with poorly executed correlation and enrichment, produces data of limited value for analysis and reporting – which defeats the purpose. For example, if you need detailed information for business analytics, you’ll need new agents on business systems – collecting application, file system, and database information that is not included in syslog. Like it or not, sticking to syslog alone is no longer an option. The format of this data is non-standard, the important aspects of an application event or SQL query are not easily extracted, and they vary by application.
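To make the over-normalization point concrete, here is a minimal sketch in Python. The event format, field names, and injection patterns are all hypothetical illustrations – real SIEM schemas and real injection detection are far richer – but it shows how a normalized record can look clean while the raw query it summarizes does not:

```python
import re

# Hypothetical normalized event, as a typical SIEM might store it.
normalized = {
    "source": "app01",
    "event_type": "db_query",
    "user": "webapp",
    "object": "customers",
    "status": "success",  # the raw SQL text was discarded during normalization
}

# The original transaction, as captured by an agent that keeps the full query.
raw_query = "SELECT * FROM customers WHERE id = '1' OR '1'='1' --'"

# A deliberately crude injection heuristic -- illustrative only.
INJECTION_PATTERNS = [
    r"'\s*OR\s*'1'\s*=\s*'1",  # classic tautology
    r"--",                      # trailing comment to swallow the rest of the query
    r";\s*DROP\s+TABLE",        # piggybacked statement
]

def looks_suspicious(sql):
    """Return True if any injection pattern appears in the SQL text."""
    return any(re.search(p, sql, re.IGNORECASE) for p in INJECTION_PATTERNS)

# The normalized record has nothing left to inspect...
print("normalized fields flag anything:",
      any(looks_suspicious(str(v)) for v in normalized.values()))
# ...but the raw query clearly does.
print("raw query flags injection:", looks_suspicious(raw_query))
```

The point is not the pattern matching – it’s that once the raw SQL has been normalized away, there is nothing left for any analysis engine, however clever, to inspect.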
At times, you might feed these events through a typical data normalization routine and see nothing out of the ordinary. But if you examine the original transaction and dig into the actual query, you might find SQL injection. Better data means both broader data collection options and more effective processing of the collected data.

Easier: This encompasses several aspects: automation of common tasks, (real) centralized management, better visualization, and analytics. Rules that ship out of the box have traditionally been immature (and mostly useless), as they were written by tech companies with little understanding of your particular requirements. Automated reporting and alerting features got a black eye because they returned minimally useful information, requiring extensive human intervention to comb through thousands of false positives. The tool was supposed to help – not create even more work. Between better data collection, more advanced analytics engines, and easier policy customization, the automation capabilities of SIEM platforms have evolved quickly. Centralized management is not just a reporting dashboard across several products. We call that integration on the screen. To us, centralized management means both reporting events and the ability to distribute rules from a central policy manager and tune them on an enterprise basis. This is something most products cannot do, but it is very important in distributed environments where you want to push processing closer to the point of attack. Useful visualization – not just shiny pie charts, but real graphical representations of trends, meaningful to the business – can help make decisions easier.

Speed: Collect, move the data to a central location, aggregate, normalize, correlate, and only then process – that is a somewhat antiquated SIEM model. Welcome to 2002.
Newer SIEMs inspect events and perform some pre-processing prior to storage to ensure near-real-time analysis, as well as performing post-correlation analysis. These actions are computationally expensive, so recognize these advancements are predicated on an advanced product architecture and an appropriate deployment model. As mentioned in the data section, this requires SIEM deployment (analysis, correlation, etc.) to be pushed closer to the collector nodes – and in some cases even into the data collection agent. Between your requirements and the SIEM advances you need to focus on, you are ready to evaluate other platforms. Here’s a roadmap: Familiarize yourself with SIEM vendors: Compare and contrast their capabilities in the context of your requirements. There are a couple dozen vendors, but it’s fairly easy to eliminate many by reviewing their product marketing materials. You use a “magic


Security Management 2.0: Platform Evaluation, Part 1

To understand the importance of picking a platform, as opposed to a product, when discussing Security Management 2.0, let’s draw a quick contrast between what we see when talking to customers of either Log Management or SIEM. Most of the Log Management customers we speak with are relatively happy with their products. They chose a log-centric offering based on limited use cases – typically compliance-driven and requiring only basic log collection and reporting. These products keep day-to-day management overhead low, and if they support the occasional forensic audit, customers are generally happy. Log Management is an important – albeit basic – business tool. Think of it like buying a can opener – it needs to perform a basic function and should always perform as expected. Customers don’t want their can opener to sharpen knives, tell time, or let the cat out – they just want to open cans. It’s not that hard. Log Management benefits from its functional simplicity – and even more from relatively modest expectations.

Contrast that against conversations we have with SIEM customers. They have been at it for 5 years (maybe more), and as a result the scope of their installations is massive – in terms of both infrastructure and investment. They grumble about the massive growth in event collection driven by all these new devices. They need to collect nearly every event type, and often believe they need real-time response. The product had better be fast and provide detailed forensic audits. They depend on the compliance reports for their non-technical audience, along with detailed operational reports for IT. SIEM customers have a daily yin vs. yang battle between automation and generic results; between efficiency and speed; between easy and useful. It’s like a can opener attached to an entire machine shop, so everything is a lot more complicated. You can open a can, but first you have to fabricate it from sheet metal.
We use this analogy because it’s important to understand that there are a lot of moving parts in security management, and setting appropriate expectations is probably more important than any specific technical feature or function. So your evaluation of whether to move to a new platform needs to stay laser focused on the core requirements to be successful. In fact, the key to the entire decision-making process is understanding your requirements, as we outlined in the last post. We keep harping on this because it’s the single biggest determinant of the success of your project.

When it comes to evaluating your current platform, you need to think about the issue from two perspectives, so we will break this discussion into two posts. First is the formal evaluation of how well your platform addresses your current and foreseeable requirements. This is necessary both to quantify the critical features you depend on and to identify significant deficiencies. A side benefit is that you will be much better informed if you do decide to look for a replacement. Second, we will look at some of the evolving use cases and the impact of newer platforms on operations and deployment – both good and bad. Just because another vendor offers more features and performance does not mean it’s worth replacing your SIEM. The grass is not always greener on the other side. The former is critical for the decision process later in this series; the latter is critical for understanding the ramifications of replacement.

The first step in the evaluation process is to use the catalog of requirements you have built to critically assess how well the current SIEM platform meets your needs. This means spelling out each business function, how critical it is, and whether the current platform gets it done. You’ll need to discuss these questions with stakeholders from operations, security, compliance, and any other organizations that participate in the management of the SIEM or take advantage of it.
You cannot make this decision in a vacuum, and lining up support early in the process will pay dividends later on. Trust us on that one. Operations will be the best judge of whether the platform is easy to maintain and how straightforward it is to implement new policies. Security will have the best understanding of whether forensic auditing is adequate, and compliance teams are the best source of information on the suitability of reports for preparing for an audit. Each audience provides a unique perspective on the criticality of each function, and the effectiveness of the current platform.

In some cases, you will find that the incumbent platform flat-out does not fill a requirement – that makes the analysis pretty easy. In other cases the system works perfectly, but is a nightmare in terms of maintenance and care & feeding for any system or rule changes. In most cases you will find that performance is less than ideal, but it’s not clear what that really means, because the system could always be faster when investigating a possible breach. It may turn out the SIEM functions as desired, but simply lacks the capacity to keep up with all the events you need to collect, or takes too long to generate actionable reports. Act like a detective, collecting these tidbits of information, no matter how small, to build the story of the existing SIEM platform in your environment. This information will come into play later when you weigh options, and we recommend using a format that makes it easy to compare and contrast issues. One method is a simple table of requirements, based on the minimum attributes you should consider. Security, compliance, management, integration, reporting, analysis, performance, scalability, correlation, and forensic analysis are all areas you need to evaluate in terms of your revised requirements. Prioritization of existing and desired features helps streamline the analysis.
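As a sketch of the kind of tracking format described above – every requirement, weight, and note below is a hypothetical example, not a recommendation – a simple weighted scorecard might look like:

```python
# Hypothetical requirements scorecard for evaluating an incumbent SIEM.
# Criticality weights, requirements, and notes are illustrative assumptions.
requirements = [
    # (requirement,            criticality 1-5, incumbent meets it, notes)
    ("Compliance reporting",   5, True,  "PCI reports used quarterly"),
    ("Forensic audit detail",  4, True,  "Adequate, but queries are slow"),
    ("Event collection scale", 5, False, "Drops events at peak load"),
    ("Rule maintenance",       3, False, "Every change needs vendor support"),
]

def weighted_score(reqs):
    """Share of criticality-weighted requirements the incumbent meets."""
    total = sum(weight for _, weight, _, _ in reqs)
    met = sum(weight for _, weight, ok, _ in reqs if ok)
    return met / total

score = weighted_score(requirements)
print(f"Incumbent meets {score:.0%} of weighted requirements")  # 9/17, about 53%
```

The numbers matter less than the discipline: each row forces a stakeholder conversation about how critical a function is and whether the incumbent actually delivers it.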
We reiterate the importance of staying focused on critical items to avoid “shiny object syndrome” driving you to select the pretty new thing, perhaps ignoring a cheap dull old saw that gets the work done. As we mentioned, evaluating


The Mobile App Sec Triathlon

A quick announcement for those of you interested in Mobile Application Security: Our very own Gunnar Peterson is putting on a 3-day class with Ken van Wyk this coming November. The Mobile App Sec Triathlon will provide a cross-platform look at mobile application security issues, and spotlight critical areas of concern. The last two legs of the Triathlon cover specific areas of Android and iOS security that are commonly targeted by attackers. You’ll be learning from some of the best – Ken is well known for his work in secure coding, and Gunnar is one of the world’s best at Identity Management. Classes will be held at the eBay/PayPal campus in San Jose, California. Much more information is on the web site, including a picture of Gunnar with his ‘serious security’ face, so check it out. If you have specific questions or want to make sure specific topics are covered during the presentation, go ahead and email info@mobileappsectriathlon.com.


Spotting That DAM(n) Fake

I awoke at 2:30am to a 90-degree bedroom. Getting up to discover why the air conditioning was not working, I found that the dog had pooped on my couch. Neatly in the corner – perhaps hoping I would not notice. Depositing the aforementioned ‘present’ in the garbage can, I almost stepped on both a bark scorpion and a millipede – eyeing one another suspiciously – just outside the garage door. After a while, air conditioning on and couch thoroughly scrubbed, I returned to bed only to find my wife had laid claim to all the covers and pillows. Since I was up, what the heck – I made coffee, ran the laundry, and baked muffins while the sun came up. I must admit I started work today with a jaundiced eye, and a strong desire to share some of my annoyance publicly.

As part of some research work I am doing, I was looking at the breadth of functions from a couple different vendors in different security markets. In the process, I noticed many firms have decided Database Activity Monitoring (DAM) is sexy as hell, and are advertising that capability as a core part of their various value propositions. The only problem is that many of the vendors I reviewed don’t actually offer DAM. I went back to my briefing notes and, sure enough, what’s advertised does not match actual functionality. Imagine that! A vendor jumping on a hot market with some vapor. Today I thought at least someone should benefit from my sour mood, so I want to share my quick and dirty tips on how to spot fake DAM.

First, as a reminder, here is the definition of DAM that Rich came up with 5 years ago: Database Activity Monitors capture and record, at a minimum, all Structured Query Language (SQL) activity in real time or near real time, including database administrator activity, across multiple database platforms; and can generate alerts on policy violations. So how do you spot a fake?
1. If the product does not have the option of a kernel agent, memory scanner, or some equivalent way to collect all SQL activity – either on the server or inside the database – the product is not DAM.
2. If the product does not store queries – along with their response codes – for a minimum of 30 days, the product is not DAM.
3. If the product is blocking activity without understanding the FROM clause, the WHERE clause, or several query- and metadata-specific attributes, the product is not DAM.
4. If the vendor claims ‘deep packet inspection’ is equivalent to DAM, they are wrong. That’s not DAM either. Do us a favor and call them on it. They probably aren’t even doing deep packet inspection, but that’s a different problem.

IDS, IPS, DLP, NetFlow analysis, and other technologies can provide a subset of the DAM analysis capabilities, but they are not DAM. Use these four checks to see who is telling you the truth. Remember, we are just talking about the basics here – not the more advanced and esoteric features that real DAM vendors have included over the years. Now I am off to the DMV – I figure that’s just the place for my current demeanor to fit right in.
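To make the third check concrete, here is a toy Python sketch of why clause-level understanding matters for blocking decisions. The regex ‘parser’, table names, and policy are hypothetical – real DAM products use full SQL grammars per database platform – but it shows the context a packet-level pattern match cannot supply:

```python
import re

def parse_query(sql):
    """Toy SQL parser: pull out the FROM table and WHERE predicate.

    Real DAM products parse full SQL grammars per database platform;
    this regex sketch only illustrates why clause-level context matters.
    """
    frm = re.search(r"\bFROM\s+([\w.]+)", sql, re.IGNORECASE)
    where = re.search(r"\bWHERE\s+(.*)$", sql, re.IGNORECASE)
    return {
        "table": frm.group(1) if frm else None,
        "predicate": where.group(1).strip() if where else None,
    }

def should_block(sql, protected_tables):
    """Block only when the query targets a protected table with no
    WHERE predicate at all -- context a packet-level grep cannot supply."""
    q = parse_query(sql)
    return q["table"] in protected_tables and q["predicate"] is None

# A naive 'deep packet inspection' rule grepping for the table name
# would flag both of these; clause-aware analysis blocks only the first.
print(should_block("SELECT * FROM salaries", {"salaries"}))                       # True
print(should_block("SELECT name FROM salaries WHERE emp_id = 42", {"salaries"}))  # False
```

A network device matching byte patterns sees the same table name in both queries; only something that actually understands the clauses can tell a bulk dump from a routine lookup.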


Security Management 2.0: Platform Evolution

Our motivation for launching the Security Management 2.0 research project lies in the general dissatisfaction with SIEM implementations – which in some cases have not delivered the expected value. The issues typically result from failure to scale, poor ease of use, excessive effort for care and feeding, or just customer execution failure. Granted, some of the discontent is clearly navel-gazing – parsing and analyzing log files as part of your daily job is boring, mundane, and error-prone work you’d rather not do. But dissatisfaction with SIEM is largely legitimate, and it has gotten worse as system load has grown and systems have been subjected to additional security requirements, driven by new and creative attack vectors. This all spotlights the fragility and poor architectural choices of some SIEM and Log Management platforms, especially early movers. Given that companies need to collect more – not less – data, review and management just get harder. Exponentially harder.

The point of this post is not to dwell on user complaints – that doesn’t help solve problems. Instead let’s focus on the changes in SIEM platforms driving users to revisit their platform decisions. There are over 20 SIEM and Log Management vendors in the market, most of which have been at it for 5-10 years. Each vendor has evolved its products (and services) to meet customer requirements, as well as to provide some degree of differentiation against the competition. We have seen new system architectures to maximize performance, increase scalability, leverage hybrid deployments, and broaden collection via CEF and universal collection format support. Usability enhancements include capabilities for data manipulation; addition of contextual data via log/event enrichment; and more powerful tools for management, reporting, and visualization.
Data analysis enhancements include expanding supported data types to include dozens of variants for monitoring, correlating/alerting, and reporting on change controls; configuration, application, and threat data; content analysis (poor man’s DLP); and user activity monitoring. With literally hundreds of new features to comb through, it’s important to recognize that not all innovation is valuable to you, and you should keep irrelevancies out of your analysis of the benefits of moving to a new platform. Just because the shiny new object has lots of bells and whistles doesn’t mean they are relevant to your decision.

Our research shows the most impactful enhancements have been in scalability, along with reduced storage and management costs. Specific examples include mesh deployment models – where each device provides full logging and SIEM functionality – moving real-time processing closer to the event sources. As we described in Understanding and Selecting SIEM/Log Management, the right architecture can deliver the trifecta of fast analysis, comprehensive collection/normalization/correlation of events, and single-point administration – but this requires a significant overhaul of early SIEM architectures. Every vendor meets the basic collection and management requirements, but only a few platforms do well at modern scale and scope.

These architectural changes to enhance scalability and extend data types are seriously disruptive for vendors – they typically require a proverbial “brain transplant”: an extensive rebuild of the underlying data model and architecture. The cost in time, manpower, and disrupted reliability was too high for some early market leaders – as a result, some opted instead to innovate with sexy new bells and whistles which were easier and faster to develop and show off, but left them behind the rest of the market on real functionality.
This is why we all too often see a web console, some additional data sources (such as identity and database activity data), and a plethora of quasi-useful feature enhancements tacked onto a limited-scalability centralized server: that option costs less and carries less vendor risk. It sounds trite, but it is easy to be distracted from the most important SIEM advancements – those that deliver on the core values of analysis and management at scale.

Speaking of scalability issues: coinciding with the increased acceptance (and adoption) of managed security services, we are seeing many organizations look at outsourcing their SIEM. Given the increased scaling requirements of today’s security management platforms, making compute and storage more of a service provider’s problem is very attractive to some organizations. Combined with the commoditization of simple network security event analysis, this has made outsourcing SIEM all the more attractive. Moving to a managed SIEM service also allows customers to save face by addressing the shortcomings of their current product without needing to acknowledge a failed investment. In this model, the customer defines the reports and security controls, and the service provider deploys and manages SIEM functions. Of course, there are limitations to some managed SIEM offerings, so it all gets back to what problem you are trying to solve with your SIEM and/or Log Management deployment.

To make things even more complicated, we also see hybrid architectures in early use, where a service provider does the fairly straightforward network (and server) event log analysis/correlation/reporting, while an in-house security management platform handles higher-level analysis (identity, database, application logs, etc.) and deeper forensic analysis. We’ll discuss these architectures in more depth later in this series. But this Security Management 2.0 process must start with the problem(s) you need to solve.
Next we’ll talk about how to revisit your security management requirements, ensuring that you take a fresh look at the issues to make the right decision for your organization moving forward.


Friday Summary: August 19, 2011

Here’s to the neighbors. I live in a rural area with a pretty low population density and a 1.5-acre lot minimum. My closest neighbor is 60 feet away – most are 300 feet or more. The area is really quiet. Usually all you can hear are birds. You can see the Milky Way at night. On any given day I may see javelina, coyotes, horny toads, road runners, vultures, hawks, barn owls, cottontails, jackrabbits, ground squirrels, mice, scorpions, one of a half-dozen varieties of snake, and a dozen varieties of birds. If you like nature, it’s a neat place to live.

I am very fortunate that the house closest to mine is owned by the world’s best neighbor. That’s not some coffee mug slogan – he’s just cool! He always has some incredible project going on – from welding custom turbo brackets to his friend’s drag bike, to machining a custom suspension for his truck from raw blocks of steel. His wife’s cool. And he has the same hobbies I do. He listens to the same radio stations. He drinks the same beer I do. If you need something he’s there to help. When the tree blew over between our properties, he was there to help me prop it back up. When my wife’s car got a flat during my last business trip, he put down his dinner and fixed it as if it was the evening’s entertainment. Every week I drop by with a six pack and we sit in his gi-normous garage/machine shop, and talk about whatever.

Living next to the world’s best neighbor was offset by three of the five other residents within shouting range being asshats. Yeah, I hate to say it, but they were. Quiet, keep-to-themselves folks, but highly disagreeable and dysfunctional. My mean neighbor – mean until he got cancer, then he got really nice just before he left – was foreclosed on after 2 years struggling with the bank. My really mean neighbor – and I mean the North American pit viper cheat-little-children-out-of-their-lunch-money variety – died.
Since snakey left his money to his kids, his lovely new wife could no longer afford the house, and was forced to sell to make ends meet. The neighbor behind me – let’s call him packrat, because he’s never seen a pile of junk he did not want to hoard – was also foreclosed on. After rummaging in his own trash for several months looking for scrap metal to sell, packrat smashed up cars, trailers, camper shells, ATVs, and construction supplies with his backhoe. Selling trash and trashing valuables is a special kind of mental illness. He even did me a favor and knocked over my trees by the property line so I have a full view of the debris field. While I am never happy to see people lose their homes, especially given the banking shenanigans going on all around us, I am in some small way a beneficiary. The bad neighbors are gone and the new neighbors are really nice! The people who replaced snakey are very pleasant. The neighbors across the street are wonderful – I helped them move some of their belongings and ended up talking and drinking beer till the sun went down. And while I am going to have to look at the junk pile behind me for a long time – nobody will be moving into the rat’s nest for some time – no neighbor is better than a deranged one. It dawned on me that, surrounded by nice people, I am now the un-cool neighbor. Oh, well. I plan to throw a party for the block to welcome them – and hopefully they will overlook my snarky personality. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
  • Rich and Martin on NetSec Podcast.
  • Adrian’s DR post on Database Auditing.
  • Mike’s DR post on DefCon Kids.
  • Adrian quoted on PCI Council’s guidance on tokenization.
  • Adrian quoted on tokenization.

Favorite Securosis Posts
  • Mike Rothman: Hammers and Homomorphic Encryption. Something about saying “homomorphic” makes me laugh. Yes, I’m a two year old…
  • Adrian Lane: Proxies and the Cloud (Public and Private). Yes, you can encrypt SaaS via proxy. It works but it’s clunky. And fragile.
  • Rich: Security Management 2.0: Time to Replace Your SIEM? (new series). This is going to be an awesome series.
  • David Mortman: New White Paper: Tokenization vs. Encryption.
  • Dave Lewis: Nothing Changed: Black Hat’s Impact on ICS Security.

Other Securosis Posts
  • Incite 8/17/2011: Back to School.
  • Friday Summary: August 12, 2011.

Favorite Outside Posts
  • Mike Rothman: Stop Coddling the Super-Rich. Buffett is the man. No, Rich, Warren. But if we tax the super wealthy more, they may not donate as much to the campaign funds.
  • Dave Lewis: The Three Laws Of Security Professionals.
  • Adrian Lane: 15 Years of Software Security: Looking Back and Looking Forward. I met Adam around this time – and used to pass that guide out to my programming team.
  • David Mortman: Nymwars: Thoughts on Google+.

Project Quant Posts
  • DB Quant: Index.
  • NSO Quant: Index of Posts.
  • NSO Quant: Health Metrics–Device Health.
  • NSO Quant: Manage Metrics–Monitor Issues/Tune IDS/IPS.
  • NSO Quant: Manage Metrics–Deploy and Audit/Validate.
  • NSO Quant: Manage Metrics–Process Change Request and Test/Approve.

Research Reports and Presentations
  • Tokenization vs. Encryption: Options for Compliance.
  • Security Benchmarking: Going Beyond Metrics.
  • Understanding and Selecting a File Activity Monitoring Solution.
  • Database Activity Monitoring: Software vs. Appliance.
  • React Faster and Better: New Approaches for Advanced Incident Response.

Top News and Posts
  • Spyeye Source ‘Leaked’.
  • BART Blocks Cell Services. Anonymous says ‘F-U’.
  • Beware of Juice-Jacking. Funny – but how many people forget these are just USB connectors?
  • New Attack on AES. AES is hardly doomed by this, but any progress beating the most popular symmetric cipher is important.
  • Persistent Flash Cookies.
  • Microsoft Security Program & Vulnerability Data Now Available.
  • German hacker collective expels OpenLeaks founder.
  • On immigration, a step in the right direction.
  • IT Admin Hacker Caught By McDonald’s Purchase. Don’t just read the headline on this one – look at


Security Management 2.0: Time to Replace Your SIEM? (new series)

Mike and I are launching our next blog series today, one we know is pretty timely from the conversations we have with organizations almost every day. The reality is that many organizations have spent years and millions of dollars trying to get productivity out of their SIEM – with mediocre results. Combine that with a number of the large players being acquired by mega IT companies and taking their eyes off the ball a bit, and most customers need to start asking themselves some key questions. Is it time? Are you waving the white flag? Has your SIEM failed to perform to expectations despite your significant investment? If you are questioning whether your existing product can get the job done, you are not alone. You likely have some battle scars from the difficulty of managing, scaling, and actually doing something useful with SIEM. Given the rapid evolution of SIEM/Log Management offerings – and the evolution of requirements with new application models and this cloud thing – you should be wondering whether there is a better, easier, and less expensive solution to meet your needs. As market watchers, we don’t have to be brain surgeons to notice the smaller SIEM and Log Management vendors innovating their way to relevance – with new deployment models, data storage practices, analysis techniques, and security features. Some vendors are actively evolving their platforms – adding new capabilities on top of what they have, and evolving from SIEM features into broader security management suites. Yet others in the space are basically “milking the cash cow” – collecting maintenance revenue while performing the technical equivalent of “putting lipstick on a pig”. (Yes, we just used two farm animal analogies in one sentence.) You may recognize this phenomenon as the unified dashboard approach to hiding obsolescence. Or maybe “Let’s buy another start-up with a shiny object!” … to distract customers from what we haven’t done with our core product strategy.
Let’s face it – public company shareholders love milking cash cows while minimizing additional research and development costs. Security companies (especially those acquired by mega IT) have no problem with this model. Meanwhile customers tend to end up holding the bag, with few options for getting the innovation they need. From our alarmingly consistent conversations with SIEM customers, we know it’s time to focus on this dissatisfaction and open the SIEM replacement process up to public scrutiny. Don’t be scared – in some ways SIEM replacement can be easier than the initial installation (yes, you can breathe a sigh of relief), but only if you leverage your hard-won knowledge and learn from your mistakes. Security Management 2.0: Time to Replace Your SIEM? will take a brutally candid look at the triggers for considering a new security management platform, walk through each aspect of the decision, and present a process to migrate – if the benefits of moving outweigh the risks. In this series we will cover:

  • Platform Evolution: We will discuss what we see in terms of new features, platform advancements, and deployment models that improve scalability and performance. We’ll also cover the rise of managed services for outsourcing SIEM, and hybrid deployment configurations.
  • Requirements: We’ll examine the evolution of customer requirements in the areas of security, compliance, and operations management. We will also cover some common customer complaints, but to avoid descending into a gripe session, we’ll go back and look at why some of you bought SIEM to begin with.
  • Platform Evaluation: We’ll walk through an in-depth examination of your current environment and its effectiveness. This will be a candid examination of what you have today – considering both what works and an objective assessment of what you’re unhappy about.
  • Decision Process: We’ll help you re-evaluate your decision by re-examining your original requirements, and help remove bias from the process as you look at shiny new features.
  • Selection Process: This is an in-depth look at how to tell the difference between various vendors’ capabilities, and which areas are key for your selection. Every vendor will tell you they are “class leading” and “innovative”, but most are not. We’ll help you cut through the BS and determine what’s what. We will also define a set of questions and evaluation criteria to help prioritize what’s important and how to weigh your decision.
  • Negotiation: You will be dealing with an incumbent vendor, and possibly negotiating with a new provider. We’ll help you factor in the reality that your incumbent vendor will try to save the business, and show how to leverage that as you move to a new platform.
  • Migration: If you are moving to something else, how do you get there? We’ll provide a set of steps for migration, examining how to manage multiple products during the transition.

Don’t assume that SIEM replacement is always the answer – that’s simply not the case. In fact, after this analysis you may feel much better about your original SIEM purchase, with a much better idea (even a plan!) to increase usage and success. But we believe you owe it to yourself and your organization to ask the right questions and do the work to get the answers. It’s time to slay the sacred cow of your substantial SIEM investment, and figure out objectively what offers the best fit moving forward. This research series is designed to help.
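As a teaser for the selection process, the evaluation-criteria weighting we describe can be sketched in a few lines. The criteria, weights, vendor names, and scores below are hypothetical placeholders for illustration – not recommendations:

```python
# A hedged sketch of weighted vendor scoring: rate each candidate against
# prioritized criteria, then compare weighted totals. All names, weights,
# and scores here are hypothetical examples, not real product assessments.

weights = {
    "scalability": 0.30,
    "analytics": 0.25,
    "compliance_reporting": 0.20,
    "deployment_model": 0.15,
    "cost": 0.10,
}

scores = {  # 1 (poor) to 5 (excellent), per vendor
    "incumbent":  {"scalability": 2, "analytics": 2,
                   "compliance_reporting": 4, "deployment_model": 2, "cost": 3},
    "challenger": {"scalability": 4, "analytics": 4,
                   "compliance_reporting": 3, "deployment_model": 5, "cost": 2},
}

def weighted_total(vendor: str) -> float:
    # Sum of (criterion weight x vendor score) across all criteria.
    return sum(weights[c] * scores[vendor][c] for c in weights)

for vendor in scores:
    print(f"{vendor}: {weighted_total(vendor):.2f}")
```

The point of the exercise is less the arithmetic than forcing you to write down the weights before looking at shiny features.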


New White Paper: Tokenization vs. Encryption

I am proud to announce the availability of our newest white paper, Tokenization vs. Encryption: Options for Compliance. The paper was written to close some gaps in our existing tokenization research coverage. I believe it is particularly important for two reasons. First, I was unable to find a critical examination of tokenization’s suitability for compliance. There are many possible applications of tokenization, but some of the claimed uses are not practical. Second, I wanted to dispel the myth that tokenization is a replacement technology for encryption, when in fact it’s a complementary solution that – in some cases – makes regulatory compliance easier. While I was writing the paper, the PCI Council had not yet officially accepted tokenization. As of August 12, 2011, the Council does offer guidance (if not full acceptance) on using tokenization as a suitable control for payment data. However, the guidance casts doubt on the suitability of hashing and format preserving encryption as methods to improve security and reduce the scope of an audit – which is consistent with this paper. Please review the PCI official announcement (PDF) for additional information. The paper discusses the use of tokenization for payment data, personal information, and health records. It was written to address questions regarding the business applicability of tokenization, and is thus less technical than most of our research papers. I hope you enjoy reading it as much as I enjoyed writing it. A special thanks to Prime Factors for sponsoring this research! Download: Tokenization vs. Encryption: Options for Compliance (PDF)
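For readers new to the topic, the core distinction the paper draws can be sketched in a few lines: a token has no mathematical relationship to the original value, so recovering the original requires a lookup in the token vault rather than a key. This is a minimal illustration with hypothetical data, not a production design:

```python
# Minimal tokenization sketch: unlike encryption, the token is random --
# there is no key that transforms it back. Recovery requires the vault.
# The card number and vault structure are hypothetical examples.
import secrets

vault = {}  # token -> original value; in practice a hardened data store

def tokenize(pan: str) -> str:
    # Length-preserving random digits, so the token fits existing fields.
    token = "".join(secrets.choice("0123456789") for _ in range(len(pan)))
    while token in vault:  # avoid the (unlikely) collision
        token = "".join(secrets.choice("0123456789") for _ in range(len(pan)))
    vault[token] = pan
    return token

def detokenize(token: str) -> str:
    return vault[token]

card = "4111111111111111"
token = tokenize(card)
assert detokenize(token) == card
assert len(token) == len(card)  # format compatibility with existing systems
```

Because the token itself reveals nothing, systems that store only tokens can potentially drop out of audit scope – which is exactly the compliance angle the paper examines.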


Hammers and Homomorphic Encryption

Researchers at Microsoft are presenting a prototype system that can perform computations on encrypted data without decrypting it. Called homomorphic encryption, the idea is to keep data in a protected (encrypted) state yet still useful. It may sound like Star Trek technobabble, but this is a real working prototype. The set of operations you can perform on encrypted data is limited to a few things like addition and multiplication, but most analytics systems are limited as well. If this works, it would offer a new way to approach data security for publicly available systems. The research team is looking for a way to reduce encryption operations, as they are computationally expensive – encryption and decryption demand a lot of processing cycles. Performing calculations and updates on large data sets becomes very expensive, as you must decrypt the data set, find the data you are interested in, make your changes, and then re-encrypt the altered items. The ultimate performance impact varies with the storage system and method of encryption, but overhead and latency might typically range from 2x-10x compared to unencrypted operations. It would be a major advancement if they could dispense with the encryption and decryption operations while still enabling reporting on secured data sets. The promise of homomorphic encryption is predictable alteration without decryption. The possibility of being able to modify data without sacrificing security is compelling. Running basic operations on encrypted data might remove the threat of exposing data in the event of a system breach or user carelessness. And given that every company even thinking about cloud adoption is looking at data encryption and key management deployment options, there is plenty of interest in this type of encryption. But like a lot of theoretical lab work, practicality has an ugly way of pouring water on our security dreams.
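To make the homomorphic property concrete, here is a toy sketch using textbook (unpadded) RSA, which happens to be multiplicatively homomorphic. The parameters are deliberately tiny, textbook RSA is not secure, and this is not Microsoft's scheme – it is only an illustration of the concept, including the downside discussed next:

```python
# Toy illustration of a homomorphic property: multiplying textbook RSA
# ciphertexts multiplies the underlying plaintexts. Tiny, insecure
# parameters for clarity only -- a concept demo, not a real scheme.

p, q = 61, 53
n = p * q                           # public modulus (3233)
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+ pow)

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

# Multiply two ciphertexts without ever decrypting them...
c = (encrypt(7) * encrypt(6)) % n
assert decrypt(c) == 42             # ...the owner recovers the product

# The flip side (malleability): anyone holding only the PUBLIC key can
# alter encrypted data. Here an attacker silently doubles a stored value.
stored = encrypt(100)
tampered = (stored * encrypt(2)) % n
assert decrypt(tampered) == 200
```

The same malleability that makes computation on ciphertexts possible is what makes integrity protection hard – which brings us to the practical problems.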
There are three very real problems for homomorphic encryption and computation systems:

  • Data integrity: Homomorphic encryption does not protect data from alteration. If I can add, multiply, or change a data entry without access to the owner’s key, that becomes an avenue for an attacker to corrupt the database. Alteration of pricing tables, user attributes, stock prices, or other information stored in a database is just as damaging as leaking information. An attacker might not know what the original data values were, but keeping them secret is not enough to provide security.
  • Data confidentiality: Homomorphic encryption can leak information. If I can add two values together and come up with a consistent result, it may be possible to reverse engineer the values. The beauty of encryption is that when you make a very minor change to the plaintext – the data you are encrypting – you get radically different output. With CBC modes of encryption, even the same plaintext produces different encrypted values. The question with homomorphic encryption is whether it can be used while still maintaining confidentiality – it might well leak data to determined attackers.
  • Performance: Performance is poor and will likely remain worse than classical encryption. As homomorphic performance improves, so do more common forms of encryption. This is important when considering the cloud as a motivator for this technology, as acknowledged by the researchers. Many firms are looking to “The Cloud” not just for elastic pay-as-you-go services, but also as a cost-effective tool for handling very large databases. As databases grow, the performance impact grows super-linearly – layering on a security tool with poor performance is a non-starter.

Not to be a total buzzkill, but I wanted to point out that there are practical alternatives that work today. For example, data masking obfuscates data but allows computational analytics.
Masking can be done in such a way as to retain aggregate values while masking individual data elements. Masking – like encryption – can be poorly implemented, enabling the original data to be reverse engineered. But good masking implementations keep data secure, perform well, and facilitate reporting and analytics. Also consider the value of private clouds on public infrastructure. In one of the many possible deployment models, data is locked into the cloud as a black box, and only approved programmatic elements ever touch the data – not users. You import data and run reports, but do not allow direct access to the data. As long as you protect the management and programmatic interfaces, the data remains secure. There is no reason to look for isolinear plasma converters or quantum flux capacitors when a hammer and some duct tape will do.
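To illustrate the masking point above – retaining aggregates while breaking the link between individuals and values – here is a minimal sketch. The records and the shuffle-based technique are hypothetical examples of one simple masking approach, not a full implementation:

```python
# Minimal masking sketch: permuting a sensitive column among records
# destroys individual attribution while preserving aggregates exactly.
# Names, salaries, and the shuffle technique are illustrative only.
import random

records = [("alice", 52000), ("bob", 61000), ("carol", 78000), ("dave", 45000)]

names = [name for name, _ in records]
salaries = [salary for _, salary in records]

rng = random.Random(42)   # fixed seed so the example is reproducible
masked = salaries[:]
rng.shuffle(masked)       # permute the values among the records

masked_records = list(zip(names, masked))

# Aggregates survive masking; who earns what does not.
assert sum(s for _, s in masked_records) == sum(salaries)
assert sorted(masked) == sorted(salaries)
```

Real masking products use richer techniques (substitution, number variance, format-preserving transforms), but the sum-preserving shuffle captures the core trade-off in a few lines.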


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.