Securosis Research

Database Encryption, Part 5: Key Management

This is Part 5 of our Database Encryption Series. Part 1, Part 2, Part 3, Part 4, and the supporting posts on Database vs. Application Encryption & Database Encryption: Fact or Fiction are online.

I think key management scares people. Application developers, IT managers, and database administrators all know effective key management support for encryption is critical, but it remains scary for most practitioners. Despite the incredible mathematical complexity behind the ciphers and the finesse required to implement them securely, practitioners don't have to understand the gears and cogs inside the machine. Encryption is provided as libraries or fully functional services, so you send out clear text, and you get back encrypted data – easy. Key management worries people because if you don't get the key management piece right, the whole system fails, and you are the responsible party. To illustrate what I mean, I want to share a couple of stories about developers and IT practitioners who manage these systems.

Building database applications from scratch, developers have access to good crypto libraries, but generally little understanding of key management practices and few key management resources. The application developers I know took great pride in securing database fields through encryption, but when I asked them how they stored the key, the answer was usually "in the properties file". That meant the key was stored on disk, unencrypted, in a directory readable by anyone who could access the application. When I pressed the point, I was assured that the key 'needed' to be there, otherwise the application would not be able to get the key, and would thus fail to restart. I have even had developers tell me this is a "chicken vs. egg" conundrum: if you encrypt the key you cannot access it, therefore a key needs to be kept in clear text somewhere.
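As an aside, the "chicken vs. egg" argument dissolves once you wrap keys: only an encrypted (wrapped) copy of the data key is stored, and the key-encrypting key (KEK) comes from an operator, HSM, or external key server at startup. Here is a minimal, stdlib-only sketch of the idea; the XOR 'wrap' is a deliberately insecure stand-in for a real key-wrap cipher such as AES Key Wrap, and every name below is my own invention:

```python
import hashlib
import os
import secrets

def derive_kek(passphrase: bytes, salt: bytes) -> bytes:
    """Derive a 32-byte key-encrypting key from a startup secret."""
    return hashlib.pbkdf2_hmac("sha256", passphrase, salt, 100_000)

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def wrap_key(data_key: bytes, kek: bytes) -> bytes:
    # XOR is a stand-in for a real key-wrap cipher (e.g., AES-KW);
    # it keeps this sketch dependency-free but is NOT secure key wrapping.
    return xor_bytes(data_key, kek)

def unwrap_key(wrapped: bytes, kek: bytes) -> bytes:
    return xor_bytes(wrapped, kek)

# Provisioning: generate a data key, persist only the wrapped copy.
salt = os.urandom(16)
data_key = secrets.token_bytes(32)
kek = derive_kek(b"secret-from-operator-or-hsm", salt)
stored_on_disk = wrap_key(data_key, kek)   # safe to keep in a file or table

# Startup: re-derive the KEK and recover the data key in memory only.
recovered = unwrap_key(stored_on_disk, derive_kek(b"secret-from-operator-or-hsm", salt))
assert recovered == data_key
```

The point is that the clear-text secret lives with a person or a hardware module, not in a world-readable properties file next to the application.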
I kid you not, with my last employer (who, by the way, developed security products), this was the reason the 'senior' programmer implemented key management this way, and why he didn't see a problem with it. The argument always ends the same: a key as a tangible object is fine, but obfuscated and hidden is not. The unspoken reason is something every programmer knows: code has bugs, and a key management bug could be devastating and unrecoverable.

On the IT side, administrators I know have a different, equally frightening, set of problems with key management. Every IT manager I have spoken with has one or more of these questions:

- What happens if/when I lose keys?
- How do I back keys up securely?
- How do I replicate keys across multiple key servers for redundancy?
- I have 1,000 users reliant on public key cryptography, so how do I share these keys among all these users?
- If I expire and rotate keys, do I lose access to data archives?
- If I try to recover data from a tape, how do I get the right key?
- If I am using specialized key management hardware, how do I recover from fire or other disasters?

These are risks in the minds of IT professionals every day. Lose your key, lose your data. Lose your data, lose your job. And that scares the heck out of people!

Our goal in this section is to discuss key management options for database encryption. The introduction is meant to underscore the state of key management services today, and to help illustrate why key management products are deployed the way they are. Yes, you can download excellent encryption tools for free, mix and match best-of-breed features, and develop your own operational key management process that is application agnostic, but this approach is becoming a rarity. And that's a good thing, because key management needs to be performed by people who know what they are doing. Centralized, automated, embedded, pre-packaged, and available as a complete service is the common choice.
This removes the complexity and the responsibility of management, and much of the worry about the reliability of your developers and IT administrators. Make no mistake, this trade-off comes at a price. Let's dig into some key management practices for databases and how they are used.

Internally Managed

For database encryption, we define "internal key management" as key services within the database, provided by the database vendor. All of the relational database management platforms provide encryption packages, and included in these packages are key management functions. Typical services include key creation, storage, retrieval, and security; and most systems can handle symmetric and public key encryption options. Usage of the keys can be handled by proxy, as with the transparent encryption options, or through direct API calls to the database package. The keys are stored within the database, usually within a special table that resides in the administrative database or schema. Each vendor's approach to securing keys varies significantly: some vendors rely upon simple access controls tied to administrative accounts, some encrypt individual users' keys with a single master key, while others do not allow any direct access to keys at all, and perform all key functions within a proxied service.

If you are using transparent encryption options (see Part 2: Selection Process Overview for terminology definitions) provided by the database vendor, all key operations are performed on the user's behalf. For example, when a query for data within an encrypted column is made, the database performs the typical authorization checks, and when they succeed, automatically decrypts the data for the user. Neither the user nor the application needs to be aware the data is encrypted, to make a specific request to decrypt it, or to supply a decryption key to the database. It is all handled on their behalf, and often performed without their knowledge.
Transparent key management’s greatest asset is just that: its transparency. The storage, management, security, sharing, and backup of the keys is handled by the database. With internally managed encryption keys, there is not a lot for the application to do, or even care about, since all

Creating a Standard for Data Breach Costs

One thing that's really tweaked me over the years when evaluating data breaches is the complete lack of consistency in cost reporting. On one side we have reports and surveys coming up with "per record" costs, often without any transparency as to where the numbers came from. On the other side are those that try to look at lost share value, or directly reported losses from public companies in their financial statements, but I think we all know how inconsistent those numbers are as well. Also, from what I can tell, in most of the "per record" surveys the biggest chunk (by far) is fuzzy soft costs like "reputation damage". Not that there aren't any losses due to reputation damage, but I've never seen any sort of justified model that accurately measures those costs over time. Take TJX for example – they grew sales after their breach.

So here's a modest proposal for how we could break out breach costs in a more consistent manner:

Per Incident (Hard Costs):
- Incident investigation
- Incident remediation/recovery
- PR/media relations costs
- Optional: Legal fees
- Optional: Compliance violation penalties
- Optional: Legal settlements

Per Record (Hard Costs):
- Notification costs (list creation, printing, postal fees)
- Optional: Customer response costs (help desk per-call costs)
- Optional: Customer protection costs (fraud alerts, credit monitoring)

Per Incident (Soft Costs – i.e., not always directly attributable to the incident; trending is key here, especially trends that predate the incident):
- Customer Churn (% increase over trailing 6 month rate): 1 week, 1 month, 6 months, 12 months, n months
- Stock Hit (not sure of the best metric here, maybe earnings per share): 1 week, 1 month, 6 months, 12 months, n months
- Revenue Impact (compared to trailing 12 months): 1 week, 1 month, 6 months, 12 months, n months

I tried to break them out into hard and soft costs (hard being directly tied to the incident, soft being polluted by other factors).
Also, I recognize that not every organization can measure every category for every incident. Not that I expect everyone to magically adopt this for standard reporting, but until we transition to a mechanism like this we don't have any chance of really understanding breach costs.
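To make the hard-cost side of the proposal concrete, here is a minimal sketch of the breakdown as code. The class, field names, and every dollar figure are my own inventions, purely for illustration of how the per-incident and per-record buckets combine:

```python
from dataclasses import dataclass

@dataclass
class BreachCosts:
    """Hard costs only, per the proposed breakdown; soft costs
    (churn, stock, revenue) would be tracked separately as trends."""
    # Per-incident hard costs
    investigation: float = 0.0
    remediation: float = 0.0
    media_relations: float = 0.0
    legal_fees: float = 0.0            # optional
    compliance_penalties: float = 0.0  # optional
    settlements: float = 0.0           # optional
    # Per-record hard costs
    records: int = 0
    notification_per_record: float = 0.0
    response_per_record: float = 0.0    # optional (help desk)
    protection_per_record: float = 0.0  # optional (credit monitoring)

    def per_incident_total(self) -> float:
        return (self.investigation + self.remediation + self.media_relations
                + self.legal_fees + self.compliance_penalties + self.settlements)

    def per_record_total(self) -> float:
        return self.records * (self.notification_per_record
                               + self.response_per_record
                               + self.protection_per_record)

    def hard_total(self) -> float:
        return self.per_incident_total() + self.per_record_total()

# Entirely made-up numbers, for illustration only:
b = BreachCosts(investigation=50_000, remediation=120_000, media_relations=30_000,
                records=100_000, notification_per_record=1.50,
                protection_per_record=10.00)
print(b.hard_total())  # 1,350,000.0 for these invented inputs
```

Note how the per-record bucket dominates once record counts get large, which is exactly why a consistent per-record/per-incident split matters for comparing breaches.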

You Don’t Own Yourself

Gee, is anyone out there surprised by this? Out of business, Clear may sell customer data. Here's the thing – when you share your information with a company – any company – they view that information as one of their assets. As far as they are concerned, they own it, not you. This also includes any information any company can collect on you through legal means. Our laws (in the U.S. – it isn't as bad in Europe and a few other regions) fully support this business model. Think you are protected by a customer agreement or privacy policy? Odds are you aren't – the vast majority are written carefully so the company can change them pretty much whenever they want. Amazon's done it, as have most major sites/organizations. If you don't have a signed contract saying you own your own data, you don't. This is especially true when companies go out of business – one of the key assets sold to recoup investor losses is customer data. In the case of Clear, this includes biometrics (fingerprint and iris scan) – never mind all the financial data and the background checks they ran. But don't worry; they'll only sell it to another registered traveler company. Trust them.

Friday Summary, July 10, 2009

And one more time, in case you wanted to take the Project Quant survey and just have not had time: Stop what you are doing and hit the SurveyMonkey. We are at over 70 responses, and will release the raw data when we hit 100.

And now for the week in review:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Project Quant Posts

Favorite Outside Posts

Adrian:

Rich:

Top News and Posts
- Wow. This is bad. Who exactly thought it was a good idea to let SMS run as root?
- Black Hat presentation on ATM hacking on hold.
- The evolution of click-fraud on SecurityFix.
- InfoWorld article highlights the need to review WAF traffic.
- Cracking a 200 year old cipher.

Blog Comment of the Week

This week's best comment comes from x in response to y:

Friday Summary: June 26, 2009

Yesterday I had the opportunity to speak at a joint ISSA and ISACA event on cloud computing security down in Austin (for the record, when I travel I never expect it to be hotter AND more humid than Phoenix). I'll avoid my snarky comments on the development and use of the term "cloud", since I think we are finally hitting a coherent consensus on what it means (thanks in large part to Chris Hoff). I've always thought the fundamental technologies now being lumped into the generic term are extremely important advances, but the marketing just kills me some days.

Since I flew in and out the same day, I missed a big chunk of the event before I hopped on stage to host a panel of cloud providers – all of whom are also cloud consumers (mostly on the infrastructure side). One of the most fascinating conclusions of the panel was that if the data or application is critical, don't send it to a public cloud (private may be okay). Keep in mind, every one of these panelists sells external and/or public cloud services, and not a single one recommended sending something critical to the cloud (hopefully they're all still employed on Monday). By the end of a good Q&A session, we seemed to come to the following consensus, which aligns with a lot of the other work published on cloud computing security:

- In general, the cloud is immature. Internal virtualization and SaaS are higher on the maturity end, with PaaS and IaaS (especially public/external) at the bottom. This is consistent with what other groups, like the Cloud Security Alliance, have published.
- Treat external clouds like any other kind of outsourcing – your SLAs and contracts are your first line of defense.
- Start with less-critical applications/uses to dip your toes in the water and learn the technologies.
- Everyone wants standards, especially for interoperability, but you'll be in the cloud long before the standards are standard.
The market forces don't support independent development of standards, and you should expect standards-by-default to emerge from the larger vendors. If you can easily move from cloud to cloud it forces the providers to compete almost completely on price, so they'll be dragged in kicking and screaming. What you can expect is that once someone like Amazon becomes the de facto leader in a certain area, competitors will emulate their APIs to steal business, thus creating a standard of sorts. As much as we talk SLAs, a lot of users want some starting templates. Might be some opportunities for some open projects here.

I followed the panel with a presentation – "Everything You Need to Know About Cloud Security in 30 Minutes or Less". Nothing Earth-shattering in it, but the attendees told me it was a good, practical summary for the day. It's no Hoff's Frogs, and is more at the tadpole level. I'll try to get it posted on Monday.

And one more time, in case you wanted to take the Project Quant survey and just have not had time: Stop what you are doing and hit the SurveyMonkey. We are at over 70 responses, and will release the raw data when we hit 100.

-Rich

And now for the week in review:

Webcasts, Podcasts, Outside Writing, and Conferences
- Rich provides a quote on his use of AV for CSO magazine.
- Rich & Martin on the Network Security Podcast #155.
- Rich hosted a panel and gave a talk at the Austin ISSA/ISACA meeting.
- Rich was quoted on database security over at Network Computing.

Favorite Securosis Posts
- Rich: Cyberskeptic: Cynicism vs. Skepticism. I've been addicted to The Skeptics' Guide to the Universe podcast for a while now, and am looking for more ways to apply scientific principles to the practice of security.
- Adrian: Rich's post on How I Use Social Media. I wish I could say I understood my own stance towards these media as well as Rich does.
Appropriate use was the very subject Martin McKeay and I discussed one evening during RSA, and neither of us was totally comfortable, for various reasons of privacy and paranoia. Good post!

Other Securosis Posts
- You Don't Own Yourself
- Database Patches, Ad Nauseam
- Mike Andrews Releases Free Web and Application Security Series
- SIEM, Today and Tomorrow
- Kindle and DRM Content
- Database Encryption: Fact vs. Fiction

Project Quant Posts
- Project Quant: Deploy Phase
- Project Quant: Create and Test Deployment Package
- Project Quant: Test and Approve Phase

Favorite Outside Posts
- Adrian: Adam's comment in The emergent chaos of fingerprinting at airports post: '… additional layers of "no" will expose conditions unimagined by their designers'. This statement describes most software and a great number of the processes I encounter. Brilliantly captured!
- Rich: Jack Daniel nails one of the biggest problems with security metrics. Remember, the answer is always 42-ish.

Top News and Posts
- TJX down another $9.75M in breach costs. Too bad they grew, like, a few billion dollars after the breach. I think they can pull $9.75M from the "need a penny/leave a penny" trays at the stores.
- Boaz talks about how Nevada mandates PCI – even for non-credit-card data. I suppose it's a start, but we'll have to see the enforcement mechanism. Does this mean companies that collect private data, but not credit card data, have to use PCI assessors?
- The return of the all-powerful L0pht Heavy Industries.
- Microsoft releases a beta of Morro – their free AV. I talked about this once before.
- Lori MacVittie on clickjacking protection using x-frame-options in Firefox. Once we put up something worth protecting, we'll have to enable that.
- German police totally freak out and clear off a street after finding a "nuke" made by two 6-year-olds.
- Critical Security Patch for Shockwave.
- Spam 'King' pleads guilty.
- Microsoft AntiVirus Beta Software was announced, and informally reviewed over at Digital Soapbox.
- Clear out of business, and they're selling all that biometric data.

Blog Comment of the Week

This week's best comment comes from Andrew in response to Science, Skepticism, and Security:

I'd love to see skepticism

Mildly Off Topic: How I Use Social Media

This post doesn't have a whole heck of a lot to do with security, but it's a topic I suspect all of us think about from time to time. With the continuing explosion of social media outlets, I've noticed myself (and most of you) bouncing around from app to app as we figure out which ones work best in which contexts, and which are even worth our time. The biggest challenge I've found is compartmentalization – which tools to use for which jobs, and how to manage my personal and professional online lives. Again, I think it's something we all struggle with, but for those of us who use social media heavily as part of our jobs it's probably a little more challenging. Here's my perspective as an industry analyst. I really believe I'd manage these differently if I were in a different line of work (or with a different analyst firm), so I won't claim my approach is the right one for anyone else.

Blogs: As an analyst, I use the Securosis blog as my primary mechanism for publishing research. I also think it's important to develop a relationship (platonic, of course) with readers, which is why I mix a little personal content and context in with the straighter security posts. For blogging I deliberately use an informal tone which I strip out of content that is later incorporated into research reports and such. Our informal guidelines are that while not everything needs to be directly security related, over 90% of the content should be dedicated to our coverage areas. Of our research content, 80% should be focused on helping practitioners get their jobs done, with the remaining 20% split between news and more forward-looking thought leadership. We strive for a minimum of 1 post a day, with 3 "meaty" content posts each week, a handful of "drive-by" quick responses/news items a week, and our Friday summary. Yes, we really do think about this stuff that much.
I don't currently have a personal blog outside of the site due to time, and (as we'll get to) Twitter takes care of a lot of that. I also read a ton of other blogs, and try to comment and link to them as much as possible. I also consider the blog the most powerful peer-review mechanism for our research on the face of the planet. It's the best way to be open and transparent about what we do, while getting important feedback and perspectives we never could otherwise. As an analyst, it's absolutely invaluable.

Podcasts: My primary podcast is co-hosting The Network Security Podcast with Martin McKeay. This isn't a Securosis-specific thing, and I try not to drag too much of my work onto the show. Adrian and I plan on doing some more podcasts/webcasts, but those will be oriented towards specific topics and filling out our other content. Running a regular podcast is darn hard. I like the NetSecPodcast since it's more informal and we get to talk about any off-the-wall topic (generally in the security realm) that comes to mind.

Twitter: After the blog, this is my single biggest outlet. I initially started using Twitter to communicate with a small community of friends and colleagues in the Mac and security communities, but as Twitter exploded I've had to change how I approach it. Initially I described Twitter as a water cooler where I could hang out and chat informally with friends, but with over 1200 followers (many of them PR, AR, and other marketing types) I've had to be a little more careful about what I say. Generally, I'm still very informal on Twitter and fully mix in professional and personal content. I use it to share and interact with friends, highlight some content (but not too much, I hate people who use Twitter only to spam their blog posts), and push out my half-baked ideas. I've also found Twitter especially powerful to get instant feedback on things, or to rally people towards something interesting.
I really enjoy being so informal on Twitter, and hope I don't have to tighten things down any more because too many professional types are watching. It's my favorite way to participate in the wider online community, develop new collaborations, toss out random ideas, and just stay connected with the outside world as I hide in my home office day after day. The bad side is I've had to reduce using it to organize meeting up with people (too many random followers in any given area), and some PR types use it to spy on my personal life (not too many; some of them are also in the friends category, but it's happened).

The @Securosis Twitter account is designed for the corporate "voice", while the @rmogull account is my personal one. I tend to follow people I either know or who contribute positively to the community dialog. I only follow a few corporate accounts, and I can't possibly follow everyone who follows me. I follow people who are interesting and I want to read, rather than using it as a mass-networking tool. With @rmogull there's absolutely no split between my personal and professional lives; it's for whatever I'm doing at the moment, but I'm always aware of who is watching.

LinkedIn: I keep going back and forth on how I use LinkedIn, and recently decided to use it as my main business networking tool. To keep the network under control I generally only accept invitations from people I've directly connected with at some point. I feel bad turning down all the random connections, but I see social networks as having power based on quality rather than quantity (that's what groups are for). Thus I tend to turn down connections from people who randomly saw a presentation or listened to a podcast. It isn't an ego thing; it's that, for me, this is a tool to keep track of my professional network, and I've never been one of those business card collectors.

Facebook:

Database Patches, Ad Nauseam

When I lived in the Bay Area, each Spring we had the same news repeat. Like clockwork, every year, year after year, and often by the same reporter. The story was the huge, looming danger of forest or grass fires. And the basis for the story was either because the rainfall totals were above normal and had created lots of fuel, or that the below-average rainfall had dried everything out. For Northern California, there really are no other outcomes. Pretty much they were saying you're screwed no matter what. And no one on their editorial staff considered this contradiction because there it was, every spring, and I guess they had nothing else all that interesting to report.

I am reminded of this every time I read posts about how Oracle databases remain unpatched for one, or *gasp* two whole patch cycles. Every few months I read this story, and every few months I shake my head. Sure, as a security practitioner I know it's important to patch, and bad things may happen if I don't. But any DBA who has been around for more than a couple years has gone through the experience of applying a patch and causing the database to crash hard. Now you get to spend the next 24-48 sleepless hours rolling back the patches, restoring the data, and trying to get the entire system working again. And it only cost you a few days of your time, a few thousand lost hours of employee productivity, and professional ridicule. Try telling a database admin how urgent it is to apply a security patch when they have gone through that personal hell! A dead database tells no tales, and patching it becomes a moot point.

And yet the story every year is the same: you're really in danger if you don't patch your databases. But practitioners know they could be just as screwed if they do patch. Most don't need tools to tell them how screwed they are – they know. Dead databases are a real, live (well, not so 'live'), noisy threat, whereas hackers and data theft are considerably more abstract concepts.
DBAs and IT will demand that database patches, urgent or otherwise, be tested prior to deployment. That means a one- or two-cycle lag in most cases. If the company is really worried about security, they will implement DAM or firewalls – not because it is necessarily the right choice, but so they don't have to change the patching cycles and increase the risk of IT instability. It's not that we will never see a change in the patch process, but in all likelihood we will continue to see this story every year, year after year, ad nauseam.

Science, Skepticism, and Security

This is part 2 of our series on skepticism in security. You can read part 1 here. Being a bit of a science geek, over the past year or so I've become addicted to The Skeptics' Guide to the Universe podcast, which is now the only one I never miss. It's the Skeptics' Guide that first really exposed me to the scientific skeptical movement, which is well aligned with what we do in security. We turn back to Wikipedia for a definition of scientific skepticism:

Scientific skepticism or rational skepticism (also spelled scepticism), sometimes referred to as skeptical inquiry, is a scientific or practical, epistemological position in which one questions the veracity of claims lacking empirical evidence. … Scientific skepticism utilizes critical thinking and inductive reasoning while attempting to oppose claims made which lack suitable evidential basis. …

Characteristics: Like a scientist, a scientific skeptic attempts to evaluate claims based on verifiability and falsifiability rather than accepting claims on faith, anecdotes, or relying on unfalsifiable categories. Skeptics often focus their criticism on claims they consider to be implausible, dubious or clearly contradictory to generally accepted science. This distinguishes the scientific skeptic from the professional scientist, who often concentrates their inquiry on verifying or falsifying hypotheses created by those within their particular field of science.

The skeptical movement has expanded well beyond merely debunking fraudsters (such as that Airborne garbage or cell phone radiation absorbers) into the general promotion of science education, science advocacy, and the use of the scientific method in the exploration of knowledge. Skeptics battle the misuse of scientific theories and statistics, and it's this aspect I consider essential to the practice of security.
In the security industry we never lack for theories or statistics, but very few of them are based on sound scientific principles, and often they cannot withstand scientific scrutiny. For example, the historic claim that 70% of security attacks were from the "insider threat" never had any rigorous backing. That claim was a munged-up "fact" based on the free headline from a severely flawed survey (the CSI/FBI report), and an informal statement from one of my former coworkers made years earlier. It seems every day I see some new numbers about how many systems are infected with malware, how many dollars are lost due to the latest cybercrime (or people browsing ESPN during lunch), and so on.

I believe that the appropriate application of skepticism is essential in the practice of security, but we are also often in the position of having to make critical decisions without the amount of data we'd like. Rather than saying we should only make decisions based on sound science, I'm calling for more application of scientific principles in security, and increased recognition of doubt when evaluating information. Let's recognize the difference between guesses, educated guesses, facts, and outright garbage.

For example – the disclosure debate. I'm not claiming I have the answers, and I'm not saying we should put everything on hold until we get the answers, but all sides do need to recognize we have no effective evidentiary basis for defining general disclosure policies. We have personal experience and anecdote, but no sound way to measure the potential impact of full disclosure vs. responsible disclosure vs. no disclosure.

Another example is the Annualized Loss Expectancy (ALE) model. The ALE model takes the loss from a single event and multiplies it by the annual rate of occurrence, to give 'the probable annual loss'. Works great for defined assets with predictable loss rates, such as lost laptops and physical theft (e.g., retail shrinkage).
Nearly worthless in information security. Why? Because we rarely know the value of an asset, or the annual rate of occurrence. Thus we multiply a guess by a guess to produce a wild-assed guess. In scientific terms neither input value has precision or accuracy, and thus any result is essentially meaningless.

Skepticism is an important element of how we think about security because it helps us make decisions based on what we know, while providing the intellectual freedom to change those decisions as what we know evolves. We don't get as hung up on sticking with past decisions merely to continue to validate our belief system. In short, let's apply more science and formal skepticism to security. Let's recognize that just because we have to make decisions from uncertain evidence, we aren't magically turning guesses and beliefs into theories or facts. And when we're presented with theories, facts, and numbers, let's apply scientific principles and see which ones hold up.
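The ALE arithmetic, and the way a guess times a guess blows up, is easy to show in a few lines; all the numbers below are invented for illustration:

```python
def ale(single_loss_expectancy: float, annual_rate: float) -> float:
    """Annualized Loss Expectancy = SLE x ARO (annual rate of occurrence)."""
    return single_loss_expectancy * annual_rate

# A defined asset with a predictable loss rate: lost laptops.
laptops = ale(2_500, 12)   # $2,500 per laptop, ~12 lost per year
print(laptops)             # 30,000/year: a believable figure

# A typical infosec 'estimate': both inputs are guesses with wide error bars.
low = ale(10_000, 0.1)     # optimistic guesses
high = ale(1_000_000, 5)   # pessimistic guesses
print(low, high)
# The 'answer' spans more than three orders of magnitude, which is
# exactly the guess-times-guess problem described above.
```

When the inputs are well characterized the model is useful; when they are not, the output inherits (and multiplies) both uncertainties.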

Cyberskeptic: Cynicism vs. Skepticism

Note: This is the first part of a two-part series on skepticism in security; click here for part 2.

Securosis: A mental disorder characterized by paranoia, cynicism, and the strange compulsion to defend random objects.

For years I've been joking about how important cynicism is to being an effective security professional (and analyst). I've always considered it a core principle of the security mindset, but recently I've been thinking a lot more about skepticism than cynicism. My dictionary defines a cynic as:

1. a person who believes that people are motivated purely by self-interest rather than acting for honorable or unselfish reasons: some cynics thought that the controversy was all a publicity stunt.
   - a person who questions whether something will happen or whether it is worthwhile: the cynics were silenced when the factory opened.
2. (Cynic) a member of a school of ancient Greek philosophers founded by Antisthenes, marked by an ostentatious contempt for ease and pleasure. The movement flourished in the 3rd century BC and revived in the 1st century AD.

Cynicism is all about distrust and disillusionment; and let's face it, those are pretty important in the security industry. As cynics we always focus on an individual's (or organization's) motivation. We can't afford a trusting nature, since that's the fastest route to failure in our business. Back in my physical security days I learned the hard way that while I'd love to trust more people, the odds are they would abuse that trust for self-interest, at my expense. Cynicism is the 'default deny' of social interaction.

Skepticism, although closely related to cynicism, is less focused on individuals, and more focused on knowledge. My dictionary defines a skeptic as:

1. a person inclined to question or doubt all accepted opinions.
   - a person who doubts the truth of Christianity and other religions; an atheist or agnostic.
2. Philosophy: an ancient or modern philosopher who denies the possibility of knowledge, or even rational belief, in some sphere.

But to really define skepticism in modern society, we need to move past the dictionary into current usage. Wikipedia does a nice job with its expanded definition:

an attitude of doubt or a disposition to incredulity either in general or toward a particular object; the doctrine that true knowledge or knowledge in a particular area is uncertain; or the method of suspended judgment, systematic doubt, or criticism that is characteristic of skeptics (Merriam-Webster).

Which brings us to the philosophical application of skepticism:

In philosophy, skepticism refers more specifically to any one of several propositions. These include propositions about: an inquiry; a method of obtaining knowledge through systematic doubt and continual testing; the arbitrariness, relativity, or subjectivity of moral values; the limitations of knowledge; a method of intellectual caution and suspended judgment.

In other words, cynicism is about how we approach people, while skepticism is about how we approach knowledge. For a security professional, both are important, but I'm realizing it's becoming ever more essential to challenge our internal beliefs and dogmas, rather than focusing on distrust of individuals. I consider skepticism harder than cynicism, because we are often forced to challenge our own internal beliefs on a regular basis. In part 2 of this series I'll talk about the role of skepticism in security.

SIEM, Today and Tomorrow

Last week, Mike Rothman of eIQ wrote a thoughtful piece on the struggles of the SIEM industry. He starts the post by saying the Security Information and Event Management space has struggled over the last decade because the platforms were too expensive, too hard to implement, and (paraphrasing) did not scale without exacting a pound of flesh. All accurate points, but I think these items are secondary to the real issues that plagued the SIEM market. In my mind SIEM's struggles were twofold: fragmented offerings and disconnection from customer issues. It is clear that the data SIM, SEM, and log management vendors collected could be used to provide insight into many different security issues, compliance issues, data collection functions, or management functions – but each vendor covered only a subset. The fragmentation of this market, with some vendors doing one thing well but sucking at other important aspects, while claiming only their niche merited attention, was the primary reason the segment struggled. Vendors created a great deal of confusion through attempts to differentiate and get a leg up. Some did a good job at real-time analysis, some provided forensic analysis and compliance, and others excelled at log collection and management. They targeted security, they targeted compliance, they targeted criminal forensics, and they targeted systems management – but the customer need was always 'all of the above'. Mike is dead on that the segment has struggled, and it's the vendors' own fault due to piecemeal offerings that solved only a portion of the problems that needed solving. More attention was paid to competitive positioning than to actually solving customer problems. For example, the entire concept of aggregation (boiling all events down to a single lowest common denominator format) was 'innovation' for the benefit of the vendor platform, and a detriment to solving customer problems.
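The aggregation point is worth making concrete. Here is a minimal Python sketch – the field names and the 'common schema' are my own invention, not any vendor's actual format – showing how normalizing two different event sources into a lowest-common-denominator record makes storage and reporting cheap, but silently drops the source-specific detail an analyst would actually need:

```python
# Illustrative only: the schema and field names below are hypothetical.
COMMON_FIELDS = ("timestamp", "source", "severity", "message")

def aggregate(event: dict) -> dict:
    """Boil any event down to the lowest common denominator schema."""
    return {f: event.get(f) for f in COMMON_FIELDS}

firewall_event = {
    "timestamp": "2009-06-29T10:15:00Z", "source": "fw01", "severity": 4,
    "message": "connection denied", "src_ip": "10.0.0.5", "dst_port": 445,
}
db_audit_event = {
    "timestamp": "2009-06-29T10:15:02Z", "source": "db01", "severity": 3,
    "message": "select on credit_cards", "db_user": "app", "rows_returned": 5000,
}

for ev in (firewall_event, db_audit_event):
    flat = aggregate(ev)
    lost = set(ev) - set(flat)  # detail discarded by aggregation
    print(flat["source"], "lost fields:", sorted(lost))
```

The aggregated records are uniform and compact, which is exactly why reporting over them is fast – and exactly why forensic questions like "which database user pulled those rows?" can no longer be answered from them.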
Sure, it reduced storage requirements and sped up reporting, but those were the vendor's problems more than the customer's. The SIEM marketplace has gotten beyond this point, and it is no longer a segment struggling for an identity. The offerings have matured considerably in the last 3-4 years, and gone is the distinction between SIM, SEM, and log management. Now you offer all three or you don't compete. While you still see some vendors pushing to differentiate one core value proposition over another, most recognize the convergence as a requirement, as evidenced by this excellent article from Dominique Levin at LogLogic on the convergence of SIEM and log management, as well as this IANS interview with Chris Petersen of LogRhythm. The convergence is necessary if you are going to meet the requirements and customer expectations. While I was more interested in some of the problems SIEM has faced over the years, I have to acknowledge the point Mike was making in his post: the SIEM market is being hurt as platforms are oversold. Are vendors over-promising, per Dark Reading? You bet they are, but when have you met a successful software salesperson who didn't oversell to some degree? A common example I used to see was sales teams claiming they offered DLP-equivalent value. While some vendors pay lip service to the ability to provide 'deep content inspection' and business analytics, we need to be clear that regular expression checks are not deep content analysis, and capturing network packets is a long way from providing transactional analysis for fraud detection or policy compliance. What gets oversold in any given week will vary, but any technology where the customer has limited understanding of the real day-to-day issues is a ripe target. Conversely, I find the customers I speak with equally guilty, as they encourage this overselling behavior.
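To illustrate why regular expression checks fall short of deep content analysis, here is a hedged Python sketch (my own example, not any product's implementation): a typical 'card number' regex flags any 16-digit run, while even one minimal step deeper – a Luhn checksum – rejects most random matches, and real content analysis would go further still by examining surrounding context.

```python
import re

# A typical 'content inspection' regex: any 16-digit run looks like a card number.
CARD_RE = re.compile(r"\b\d{16}\b")

def luhn_valid(digits: str) -> bool:
    """Luhn checksum: one small step beyond pure pattern matching."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

text = "order id 1234567812345678, card 4532015112830366"

for m in CARD_RE.findall(text):
    if luhn_valid(m):
        print(m, "passes Luhn - plausible card number")
    else:
        print(m, "regex match only - likely a false positive")
```

The order ID matches the regex but fails the checksum; a product that stops at the regex would flag both. That gap between pattern matching and actual content understanding is exactly what gets glossed over in the sales pitch.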
SIEM platforms are at the point where they can collect just about every meaningful piece of event data within the enterprise, and they will continue to expand what is possible in analysis and applicability. Customers are not stupid – they see what is possible with the platforms, and push vendors as hard as they can to get what they want for less. Think about it this way: if you are a customer looking for tools to assist with PCI-DSS, and a platform cannot a) provide near-real-time analysis, b) provide forensic analysis, and c) safely protect its transaction archives, you move on to the next vendor who can. The first vendor who can (or successfully lies about it) wins. Salespeople are incentivized to win, and telling the customer what they want to hear is a proven strategy. So while they are not stupid, customers do make mistakes, and they need to perform due diligence and challenge vendor claims, or hire someone who can do it for them, to avoid this problem. I am very interested to see how each vendor invests in technology advancement, and what they think the next important step in meeting business requirements will be. What I have seen so far indicates most will "cover more and do more", meaning more platform coverage and more analysis, which is a safe choice. Similarly, most continue to offer more policies, reports, and configurations that speed up deployment and reduce set-up costs. Some have the vision to 'move up the stack' and look at business processing; some will continue to push the potential of correlation; while others will provide meaningful content inspection of the data they already have. Given that a handful of leading vendors in this space are on pretty even footing, which advancement they choose, and how they spin that value, can very quickly alter who leads and who follows.
The value proposition provided by SIEM today is clearer than at any time in the segment's history, and perhaps more than anything else, SIEM platforms are being leveraged for multiple business requirements across multiple business units. That is why we are seeing SIEM expand despite the economic recession, and because many of the vendors are meeting revenue goals, we will see continued investment in the technology.

Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, at the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.