Oracle CVSS: ‘Partial+’ is ‘Useful-’

Oracle announced the April 2011 CPU this week, with just a few moderate security issues for the database. Most DBAs monitor Oracle’s Critical Patch Updates (CPUs) and are already familiar with the Common Vulnerability Scoring System (CVSS). For those of you who are not, it’s a method of calculating the relative risk of software and hardware vulnerabilities, resulting in a score that describes the potential severity if an attacker were to exploit the problem. The scores are provided to help IT and operations teams decide what to patch and when. Vendors are cagey about providing vulnerability information – under the belief that any information helps attackers create exploits – so CVSS is a compromise to help customers without overly helping adversaries. Oracle uses CVSS scoring to categorize vulnerabilities, and publishes the scores with the quarterly release of their CPUs. When Oracle database vulnerabilities are found, Oracle provides the raw data that feeds into the scoring system to generate the score included with the patch announcement.

Most of the DBA community is not happy with the CVSS system, as it provides too little information to make informed decisions. The scoring methodology of assembling ‘base metrics’ with time and environmental variables is regarded as fuzzy logic, intended to obfuscate the truth more than to help DBAs understand risk. The general consensus is that risk scores have low value, but anything with a high score warrants further investigation, and Google and 3rd party researchers become the catalysts for patching decisions. Still, it’s better than nothing, and most DBAs are simply too busy to make much fuss about it, so there is little more than quiet grumbling in the community.

Things seem a bit different with the April 2011 CPU. One of the bugs in this CPU (CVE-2011-0806) was very similar in nature and exploit method to a bug in the previous Oracle patch release (CVE-2010-0903), but had a dramatically lower risk score. The lower score was based on Oracle’s modification of the CVSS scoring system to include a ‘Partial+’ impact metric. I have not spoken to anyone at Oracle about this, so maybe they have a threat model that demonstrates an attacker cannot get out of the compromised database, but I doubt it. It looks like an attempt to “game the system” by producing lower risk scores. Why do I say that? Because a ‘Partial’ rating makes sense if the scope of a vulnerability is localized to a very small part of the database. If it’s the entire database – which is what ‘Partial+’ indicates – pwnage is complete. Lowering CVSS scores by calling the compromise ‘Partial+’ instead of ‘Complete’ deliberately(?) misunderstands the way attackers work. Once they get a foot in the door they automatically start looking for what to attack next. To reduce the risk score you would need to understand what else would be exposed by exploiting this vulnerability. Most people in IT – if they do a threat analysis at all – do it from the perspective of before the exploit. Few fully consider the scope of potential damage if the database were compromised and used against you. I can’t see how ‘Partial+’ makes things better or provides more accurate reporting, but it’s certainly possible the Oracle team has some rationale for the change I have not thought of. To me, though, ‘Partial+’ means a database has become an attacker platform for launching new attacks.
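To see how much a single metric shift moves the needle, here is a minimal sketch of the CVSS v2 base score arithmetic, using the published v2 equations and metric weights. The example vector (network access, low complexity, single authentication) is illustrative only, not Oracle’s actual scoring of these CVEs; the point is simply that downgrading Confidentiality/Integrity/Availability from Complete to Partial knocks several points off an otherwise identical vulnerability.

```python
# CVSS v2 base score sketch, using the published v2 equations and metric weights.
# The example vector (Network / Low complexity / Single auth) is illustrative only.

AV = {"local": 0.395, "adjacent": 0.646, "network": 1.0}   # Access Vector
AC = {"high": 0.35, "medium": 0.61, "low": 0.71}           # Access Complexity
AU = {"multiple": 0.45, "single": 0.56, "none": 0.704}     # Authentication
CIA = {"none": 0.0, "partial": 0.275, "complete": 0.660}   # C/I/A impact weights

def base_score(av, ac, au, c, i, a):
    impact = 10.41 * (1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a]))
    exploitability = 20 * AV[av] * AC[ac] * AU[au]
    f = 0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f, 1)

# Same attack path, only the impact rating changes:
print(base_score("network", "low", "single", "complete", "complete", "complete"))  # 9.0
print(base_score("network", "low", "single", "partial", "partial", "partial"))     # 6.5
```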
And if you have been following any of the breach reports lately, you know most involve a chain of vulnerabilities and weaknesses strung together. Does this change make sense to you?

Software vs. Appliance: Appliances

I want to discuss deployment tradeoffs in Database Activity Monitoring, focusing on the advantages and disadvantages of hardware appliances. It might seem minor, but the delivery model makes a big first impression on customers. It’s the first difference they notice when comparing DAM products, and it’s impressive – those racks of blinking, whirring 1U & 2U machines, neatly stacked in the data center, do stick with you. They cluster in groups, with lots of cool lights, logos, and deafening fans. Sometimes called “pizza boxes” by the older IT crowd, these are basic commodity computers with 1-2 processors, memory, redundant power supplies, and a disk drive or two. Inexpensive and fast, appliances account for more than half the world’s DAM deployments. When choosing between solutions, first impressions make a huge difference to buying decisions, and this positive impression is a big reason appliances have been a strong favorite for years. Everything is self-contained and much of the monitoring complexity can be hidden from view. Basic operation and data storage are self-contained. System sizing – choosing the right processor(s), memory, and disk – is the vendor’s concern, so the customer doesn’t have to worry about it or take responsibility (even if they do have to provide all the actual data…). Further cementing the positive impression, the initial deployment is easier for an average customer, with much less work to get up and running. And what’s not to like? There are several compelling advantages to appliances:

  • Fast and Inexpensive: The appliance is dedicated to monitoring. You don’t need to share resources across multiple applications (or worry that another application will impact monitoring), and the platform can be tailored to its task. Hardware is chosen to fit the requirements of the vendor’s code, and configuration can be tuned to well-known processor, memory, and disk demands. Stripped-down Linux kernels are commonly used to avoid unneeded OS features. Commodity hardware can be chosen by the vendor, based purely on cost/performance considerations. Given equal resources, appliances perform slightly better than software simply because they have been optimized by the vendor and are unburdened by irrelevant features.
  • Deployment: The beauty of appliances is that they are simple to deploy. This is the most obvious advantage, even though it is mostly relevant in the short term. Slide it into the rack, connect the cables, power it up, and you get immediate functionality. Most of the sizing and capacity planning is done for you. Much of the basic configuration is in place already, and network monitoring and discovery are available with little to no effort. The box has been tested, and in some cases the vendor pre-configures policies, reports, and network settings before shipping the hardware. You get to skip a lot of work on each installation. Granted, you only get the basics, and every installation requires customization, but this makes a powerful first impression during competitive analysis.
  • Avoid Platform Bias: “We use HP-UX for all our servers,” or “We’re an IBM shop,” or “We standardized on SQL Server databases.” All the hardware and software is bundled within the appliance and largely invisible to the customer, which helps avoid religious wars over configuration and sidesteps most compatibility concerns. This makes IT’s job easier and avoids conflicts with hardware/OS policies. DAM provides a straightforward business function, and can be evaluated simply on how well it performs that function.
  • Data Security: The appliance is secured prior to deployment. User and administrative accounts still need to be set up, but the network interfaces, web interfaces, and data repositories are all configured by the vendor. There are fewer moving parts and fewer areas to configure, making appliances more secure than their software counterparts as delivered, and simplifying security management.
  • Non-relational Storage: To handle high database transaction rates, non-relational storage within the appliance is common. Raw SQL queries from the database are stored in flat files, one query per line. Not only can records be stored faster in simple files, but the appliance itself avoids the burden of running a relational database. The tradeoff is very fast storage at the expense of slower analysis and reporting (see the sketch at the end of this post).

A typical appliance-based DAM installation consists of two flavors of appliances. The first and most common is small ‘node’ machines deployed regionally – or within particular segments of a corporate network – and focused on collecting events from ‘local’ databases. The second flavor is administration ‘servers’; these are much larger and centrally located, and provide event storage and command and control interfaces for the nodes. This two-tier hierarchy separates event collection from administrative tasks such as policy management, data management, and reporting. Event processing – analysis of events to detect policy violations – occurs either at the node or server level, depending on the vendor. Each node sends (at least) all notable events to its upstream server for storage, reporting, and analysis. In some configurations all analysis and alerting is performed at the ‘server’ layer.

But, of course, appliances are not perfect. Appliance market share is being eroded by software and software-based “virtual appliances”. Appliances have been the preferred deployment model for DAM for the better part of the last decade, but may not be for much longer. There are several key reasons for this shift:

  • Data Storage: Commodity hardware means data is stored on single or redundant SATA disks. Some compliance efforts require storing events for a year or more, but most appliances only support up to 90 days of event storage – and in practice this is often more like 30-45 days. Most nodes rely heavily on central servers for mid-to-long-term storage of events for reports and forensic analysis. Depending on how large the infrastructure is, these server appliances can run out of capacity and performance, requiring multiple servers per deployment. Some server nodes use a SAN for event storage, while others are simply incapable of storing 6-12 months of data. Many vendors suggest compatible SIEM or log management systems to handle data storage (and perhaps analysis of ‘old’ data).
  • Virtualization: You can’t deploy a physical appliance in a virtual network. There’s no TAP or SPAN
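To make the non-relational storage tradeoff above concrete, here is a minimal sketch of the flat-file approach: captured SQL statements are appended one per line (very cheap writes), and any reporting means scanning the files (the slower side of the tradeoff). The file layout and field names are my own illustration, not any vendor’s actual format.

```python
import time

EVENT_LOG = "dam_events.log"  # hypothetical flat file; appliances typically rotate these

def record_event(db_host, db_user, sql_text):
    """Append one captured SQL statement per line -- fast, no relational engine needed."""
    with open(EVENT_LOG, "a") as log:
        log.write(f"{time.time():.3f}\t{db_host}\t{db_user}\t{sql_text}\n")

def report_by_user(user):
    """Reporting means scanning the whole file -- the slow side of the tradeoff."""
    hits = []
    with open(EVENT_LOG) as log:
        for line in log:
            ts, host, event_user, sql_text = line.rstrip("\n").split("\t", 3)
            if event_user == user:
                hits.append((float(ts), host, sql_text))
    return hits

record_event("db01", "app_svc", "SELECT * FROM customers WHERE id = 42")
record_event("db01", "jsmith", "SELECT card_number FROM payments")
print(report_by_user("jsmith"))
```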

Database Trends

This is a non-security post, in case that matters to you. A few days ago I was reading about a failed telecom firm ‘refocusing’ its business and technology to become a cloud database provider. I’m thinking that’s the last frackin’ thing we need. Some opportunistic serial start-up-tard can’t wait to fail the first time, and wants to skip ahead to not one but two hot trends. Smells like 1999. Of course they landed an additional $4M; couple ‘cloud’ with a modular database and it’s a no-lose situation – at least for landing venture funding. So why do we need vendor #22 jumping onto the database-in-the-cloud bandwagon? I visited the Xeround site, and after looking at their cloud database architecture … damn, it appears solid. Think of a more modular MySQL. Or better yet, Amazon Dynamo with a less myopic focus on search and content delivery. Modular back-end storage options, multiple access nodes disassociated from the query engines, and multiple API handlers. The ability to mix and match components to form a database engine suited to the task at hand makes more sense than the “everything all the time” model we have with relational vendors. I don’t see anything novel here, just a solid assemblage of features. To fully take advantage of an elastic, multi-zone, multi-tenant, pay-as-you-go cloud service, a modular, dynamic database is more appropriate. Notice that I did not say ‘requirement’ – you can run Oracle as an AMI on Amazon too, but that’s neither modular nor nimble in my view.

The main point I want to make is that the next generation of databases is going to look more like this and less like Oracle and IBM DB2. The core architecture described embodies a “use just what you need” approach, and allows you to tailor the database to fit the application service model. And don’t mistake me for yet another analyst claiming that relational database platforms are dead. I have taken criticism in the past because people felt I was indicating relational platforms had run their course, but that’s not the case. It’s more like the way RISC concepts appeared in CISC processors to make them better, but did not supersede the original as promised. NoSQL concepts are pushing the definition of what ‘database’ means. And we see all these variants because the relational platforms are not a good fit for either the application model or cloud service delivery models. Expect many of the good NoSQL ideas to show up in relational platforms as the next evolutionary step. For now, the upstarts are pointing the way. Note that this is not an endorsement of the Xeround technology. Frankly I am too busy to load up an AMI and try their database to see if it works as advertised. And their feature comparison is kinda BS. But conceptually I think this model is on track. That’s why we will see many new database solutions on the market, as many firms struggle to find the right mix of features and platform options to meet the requirements of application developers and cloud computing customers.
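As a purely illustrative sketch of the “mix and match components” idea (not Xeround’s actual design, and with made-up interface names), a modular engine roughly means the storage back end, query layer, and API handler are separate pluggable pieces composed per workload:

```python
from abc import ABC, abstractmethod

class StorageBackend(ABC):
    """Pluggable back-end storage: in-memory, local disk, object store, etc."""
    @abstractmethod
    def put(self, key, value): ...
    @abstractmethod
    def get(self, key): ...

class MemoryStore(StorageBackend):
    """Simplest possible back end, used here purely for illustration."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data.get(key)

class KeyValueAPI:
    """One of several possible API handlers (REST, SQL-ish, memcached-style, ...)."""
    def __init__(self, store: StorageBackend):
        self.store = store
    def handle(self, command, key, value=None):
        return self.store.put(key, value) if command == "set" else self.store.get(key)

# Compose only the pieces the workload needs -- "use just what you need".
api = KeyValueAPI(MemoryStore())
api.handle("set", "user:42", {"name": "Ada"})
print(api.handle("get", "user:42"))
```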

Software vs. Appliance: Understanding DAM Deployment Tradeoffs

One thing I don’t miss from my vendor days in the Database Activity Monitoring market is the competitive infighting. Sure, I loved doing the competitive analyses to see how each vendor viewed itself, and how they were all trying to differentiate their products. I did not enjoy going into a customer shop after a competitor “poisoned the well” with misleading statements, evangelical pitches touting the one right way to tackle a problem, or flat-out lies. Being second into a customer account meant having to deal with the dozen land mines left in their minds, and explaining those issues just to get even. The common land mines were about performance, lack of impact on IT systems, and platform support. The next vendor in line countered with claims about architectures that did not scale, difficulties in deployment, inability to collect important events, and the management complexity of every other product on the market. The customer often cannot determine who’s lying until after they purchase something and see whether it does what the vendor claimed, so this game continues until the market reaches a certain level of maturity.

With Database Activity Monitoring, the appliance vs. software debate is still raging. It’s not front and center in most product marketing materials, and it’s not core to solving most security challenges. It is positioned as an advantage behind the scenes, especially during bake-offs between vendors, to undermine competitors. The criticism is based not on the way events are processed, the UI, or event storage – but simply on the deployment model. Hardware is better than software. Software is better than hardware. This virtual appliance is just as good as hardware. And so on. This is an area where I can help customers understand the tradeoffs of the different models. Today I am kicking off a short series to discuss the tradeoffs between appliance, software, and virtual appliance implementations of Database Activity Monitoring systems. I’ll research the current state of the DAM market and highlight the areas you need to focus on to determine which is right for you. I’ll also share some personal experiences that illustrate the difference between the theoretical and the practical. The series will be broken into four parts:

  • Hardware: Discussion of hardware appliances dedicated to Database Activity Monitoring. I’ll cover the system architecture, common deployment models, and setup. Then we’ll delve into the major benefits and constraints of appliances, including performance, scalability, architecture, and disaster recovery.
  • Software: Contrasting DAM appliances with software architecture and deployment models; then covering pros and cons including installation and configuration, flexibility, scalability, and performance.
  • Virtual Appliances: Virtualization and cloud models demand adaptation for many security technologies, and DAM is no different. Here I will discuss why virtual appliances are necessary – contrasting them with hardware-based appliances – and cover the practical considerations that crop up.
  • Data Collection and Management: A brief discussion of how data collection and management affect DAM. I will focus on areas that come up in competitive situations and tend to confuse buying decisions.

I have been an active participant in these discussions over the last decade, and I worked for a DAM software provider. As a result I need to acknowledge, up front, my historical bias in favor of software.
I have publicly stated my preference for software in the past, based upon my experiences as a CIO and an author of DAM technology. As an analyst, however, I have come to recognize that there is no single ‘best’ technology. My own experiences sometimes differ from customer reality, and I understand that every customer has its own preferred way of doing things. But make no mistake – the deployment model matters! That said, there is no single ‘best’ model. Hardware, software, and virtual appliance – each has advantages and disadvantages. What works for each customer depends on its specific needs. And just like vendors, customers have their own biases. What matters is what is ‘better’ for the consumer. I will provide a list of pros and cons to help you decide what will work best. I will point out my own preferences (bias), and as always you are welcome to call ‘BS’ on anything in this series you don’t accept. Perhaps more than in any other series I have written at Securosis, I want to encourage feedback from the security and IT practitioner community. Why? Because I have witnessed too many software solutions that don’t scale as advertised. I am aware of several hardware deployments that cost the customer almost 4X the original bid. I am aware of software – my own firm was guilty – so inflexible we were booted from the customer site. I know these issues still occur, so my goal is to help you wade through the competitive puffery. I encourage you to share what you have seen, what you prefer, and why, as it helps the community.

Friday Summary: April 8, 2011

I was almost phished this week. Not by some Nigerian scammer or Russian botnet, but by my own bank. Bundled with both my checking and mortgage statements – with the bank’s name, logos, and phone number – was the warning: “Notice: Credit Report Review Re: Suspicious activity detection”. The letter made it appear that there was ongoing suspicious activity reported by the credit agency, and that I needed to take immediate action. I thought “Crud, now I have to deal with this.” Enclosed was a signature sheet that looked like they wanted permission to investigate and take action. But wait a minute – when does my bank ask for permission? My suspicion awoke. I looked at the second page of the letter, under an electron microscope to read the 10^-6 point fine print, and it turned out suspicious activity was only implied. They were using fear of not acting to scare me into signing the sheet. The letter was a ruse to get me to buy credit monitoring ‘services’ from some dubious partner firm that has been repeatedly fined millions by various state agencies for deceptive business practices. Now my bank – First Usury Depository – is known for new ‘products’ that are actually financial IEDs. Of the 30 fantastic new FUD offerings mailed in the last three years, not one could have saved me money. All would have resulted in higher fees, and all contained traps to hike interest rates or incur hidden charges. But the traps are hidden in the financial terms – they had not stooped to fear before, instead using the lure of financial independence and assurances that I was being very smart. Alan Shimel’s right that we need to be doubly vigilant for phishing scams, just for the wrong reasons. Both phishers and bank executives are looking to make a quick buck by fooling people. They both use social engineering tactics: official-looking scary communications, designed to trigger fear and prompt rushed and careless action. And they both face very low probabilities of jail time. I can’t remember who tweeted “Legitimate breach notification is indistinguishable from phishing”, but it’s true on a number of levels. Phished or FUDded, you’re !@#$ed either way. I have to give First Usury some credit – their attack is harder to detect. I am trained to look at email headers and HTML content, but not so adept at deciphering credit reports and calculating loan-to-value ratios. If I am phished out of my credit card number, I am only liable for the first $50. If I am FUDded into a new service by my bank, it’s $20 every month. Hey, it has worked for AOL for decades… On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences: Mike quoted on metrics in Dark Reading. Adrian’s DAM and Intrusion Defense lesson. Rich on Threatpost talking about the RSA and Epsilon breaches. Adrian’s Securing Databases In The Cloud: Part 4 at Dark Reading.

Favorite Securosis Posts: Rich: Less Innovation Please. We don’t need more crap. We need more crap that works. That we use properly. Mike Rothman: Less Innovation Please. Adrian kills it with this post. Exactly right. “We need to use what we have.” Bravo. Adrian Lane: FireStarter: Now What?

Other Securosis Posts: Always Be Looking. Incite 4/6/2011: Do Work. Fool us once… EMC/RSA Buys NetWitness. Security Benchmarking, Going Beyond Metrics: Collecting Data Systematically. Security Benchmarking, Going Beyond Metrics: Sharing Data Safely. Quick Wins with DLP Light: Technologies and Architectures. Quick Wins with DLP Light: The Process.

Favorite Outside Posts: Rich: IEEE’s cloud portability project: A fool’s errand?
Seriously, do you really think interoperability is in a cloud provider’s best interest? They’ll all push this off as long as possible. What will really happen is smaller cloud vendors will adopt API and functional compatibility with the big boys, hoping you will move to them. Mike Rothman: Jeremiah Grossman Reveals His Process for Security Research. Good interview with the big White Hat. Also other links to interviews with Joanna Rutkowska, HD Moore, Charlie Miller, and some loudmouth named Rothman. Pepper: Creepy really is. You can build a remarkable activity picture / geotrack / slime trail from public photo geolocation tags. Adrian Lane: Incomplete Thought: Cloudbursting Your Bubble – I call Bullshit….

Project Quant Posts: DB Quant: Index. NSO Quant: Index of Posts. NSO Quant: Health Metrics–Device Health. NSO Quant: Manage Metrics–Monitor Issues/Tune IDS/IPS. NSO Quant: Manage Metrics–Deploy and Audit/Validate. NSO Quant: Manage Metrics–Process Change Request and Test/Approve. NSO Quant: Manage Metrics–Signature Management.

Research Reports and Presentations: Network Security in the Age of Any Computing. The Securosis 2010 Data Security Survey. Monitoring up the Stack: Adding Value to SIEM. Network Security Operations Quant Metrics Model. Network Security Operations Quant Report. Understanding and Selecting a DLP Solution. White Paper: Understanding and Selecting an Enterprise Firewall. Understanding and Selecting a Tokenization Solution.

Top News and Posts: The Conde Nast $8M spear phishing scam was mostly buried in the news, but it is a big deal! Something about email addresses being hacked. You may have heard about it from 50 or so of your closest vendors. Albert Gonzalez’s surprise appeal. IBM to battle Amazon in the public cloud. Cyberwars Should Not Be Defined in Military Terms, Experts Warn. Net giants challenge French data law. EMC Acquires NetWitness Corporation.

Blog Comment of the Week: Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Lubinski, in response to Incite: Do Work. “They seem to forget we are all supposed to be on the same team” I work with a few people like this. It makes me wonder if they don’t really think about it and just go on doing what they have been doing for X number of years and consider that good enough. The RSA can get pwnd as easily as the rest of the world, its not like they have users that carry around magical anti-hacker unicorn’s. I see a new buzzword coming on, StuxAPT. 🙂 No?

Less Innovation Please

It happens every time we have a series of breaches. The ‘innovators’ get press coverage with some brand-new idea for how to stop hackers and catch malicious employees trying to steal data. We are seeing yet another cycle right now, which Rich discussed yesterday in FireStarter: Now What? The sheer idiocy of Wired Magazine’s Paranoia Meter made me laugh out loud. Not that monitoring should not be done, but the concept of monitoring users’ physical traits to identify bad behavior takes a lot more effort and is also error-prone. Looking at posture, mouse movements, and keystrokes to judge state of mind, then using that to predict data theft? Who could believe in that? It baffles me. User behavior in the IT realm does not need to be measured in terms of eye movement, typing speed, or shifting in one’s seat – if it did, we would need to round up all the 3rd graders in the world because we’d have a serious problem. Worse, the pitch is clearly a marketing attempt to capitalize on WikiLeaks and HBGary – the whole thing reminds me more than a little of South Park’s ‘It’. Behavior analysis of resource usage is quite feasible without spy cameras and shoving probes where they don’t belong. We can collect just about every action a user takes on the network – and, if we choose, from endpoints and applications as well – all of which is simpler, more reliable, and cheaper than adding physical sensors or interpreting their output. It’s completely feasible to analyze actual (electronic) user actions – rather than vague traits with unclear meaning – in order to identify behavioral patterns indicating known attacks and misuse. Today we mostly see attribute-based analysis (time, location, document type, etc.), but behavioral profiles can be derived to use as a template for identifying good or bad acts, and used to validate current activity. How well this all works depends more on your requirements and available time than on the capabilities of particular tools. What angers me here is the complete lack of discussion of SIEM, File Activity Monitoring, Data Loss Prevention, or Database Activity Monitoring – all four technologies exist today and don’t rely upon bizarre techniques to collect data or pseudoscience to predict crime. Four techniques with flexible analysis capabilities on tangible metrics. Four techniques that have been proven to detect misuse in different ways. We don’t really need more ‘innovative’ security technologies as Wired suggests. We need to use what we have. Often we need it to be easier to use, but we already have good capabilities for solving these problems. Many of these tools have been demonstrated to work. The impediments are cost and effort – not lack of capabilities.
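As a rough illustration of the attribute-based approach described above (collected electronic actions checked against a per-user profile), here is a minimal sketch. The attributes (hour of day, document type, daily volume) and thresholds are invented for the example; real SIEM/DAM/DLP policies are far richer.

```python
from collections import defaultdict

# Per-user baseline built from previously observed activity (attribute-based profile).
baseline = defaultdict(lambda: {"hours": set(), "doc_types": set(), "max_daily_files": 0})

def learn(user, hour, doc_type, files_today):
    """Fold an observed action into the user's profile."""
    profile = baseline[user]
    profile["hours"].add(hour)
    profile["doc_types"].add(doc_type)
    profile["max_daily_files"] = max(profile["max_daily_files"], files_today)

def check(user, hour, doc_type, files_today):
    """Flag activity that falls outside the user's learned profile."""
    profile = baseline[user]
    alerts = []
    if hour not in profile["hours"]:
        alerts.append("unusual time of day")
    if doc_type not in profile["doc_types"]:
        alerts.append(f"never touches {doc_type} files")
    if files_today > 2 * max(profile["max_daily_files"], 1):
        alerts.append("bulk access well above baseline")
    return alerts

learn("jsmith", 10, "invoice", 25)
learn("jsmith", 14, "invoice", 40)
print(check("jsmith", 2, "source_code", 500))
# ['unusual time of day', 'never touches source_code files', 'bulk access well above baseline']
```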

Comments on Ponemon’s “What Auditors think about Crypto”

The Ponemon Institute has released a white paper, What Auditors Think about Crypto (registration required). I downloaded it and took a cursory look at the results. My summary of the report is “IT auditors rely on encryption, but key management can be really hard”. No shock there. A client passed along a TechTarget blog post in which Larry Ponemon is quoted as saying auditors prefer encryption, but worded in a way that makes the study sound like a comparison between encryption and tokenization. So I dove deep into the contents to see if I had missed something. Nope. The study does not compare encryption to tokenization, but Larry’s juxtaposition implies it does. The quotes from the TechTarget post are as follows: “Encryption has always been a coveted technology to auditors, but organizations that have problems with key management may view tokenization as a good alternative” and “Tokenization is an up and coming technology; we think PCI DSS and some other compliance requirements will allow tokenization as a solid alternative to encryption” and “In general auditors in our study still favor encryption in all the different use cases that we examined”. All of which are technically true but misleading. If you had to choose one technology over another for all use cases, I don’t know of a security professional who wouldn’t choose encryption, but that’s not a head-to-head comparison. Tokenization is a data replacement technology; encryption is a data obfuscation technology. They serve different purposes. Think about it this way: there is no practical way for tokenization to protect your network traffic, and it would be a horrible strategy for protecting backup tapes. You can’t build a VPN with tokenization – the best you could do would be to use access tokens from a Kerberos-like service. That does not mean tokenization won’t be the best way to secure data at rest now or in the future. Acknowledging that encryption is essential sometimes, and that auditors rely on it, is a long way from establishing that encryption is the better or preferable technology in the abstract. Larry’s conclusion is specious. Let’s be clear: the vast majority of discussion around tokenization today has to do with credit card replacement for PCI compliance. The other forms of tokens, used for access and authorization, have been around for many years and are niche technologies. It’s just not meaningful to compare cryptography in general against tokenization within PCI deployments. A meaningful comparison of popularity between encryption and tokenization would need to be confined to areas where they can solve equivalent business problems. That’s not GLBA, SOX, FISMA, or even HIPAA; currently it’s specifically PCI-DSS. Note that only 24% of those surveyed were PCI assessors – the people who look at credit card security on a daily basis and compare the relative merits of the two technologies for the same use case. 64% had over ten years’ experience, but PCI audits have been common for less than 5. The survey population is clearly general auditors, which doesn’t seem to be an appropriate audience for ascertaining the popularity of tokenization – especially if they were thinking of authorization tokens when answering the survey. Of the customers who ask me about tokenization, more than 70% intend to use it to help reduce the scope of PCI compliance. Certainly my sample size is smaller than the Ponemon survey’s. And the folks I speak with are in retail and finance, so they are subject to PCI-DSS compliance.
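To illustrate the “data replacement vs. data obfuscation” distinction in code: a token is a random surrogate with no mathematical relationship to the original value (the mapping lives in a vault), while ciphertext can always be reversed by anyone holding the key. This is a minimal sketch, not a PCI-grade implementation; the vault here is just an in-memory dictionary, and the XOR step is a stand-in purely to show key-based reversibility, not a real cipher.

```python
import secrets

# --- Tokenization: data replacement. The token is random; the only way back to the
# --- real value is the vault lookup, so systems holding only tokens fall out of scope.
vault = {}  # token -> real value (in practice a hardened token vault, not a dict)

def tokenize(pan):
    token = "tok_" + secrets.token_hex(8)
    vault[token] = pan
    return token

def detokenize(token):
    return vault[token]

# --- Encryption: data obfuscation. Anyone with the key can reverse it, anywhere.
# --- XOR with a random pad is used here ONLY to show reversibility; use a vetted
# --- cipher (e.g. AES-GCM from a real crypto library) in practice.
def encrypt(pan, key):
    return bytes(b ^ k for b, k in zip(pan.encode(), key))

def decrypt(ciphertext, key):
    return bytes(b ^ k for b, k in zip(ciphertext, key)).decode()

pan = "4111111111111111"
key = secrets.token_bytes(len(pan))

token = tokenize(pan)
print(token, "->", detokenize(token))                             # reversible only via the vault
print(encrypt(pan, key), "->", decrypt(encrypt(pan, key), key))   # reversible via the key
```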
At Securosis we predict that tokenization will replace encryption in many PCI-DSS regulated systems. The bulk of encryption installations – which have nothing to do with PCI-DSS and are inappropriate use cases for tokenization – will be unchanged. At a macro level these technologies go hand in hand, but as tokenization grows in popularity, in suitable situations it will often be chosen over encryption. Note that encryption systems require some form of key management, and Thales, the sponsor of Ponemon’s study, is a key vendor in the HSM space, which dominates key management for encryption deployments. Finally, there is some useful information in the report. It’s worth a few minutes to review, to get some insight into decision makers and where funding is coming from. But it’s just not possible to make a valid comparison between tokenization and encryption from this data.

Friday Summary: March 25, 2011

I am probably in the minority, but when I buy something I think of it as mine. I paid for it, so I own it. I buy a lot of stuff I am not totally happy with, but that’s the problem with being a tinkerer. Usually I think I can improve on what I purchased, or customize my purchase to my liking. This could be as simple as adding sugar to my coffee, or having a pair of pants altered, or changing the carburetor on that rusty Camaro in my backyard. More recently it’s changing game save files or backing out ‘fixes’ that break software. It’s not the way the manufacturer designed or implemented it, but it’s the way I want it. One man’s bug is another man’s feature. But as the stuff I bought is mine – I paid for it, after all – I am free to fix or screw things up as I see fit. Somewhere along the line, the concept of ownership was altered. We buy stuff, then treat it as if it’s not ours. I am not entirely sure when this concept went mainstream, but I am willing to bet it started with software vendors – you know, the ones who write those End User License Agreements that nobody reads, because that would be a waste of time and delay installing the software they just bought. I guess this is why I am so bothered by stories like Sony suing some kid – George Hotz – for altering a PlayStation 3. Technically they are not pissed off at him for altering the function of his PlayStation – they are pissed that he taught others how to modify their consoles so they can run whatever software they want. The unstated assumption is that anyone who would do such a thing is a scoundrel and a criminal, out to pirate software and destroy hard-working companies (and all their employees! Personally!). These PlayStations were purchased – personal property if you will – and their owners should be able to do as they see fit with their possessions. Don’t like Sony’s OS and want to run Linux? Those customers bought their PS3s (and Sony promised support, then reneged), so they should be able to run what they want without interference. It’s not that George is trying to resell the PlayStation code, or copy the PlayStation and sell a derived work. He’s not reselling Halo or an Avatar Blu-ray; he’s altering his own stuff to suit his needs, and then sharing. This is not an issue of content or intellectual property, but of personal property. Sony should be able to void his warranty, but coming after him legally is totally off-the-charts insane IMO. Now I know Sony has better lobbyists than either George or myself, so it’s much more likely that laws – such as the Digital Millennium Copyright Act (DMCA) – reflect their interests rather than ours. I just can’t abide the notion that someone sells me a product and then demands I use it only as they see fit. Especially when they want to prohibit my enjoyment because there is a possibility someone could run pirated software. If you take my money, I am going to add hard drives or memory or software as I like. If companies like Sony don’t like that, they should not sell the products. Cases like this call the legitimacy of the DMCA into question. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences: Rich in Macworld on private browsing. Protect your privacy: online shopping – Mike’s first Macworld article. Rich quoted in the New York Times on RSA. A great response to Rich’s Table Stakes article – John Strand does a good job of presenting his own spin. Index link to Mike & Rich’s Macworld series on privacy. Adrian’s Dark Reading article on the McAfee acquisition.
Rich quoted on RSA breach. Adrian’s Dark Reading post on DB Security in the cloud.

Favorite Securosis Posts: Rich: Agile and Hammers – They Don’t Fix Stupid. I still don’t fully get how people glom on to something arbitrary and turn it into a religion. Mike Rothman: Agile and Hammers: They Don’t Fix Stupid. Rare that Adrian wields his snark hammer. Makes a number of great points about people – not process – FAIL. Gunnar Peterson: The CIO Role and Security. Adrian Lane: Crisis Communications.

Other Securosis Posts: FAM: Additional Features. McAfee Acquires Sentrigo. Incite 3/23/2011: SEO Unicorns. RSA Releases (Almost) More Information. FAM: Core Features and Administration, Part 1. Death, Taxes, and M&A. How Enterprises Can Respond to the RSA/SecurID Breach. Network Security in the Age of Any Computing: Index of Posts.

Favorite Outside Posts: Rich: Why Stuxnet Isn’t APT. Mike Cloppert is one of the few people out there talking about APT who actually knows what he’s talking about. Maybe some of those vendor marketing departments should read his stuff. Mike Rothman: The MF Manifesto for Programming, MF. Back to basics, MFs. And that is one MFing charming pig. Adrian Lane: A brief introduction to web “certificates”. While I wanted to pick the MF Manifesto because it made me laugh out loud, Robert Graham’s post on cryptography and his succinct explanation of the Comodo hack was too good to pass up.

Project Quant Posts: NSO Quant: Index of Posts. NSO Quant: Health Metrics–Device Health. NSO Quant: Manage Metrics–Monitor Issues/Tune IDS/IPS. NSO Quant: Manage Metrics–Deploy and Audit/Validate.

Research Reports and Presentations: The Securosis 2010 Data Security Survey. Monitoring up the Stack: Adding Value to SIEM. Network Security Operations Quant Metrics Model. Network Security Operations Quant Report. Understanding and Selecting a DLP Solution. White Paper: Understanding and Selecting an Enterprise Firewall. Understanding and Selecting a Tokenization Solution. Security + Agile = FAIL Presentation.

Top News and Posts: Dozens of exploits released for popular SCADA programs. Twitter, Javascript Defeat NYT’s $40m Paywall. Apple patches unused Pwn2Own bug, 55 others in Mac OS. Spam Down 40 Percent in Rustock’s Absence. The Challenge of Starting an Application Security Program. Hackers make off with TripAdvisor’s membership list. Talk of Facebook Traffic Being Detoured. Firefox 4 Content Security Policy feature. Firefox

McAfee Acquires Sentrigo

McAfee announced this morning its intention to acquire Sentrigo, a Database Activity Monitoring company. McAfee has had a partnership with Sentrigo for a couple of years, and both companies have cooperatively sold the Sentrigo solution and developed high-level integration with McAfee’s security management software. McAfee’s existing enterprise customer base has shown interest in Database Activity Monitoring, and DAM is no longer as much of an evangelical sale as it used to be. Sentrigo is a small firm, and integration of the two companies should go smoothly. Despite persistent rumors of larger firms looking to buy in this space, I am surprised that Sentrigo is the one McAfee finally acquired. McAfee, Symantec, and EMC are the names that kept popping up as interested parties, but Sentrigo wasn’t the target discussed. Still, this looks like a good fit, because the core product is very strong and it fills a need in McAfee’s product line. The aspects of Sentrigo that are a bit scruffy or lack maturity are the areas McAfee would want to tailor anyway: workflow, UI, reporting, and integration. I have known the Sentrigo team for a long time. Not many people know that I tried to license Sentrigo’s memory scanning technology – back in 2006, while I was at IPLocks. Several customers used the IPLocks memory scanning option, but the scanning code we licensed from BMC simply wasn’t designed for security. I heard that Sentrigo had architected their solution correctly and wanted to use it. Alas, they were uninterested in cooperating with a competitor for some odd reason, but I have maintained good relations with their management team since. And I like the product because it offers a (now) unique option for scraping SQL right out of the database memory space. But there is a lot more to this acquisition than just memory scraping agents. Here are some of the key points you need to know about:

Key Points about the Acquisition

McAfee is acquiring a Database Activity Monitoring (DAM) technology to fill out their database security capabilities. McAfee obviously covers the endpoint, network, and content security pieces, but was missing some important pieces for datacenter application security. The acquisition advances their capabilities for database security and compliance, filling one of the key gaps. Database Activity Monitoring has been a growing requirement in the market, with buying decisions driven equally by compliance requirements and response to the escalating use of SQL injection attacks. Interest in DAM was previously driven by insider threats and Sarbanes-Oxley, but market drivers are shifting to blocking external attacks and compensating controls for PCI. Sentrigo will be wrapped into the Risk and Compliance business unit of McAfee, and I expect deeper integration with McAfee’s ePolicy Orchestrator. The selling price has not been disclosed. Sentrigo is one of the only DAM vendors to build cloud-specific products (beyond a simple virtual appliance) – the real deal, not cloudwashing.

What the Acquisition Does for McAfee

McAfee has responded to Oracle’s acquisition of Secerno, and can now offer a competitive product for activity monitoring as well as virtual patching of heterogeneous databases (e.g., Oracle, IBM, etc.). While it’s not well known, Sentrigo also offers database vulnerability assessment. Preventative security checks, patch verification, and reports are critical for both security and compliance. One of the reasons I like the Sentrigo technology is that it embeds into the database engine.
For some deployment models, including virtualized environments and cloud deployments, this means you don’t need to worry about whether the underlying environment supports your monitoring functions. Most DAM vendors offer security sensors that move with the database in these environments, but they are embedded at the OS layer rather than the database layer. As with transparent database encryption, Sentrigo’s model is a bit easier to maintain.

What This Means for the DAM Market

Once again, we have a big-name technology company investing in DAM. Despite the economic downturn, the market has continued to grow. We no longer estimate the market size, as it’s too difficult to find real numbers from the big vendors, but we know it passed $100M a while back. We are left with two major independent firms that offer DAM: Imperva and Application Security Inc. Lumigent, GreenSQL, and a couple other firms remain on the periphery. I continue to hear acquisition interest, and several firms still need this type of technology. Sentrigo was a late entry into the market. As with all startups, it took them a while to fill out the product line and get the basic features/functions required by enterprise customers. They have reached that point, and with the McAfee brand behind them, there is now another serious competitor to match up against Application Security Inc., Fortinet, IBM/Guardium, Imperva, Nitro, and Oracle/Secerno.

What This Means for Users

Sentrigo’s customer base is not all that large – I estimate fewer than 200 customers worldwide, with the average installation covering 10 or so databases. I highly doubt there will be any technology disruption for existing customers. I also highly doubt this product will become shelfware in McAfee’s portfolio, as McAfee has internally recognized the need for DAM for quite a while, and has been selling the technology already. Any existing McAfee customers using alternate solutions will be pressured to switch over to Sentrigo, and I imagine will be offered significant discounts to do so. Sentrigo’s DAM vision – for both functionality and deployment models – is quite different from its competitors’, which will make it harder for McAfee to convince customers to switch. The huge upside is the possibility of additional resources for Sentrigo development. Slavik Markovich’s team has been the epitome of a bootstrapping start-up, running a lean organization for many years now. They deserve congratulations for making it this far on roughly $20M in VC funding. They have been slowly and systematically adding enterprise features such as user management and reporting, broadening platform support, and finally adding vulnerability assessment scanning. The product is still a little rough around the edges, and lacks some maturity in UI and capabilities compared to Imperva, Guardium, and AppSec – those products have had years more to flesh out their capabilities. In a

Agile and Hammers: They Don’t Fix Stupid

I did not see the original Agile Ruined My Life post until I read Paul Krill’s An agile pioneer versus an ‘agile ruined my life’ critic response today. I wish I had, as I would have used Mr. Markham’s post as an example of the wrong way to look at Agile development in my OWASP and RSA presentations. Mr. Markham raises some very good points, but in general the post pissed me off: it reeks of irresponsibility and unwillingness to own up to failure. But rather than go off on a tirade covering the 20 reasons the post exhibits a lack of critical thinking, I’ll take the high road. Jon Kern’s quotes in the response hit the nail on the head, but did not include an adequate explanation of why, so I offer a couple of examples. I make two points in my Agile development presentation which are relevant here. First: the scrum is not the same thing as Agile. Scrum is just a technique used to foster face-to-face communication. I like scrum and have had good success with it because a) it promotes a subtle form of peer pressure in the group, and b) developers often come up with ingenious solutions when discussing problems in an open forum. Sure, it embodies Agile’s quest for simplicity and efficiency, but that’s just a facility – not the benefit. Scrum is just a technique, and some Agile techniques work in particular circumstances, while others don’t. For example, I have never gotten pair programming to work. That could be due to the way I paired people up, or the difficulty of those projects might have made pairs impractical, or perhaps the developers were just lazy (which definitely does happen). The second point is that people break process. Mr. Markham does not accept that, but sorry, there are just not that many variables in play here. We use process to foster and encourage good behavior, to minimize poor behaviors, and to focus people on the task at hand. That does not mean process always wins. People are brilliant at avoiding responsibility and disrupting events. I couch Agile pitfalls in terms of SDL – because I am more interested in promoting secure code development – but the issues I raise cause general project failures as well. Zealots. Morons. Egoists. Unwitting newbies. People paranoid about losing their jobs. All these personality types figure into the success (or lack thereof) of Agile teams. Sometimes Agile loses to that passive-aggressive bastard at the back of the room. Maybe you need process adjustments, or perhaps better process management, or just maybe you need better people. If you use a hammer to drive a screw into the wall, don’t be surprised when things go wrong. Use the wrong tool or technique to solve a problem, and you should expect bad things to happen. Agile techniques are geared toward reducing complexity and improving communication; improvements in those two areas mean a better likelihood of success, but there’s no guarantee – especially when communication and complexity are not your problem. Don’t blame the technique – or the process in general – if you don’t have the people to support it.

Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.