By Adrian Lane
Dan Geer wrote an article for SC Magazine, “The enterprise information protection paradigm,” discussing the fundamental disconnect between the value derived from data and the investment made to protect it. He asks the important question: if we reap ever-increasing returns on information, where is the investment to protect the data? Dan has an eloquent take on a long-standing viewpoint in the security community: that Enterprise Information Protection (EIP) is a custodial responsibility of corporations, as data is core to revenue generation and thus to the company’s value.
Dan’s point that we don’t pay enough attention to (and spend enough money and time on) data security is inarguable – we lose a lot of data, and it costs. His argument that we should concentrate on unifying existing technologies (such as encryption, audit, NAC, and DLP), however, is flawed – we already have plenty of this technology, so more of the same is not the answer.
Part of our problem is that in the real world, inherent security is only part of the answer. We also have external support, such as police who arrest bank robbers – it’s not entirely up to the bank to stop bank robbers. In the computer security world – for various reasons – legal enforcement is highly problematic and much less aggressive than for physical crimes like robbery.
I don’t have a problem with Dan’s reasoning on this issue. His argument for the motivation to secure information is sound. I do, however, take issue with a couple of the examples he uses to bridge his reasoning from one point to the next.
First, Dan states, “We have spent centuries learning about securing the physical world, plus a few years learning about securing the digital world. What we know to be common to both is this: That which cannot be tolerated must be prevented.” He puts that in very absolute terms, and I do not believe it is true in either the physical or electronic realms. For example, our society absolutely does not tolerate bank robberies. However, preventative measures are minuscule. The banks are open for business and pretty much anyone can walk in the door. Rather than prevent a robbery, we collect information from witnesses, security cameras, and other forensic sources – to find, catch, and punish bank robbers. We hope that the threat of the penalty will deter most potential robbers, and sound police work will allow us to catch up with the remainder who are daring enough to commit these crimes.
While criminals are very good at extracting real value from virtual objects, law enforcement has done a crappy job of investigating and punishing – and thereby indirectly deterring – crimes in and around data theft. Those two crucial factors, investigation and punishment, are largely absent from electronic crimes compared to physical ones. It’s not that we can’t – it’s that we don’t.
This is not to undermine Dan’s basic point – that enterprises which derive value from data are not protecting themselves sufficiently, and are contributorily negligent. But stating that “The EIP mechanism – an unblinking eye focused on information – has to live where the data lives” and “EIP unifies data leakage prevention (DLP), network access control (NAC), encryption policy and enforcement, audit and forensics” argues that network and infrastructure security are the answer. As Gunnar Peterson has so astutely pointed out many times, while the majority of IT spending is on data management applications, our security spending is predominantly in and around the network. That means the investments made today secure data at rest and data in motion, rather than data in use. Talking about EIP as an embodiment of NAC, DLP, and encryption policy reinforces the same suspect security investment choices we have been making for some time. We know how to effectively secure data “at that point where data-at-rest becomes data-in-motion”. The problem is we suck “at the point of use where data is truly put at risk” – and that’s not network or infrastructure, but rather applications.
A basic problem with data security is that we do not punish crimes at anywhere near the same rate as we do physical crimes. There is no (or almost no) deterrence, because examples of capturing and punishing crimes are missing. Further, investment in data security is typically misguided. I understand how this happens – protecting data in use is much harder than encrypting TCP/IP or disk drives – but where we invest is a critical part of the issue. I don’t want this to come across as disagreement with Dan’s underlying premise, but I do want to stress that we need to make more than one evolutionary shift.
Posted at Thursday 25th February 2010 5:20 am
Auditors got you down? Struggling to manage all those pesky database-related compliance issues?
Thursday I’m presenting a webcast on Pragmatic Database Compliance and Security. It builds off the base of Pragmatic Database Security, but is more focused on compliance, with top tips for your favorite regulations.
It is sponsored by Oracle, and you can sign up here.
We’ll cover most of the major database security domains, and I’ll show specifically how to apply them to major regulations (PCI, HIPAA, SOX, and privacy regs). If you are a DBA or security professional with database responsibilities, there’s some good stuff in here for you.
Posted at Wednesday 24th February 2010 11:51 pm
By Mike Rothman
The fun is just beginning. We continue our trip through the Securosis Guide to the RSA Conference 2010 by discussing what we expect to see relative to Endpoint Security.
Anti-virus came onto the scene in the early ’90s to combat viruses proliferated mostly by sneakernet. You remember sneakernet, don’t you? Over the past two decades, protecting the endpoint has become pretty big business, but we need to question the effectiveness of traditional anti-virus and other endpoint defenses, given the variety of ways to defeat those security controls. This year we expect many of the endpoint vendors to start espousing “value bundles” and alternative controls such as application whitelisting, while jumping on the cloud bandwagon to address the gap between claims and reality.
What We Expect to See
There are four areas of interest at the show for endpoint security:
The Suite Life: There are many similarities between current endpoint security suites and office automation suites in the early part of the decade. The applications don’t work particularly well, but in order to keep prices up, more and more stuff you don’t need gets bundled into the package. There is no end to that trend in sight, as the leading endpoint agent companies have been acquiring new technologies (such as full disk encryption and DLP) to broaden their suites and maintain their price points. But at the show this year, it’s reasonable to go to your favorite endpoint agent vendor and ask them why they can’t seem to “get ahead of the threat.” Yes, that is a rhetorical question, but we Securosis folks like to see vendors squirm, so that would be a good way to start the conversation. Also be on the lookout for the folks offering “Free AV” and talking about how ridiculous it is to be paying for AV nowadays. Just be aware, the big booths with the Eastern European models don’t come cheap, so they will get their pound of flesh in the form of management consoles and upselling to more full-featured suites (which actually may do something).
The Cloud Messiah: Endpoint vendors aren’t the only ones figuring the ‘cloud’ will save them from all their issues, but they will certainly be talking about how integrating malware defenses into the ‘cloud’ will increase effectiveness and keep the attackers at bay. This is another game of three-card monte, and the endpoint vendors are figuring you won’t know the difference. After you’ve asked the vendor why they can’t stop even simplistic web attacks or detect a ZeuS infection, they’ll probably start talking about “shared intelligence” and the great googly-moogly malware engine in the sky. At this point, ask a pretty simple question: “How do you win this arms race?” With 2-3 million new malware attacks happening this year, how long can this signature-based approach work? That should make for more interesting conversation.
Control Strategies: Given that traditional anti-virus is mostly useless against today’s attacks, you are going to hear a number of smaller application whitelisting vendors start to go more aggressively after the endpoint security companies. But this category (along with USB device control technology) suffers from a perception that the technology breaks applications and impacts user experience. As with every competitive tête-à-tête, there is some truth to that argument. So challenge the whitelisting vendors on how they impact the user experience (or don’t) and whether they can provide similar value to an endpoint security suite (firewall, HIPS, full disk encryption, etc.).
Laptop Encryption: You’ll likely also be hearing about another feature of most of the endpoint suites: full disk encryption (FDE). There will be lots of FUD about the costs of disclosure and why it’s just a lot easier to encrypt your mobile devices and be done with it. For once, the vendor mouthpieces are absolutely right. But this brings us to the question of what features you need, whether FDE should be bundled into your endpoint suite, and how you can recover data when users inevitably lose passwords and devices are stolen. So if you have mobile users (and who doesn’t?), it’s not an issue of whether you need the technology – it’s about the most effective way to procure and deploy it.
For those so inclined (or impatient), you can download the entire guide (PDF). Or check out the other posts in our RSAC Guide: Network Security, Data Security, and Application Security.
Posted at Wednesday 24th February 2010 7:54 pm
By Mike Rothman
It is said that unhappiness results from either not getting what you want, or getting what you don’t want. I’m pretty sure strep throat qualifies as something you don’t want, and it certainly is causing some unhappiness in Chez Rothman. Yesterday, I picked up 4 different antibiotics for everyone in the house except me, which must qualify me for some kind of award at the Publix pharmacy.
I like to think of myself as a reasonably flexible person who can go with the flow – but in reality, not so much. I don’t necessarily have a set schedule, but I know what I need to get done during the day and roughly when I want to work on certain things. But when the entire family is sick, you need to improvise a bit. Unfortunately that is hard for a lot of people, including me. So when the best laid plans of sitting down and cranking out content were subverted by a high-maintenance 6-year-old – who wanted to converse about all sorts of things and wanted me to listen – I needed to engage my patience bone.
Oh yeah, I don’t have a patience bone. I don’t even have a patience toenail. So I got a bit grumpy, snarled a bit, and was generally an ass. The Boss was good about pointing out that I’m under a lot of stress heading into a big conference and that everyone should give me a wide berth, but that’s a load of crap. I had my priorities all screwed up. I needed to take a step back, view this as a positive, and figure this is another great opportunity to work on my patience and show the flexibility that I claim to have. So I chat with my girl when she’s done watching Willy Wonka, and I go out to the pharmacy and get the medicine.
Here is the deal – crap is going to happen. You’ll get sick at the most inopportune time. Or your dog will. Or maybe it’s your kid. Or your toilet will blow up or your washing machine craps out. It’s always something. And there are two ways to deal with it. You can get pissy (like I did this morning), which doesn’t really do anything except make a bad situation worse. My other option was to realize that I’m lucky to have a flexible work environment and a set of partners who can (and do) cover for me. Yes, the latter is the right answer. So I cover at home when I need to and soon enough I’ll be back to my regular routine and that will be good too.
Um, I’m not sure who wrote this post, but I kind of like him.
Photo credit: “Be Flexible” originally uploaded by Chambo25
Incite 4 U
I’d like to say it’s the calm before the storm, but given that 4 out of the 5 people I live with are sick, there’s no calm on the home front, and there’s always the last-minute prep work of getting ready for the RSA Conference that makes the week before somewhat frantic. And that’s a good description of this week thus far.
If you are heading out to San Francisco, check out our Securosis Guide to the RSA Conference 2010 (PDF), or the bite-size chunks as we post them on the blog this week. That should help you get a feel for the major themes and what to look for at the show.
Finally, make sure to RSVP for the Disaster Recovery Breakfast we are hosting on Thursday morning with the fine folks of Threatpost.
Without exploits, what’s the point? – Andy the IT Guy wrote a piece about whether pen tests require the use of exploits. He cites some PCI chapter and verse, coming to the conclusion that exploits are not required for the pen testing requirement of PCI. Whether they are required is up to your assessor, but that misses the point. Yes, exploits can be dangerous and they can knock stuff down. But pen testing using real exploits is the closest you are going to get to a real-world scenario. That old adage that no battle plan survives contact with the enemy – it’s true. Your vulnerability scanner will tell you what’s vulnerable, not what can be exploited, and I can assure you the bad guys don’t just stop once they’ve knocked on your door with Nessus. – MR
IE6 + Adobe = Profit! – An article by Brian Krebs on a new experimental tool to prevent drive-by malware on Windows got me thinking. Blade (BLock All Drive-by Exploits) doesn’t stop the exploit, but supposedly eliminates the ability to install a download without user approval. Assuming it works as advertised, it could be useful, although it won’t stop horny users from installing malware in attempts to view videos of nekked folks. But the interesting part is the statistics from their testing – over 40% of attacks are against IE 6, with a whopping 67% of drive-by attacks targeting Adobe Reader or Flash. If those numbers don’t give you at least a little juice with management to update your applications and get off IE6, or to prioritize Adobe patches, perhaps it’s time to polish the resume. – RM
Socially Inept – Security Barbie had a good post on the Rapid 7 incident in “My ode to Rapid7” where a few sales people Twitter & LinkedIn spammed the bejesus out of the entire security community. Or at least the echo chamber of folks most likely to bitch about it. “Fine, fine. I’m gonna take them off my list of successful people today.” I am not poking fun at Rapid7, but there are strange boundaries of what is appropriate and inappropriate behavior on venues like Twitter. It’s fine to ask my friends what they think of a product or company, but not OK for people I don’t know from that company to offer an opinion. Every corporation out there has a PR and media strategy for social media, and usually approaches it in a totally anti-social way. A corporation acting like it’s my friend on social media is, well, creepy. It’s not like a corporation comes to my house to have a beer and watch a UFC match, especially since I don’t have cable TV. Following tweets to gauge customer acceptance is one thing, but trying to participate with me like we’re buddies is more about managing perceptions than socially interacting. But people representing companies on social media venues is a grey area. Frankly, one of the reasons I don’t tweet more often is much of what interests me in security is now (my) business, and I am uncertain where to draw the line. – AL
Business Advice from Van Halen – This is an awesome way to tell if your vendor isn’t paying attention. It’s the business version of asking if the product supports RFC3514 or RFC2549. A former coworker would ask vendors about LRF support. Similarly, I’ve thrown all sorts of bizarre requirements into contracts and RFIs just to see what the responses are, and whether people are paying attention. What are your indicators that vendors are just going through the motions? – DMort
It’s about the Business, Stupid… – I absolutely love this response from a CIO about why a CISO candidate didn’t get the job. Right, it’s not about ‘us’ and our security problems. It’s about relating value to business problems and showing how security can help the business achieve its goals. It seems a lot of security folks don’t connect with senior management because all they can talk about is how important security is, and how not doing security well puts the company at risk. Read this, and make sure it’s not you with such a myopic view of the business. – MR
An Agile Crust, Tinged with a Risk Reduction and a Side of Backlog – J.D. Meier posted on Agile Security Engineering this week, talking about overlaying security activities on top of an “Agile software cycle”. He broke down security tasks and mapped them to Agile phases. The post raised several red flags for me, because the security tasks mapped to the iteration cycle a) are not performed on every iteration, b) don’t necessarily fit in the iteration timeline, or c) are part of the implicit test-driven development. Agile is good at getting high-priority items attention very fast – the middle ground is the killer. Security ‘stories’ end up on the project backlog with mid-to-low priority levels, and the “buckets” described are never pushed up the queue for web applications because there is no end state for web application development programs. If J.D. is making this work I would love to see a fully fleshed out case study, because this describes a model I find to be broken. – AL
You Can’t Outsource Thinking – Bejtlich tackles whether it makes sense to outsource incident response, and honestly I did a double take. Did someone really ask that? OK, Richard basically says in a very nice and politically correct way that it’s not a very good idea. Being neither nice nor politically correct, I say that’s security career suicide. It gets back to my philosophy that the only thing you can’t outsource is thinking. Everything else is fair game. So you can get some help with your incident response. But you need to run your IR team, just like you need to run your security program, though parts of it can (and should) be outsourced. – MR
Don’t Catch a Social Networking Disease – I am completely fascinated by the larger historical implications of social networking and technology over time. Years ago I wrote a post on the potential political implications once we reach a point where all politicians grew up with social technologies. Can you imagine George W. Bush having to deal with tweets like, “Crashed daddy’s car into tree on lawn and told him he’s an ass. Need more beer.” Andy the IT Guy writes about the dangers of poor social networking policies using an example from reality TV. If you don’t have a clear policy and educate employees (and prospects), you’re leaving the door open for problems. And don’t forget to balance your policies with the need to attract and retain workers, or you might end up like Forrester. – RM
Posted at Wednesday 24th February 2010 5:15 am
By Adrian Lane
Continuing our postings from the Securosis Guide to the RSA Conference 2010, we turn our attention to application security.
Application Security is a nascent market, but data from several recent data breach reports and OWASP studies have disproven the myth of the “Insider Threat”. The primary cause of breaches is poorly executed applications – specifically web applications that rely on complex multi-layered infrastructure. While there is no agreement on which methods and technologies are ‘best’ for securing applications, application developers show growing interest in learning about the available options.
What We Expect to See
A Focus on Web Application Development Security: As a general rule we don’t have very good statistics in security and risk management, but this is changing. With better forensic information, we can show that web application attacks are the leading cause of security breaches. While this has not yet translated into a significant change in security spending, expect to see long lines and greater interest in code security products and education. Vendors will be disappointed to be dealing with lower-level IT and software practitioners who come across as tire-kickers asking too many questions, but these are tomorrow’s buying center! These are the people who will change their applications and deployments to be more secure, not CIOs.
Anti-exploitation: While education in the development community lags regarding what constitutes risky code, tools that identify poor code or provide anti-exploitation will get a lot of attention as they raise the bar without a lot of re-engineering. The tools vary greatly in the depth of their features, how they are deployed, and where in the development cycle they fit. For example, some examine source code, some examine objects while they are compiled or linked, and others offer run-time protection. You will need to ask the vendor what classes of anti-exploitation they provide, and see if their model fits your development framework.
Integrated Assessment and Firewall Technologies: Web application development cycles are so short that full regression testing of new functions is generally impossible. Moreover, test systems fail to mimic live production sites, so many vulnerabilities are missed prior to deployment. This has increased demand for application scanning, and changed it into a never-ending task. The window of time between when a vulnerability is introduced and when it is discovered is very small. In most cases exploitation begins before a fix can be identified, implemented, tested, and rolled out to production servers. To fill the gap, vulnerabilities discovered by application scanners are being fed into web application firewall (WAF) platforms in near-real-time to block attacks while the application fix is underway. Since the 2009 RSA show, the number of WAF vendors who offer dynamic blocking has tripled. The quality of the assessment is still key, but investigate what your WAF provider is offering, how quickly new policies can be deployed, and what the performance impact will be. This is an effective security feature, but it has potential policy management and performance impacts which you need to understand.
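To make that scanner-to-WAF handoff concrete, here is a minimal sketch of a “virtual patch”: a deny rule scoped to the vulnerable page and parameter, generated from a scanner finding. All of the field names and deny patterns below are illustrative assumptions, not any particular scanner’s or WAF’s schema.

```python
import re

def finding_to_waf_rule(finding: dict) -> dict:
    """Translate a scanner finding into a blocking rule a WAF could
    enforce while the real code fix is developed. Deliberately crude:
    real WAF rule languages are far richer than a single regex."""
    deny_patterns = {
        "sql_injection": r"(?i)('|--|;|\bunion\b|\bselect\b)",
        "xss": r"(?i)(<script\b|javascript:|onerror\s*=)",
    }
    return {
        "path": finding["path"],
        "parameter": finding["parameter"],
        "deny_pattern": deny_patterns[finding["type"]],
        "action": "block",
    }

def request_blocked(rule: dict, path: str, params: dict) -> bool:
    """Would this rule block the given request?"""
    if path != rule["path"]:
        return False
    value = params.get(rule["parameter"], "")
    return re.search(rule["deny_pattern"], value) is not None
```

The design trade-off is exactly the one described above: a virtual patch can be deployed in minutes, but a sloppy deny pattern blocks legitimate traffic, which is why assessment quality and policy management still matter.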
For those so inclined (or impatient), you can download the entire guide (PDF). Or check out the first post in the RSAC Guide on Network Security.
Posted at Tuesday 23rd February 2010 10:56 pm
Over the next 3 days, we’ll be posting the content from the Securosis Guide to the RSA Conference 2010. We broke the market into 8 different topics: Network Security, Data Security, Application Security, Endpoint Security, Content (Web & Email) Security, Cloud and Virtualization Security, Security Management, and Compliance. For each section, we provide a little history and what we expect to see at the show. Next up is Data Security.
Although technically nearly all of Information Security is directed at protecting corporate data and content, in practice our industry has historically focused on network and endpoint security. At Securosis we divide up the data security world into two major domains based on how users access data – the data center and the desktop. This reflects how data is managed far more practically than “structured” and “unstructured”. The data center includes access through enterprise applications, databases, and document management systems. The desktop includes productivity applications (the Office suite), email, and other desktop applications and communications.
What We Expect to See
There are four areas of interest at the show relative to data security:
- Content Analysis: This is the ability of security tools to dig inside files and packets to understand the content inside, not just the headers or other metadata. The most basic versions are generally derived from pattern matching (regular expressions), while advanced options include partial document matching and database fingerprinting. Content analysis techniques were pioneered by Data Loss Prevention (DLP) tools; and are starting to pop up in everything from firewalls, to portable device control agents, to SIEM systems.
The most important questions to ask are about the kind of content analysis being performed. Regular expressions alone can work, but result in more false positives and negatives than the other options. Also find out whether the feature can peer inside different file types, or only analyze plain text. Depending on your requirements, you may not need advanced techniques, but you do need to understand exactly what you’re getting, and determine whether it will really help you protect your data or just generate thousands of alerts every time someone buys a collectible shot glass from Amazon.
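To make the false positive point concrete, here is a small illustrative sketch of regex-only detection of card numbers, with a Luhn checksum added as a second pass. The checksum is what separates a plausible card number from any random run of 16 digits – exactly the difference between naive pattern matching and slightly smarter content analysis.

```python
import re

# Regex alone: anything that looks like 13-16 digits with optional separators.
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(digits: str) -> bool:
    """Luhn checksum: filters out most random digit runs that merely
    look like card numbers, cutting regex-only false positives."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:      # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text: str) -> list:
    """Return candidate card numbers that survive both passes."""
    hits = []
    for m in CARD_RE.finditer(text):
        digits = re.sub(r"[ -]", "", m.group())
        if 13 <= len(digits) <= 16 and luhn_valid(digits):
            hits.append(digits)
    return hits
```

A ticket number or order ID will usually fail the checksum, while a real card number passes – which is why regex-plus-validation alerts far less often on that Amazon shot glass receipt than regex alone.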
- DLP Everywhere: Here at Securosis we use a narrow definition for DLP that includes solutions designed to protect data with advanced content analysis capabilities and dedicated workflow, but not every vendor marketing department agrees with our approach. Given the customer interest around DLP, we expect you’ll see a wide variety of security tools with DLP or “data protection” features, most of which are either basic content analysis or some form of context-based file or access blocking. These DLP features can be useful, especially in smaller organizations and those with only limited data protection needs, but they are a pale substitute if you need a dedicated data protection solution.
When talking with these vendors, start by digging into their content analysis capabilities and how they really work from a technical standpoint. If you get a technobabble response, just move on. Also ask to see a demo of the management interface – if you expect a lot of data-related violations, you will likely need a dedicated workflow to manage incidents, so user experience is key. Finally, ask them about directory integration – when it comes to data security, different rules apply to different users and groups.
- Encryption and Tokenization: Thanks to a combination of PCI requirements and recent data breaches, we are seeing a ton of interest in application and database encryption and tokenization. Tokenization replaces credit card numbers or other sensitive strings with random token values (which may match the credit card format) matched to real numbers only in a central highly secure database. Format Preserving Encryption encrypts the numbers so you can recover them in place, but the encrypted values share the credit card number format. Finally, newer application and database encryption options focus on improved ease of use and deployment compared to their predecessors.
You don’t really need to worry about encryption algorithms, but it’s important to understand platform support, management user experience (play around with the user interface), and deployment requirements. No matter what anyone tells you, there are always requirements for application and database changes, but some of these approaches can minimize the pain. Ask how long an average deployment takes for an organization of your size, and make sure they can provide real examples or references in your business, since data security is very industry specific.
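The tokenization model described above can be sketched in a few lines. This is a toy for illustration only – a production vault is a hardened, audited service, and keeping the last four digits visible is just one common convention, assumed here rather than required by any standard.

```python
import secrets

class TokenVault:
    """Toy tokenization vault: real card numbers (PANs) live only here;
    applications store and pass around the format-preserving tokens."""

    def __init__(self):
        self._token_to_pan = {}
        self._pan_to_token = {}

    def tokenize(self, pan: str) -> str:
        if pan in self._pan_to_token:
            return self._pan_to_token[pan]   # same PAN -> same token
        while True:
            # Random digits preserving length and the last four, so
            # receipts and support workflows keep working.
            token = "".join(secrets.choice("0123456789")
                            for _ in range(len(pan) - 4)) + pan[-4:]
            if token != pan and token not in self._token_to_pan:
                break
        self._token_to_pan[token] = pan
        self._pan_to_token[pan] = token
        return token

    def detokenize(self, token: str) -> str:
        """Only the vault can map a token back to the real number."""
        return self._token_to_pan[token]
```

The key property is that a breach of any system holding only tokens yields nothing recoverable; the attack surface collapses to the one central database, which is what makes the approach attractive for PCI scope reduction.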
- Database Security: Due partially to acquisitions and partially to customer demand, we are seeing a variety of tools add features to tie into database security. Latest in the hit parade are SIEM tools capable of monitoring database transactions and vulnerability assessment tools with database support. These parallel the dedicated Database Activity Monitoring and Database Assessment markets. As with any area of overlap and consolidation, you’ll need to figure out if you need a dedicated tool, or if features in another type of product are good enough. We also expect to see a lot more talk about data masking, which is the conversion of production data into a pseudo-random but still usable format for development.
Posted at Tuesday 23rd February 2010 10:16 pm
We quite enjoy all the free evening booze at the RSA conference, but most days what we’d really like is just a nice, quiet breakfast. Seriously, what’s with throwing massive parties for people to network, then blasting the music so loud that all we can do is stand around and stare at the mostly-all-dude crowd?
In response, last year we started up the Disaster Recovery Breakfast, and it went over pretty well. It’s a nice quiet breakfast with plenty of food, coffee, recovery items (aspirin & Tums), and even the hair of the dog for those of you not quite ready to sober up. No marketing, no presentations, no sales types trolling for your card. Sit where you want, drop in and out as much as you want, and if you’re really a traditionalist, blast your iPod and stand in a corner staring at us while nursing a Bloody Mary.
This year we will be holding it Thursday morning at Jillian’s in the Metreon from 8-11. It’s an open door during that window, so feel free to stop by at any time and stay as long as you want. We’re even cool if you drive through just to mooch some quick coffee.
Please RSVP by dropping us a line at email@example.com, and we’ll see you there!
Posted at Tuesday 23rd February 2010 8:28 pm
By Mike Rothman
Over the next 3 days, we’ll be posting the content from the Securosis Guide to the RSA Conference 2010. We broke the market into 8 different topics: Network Security, Data Security, Application Security, Endpoint Security, Content (Web & Email) Security, Cloud and Virtualization Security, Security Management, and Compliance. For each section, we provide a little history and what we expect to see at the show. First up is Network Security.
Since we’ve been connecting to the Internet, people have been focused on network security, so the sector has gotten reasonably mature. As a result, there has been a distinct lack of innovation over the past few years. There have certainly been hype cycles (NAC, anyone?), but most organizations still focus on the basics of perimeter defense. That means intrusion prevention (IPS), and reducing complexity by collapsing a number of functions into an integrated Unified Threat Management (UTM) device.
What We Expect to See
There are four areas of interest at the show for network security:
Application Awareness: This is the ability of devices to decode and protect against application layer attacks. Since most web applications are encapsulated in HTTP (port 80) or HTTPS (port 443) traffic, to really understand what’s happening it’s important for network devices to dig into each packet and understand what the application is doing. This capability is called deep packet inspection (DPI), and most perimeter devices claim to provide it, making for a confusing environment with tons of unsubstantiated vendor claims. The devil is in the details of how each vendor implements DPI, so focus on which protocols they understand and what kinds of policies and reporting are available on a per-protocol basis.
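A crude illustration of why the port number isn’t enough: the first few payload bytes usually identify the application no matter which port it rides on. The signatures below are deliberately simplified assumptions – real DPI engines do full protocol decodes, not prefix matching.

```python
def identify_protocol(payload: bytes) -> str:
    """Naive payload-based protocol identification, ignoring the port.
    HTTP starts with a method name, a TLS handshake record starts with
    content type 0x16 and a 3.x version, and SSH announces its banner."""
    if payload.startswith((b"GET ", b"POST ", b"HEAD ", b"PUT ")):
        return "http"
    if payload[:1] == b"\x16" and payload[1:3] in (b"\x03\x01",
                                                   b"\x03\x02",
                                                   b"\x03\x03"):
        return "tls-handshake"
    if payload.startswith(b"SSH-"):
        return "ssh"
    return "unknown"
```

The questions suggested above – which protocols a vendor decodes, and what per-protocol policies exist – are really asking how far beyond this kind of first-bytes matching their DPI actually goes.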
Speeds and Feeds: As with most mature markets, especially on the network, at some point it gets down to who has the biggest and fastest box. Doing these kinds of packet decodes and attack signature matching requires a lot of horsepower, and we are seeing 20 Gbps IPS devices appear. You will also see blade architectures on integrated perimeter boxes, and other features focused on adding scale to the environment as customer networks continue to get faster. Since every organization has different requirements, spend some time ahead of the show understanding what you need and how you’d like to architect your network security environment. Get it down on a single piece of paper and head down to the show floor. When you get to the vendor booth, find an SE (don’t waste time with a sales person) and have them show you how their product(s) can meet your requirements. They’ll probably want to show you their fancy interface and some other meaningless crap. Stay focused on your issues and don’t leave until you understand in your gut whether the vendor can get the job done.
Consolidation and Integration: After years of adding specific boxes to solve narrow problems, many organizations’ perimeter networks are messes. Thus the idea of consolidating both boxes (with bigger boxes) and functions (with multi-function devices) continues to be interesting. There will be lots of companies on the show floor talking about their UTM devices, targeting both small and large companies with similar equipment. Of course, the needs of the enterprise fundamentally differ from small business requirements, so challenge how well suited any product is for your environment. That means breaking out your one-page architecture again, and having the SEs on the show floor show you how their integrated solutions can solve your problems. Also challenge them on their architecture, given that the more a box needs to do (firewall, IPS, protocol decode, content security, etc.) the lower its throughput. Give vendor responses the sniff test and invite those who pass in for a proof of concept.
Forensics: With the understanding that we cannot detect some classes of attacks in advance, forensics and full packet capture gear will be high profile at this year’s conference. This actually represents progress, although you will see a number of vendors talking about blocking APT-like attackers. The reality is (as we’ve been saying for a long time under the React Faster doctrine) that you can’t stop the attacks (not all of them, anyway), so you had better figure out sooner rather than later that you have been compromised, and then act accordingly. The key issues around forensics are user experience, chain of custody, and scale. Most of today’s networks generate a huge amount of data, and you’ll have to figure out how to make that data usable, especially given the time constraints inherent to incident response. You also need to get comfortable with evidence gathering and data integrity, since it’s easy to say the data will hold up in court, but much harder to make it do so.
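On the evidence-integrity point, the basic mechanic is simple: hash the capture at collection time, record who collected it and when, and re-hash before you rely on it. The sketch below is a toy illustration of that idea – the field names and log format are our own invention, not any product’s:

```python
# Minimal sketch of anchoring evidence integrity for captured traffic.
# Illustrative only: real chain-of-custody procedures involve far more.

import hashlib
import json
import time

def custody_record(capture_bytes: bytes, collector: str, note: str = "") -> dict:
    """Hash a capture at collection time so later tampering is detectable."""
    return {
        "sha256": hashlib.sha256(capture_bytes).hexdigest(),
        "size": len(capture_bytes),
        "collector": collector,
        "collected_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "note": note,
    }

def verify(capture_bytes: bytes, record: dict) -> bool:
    """Re-hash the evidence and compare against the original record."""
    return hashlib.sha256(capture_bytes).hexdigest() == record["sha256"]

if __name__ == "__main__":
    pcap = b"\xd4\xc3\xb2\xa1" + b"...captured packets..."
    rec = custody_record(pcap, collector="ids-sensor-03", note="segment A")
    print(json.dumps(rec, indent=2))
    assert verify(pcap, rec)             # untouched evidence verifies
    assert not verify(pcap + b"x", rec)  # any modification is caught
```

Saying the data will hold up in court means being able to demonstrate exactly this kind of integrity, end to end, for every capture.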
And for those of you who cannot stand the suspense, you can download the entire guide (PDF).
Posted at Tuesday 23rd February 2010 6:26 pm
(4) Comments •
By Mike Rothman
I know what you are thinking. “Oh god, they should stick to podcasting.” You’re probably right about that – it’s no secret that Rich and I have faces made for radio. But since we hang around with Adrian, we figured maybe he’d be enough of a distraction that you wouldn’t focus on us. You didn’t think we keep Adrian around for his brains, did you?
Joking aside, video is a key delivery mechanism for Securosis content moving forward. We’ve established our own SecurosisTV channel on blip.tv, and we’ll be posting short form research on all our major projects this way throughout the year. You can get the video directly through iTunes or via RSS, and we’ll also be embedding the content on the blog as well.
So on to the main event: Our first video is an RSA Conference preview highlighting the 3 Key Themes we expect to see at the show. The video runs about 15 minutes and we make sure not to take ourselves too seriously.
Direct Link: http://blip.tv/file/3251515
Yes, we know embedding a video is not NoScript friendly, so for each video we will also include a direct link to the page on blip.tv. We just figure most of you are as lazy as we are, and will appreciate not having to leave our site. We’re also interested in comments on the video – please let us know what you think. Whether it’s valuable, what we can do to improve the quality (besides getting new talent), or any other feedback you may have.
Posted at Monday 22nd February 2010 11:12 pm
(0) Comments •
By Mike Rothman
As most of the industry gets ramped up for the festivities of the 2010 RSA Conference next week in San Francisco, your friends at Securosis have decided to make things a bit easier for you. We’re putting the final touches on our first Securosis Guide to the RSA Conference. As usual, we’ll preview the content on the blog and have the piece packaged in its entirety as a paper you can carry around at the conference. We’ll post the entire PDF tomorrow, and through the rest of this week we’ll be highlighting content from the guide. To kick things off, let’s tackle what we expect to be the key themes of the show this year.
How many times have you shown up at the RSA Conference to see the hype machine fully engaged about a topic or two? Remember when 1999 was going to be the Year of PKI? And 2000. And 2001. And 2002. So what will be the news of the show in 2010? Here is a quick list of three topics likely to be top of mind at RSA, and why you should care.
Cloud computing and virtualization are two of the hottest trends in information technology today, and we fully expect this trend to extend into RSA sessions and the show floor. There are few topics as prone to marketing abuse and general confusion as cloud computing and virtualization, despite some significant technological and definitional advances over the past year. But don’t be confused – despite the hype this is an important area. Virtualization and cloud computing are fundamentally altering how we design and manage our infrastructure and consume technology services – especially within data centers. This is definitely a case of “where there’s smoke, there’s fire”.
Although virtualization and cloud computing are separate topics, they have a tight symbiotic relationship. Virtualization is both a platform for, and a consumer of, cloud computing. Most cloud computing deployments are based on virtualization technology, but the cloud can also host virtual deployments. We don’t really have the space to fully cover virtualization and cloud computing in this guide, though we will dig a layer deeper later. We highly recommend you take a look at the architectural section of the Cloud Security Alliance’s Security Guidance for Critical Areas of Focus in Cloud Computing (PDF). We also draw your attention to the Editorial Note on Risk on pages 9-11, but we’re biased because Rich wrote it.
Cyber-crime & Advanced Persistent Threats
Since it’s common knowledge that not only government networks but also commercial entities are being attacked by well-funded, state-sponsored, and very patient adversaries, you’ll hear a lot about APT (advanced persistent threats) at RSA. First, let’s define APT: an attacker focused on you (or your organization) with the express objective of stealing sensitive data. APT does not specify an attack vector, which may or may not be particularly advanced – the attacker will do only what is necessary to achieve their objective.
Securosis has been in the lead of trying to deflate the increasing hype around APT, but vendors are predictable animals. Where customer fear emerges the vendors circle like vultures, trying to figure out how their existing widgets can be used to address the new class of attacks. But to be clear, there is no silver bullet to stop or even detect an APT – though you will likely see a lot of marketing buffoonery discussing how this widget or that could have detected the APT. Just remember the Tuesday morning quarterback always completes the pass, and we’ll see a lot of that at RSA.
It’s not likely any widget would detect an APT because an APT isn’t an attack, it’s a category of attacker. And yes, although China is usually associated with APT, it’s bigger than one nation-state. It’s a totally new threat model. This nuance is important, because it means the adversary will do what is necessary to compromise your network. In one instance it may be a client-side 0-day, in another it could be a SQL injection attack. If the attack can’t be profiled, then there is no way a vendor can “address the issue.”
But there are general areas of interest for folks worried about APT and other targeted attacks, and those are detection and forensics. Since you don’t know how they will get in, you have to be able to detect and investigate the attack as quickly as possible – we call this “React Faster”. Thus the folks doing full packet capture and forensic data collection should be high on your list of companies to check out on the show floor. You’ll also want to check out some sessions, including Rich and Mike’s Groundhog Day panel, where APT will surely be covered.
Compliance as a theme for RSA? Yes, you have heard this before. Unlike 2005, though, ‘compliance’ is not just a buzzword, but a major driver for the funding and adoption of most security technologies. Assuming you are aware of current compliance requirements, you will be hearing about new requirements and modifications to existing regulations (think PCI next or HIPAA/HiTech evolution). This is the core of IT’s love/hate relationship with compliance. Regulatory change means more work for you, but at the same time if you need budget for a security project in today’s economy, you need to associate the project with a compliance mandate and cost savings at the same time. Both vendors and customers should be talking a lot about compliance because it helps both parties sell their products and projects, respectively.
The good news at this point is that security vendors do provide value in documenting compliance. They have worked hard to incorporate policies and reports specific to common regulations into their products, and provide management and customization to address the needs of other constituencies. But there will still be plenty of hype around ease of use and time to value. So there will be plenty of red “Easy PCI” buttons to bring back for your kids, and promises of “Instant Sarbanes-Oxley” and “Comprehensive HIPAA support” in every brochure.
We also expect to see considerable hot air directed towards the Massachusetts 201 CMR 17.00 privacy and disclosure regulation, but it’s not clear this requirement will be adopted on a national scale. At this point, unless you have customers in MA, you probably don’t need to pay much attention to it this year. In general, you already know the regulations you need to worry about, so don’t get too excited when someone tells you compliance with GBRSH 590 or FUBR 140 is mandatory. There are lots of proposed ‘standards’ out there, but the ‘if’, ‘when’, and ‘how’ of compliance are far less certain.
Also keep in mind that Securosis is sticking to its Security First mindset. Focus on protecting private and sensitive data with security controls you can document, and your compliance efforts will be significantly streamlined.
Posted at Monday 22nd February 2010 8:40 pm
(1) Comments •
By Adrian Lane
First some project housekeeping:
We have now completed the Protect phase of Project Quant for Database Security:
For reference, here are the rest of the series links:
Next we move into the management phase, where we first cover configuration management.
In the Database Security Planning phase we performed the initial discovery work required to establish basic standards. In the Configuration post we focused on the specific implementation actions needed to configure a database and set baselines. In this specific task we will wrap configuration steps into repeatable management processes to gather information and maintain configuration settings across the entire organization. The steps for assessment and configuration were designed for re-use here. Some of the collection steps are redundant if the number of databases within your organization remains static, but will need to be repeated as new installations are added.
Note that if you are part of a small IT organization, this is a pretty straightforward process. If you work as part of a larger enterprise team, you’ll have stakeholders in database administration, audit, IT operations, and security, which makes information collection, distribution, and record keeping far more complicated.
- Identify databases: Identify databases under management. Group as necessary and assign responsibilities for configuration settings and audit verification.
- Time to gather configuration baselines: Based upon previous assessment scans, gather baseline settings for future comparisons.
- Time to specify configuration, policy and rule updates: Changes to internal configuration policies or vendor patch revisions should be accounted for in the policies. Add policies and update remediation information.
- Time to run scans and gather results (see the assessment process): If you are adding databases, account for the entire assessment phase. For scans of previously assessed databases, the Scan and Distribute Results tasks should be sufficient.
- Time to run configuration process.
- Time to produce and distribute audit reports: Independent verification of settings, completion of work orders, and production of compliance control reports.
- Time to create/submit work orders and trouble tickets: Remediation of configuration errors should be scheduled here. Fix verification can be scheduled as part of normal assessment scans, ad hoc reporting, or inspection.
- Optional: Time to conduct independent audit of configuration settings.
- Time to document changes for policies.
- Time to update recorded baselines.
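To make the scan-and-compare steps above concrete, here is a minimal sketch of comparing a database’s scanned settings against a recorded baseline. The setting names and policy values are hypothetical examples, not configuration guidance:

```python
# Minimal sketch of the baseline-comparison step. The setting names and
# expected values below are made up for illustration.

BASELINE = {
    "remote_login": "off",
    "audit_trail": "db_extended",
    "password_lock_time": "1",
}

def scan_database(current_settings: dict, baseline: dict = BASELINE) -> list:
    """Compare scanned settings to the baseline; return remediation items."""
    findings = []
    for setting, expected in baseline.items():
        actual = current_settings.get(setting, "<missing>")
        if actual != expected:
            findings.append({
                "setting": setting,
                "expected": expected,
                "actual": actual,
            })
    return findings  # these feed the work orders / trouble tickets step

if __name__ == "__main__":
    scanned = {"remote_login": "on", "audit_trail": "db_extended"}
    for item in scan_database(scanned):
        print(item)
```

Repeating this across every database under management – and keeping the baselines current – is exactly where the time estimates above come from.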
Posted at Monday 22nd February 2010 7:31 pm
(0) Comments •
Like any analyst, I spend a lot of time on vendor briefings and meeting with very early-stage startups. Sometimes it’s an established vendor pushing a new product or widget, and other times it’s a stealth idea I’m evaluating for one of our investor clients. Usually I can tell within a few minutes if the idea has a chance, assuming the person on the other side is capable of articulating what they actually do (an all too common problem).
In 2007 I posted on the primary technique I use to predict security markets, and as we approach RSA I’m going to build on that framework with one of my favorite examples: IT-GRC.
IT-GRC (governance, risk, and compliance) products promise a wonderland of compliance bliss. Just buy this very expensive product – which typically requires major professional services to implement, and all your business units to buy in and participate – and all your risk and compliance problems will go away. Your CEO and CIO get a kick-ass dashboard that allows them to assess all your risk and compliance issues across IT, and you can have all the reports your auditor could ever ask for with the press of a button.
Uh-huh. Right. Because that always works so well, just like ERP.
Going back to my framework for predicting security markets, there are three classes of markets:
- Threat/Response – Things that keep your customer website from being taken down, ensure people can surf during lunch, and keep the CEO from asking what’s wrong with his or her email. All those other threats? They don’t matter.
- Compliance – Something mandated by your auditor or assessor, with financial penalties if you don’t comply. And those penalties have to cost more than the solution.
- Internal Motivation/Efficiency – Things that help you do your job better and improve efficiency with corresponding cost savings.
The vast majority of security spending is in response to noisy, in-your-face threats that disrupt your business (someone stealing your data doesn’t count, unless they burn the barn behind them). The rest deals with compliance mandates and deficiencies. I’d guess we spend only single-digit percentages of our security budget on anything else.
So let’s look at IT-GRC. It doesn’t directly stop any threats and it’s never mandated for compliance. It’s a reporting and organization tool – and a particularly expensive one. Thus we only see it succeeding in the largest of large companies, where it shows a financial return by reducing the massive manual costs of reporting. Mid-sized and small companies simply aren’t complex enough to see the same level of benefits, and the cost of implementation alone (never mind the typical 6-figure product costs) isn’t justified by the benefit.
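That financial return argument is just break-even arithmetic. The sketch below shows the calculation with entirely made-up numbers – plug in your own estimates:

```python
# Back-of-the-envelope payback sketch. Every number here is hypothetical.

def grc_payback_years(product_cost: float, services_cost: float,
                      annual_manual_reporting_cost: float,
                      annual_license: float = 0.0) -> float:
    """Years until savings on manual reporting cover the GRC investment."""
    annual_savings = annual_manual_reporting_cost - annual_license
    if annual_savings <= 0:
        return float("inf")  # the tool never pays for itself
    return (product_cost + services_cost) / annual_savings

if __name__ == "__main__":
    # Large enterprise: heavy manual reporting burden to displace
    print(round(grc_payback_years(250_000, 500_000, 400_000, 50_000), 1))  # -> 2.1
    # Mid-sized shop: similar price tag, far less manual cost to displace
    print(round(grc_payback_years(250_000, 300_000, 60_000, 50_000), 1))   # -> 55.0
```

With a heavy manual reporting burden the investment earns out in a couple of years; displace only a small manual cost and the same price tag never pays back – which is the large-versus-mid-sized split described above.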
IT-GRC in most organizations is like chasing Paris Hilton the Unicorn. It’s expensive and high-maintenance, with mythical benefits – and unless you have some serious bank, it isn’t worth the chase.
That’s not my assessment – it’s a statement of the realities of the market. I don’t even have to declare GRC dead (not that I’m against that). If you have any contacts in one of these companies – someone who will tell you the honest truth – you know that these products don’t make sense for mid-sized and small companies.
This post isn’t an assessment of value – it’s a statement of execution. In other words, this isn’t my opinion – the numbers speak for themselves. All you end users reading this already know what I’m saying, since none of you are buying the products anyway.
Posted at Monday 22nd February 2010 3:00 pm
(5) Comments •
By Adrian Lane
February 23rd (this Tuesday) at 12:00pm EST, I will be presenting “Understanding and Selecting a Database Activity Monitoring Solution” in a Webinar with Netezza. I’ll cover the basic value propositions of platforms, go over some of the key functional components to understand prior to an evaluation, and discuss some key deployment questions to address during a proof of concept.
You can sign up for the Webinar here. We will take 10-15 minutes for Q&A, so you can send questions in ahead of time and I will try to address them within the slides, or you can submit a question in the WebEx chat facility during the presentation.
Posted at Monday 22nd February 2010 2:25 pm
(0) Comments •
I’d like some fail, with a little fail, and a side of fail.
Rothman was out in Phoenix this week for some internal meetings and to record some video segments that we will be putting out fairly soon. I have a slightly weird video recording and production setup, designed to make it super fast and dirt simple for us to put segments together. I’ve tested most of it before, although I did add a new time saver right before Mike showed up.
Yeah, you know where this is headed.
First, the new thing didn’t work. It was so frustrating that we almost ran out and bought a new camera so we wouldn’t need the extra box. Actually, we did run out, but it turns out almost no high-definition consumer cameras have FireWire anymore. Once I realized we were stuck, I dropped back into troubleshooting and debugging mode. My personal process is first to eliminate as many variables as possible, then slowly add one function or component at a time until I can identify where the failure is. Rip it back to the frame, then build and test piece by piece.
That didn’t work.
So I moved on to option 2, which has helped me more in my IT career than I care to admit (in my tech days I was the one they pulled in when no one else could get something to work). It’s no big secret – I just screw with it until the problem goes away. I try all sorts of illogical stuff that shouldn’t work, and usually does. I call this “sacrificing a chicken” mode. I toss out all assumptions as to how a computer system should work, and just start mashing the keys in some barely-logical way. I figure there are so many layers of abstraction and so many interconnections in modern software that it is nearly impossible to completely model and predict how things will really work.
It totally worked.
With that up and running, the next bit failed. The software we use to live mix the video couldn’t handle our feeds, even though our setup is well within the performance expectations and recommendations. We use BoinxTV, but it was effectively useless on a tricked out MacBook Pro. That one I couldn’t fix.
No prob – I had a backup plan: record the video, then edit/mix on my honking Mac Pro with 12GB of RAM and 8 cores.
You really know where this is headed.
Despite the fact that I’ve done this before with test footage, using the exact same process, it didn’t work. Something about the latest version of Boinx. So I restored the old version using Time Machine, and it still wouldn’t work. Oh, and then there’s the part where my Mac suddenly informed me it was missing memory (fixed by re-seating it, but still annoying). I’ve sent in two tech support requests, but no responses yet. Had this happened pre-Macworld Expo, I could have cornered them on the show floor. Ugh.
My wife came up with one last option that I haven’t tried yet. Our best guess is that something in one of Apple’s Mac OS X updates caused the problem. She suggested I restore Leopard onto her MacBook and test on that. Better yet – I have spare drives in the Mac Pro to test new versions of operating systems, and there’s no reason I can’t install the old version. I’m also going to upgrade my video card.
I don’t expect any of this to work, but I really need to produce these videos, and am not looking forward to the more time consuming traditional process.
But for those of you who troubleshoot, my methodology almost always works. Back out to nothing and build/test build/test, or randomly screw with stuff that shouldn’t help, but usually does.
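For what it’s worth, the strip-it-back-and-rebuild half of that methodology is really just linear fault isolation. A purely illustrative sketch – the function and component names are mine, not a formal method:

```python
# Linear fault isolation: strip the setup to nothing, then re-add one
# component at a time until the failure reappears.

def isolate_failure(components, still_works):
    """Re-add components one at a time; return the first that breaks things."""
    active = []
    for part in components:
        active.append(part)
        if not still_works(active):
            return part  # the first component whose addition causes failure
    return None  # everything works together -- the bug is elsewhere

if __name__ == "__main__":
    chain = ["camera", "firewire cable", "capture box", "mixer software"]
    # Pretend test harness: the setup breaks whenever the capture box is in.
    broken_box = lambda setup: "capture box" not in setup
    print(isolate_failure(chain, broken_box))  # -> capture box
```

The “sacrificing a chicken” half, sadly, does not reduce to an algorithm.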
On to the Summary:
Webcasts, Podcasts, Outside Writing, and Conferences
Favorite Securosis Posts
Other Securosis Posts
Favorite Outside Posts
- Adrian Lane: The List of Top 25 Most Dangerous Programming Errors. When I first read the post I was thinking it could be re-titled “Why Web Programmers Suck”, but when you get past the first half dozen or so poor coding practices, it could be pretty much any application. And let’s face it, web apps are freaking hard because you cannot trust the user or the user environment. Regardless, print this out and post on the break room wall for the rest of the development team to read every time they get a cup of coffee.
- Pepper: Urine Sample Hacked?
- Mike Rothman: No one knows what the F*** they are doing. Awesome post to understand and remind you that you don’t have all the answers. But you had better know what you don’t know.
- Rich: Rafal reminds people to know who you are giving your data to. He can be a bit reactionary at times, but he nails it with this one. How do you think Facebook and Google make their money? They aren’t evil, but they are what they are.
Project Quant Posts
Research Reports and Presentations
Top News and Posts
Blog Comment of the Week
Remember, for every comment selected, Securosis makes a $25 donation to Hackers for Charity. This week’s best comment goes to Erin (Secbarbie), in response to What is Your Plan B?.
Thank you for saying this. As a whole, we all need to ensure that we have put all the necessary pieces in place to ensure that we can stand our ground when it is necessary for the sake of security and our personal integrity.
Too many people let themselves end up in situations where they don’t have the “Plan B” to ensure confidence in giving the correct answer to executives, not just what they want to hear.
Posted at Friday 19th February 2010 1:39 pm
(1) Comments •
By Mike Rothman
In what remains a down economy, you may be suspicious when I tell you to think about leaving your job. But ultimately in order to survive, you always need to have Plan B or Plan C in place, just in case. Blind loyalty to an employer (or to employees) died a horrendous death many years ago.
What got me thinking about the whole concept was Josh Karp’s post on the CISO Group blog talking about the value of vulnerability management. He points out the challenges of selling VM internally. Yet the issues with VM didn’t resonate with me. What did was the behavior of the CTO, who basically squelches the discussion of vulnerabilities found on their network because he doesn’t want to be responsible for fixing them. To be clear, this kind of stuff happens all the time. That’s not the issue.
The issue is understanding what you would do if you worked there. I would have quit on the spot, but that’s just me. Do you have the stones to just get up, pack your personal effects, and leave? It takes a rare individual with that kind of confidence – heading off into the unknown.
Assuming it would be unwise to act rashly (which I’ve been known to do from time to time), you need to revisit your personal Plan B. Or build it, if you aren’t the type of person with a bomb shelter in your basement. I advise all sorts of folks to be very candid about their ability to be successful, given the expectations of their jobs and the resources they have to execute. If the corporate culture allows a C-level executive to sweep legitimate risks under the rug, then there is zero chance of security success. If you can’t get simple defenses in place, then you can’t be successful – it’s as simple as that.
If you find yourself in this kind of situation (and it’s not as rare as it seems), it’s time to execute on Plan B and find something else to do.
Being a contingency planner at heart, I also recommend folks have a list of “things you will not do” under any circumstances. There are lots of folks in Club Fed who were just following the instructions of their senior executives, even though they knew they were wrong. My Dad told me when I first joined the working world that I would only get one chance to compromise my integrity, and to think very carefully about everything I did. It makes sense to run those scenarios through your mind ahead of time. So you’ll know where your personal line is, and when someone has crossed it.
I know it’s pretty brutal out there in the job market. I know it’s scary when you have responsibilities and people depend on you to provide. But if someone asks you to cross that line, or you know you have no chance to be successful – you owe it to yourself to move on quickly.
But you need to be ready to do so, and that preparation starts now. Here is your homework over the weekend: Polish your resume. Hopefully that doesn’t take long because it’s up to date, right? If not, get it up to date. Then start networking and make it a habit. Set up a lunch meeting with a local peer in another organization every week for two months. There is no agenda. You aren’t looking for anything except to reconnect with someone you lost touch with or to learn about how other folks are handling common issues. Two months becomes three months becomes a year, and then you know lots of folks in your community. Which is invaluable when the brown stuff hits the fan.
You also need to get involved in your local community, assuming you want to stay there. Go to your local ISSA, NAISG, or InfraGard meeting and network a bit. Even if you are happy in your job. As Harvey MacKay says, Dig Your Well Before You’re Thirsty.
Posted at Thursday 18th February 2010 5:00 pm
(5) Comments •