Wednesday, November 11, 2009

Welcome to Oceania

By David J. Meier

At lunch last week, location-based privacy came up. I actively opt in to a monitoring service, which gets me a discount on insurance for a vehicle I own. My counterpart stated that they would never agree to anything of the sort because of the inherent breach of personal privacy and security. I responded that the privacy statement explicitly reads that the device does not contain GPS, nor does the company track the vehicle’s location. But even if the privacy statement said the opposite – should I care? Is location directly tied to some aspect of my life that might negatively impact me? And ultimately is security really tied to privacy in this context?

In a paper by Janice Tsai, Who’s Viewed You? The Impact of Feedback in a Mobile Location-Sharing Application (PDF), the abstract’s last line states, “…our study suggests that peer opinion and technical savviness contribute most to whether or not participants thought they would continue to use a mobile location technology.” This makes sense, as I would self-qualify my ability to understand the technology well enough to control and measure the level of exposure I may create. Although the paper’s focus is ultimately on the feedback (or lack thereof) that these location-based services provide, it still contains interesting information. The thing that most intrigued me is that it never actually correlated privacy to security. I expected there to be a definitive point where users complained about being less secure somehow because they were being tracked. But nothing like that appeared.

I continued on my journey, looking to tie location-based privacy to security, and ran across another paper with a more promising title: “Location-Based Services and the Privacy-Security Dichotomy” by K. Michael, L. Perusco, and M. G. Michael. The paper provides much more warning of “security compromise” and “privacy risk”, but the problem remains – again, this paper doesn’t provide any hard evidence of how these location-based services actually create a security risk. In fact it’s more the opposite – they state that if we are willing to give up privacy, then our personal security may be increased. The authors mention the obvious risks, including lack of control and data leakage, but at this point, I’m still unsatisfied and have yet to find a clear understanding of how or why using a location-based service might ultimately make me less secure. So maybe it’s simply not so, and perhaps the real problem is outlined in section 3.2 of the paper: “The Human Need for Autonomy”.

Let’s be honest – the concern is more psychological than anything, with a placeholder for obvious exceptions, the most notable being stalker scenarios linked to domestic abuse. Even in those scenarios it may be a stretch to say that location-based services are really the root cause of decreased personal security. Sure, an angry ex may guess or even know a password to a webmail account and skim location data from communications, but the same could be done by picking the lock on a residence and stealing a daily planner. It’s an area that can easily be argued from either side, because each side interprets the underlying risk differently.

We’d like to think that nobody is tracking us, but we all carry mobile phones, we’re all recorded daily by countless cameras, we all badge in at work using RFID, we all swipe payment cards, and we all use the Internet (I’m generalizing “we” based on content distribution here, but flame if you must). The addition of things like Google Latitude, Skyhook Wireless, and Yahoo! Fire Eagle adds a level of usability, but in the grand scheme of things do they really impact your personal security? Probably not. In the meantime, my fellow netizens, we can at least make light of the situation while we discuss what it is and isn’t. It’s a place, no matter where we are, that can mockingly be referred to as: Oceania – because try as you might, someone is watching.

—David J. Meier

Tuesday, November 10, 2009

Compliance vs. Security

By Adrian Lane

Reading Bill Brenner’s PCI Security a Devil, ‘Like No Child Left Behind’, I had the impression Brenner’s summary of Joshua Corman’s presentation would be: Joshua was %#!*$ crazy. In a nutshell:

“Organizations have made PCI DSS and compliance in general the basis of their information security policies,” he said. “They’re basing security on sloppy logic from Visa and MasterCard and in the process are ignoring some very bad state-sponsored threats. As a community, we have not evolved at all.”

You have to read the whole article to fully grasp Corman’s nuances, and note that some of the inflammatory additions seem to be Bill’s, rather than direct quotes from Joshua. Still, while there are points I agree with, Corman seems to have connected the dots arbitrarily. Not only do I not see general security policies being based on compliance initiatives, I don’t buy the argument that compliance comes at the expense of security. Is there overlap? Absolutely. But the recognized lack of security is motivated by completely different forces. In the presence of evidence that many organizations are doing the absolute minimum to comply with regulations, how can you suppose that they would voluntarily invest in security without compliance requirements? Why would companies take a risk-based approach to spending efficiently, when they really don’t want to spend at all?

To me, companies embody the approach of The Three Wise Monkeys: “See no evil. Hear no evil. Speak no evil.”

Regulations espouse the ideals of safety, security and efficacy, and companies want tasks performed cheaply, quickly, and easily. Regulation is supposed to alter the way companies do business, providing guidance on how to realize the ideal. Companies often handle compliance as just another task, and try to address it from within the same processes the compliance mandate is designed to reform. If companies could be trusted to come close to the ideals and intentions, we would not have auditors.

Part of Corman’s presentation seems to be a derivative of his 8 Dirty Secrets presentation (summarized), where part 6 discusses how “Compliance Threatens Security”. Do I think that security product vendors are “…offering products that do everything from offer PCI compliance out of the box to ultimate cure-alls for healthcare entities coping with the demands of HIPAA”? Absolutely. But this was the cheapest, fastest and easiest way to comply. Take Sarbanes-Oxley as an example: products like Database Activity Monitoring and Log Management are the only way to achieve some of the required controls over automated financial systems that process millions of transactions a day. The fact that these unique data collection and analysis capabilities came from a security vendor is incidental. The security investment was made to satisfy a compliance mandate, not for the sake of security. The fact that the tools provide security as well is a by-product for many vendors and customers, often considered unimportant or incidental.

If I were going to create my own Dirty Little Secret list, I would say most companies treat security as “Don’t Ask, Don’t Tell”. Security tools that are bought to fulfill compliance have a bad habit of illuminating threats companies really don’t want to know about. They want to pass their compliance audits and not worry about other problems discovered … those just lead to additional expenses. If you doubt my cynical perspective, look at how most firms react when told their corporate network is host to 5,000 bots that just commenced a DDoS attack on another company: they tend to threaten suit for invasion of privacy or libel. Another example we see is that a high percentage of companies have web application firewalls for PCI, but run them as monitors rather than proxies! They need to have a WAF to comply with PCI, so they bought one, but no one mandated they use it effectively. Security professionals really care about security, but executive management cares precisely as much as legal and finance tell them to.

I think security is a really hard problem, and far too often our attempts at security are flawed. I just don’t see any evidence that risk management is subjugated to compliance.

—Adrian Lane

Monday, November 09, 2009

Two Random Security Rules

By Rich

  1. Do not expect human behavior to change. You can affect habits, but not behavior.
  2. No security problem ever goes away. People have hit each other over the head with rocks forever, and have cracked safes for as long as safes have existed (which is why safes were invented, of course), and they will continue to do both. Problems get better or worse, but never disappear.

—Rich

Google Dashboard Comments

By Adrian Lane

I was playing around with Google Dashboard this morning. After reading the cnet post on Google’s Data Liberation Project, and Google’s announcement of DataLiberation.org, I could not help but get excited about what they were doing. Trying to be ‘open’ and ‘liberate’ data sounds great!

Many web services make it difficult to leave their services – you have to pay them for exporting your data, or jump through all sorts of technical hoops – for example, exporting your photos one by one, versus all at once. We believe that users – not products – own their data, and should be able to quickly and easily take that data out of any product without a hassle. We’d rather have loyal users who use Google products because they’re innovative – not because they lock users in. You can think of this as a long-term strategy to retain loyal users, rather than the short-term strategy of making it hard for people to leave.

We’ve already liberated over half of all Google products, from our popular blogging platform Blogger, to our email service Gmail, and Google developer tools including App Engine. In the upcoming months, we also plan to liberate Google Sites and Google Docs (batch-export).

Awesome! I jumped right in as I had two very specific things to address. I wanted to see if I could remove some information from Google that would change Google search behavior. Those issues are:
1. After I responded to a friend’s email inquiry a few months ago (sent to my Gmail account) regarding a piece of electronics equipment, I started to see ads for that product in my search results. I have no interest in the product and it does not belong in my search results.
2. I do a lot of driving and I use Google and Amazon maps. Google has started altering my route endpoints arbitrarily. I own a home, but the address is not registered as my home address anywhere except tax records, and has never been used in any online search, much less a Google map search (for very specific reasons). But Google Maps has been altering the endpoints of my routes to direct me to this property; it’s not an address I want to travel to and I did not enter it. How Google found it and then associated it with me is interesting in and of itself, but to arbitrarily assume I want to go there is both annoying and disconcerting.

So I plunged right in and found: zero. Nothing that showed any of that data, nor how it was being used. Oh well. I guess my expectations were far too high. So I took a step back and looked at exactly what Google is offering.

Digging in, what does the concept of “liberated” data get me? To “… easily take that data out of any product without a hassle” is a nice idea. Medical records, photos, and social media site contents would be great to have copies of. But making digital copies is trivial, and I don’t think Google is talking about removal from products or services, but taking a copy and importing that copy into another app or service. Looking at the Dashboard, control and management is absent. To put this into context, when I think of data management, I think of the Data Security Lifecycle concept that Rich and I present at conferences. Data ownership and management is totally different than getting a copy. Most people will read this ‘take’ in a non-digital, real-world analog sense, meaning to ‘remove’. Google is using the digital sense, where ‘take’ is closer to ‘propagate’.

Furthermore, I am not sure just what exactly they mean by “an open web run on open standards”. Is Google offering an open data format? An open API to control or manage data? Or do they mean all web data being open to web search (Google), and available to as many applications (Google) and services (Google) as you care to use?

It sounded so good, but unfortunately there does not seem to be anything of substance behind the press releases! That’s why I think this is all window dressing. Call me a skeptical security guy, but it looks like Google is taking a page out of Microsoft’s handbook, in that they are creating a tool to combat user fears and concerns, but data storage and management become tied more closely to Google, not less. Taking data from one place to another provides additional attributes and context that increases its value. Google remains in control and it will be very difficult to argue who owns that data.

—Adrian Lane

Friday, November 06, 2009

Friday Summary - November 6, 2009

By Adrian Lane

When I was in college, I figured every professor assumed I had only one class: the one they were teaching. They seemed to assume I dedicated days and nights solely to their coursework, and was no less interested in the subject they had dedicated their lives to. And they allocated my time accordingly, giving me enough work to do to consume 40 hours a week. But I was taking 5 classes! WTF! Berkeley was especially bad this way. By noon each Monday I felt like I was a week behind the curve. For the first few weeks I was quite angry about the selfishness of those professors: how could they possibly be so callous as to give us far more work than any two people could perform? Were they encouraging shoddy work? Were they nuts?!?

After a few weeks I grudgingly acknowledged that the profs were not in their positions because they were stupid or ignorant, but because they were smart. Well, maybe one was stupid and ignorant, but most of them were really freakin’ intelligent. And consciously or not, this overburdening forced you to work faster, prioritize, and be more efficient. Handling an overburden of requirements has been a skill that has served me better than the subject matter of any one of those courses.

I am not talking about time management here, like some motivational seminar might teach; I am talking about strategy. When you have 5 times more work than you can do, tasks become self-selecting. You do those things that you must do to survive. If you’re lucky, some of the things that you want to do overlap with what must be done. You learn to select the right opportunities that are most in line with success, and not look back when you walk away from good ideas that don’t support your goals or the requirements on you. Your choices will differ from your peers’, but you make choices and you do the best you can. For those of you who have participated in startups, I expect that you have a full appreciation of this viewpoint.

That’s the way I approach my project work here. And my goal is that our research makes it easier for you to do this as well.

With Rich and me as the only full-time guys here, we go through this process a lot. There are simply not enough hours in the day to do some things that look like great ideas at first. On the bright side it forces us to re-evaluate projects and come up with much more streamlined versions, which improves the quality and the usability of the research. And frankly I want to get away from this computer and, I dunno, have a life, so it’s important on several levels.

A big portion of this blog’s readers are not security professionals, but deal with an aspect of security in their daily jobs. They don’t necessarily want to be experts; they just want to understand how to find answers to their security questions and get the job done. This is a bit of a tease, but as a result of viewing our research calendar in this light, we are reconsidering what we had planned to create. In the coming weeks we are going to be adding a lot of new stuff to the research library, fitting our new, more streamlined approach, as our plans grew too big for us to handle. More importantly, they were too cumbersome for part-time security practitioners to benefit from.

On to the Friday Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

Top News and Posts

Blog Comment of the Week

This week’s best comment comes from Stacy Shelley in response to Verizon Has Most of the Web Application Security Pieces… But Do They Know It?:

Hi Rich - Yes, SecureWorks offers managed WAF and web app scanning services. We also have the capability to leverage the web app scanning data in the management of WAF policies. Our Web App Sec services align pretty well with the components you guys cover in your “Building a Web Application Security Program” paper.

Our Consulting group has been doing web app pen testing and code audits for a few years now. In the spring, we launched the managed WAF service. In October, we launched the web app scanning service (which also scans databases). We’ve also had the capability to monitor application logs for quite some time, although it’s value is largely dependent on the audit logging capabilities of the app.

—Adrian Lane

Thursday, November 05, 2009

Major SSL Flaw Discovered

By Adrian Lane

A major flaw has been found that enables man-in-the-middle attacks against SSL connections. Several other media outlets are reporting on it, but Kelly Jackson Higgins has a nice summary over at Dark Reading, and betanews has a much more detailed discussion. According to Marsh Ray at PhoneFactor:

“The bug results in a set of related attacks that allow a man-in-the-middle to do bad things to your SSL/TLS connection. The (attacker) in the middle is able to inject his own chosen text into what your application believes is an encrypted, secure communications channel,” says Ray, a senior software development engineer for PhoneFactor. “This has implications for all protocols that run on top of SSL/TLS, such as HTTPS … What’s different with this (bug) is that both the client and server need to be patched to restore the full security guarantees that are expected with TLS.”

The communication process two parties go through to establish a trusted connection inadvertently leaves some response information in clear text during part of the dialogue. Basically when they agree to change some of the session attributes the protocol leaves some information exposed:

“Methods exist for one or the other party to request a change in the parameters of their transactions, perhaps to switch to a different, stronger cipher suite … In a situation similar to someone’s e-mail application replying to your e-mail with a message whose subject line begins, RE:, the conversation between client and server over what to change to, contains a reference to the request for renegotiation – the request that had, when sent earlier, been encrypted. Now it’s not, and that’s the problem. “

The fix for this should be relatively straightforward and, from what I understand, should be available within the next few days. The issue becomes deploying a patch to a piece of code used for just about any secure communication session. So plan on patching a lot of applications in the coming weeks!

PhoneFactor named their efforts ‘Project Mogul’, which has nothing to do with The Mogull so far as I know.

—Adrian Lane

Wednesday, November 04, 2009

Verizon Has Most of the Web Application Security Pieces… But Do They Know It?

By Rich

Last week Verizon Business announced that they now offer web application vulnerability assessment software as a service. Specifically, they are reselling a full version of WhiteHat Security’s offering, customized for Verizon business customers.

To be honest I’m somewhat biased here since WhiteHat’s CTO, Jeremiah Grossman, is a friend; but I’ve been fairly impressed with their model of SaaS-based continuous web app vulnerability assessment using a combination of scanning and manual validation to reduce false positives. Jeremiah’s marketing folks will hate it when I say this, but in my mind it’s closer to penetration testing than the other SaaS vulnerability assessment products, which rely completely on automated scanning. Perhaps instead of calling this “penetration testing” we can call it “exploit validation”. Web application vulnerabilities are tougher to deal with from a risk management perspective since, on the surface, it can be very difficult to tell if a vulnerability is exploitable; especially compared to the platform vulnerabilities typically checked by scanners. Since all web applications are custom, it’s important to validate those vulnerabilities to determine overall risk, as the results of a blind scan are generally full of potential false positives – unless it has been de-tuned so much that the false negative rate is extremely high instead.

Verizon Business also sells a managed web application firewall, which they mention in the press release. If you refer back to our Building a Web Application Security Program series and paper, vulnerability assessment, penetration testing, and web application firewalls are core technologies for the secure deployment and secure operations phases of managing web applications (plus monitoring, which is usually provided by the WAF and other logging).

In that series and paper, we also discussed the advantages of WAF + VA, where you dynamically generate WAF policies based on validated vulnerabilities in your application. This supports a rapid “shield then patch” model.
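
To make the “shield then patch” model concrete, here is a rough sketch of turning a single validated finding into a virtual-patch rule. The finding structure and the ModSecurity-style rule text are my own simplified illustration, not any vendor’s actual feed or integration:

    # Illustrative sketch of WAF + VA "shield then patch": take a validated
    # vulnerability from a scan and emit a virtual-patch rule for a WAF.
    # The finding fields and ModSecurity-style syntax are simplified assumptions.

    def virtual_patch(finding, rule_id):
        """Build a whitelist rule restricting the vulnerable parameter."""
        return (
            'SecRule ARGS:%(param)s "!@rx %(allowed)s" '
            '"phase:2,deny,status:403,log,msg:\'Virtual patch for %(vuln)s\',id:%(rule_id)d"'
            % {
                "param": finding["parameter"],
                "allowed": finding["allowed_pattern"],
                "vuln": finding["vuln_type"],
                "rule_id": rule_id,
            }
        )

    # Example: a validated SQL injection in the 'id' parameter of /app/item
    finding = {
        "url": "/app/item",
        "parameter": "id",
        "vuln_type": "SQL injection",
        "allowed_pattern": "^[0-9]+$",  # only digits should ever appear here
    }

    print(virtual_patch(finding, rule_id=900001))

The point is simply that a validated, exploitable finding gives the WAF something specific to block while the application team fixes the underlying code.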

In the released information, Verizon mentions that they support WAF + VA. Since we know they are using WhiteHat, that means their back-end for WAF is likely Imperva or F5, based on WhiteHat’s existing partnerships.

Thus Verizon has managed VA, managed WAF, managed WAF + VA, and some penetration testing support, via the VA product.

They also have a forensics investigation/breach response unit which collects all the information used to generate the Data Breach Investigations Report.

Let’s add this up… VA + Exploit Validation (lightweight pen testing) + WAF + (WAF + VA) + incident response + threat intelligence (based on real incident responses). That’s a serious chunk of managed web security available from a single service provider. My big question is: do they realize this? It isn’t clear that they are positioning these as a combined service, or that the investigations/response guys are tied in to the operations side.

The big gap is anything in the secure development side… which, to be honest, is hard (or impossible) for any provider unless you outsource your actual development to them.

SecureWorks is another vendor in this space, offering web application assessments and managed WAF (but I don’t know if they have WAF + VA)… and I’m pretty sure there are some others out there I’m missing.

What’s the benefit? These are all pieces I believe work better when they can feed information to each other… whether internal or hosted externally. I expect the next pieces to add are better integrated application monitoring, and database activity monitoring.

(For the disclosure record, we have no current business relationships with WhiteHat, Verizon, F5, or SecureWorks, but we have done work with Imperva).

—Rich

Tuesday, November 03, 2009

Myths Surrounding Databases in Virtual Environments

By Adrian Lane

Every now and again I run into an article that totally baffles me. It’s as if the author had a bunch of somewhat related quotes sitting around, and then stitched a Frankenstein article together. In this case the article was in the October 5th edition of eWeek, and the topic was “Databases: The next big virtualization thing”. The intention seems to be sketching out some hazy future projections about virtualized databases, and what wonderful things virtualization can do for you. But if you closely examine the assertions, not only are they based on bad assumptions, they are flat-out misleading. I am not sure there is a single point in the article I wholly agree with. Rather than wallow in this mess, I will offer you what I consider to be 7 myths surrounding databases in virtual environments:

Myth #1 - Virtualization makes database administration easier. No. Any time you place a database into an environment, virtual or not, the database needs to be tuned to operate efficiently within that environment. Virtualization abstracts the resources underneath the database; it does not relieve you from the administrative tasks of tuning and provisioning. While it is theoretically possible to reduce administrative tasks by standardizing an environment, history has shown we need to optimize database configuration to accommodate organic changes that occur over time.

Myth #2 - Virtualization improves database performance. Possibly, but not always. Improvements to database performance are more likely to result from tuning SQL and database structures. Generally speaking, improvements in database logic offer an order of magnitude greater improvement than any ‘external’ changes. Virtualization does provide an easier way to allocate more resources to a database, and is highly beneficial when a database is memory or CPU constrained. I/O-constrained databases are as likely to suffer from distributed storage latency as to realize gains in performance, and are more likely to require some redesign to take advantage of virtual resources. Sure, you can throw twice as many resources at a database, but that does not mean it will automatically perform better!

Myth #3 - Virtualization lets you consolidate databases. Not really. Virtualization offers the ability to use a single central database installation, but you still normally use multiple database instances to support multiple applications. Effective consolidation of databases to take advantage of virtual environments requires some database re-engineering and does not magically (automatically) occur in a virtual environment.

Myth #4 - Virtualization will reduce your database licensing costs. This is not typically the case. Check with your vendor on this, because adding a virtual CPU is likely to cost you additional fees just as if you added a real CPU. Per database pricing may mean higher licensing costs, not lower. It will depend upon your vendor’s pricing model, so do not take it for granted.

Myth #5 - Virtualization provides better database security. I have never understood this claim. How exactly could virtualization make a database more secure? Through obscurity? Some giant VMotion shell game that hides the location of the data? The access to your data is still gated by access controls and governed by permissions. Security is largely dependent upon solid configuration of the database and current patches being applied, which may or may not be easier depending upon how you have your virtual environment set up. Virtualization provides no inherent advantage to security, and opens up additional vulnerabilities. I have never been a big fan of the concept of ‘threat surface’, but if data gets copied to multiple locations there are simply more chances to gain access to the raw data files, which is why we recommend transparent database encryption for databases in virtual environments.
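
For reference, turning on transparent encryption is mostly a configuration exercise. Here is a minimal sketch for SQL Server 2008 TDE driven through pyodbc; the connection string, passphrase, certificate, and database name are placeholders, and other platforms have their own equivalents:

    # Sketch: enabling SQL Server 2008 Transparent Data Encryption (TDE).
    # Connection details, passwords, and object names below are placeholders.
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={SQL Server};SERVER=dbhost;DATABASE=master;UID=dba;PWD=example",
        autocommit=True,
    )
    cur = conn.cursor()
    # The master key and certificate live in the master database
    cur.execute("CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Str0ng!Passphrase'")
    cur.execute("CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE certificate'")
    # The database encryption key is created in the user database
    cur.execute("USE SalesDB")
    cur.execute("CREATE DATABASE ENCRYPTION KEY WITH ALGORITHM = AES_256 "
                "ENCRYPTION BY SERVER CERTIFICATE TDECert")
    cur.execute("ALTER DATABASE SalesDB SET ENCRYPTION ON")
    conn.close()

Back up that certificate somewhere outside the virtual image, or the encrypted files are only as recoverable as the VM they live in.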

Myth #6 - Virtualization enables all clustered databases to be active simultaneously. Nonsense! This is possible today without virtualization. SQL Server is a good example. It offers two basic models for database clustering: an active-passive setup designed for failover, and an active-active mode for distributed processing. Both require the data sets to be synchronized, often via shared disks. The former requires no special database design work – only the appropriate configuration. In the latter case you really need a data allocation strategy to minimize performance and data contention issues. Virtualization does provide the means to make physically separate disks appear as one, but it does not make synchronization issues go away.

Myth #7 - Virtualization helps abstract the database from applications. No, it doesn’t. Abstraction technologies like Hibernate can mask the underlying database usage from an application. Generalization of data types stored within a database or even use of XML allows data to be moved between heterogeneous databases and applications. There is nothing inherent to virtualization technology that abstracts database usage. The benefit virtualization provides, in cases of disaster recovery, is being able to easily spawn a new copy of the database should the existing copy no longer be available.

—Adrian Lane

Friday, October 30, 2009

Friday Summary- October 30, 2009

By Rich

This week’s Friday Summary is sponsored by Evilsquirrel Enterprises, your World Domination Specialists.

My absolute favorite holiday of the year is Halloween. More than Christmas (possibly because I’m a non-practicing Jew), more than my birthday, and even more than Talk Like a Pirate Day.

Halloween is the ultimate geek holiday. It’s the one time of year we have an excuse to pull out our table saws, microcontrollers, and pneumatics as we build wonderful devices to soil the underwear of all the neighborhood children. I knew I was finally getting it right the first year a group of kids carefully approached our home, then ran off screaming as the motion sensor tripped and the effects kicked in. Between the business and the baby I haven’t really had time to build anything new this year, but I did finally invest in some commercial-grade fog machines. Fog, light, and sound are absolutely essential for setting a good scene, and go a lot further than any actual decorations.

I’ve previously used the cheap foggers from Party City or the Halloween stores, but never managed to get them to last more than 2 years in a row. I’m hoping this commercial unit will be a bit more reliable… and the 20,000 cubic feet per minute of fog it kicks out can’t hurt.

This is the 13th year, 4th location, and 2nd state for our annual Evilsquirrel party. It’s a bit smaller than the “Squirrel Wars” year where we had 300 people show up and 4 live bands, but that’s what happens when everyone runs off and starts careers and families. Needless to say, my friends and I are all tremendously amused that the whole “squirrel” meme is so big these days. Now we don’t seem quite as weird.

On to the Friday Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

  • Rich: This Wired article on the anti-vaccination movement. It’s an extremely important article, but here’s the money quote for us security folks: “Looking back over human history, rationality has been the anomaly. Being rational takes work, education, and a sober determination to avoid making hasty inferences, even when they appear to make perfect sense. Much like infectious diseases themselves – beaten back by decades of effort to vaccinate the populace – the irrational lingers just below the surface, waiting for us to let down our guard.”
  • Adrian: Jeremiah’s post on Black Box vs. White Box. QA professionals have used this ‘threshold of stability’ approach for years to gate software releases, but it seems counter-intuitive to security professionals.
  • Mortman: Detecting Malice Released - Only halfway through and it is completely awesome. Best tech book I’ve read in ages. (I second that -Rich). (Meier thirds it: “Anyone I bring it up to first complains about the $40 eBook, but it’s the best technical book I’ve bought in a while.”)
  • Meier: Amazon Lets Shoppers Pay With a Phrase - This is just dumb. First, we have a phrase that’s verifiably known to be taken, and second, I bet if someone did research on any web authentication mechanisms identified as a “PIN”, you could map the majority of those users’ bank PINs to their other PINs. I don’t get it. Oh, and to change your PayPhrase you have to log in anyway. Way to go, Amazon.
  • Rich (2): I can’t help myself, I had a tie this week. This article from Ivan Arce at Core Security is a month old, but well worth the read.

Special – Worst Link of the Week

In this study, I have tried to determine if IT security project management is a viable career choice for women. If so, do they have what it takes to be a successful IT Security Project Manager? I would like to emphasize that IT profession cannot be generalized based on gender. No conclusion has been drawn to indicate if one sex is better than the other in any of the subsets within IT field.

Isn’t it great how the author, Gurdeep Kaur, simultaneously tells us that she’s going to investigate whether one gender has the ability to do a job, and then claims that you can’t generalize on the basis of gender? You really shouldn’t read the paper, but if you do, it goes downhill from there. The analysis is shallow and suffers largely from citing lots of studies that demonstrate the problem while providing little in the way of solutions. The few suggestions provided are insulting to say the least. I’d quote more but I can’t bring myself to do it. I am amazed that SANS actually posted this to their reading room and granted the author a “Gold Certification”.

Top News and Posts

Blog Comment of the Week

This week’s best comment comes from Marc in response to Tokenization Will Become the Dominant Payment Transaction Architecture:

I always thought Chuck E. Cheese was a rat…not a mouse. That being said, I think your example of a video arcade is a good one. I have used the casino chip analogy when explaining tokenization to people. You trade the high value data (cash in the analogy and a CC# in the use case) for some lower value data (a casino chip and a piece of “tokenized” data). The problem I have with tokens though is that they still have value in a certain context. You haven’t sufficiently devalued the original data by making it a “token.” The token can still be used to perform functions, albeit in a more limited context than the original data. And I question the methodologies currently used to generate these tokens. I have yet to see any academic research that establishes that the tokens are truly random or that they are any better than hashed values. What we’ve done is traded one type of attack for one that has yet to emerge (an underground market in valid card data for one that will surely emerge trading valid token data in poorly implemented solutions). Now, coupling a token with a time-based signature or some other authentication value makes these solutions much more palatable because then I can prove the token is being properly used. There are numerous implementation issues in the different token solutions provided in the market today…and not enough discussion of provable security and standardization of those implementations…

—Rich

Wednesday, October 28, 2009

Penetration Testing Market Update, Part 2

By Rich

This is part 2 of a series, click here for Part 1

Penetration testing solution and market changes

I’m not exactly sure when Core Security Technologies and Immunity started business, but before then there were no dedicated commercial penetration testing tools. There were a number of vulnerability scanners, and plenty of different “micro” tools to help with different parts of a pen test, but no dedicated exploitation tools. Metasploit also changed this on the non-commercial side. For those who aren’t experts in this area, it’s important to remember that a vulnerability assessment is not a penetration test – vulnerability assessment determines if a system may be vulnerable to an attack, while penetration testing determines if that vulnerability is exploitable.

Update- Ivan from Core emailed that they started as consulting in 1996, and the first version of Impact was released in 2002.

Rather than repeating Nick Selby’s excellent market summary of the three penetration testing tools providers over at IANS, I’ll focus on the changes we’re seeing in the overall market.

  1. The market is still dominated by services, with quality ranging from excellent to absolute snake oil. Even using a tool like Core, by far the most user-friendly, you still need a certain skill level to perform a reasonable test.
  2. The tools market is increasing, as Core and Immunity have experienced reasonable growth, along with extensive growth of the Metasploit user community.
  3. Partnerships between vulnerability assessment vendors and penetration testing solution providers have grown. This was pretty much completely driven by Core until the Metasploit acquisition by Rapid7. Core partners with Tenable, Qualys, nCircle, IBM, Lumension, GFI, and eEye. Update- Immunity partners with Tenable, I missed that in my initial research.
  4. Web application vulnerability assessment tools (and services) almost always include some level of penetration-testing capabilities. This is a technology requirement for effective results, since it is extremely difficult to accurately validate many web application vulnerability types without some degree of exploitation. VA tools tend to restrict themselves to prevent damaging the application being tested, and (as with nearly any vulnerability assessment), can normally be run against non-production targets with less safety, in order to produce deeper and more accurate results.
  5. Any penetration test worth its salt includes web applications within the scope, and pen testing tools are increasing their support for web application testing.

I expect to see greater blurring of the lines between vulnerability assessment and penetration testing in the web application area, which will spill over into the infrastructure assessment space. We’ll also see increasing demand for internal penetration testing, especially for web applications.

Core will increase its partnerships and integration on the VA side, and could see an acquisition if larger VA vendors (a small list) see growing customer demand for penetration testing – which I do not expect in the short term. The VA market is larger and if those vendors see pen testing client demands, or greater competition from Rapid7, they can leverage their Core partnerships. Core’s Impact Essential tool is the first to target individuals who aren’t full-time security professionals or penetration testers, and run on an automated schedule. While it doesn’t have nearly the depth of the Pro product, it could be interesting for continuous testing. The real question is whether customers perceive it as either reducing their process costs for vulnerability management (via prioritization and elimination of non-exploitable vulnerabilities), or a replacement for an existing VA solution. If Impact Essential can’t be used to cut overall costs, it will be hard to justify in the current economic environment.

As Nick concluded, Immunity will need to improve their UI to increase adoption beyond organic growth… unless they plan to stay focused on dedicated penetration testers. They should also consider some VA partnerships, as they will be the only penetration testing tool not partnered or integrated with VA. Update- I was incorrect, Immunity also partners with Tenable. Apologies for missing that in my initial research. I agree with Nick: Immunity is most at risk in the short term from the Metasploit commercialization. If the UI improves, Immunity could use cost to compete, and some VA vendors might add them as an additional partner.

Rapid7 just jumped from being one of the less-known VA players to a household name for anyone who pays attention to penetration testing. This is a huge opportunity, but not without risks. Metasploit is an awesome tool (I’ve used it since version 1… in the lab), but not yet enterprise class. The speed, usefulness, and usability of its integration will play a major role in its long-term success and ability to springboard off the large amount of press and additional name recognition associated with this acquisition. H D also needs to aggressively maintain the Metasploit community, or Rapid7 will lose a large fraction of Metasploit’s value and have to pay staff to replace those volunteers. Quality assurance, of the product as well as the exploits, will also be important to maintain; this could reduce the speed of releasing exploits which Metasploit is famous for.

Rapid7 also faces risks due to Metasploit’s BSD license. There is nothing to prevent any other vendor from taking and using the code base. This is a common risk when commercializing any free/open source software, and we’ve seen both successes and failures.

Conclusion

Here’s how I see things developing:

  1. For infrastructure/non-web applications we will see growing demand for exploit testing automation. The vulnerability assessment vendors will add native capabilities, and Core (and Immunity, if they choose) will add more native VA capabilities and find themselves competing more with VA vendors. My gut feel is that VA vendors (other than Rapid7) will only add the most basic of capabilities, leaving the pen testing vendors with a technical advantage until both markets completely merge. That might not matter to most organizations, which either won’t understand the technology differentiation, or won’t care.
  2. There will continue to be a need for in-depth tools to support professional penetration testers. This market will continue to grow, but will not offer the opportunities of the broader, ‘lights-out’ automated side of the market.
  3. Overall, the penetration testing tools market will continue to grow. This acquisition and other market trends validate the usefulness of this market, especially in assisting with remediation prioritization – not just problem identification.
  4. The greatest area of growth will be in web applications, and as I mentioned before the lines between pure VA and pure penetration testing will completely blur in this area.
  5. All the penetration testing vendors will benefit from the Metasploit acquisition. Immunity faces the greatest mid-term risk, Core the greatest potential for price pressure, and Rapid7 the risk of losing the Metasploit community and seeing their work appear in competing products. All three vendors can manage potential risks, but the answers aren’t necessarily easy.

A bit of disclosure – I haven’t been briefed formally by Immunity, so I could be missing part of their strategy. Although I’ve talked with most of the VA vendors, I haven’t specifically discussed their plans for exploit validation or penetration testing, and I base my conclusions more on conversations with end users.

—Rich

Penetration Testing Market Grows and Matures, but Faces Challenges

By Rich

With last week’s acquisition of Metasploit by Rapid7, I thought it might be a good time to do a review of the penetration testing market and the evolving role of pen testing in the security arsenal. We’ve seen a few different shifts over the past few years in how organizations use pen testing, and I believe this acquisition – combined with changes in enterprise infrastructure – indicates that pen testing is becoming more essential, more closely tied to vulnerability assessment, and generally more mature.

First, a bit of a disclaimer: I’m approaching this as an analyst, not a penetration tester. Although I’ve used many of the tools in demonstrations and the lab, I’ve never worked as a pen tester and don’t claim to have that skill set. I’m fairly sure my BBS hacking experience from the mid-80’s doesn’t really count.

There are two important issues we need to focus on when evaluating penetration testing – changes in need and value, and changes in delivery methods and tools.

The value of penetration testing

There is sometimes a debate on the value of penetration testing. Some question its usefulness, since a test by a competent practitioner is pretty much guaranteed to succeed, but highly unlikely to find every exploit path into the organization. More comprehensive tests will find more holes, but at a much higher cost. In some verticals (particularly financials and some types of government organizations) the risk is so high that this is an accepted cost, but for less-aware and less-targeted verticals, or small and mid-sized organizations, a basic vulnerability or program assessment can find more issues at lower cost.

That’s because, until fairly recently, penetration testing was dominated by external service organizations performing broad network and host based assessments. Tests were used to:

  1. Scare management into spending more on security.
  2. Get a general sense of how hardened the organization was.
  3. Find and fix any obvious holes that might stand out either in an untargeted scan/attack, or to an attacker willing to spend a little more time with limited resources.

Basically, a pen test would give you a good sense of how you’d withstand an attack by an opponent at the same skill level as your testing team, for the amount of time/effort you were willing to pay for. Obviously there are a lot of exceptions, and I’m only talking about general market trends. But at this stage, unless you were a big target, a vulnerability assessment (including an internal assessment) would provide sufficient value at a lower cost.

That’s still how many tests are used, but we’ve seen a shift in the past few years due to a few changes in the risk and threat landscape. Specifically:

  1. An increase in highly targeted attacks.
  2. Greater use of web applications, and more web application attacks (one of the biggest sources of losses in recent major reported incidents).
  3. A market and economic system for taking advantage of exploited data.
  4. Evolution of technologies & vulnerabilities, coupled with much shorter exploit creation/adoption cycles than in the past. For example, zero day attacks were extremely uncommon just 2-3 years ago, but now seem to appear monthly.

The bad guys are making serious money, are going after harder targets, and are taking advantage of our rapid adoption of web technologies. They really have to, since we’ve gotten a lot better at securing our networks and endpoints (yes, we really have, from an overall trends standpoint).

These factors change the focus and requirements for penetration testing. While this is merely one analyst’s opinion, and some of these are very early trends, here’s what I’m seeing:

  1. Organizations are increasing the frequency of vulnerability assessments and penetration testing, to reduce between-assessment risks. In some cases these are continuous programs.
  2. Penetration tests are being more closely tied to vulnerability assessments in order to determine risk and prioritize patches and other defenses.
  3. The line between a vulnerability assessment and a penetration test is almost completely blurred for web applications – especially custom web applications.
  4. There is greater use of, and need for, penetration testing during development and pre-production phases, since some testing is prohibitively risky on a production system.

Penetration testing is being more closely tied to vulnerability assessment on non-web systems to help prioritize. A VA doesn’t necessarily tell you how exploitable a target is, and it certainly won’t tell you what the bad guy can potentially gain. A penetration test helps validate the overall risk and determine the potential impact and losses (not in financial terms – that’s for another day). A vulnerability scan can tell you that system X is vulnerable to attack Y, but you often need to go a step further with a pen test to determine if data Z is at risk. This is especially true for web applications, but also important for other types of assets.

The overall focus is shifting away from “Can someone break in, and how long will it take them?” to “Where are we most exposed, and what are our potential losses?” Penetration testing is becoming more of a prioritization and secure development tool.

See part 2 for how these factors change the solutions and penetration testing market

—Rich

Tuesday, October 27, 2009

Add Anti Exploitation to Applications You Didn’t Write

By Rich

This morning Dan Goodin over at The Register dropped me a line to get my take on a new tool from Microsoft that lets you apply anti-exploitation controls to existing applications. Here’s Dan’s article with my quote, and more information directly from Microsoft.

This. Is. Awesome.

Here’s why EMET is so significant. Anti-exploitation technologies are incredibly powerful because they reduce the risk that any vulnerability – even a zero day – can actually be exploited to cause harm. They include a bunch of techniques, such as Data Execution Prevention (DEP, a software flag enforced at the hardware level), Address Space Layout Randomization (ASLR), and stack protection.

As powerful as these techniques are, the software developer needs to design and build their programs to take advantage of them. Most developers don’t do this yet, which makes their software a major potential weak point for any host security. This is especially problematic with web browser plugins that are leveraged by web-based client-side exploits.

EMET allows anyone to add certain anti-exploitation protections to any program without requiring recompiling. You can now apply four anti-exploitation techniques to an existing application, no matter where you got it from or who programmed it (see Microsoft’s post for the list and explanation). Since this will break some applications, it’s not for the faint of heart, but EMET has per-process granularity which can help you lock something down, while leaving open the bits that break.
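
To illustrate the opt-in problem EMET works around, here is a rough sketch of what an in-process opt-in to DEP looks like, using Python’s ctypes to call the Win32 SetProcessDEPPolicy API. This is my own illustration of the developer-side opt-in, not how EMET itself works; EMET’s value is that it applies this kind of protection from the outside, without the application’s cooperation or a recompile:

    # Sketch: a process opting itself into DEP at runtime (Windows only).
    # Requires XP SP3 / Vista SP1 or later, and a 32-bit process; 64-bit
    # processes already have DEP permanently enabled, so the call fails there.
    import ctypes

    PROCESS_DEP_ENABLE = 0x00000001

    kernel32 = ctypes.windll.kernel32
    if kernel32.SetProcessDEPPolicy(PROCESS_DEP_ENABLE):
        print("DEP enabled for this process")
    else:
        # Fails if DEP is already permanent or the OS doesn't support the call.
        print("SetProcessDEPPolicy failed, error %d" % kernel32.GetLastError())

Every application that doesn’t do something equivalent (or get compiled with the right flags) is exactly the kind of weak point EMET is meant to cover.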

It’s very cool, and kudos to Microsoft. We still need to see how well it works in the real world, so hopefully we’ll get some field reports soon.

—Rich

Amazon RDS Announced

By Adrian Lane

Amazon announced a Relational Database Service today:

Amazon RDS gives you access to the full capabilities of a familiar MySQL database. This means the code, applications, and tools you already use today with your existing MySQL databases work seamlessly with Amazon RDS. Amazon RDS automatically patches the database software and backs up your database, storing the backups for a user-defined retention period.

It was natural to choose the most popular open source database, MySQL 5.1, at least in the short term. With this introduction they have effectively filled out their cloud offering for database infrastructure services. Together with the existing capabilities of Amazon’s SimpleDB and generic Amazon Machine Images that provide logical instances of any of the major database platforms, you have just about every option you could want as an application developer.

There is a list of pricing options based upon tiers of memory and computational capacity for your web service. Storage is equally flexible, with the ability to select from 5GB to 1TB of storage capacity. Snapshotting, rollbacks, resource monitoring, automated backup, and pretty much everything needed for basic database setup and maintenance.

What Amazon is doing is very cool, but this is a security blog so I need to make a few comments on security and not just act like an RDS fanboi. Which I sometimes hate, because I feel like the guy who’s yelling “Hey kid, stop running around with that sharp stick! You’ll poke your eye out!” With the AMI variants, Amazon takes care of patching and configuration, and the user takes care of access control and identity management. While the instances most likely have security patches applied on a consistent basis, there is a lot more to security than patching and IDM. I have no evidence that these database instances are insecure, but no one gets the benefit of the doubt in this case. For most relational database platforms I look at about 125 different database settings in an assessment sweep; most of these are to ensure the factory defaults have been changed. There is no reason to believe that Amazon is doing the same, so hardening the instance and protecting against SQL injection fall on the shoulders of client developers.
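
To make that concrete, here is a rough sketch of the sort of thing that stays on the application developer’s plate: forcing SSL on the client side and parameterizing queries instead of concatenating strings. It assumes the MySQLdb driver, and the endpoint, credentials, and CA bundle path are hypothetical:

    # Sketch only: hypothetical RDS endpoint, credentials, and CA bundle path.
    # Assumes the MySQLdb (MySQL-python) driver.
    import MySQLdb

    conn = MySQLdb.connect(
        host="mydb.example.rds.amazonaws.com",  # hypothetical endpoint
        port=3306,
        user="app_user",
        passwd="not-the-default-password",
        db="orders",
        ssl={"ca": "/etc/ssl/rds-ca-cert.pem"},  # require an encrypted connection
    )

    cursor = conn.cursor()
    # Parameterized query: the driver escapes the input, so hostile values
    # are treated as data rather than SQL.
    customer_id = "42; DROP TABLE orders"
    cursor.execute("SELECT id, total FROM orders WHERE customer_id = %s",
                   (customer_id,))
    for row in cursor.fetchall():
        print(row)
    conn.close()

None of this is RDS-specific, which is rather the point: the platform hands you a patched MySQL instance, and everything above that layer is still your problem.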

With MySQL databases for RDS, the situation appears to be a little different, as the user has some configuration options. The RDS Developer Guide shows that we can alter port settings and enforce SSL connections. But the API is limited and far more focused on programming than administration. The security guides don’t offer any details on usage of service accounts, default passwords, stored procedure access, networking agents, or other features that are not necessarily masked by the Amazon APIs. Many important security topics are simply not addressed. And odds are, if someone is going after your data, they are going to use SQL injection, default account access, or external stored procedures – all of which are your responsibility to secure. I would have a tough time putting any sensitive data out there until you can verify the security setup. Use caution or you might… oh, never mind.

—Adrian Lane

Monday, October 26, 2009

IDM: Identity?

By David Mortman

For Adam after harassing me on irc:

Calling ‘accounts’ ‘identities’ is broken. Discuss.

—David Mortman

Friday, October 23, 2009

Friday Summary - October 23, 2009

By Adrian Lane

The First 90 Days.

When you take a new position, what is it you will do in the first 90 days? What do you want to learn? What do you wish to accomplish? Is it enough to plan a course of action or do you immediately need to fix something? “What is your plan for your first 90 days?” is a common interview question for executives. The candidate’s answer tells the prospective employer a few things about the person’s grasp of the challenges ahead, how they operate typically, the efficiency of their approach, and how well their expectations align. Most candidates are under no illusion about taking a new role. In the best case they are filling a gap in a growing company, but more often than not they are there to fix something broken. The question cements in the mind of the candidate what is expected of them stepping in the door. And more than any other point during your tenure with a company, your first 90 days sets your boss’ and coworkers’ impressions of your effectiveness.

Never in my career has fixing security been in my top 3 challenges for the first 90 days. It’s always been quality of service, failed process, a broken product, or a dysfunctional development team. I have never been a CISO or security officer, so in the context of security I don’t really know how I would answer the question “What would my first 90 days look like?” If you are a security practitioner, how would you answer the question? Or perhaps it is more interesting to ask non-security professionals what their 90-day plan for security is? What challenges would you hope to tackle? Do you think you could come up with a security program in that amount of time? I am interested in your thoughts on this subject. Is research on the establishment of a security program interesting to you? Let us know what you think.

On to the Friday Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

  • Rich: Amrit’s post on Gartner, and working for Gartner. For the record, analysts are very well insulated from financial considerations that could affect research. That said, people who pay to speak to analysts get more time with them, and that can subtly affect opinions.
  • Adrian: My favorite post was also Amrit’s, both for his honest quadrant diagram and for the commentary. To be honest, I felt for ZL as Gartner has the power to cut a company’s sales in half, but I agree with their assessments more than I disagree. My favorite tweet was from @securityincite: “@rmogull Would someone please give Rich some work to do? He’s loitering in shopping malls now. Next he’ll be upgrading to Windows Mobile”.
  • Mortman: @RSnakes on a Plane. (Mort sent this in Monday, he was so convinced).
  • Meier: Two out of five at risk from Wi-Fi Hijacking - Interesting that Talk Talk (the ISP in the UK) is taking this stance to protect end users from heavy-handed plans to tackle Internet piracy by Lord Mandelson.
  • Chris Pepper: Time Warner Cable Exposes 65,000 Customer Routers to Remote Hacks.

Top News and Posts

Blog Comment of the Week

This week’s best comment comes from Erik Swan (a Splunk employee -Adrian) in response to Splunk and Unstructured Data:

Thanks for mentioning Splunk, and your post brings up interesting points.

We recommend that people dump “everything” into splunk and just keep it. I’d go further and say that i’d bet that far less than 1% of that data is ever looked-at/reported on/etc. As you point out, its likely harder and more risky to remove data than keep it. This clearly changes when you talk about multiple T per day ( average large system these days ), where even for a wealthy company, the IO required is very expensive and not sure the data has value/risk. My gut is that data generation growth is clearly outpacing the size/price curve per GB, and will likely do so until massively more scaleable and cost effective media is available.

For the time being, keeping everything is likely the best starting point.

At the same time, we have seen models that look a lot like email spam filtering, where “uninteresting” data is routed to different instances that have shorter retention policies. Summarization is used to capture and compress the data hopefully with no information loss. Not a great practice for compliance, but for trouble shooting and analytics can work. Longer term its an interesting area for research and something that due to the size of data we deal with needs to be solved.

—Adrian Lane