Wednesday, November 04, 2009

Verizon Has Most of the Web Application Security Pieces… But Do They Know It?

By Rich

Last week Verizon Business announced that they now offer web application vulnerability assessment software as a service. Specifically, they are reselling a full version of WhiteHat Security’s offering, customized for Verizon business customers.

To be honest I’m somewhat biased here, since WhiteHat’s CTO, Jeremiah Grossman, is a friend; but I’ve been fairly impressed with their model of SaaS-based continuous web app vulnerability assessment, which uses a combination of scanning and manual validation to reduce false positives. Jeremiah’s marketing folks will hate it when I say this, but in my mind it’s closer to penetration testing than the other SaaS vulnerability assessment products, which rely completely on automated scanning. Perhaps instead of calling this “penetration testing” we can call it “exploit validation”. Web application vulnerabilities are tougher to deal with from a risk management perspective since, on the surface, it can be very difficult to tell whether a vulnerability is exploitable, especially compared to the platform vulnerabilities typically checked by scanners. Since all web applications are custom, it’s important to validate those vulnerabilities to determine overall risk; the results of a blind scan are generally full of potential false positives – unless the scanner has been de-tuned so much that the false negative rate is extremely high instead.

Verizon Business also sells a managed web application firewall, which they mention in the press release. If you refer back to our Building a Web Application Security Program series and paper, vulnerability assessment, penetration testing, and web application firewalls are core technologies for the secure deployment and secure operations phases of managing web applications (plus monitoring, which is usually provided by the WAF and other logging).

In that series and paper, we also discussed the advantages of WAF + VA, where you dynamically generate WAF policies based on validated vulnerabilities in your application. This supports a rapid “shield then patch” model.
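
To make the “shield then patch” idea concrete, here is a minimal sketch of what WAF + VA glue code might look like, assuming a hypothetical finding format and a ModSecurity-style rule as the output. The actual formats Verizon, WhiteHat, Imperva, or F5 use are not public at this level of detail, so treat this purely as an illustration.

```python
# Hypothetical sketch of WAF + VA "virtual patching": turn a validated
# finding into a ModSecurity-style rule that blocks the attack vector until
# the application itself is fixed. The finding format and rule ID are made up.

def virtual_patch(finding, rule_id):
    """Render a blocking rule for a single validated vulnerability."""
    return (
        'SecRule ARGS:%(parameter)s "@rx %(attack_pattern)s" '
        '"id:%(rule_id)d,phase:2,deny,status:403,'
        "msg:'Virtual patch for %(type)s in %(path)s'\""
        % dict(finding, rule_id=rule_id)
    )

# Example: a validated reflected XSS finding from the assessment service.
finding = {
    "type": "XSS",
    "path": "/search",
    "parameter": "q",
    "attack_pattern": "<script",
}

print(virtual_patch(finding, 900001))
```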

In the released information, Verizon mentions that they support WAF + VA. Since we know they are using WhiteHat, that means their back-end for WAF is likely Imperva or F5, based on WhiteHat’s existing partnerships.

Thus Verizon has managed VA, managed WAF, managed WAF + VA, and some penetration testing support via the VA product.

They also have a forensics investigation/breach response unit which collects all the information used to generate the Data Breach Investigations Report.

Let’s add this up… VA + Exploit Validation (lightweight pen testing) + WAF + (WAF + VA) + incident response + threat intelligence (based on real incident responses). That’s a serious chunk of managed web security available from a single service provider. My big question is: do they realize this? It isn’t clear that they are positioning these as a combined service, or that the investigations/response guys are tied in to the operations side.

The big gap is anything in the secure development side… which, to be honest, is hard (or impossible) for any provider unless you outsource your actual development to them.

SecureWorks is another vendor in this space, offering web application assessments and managed WAF (but I don’t know if they have WAF + VA)… and I’m pretty sure there are some others out there I’m missing.

What’s the benefit? These are all pieces I believe work better when they can feed information to each other… whether internal or hosted externally. I expect the next pieces to add are better integrated application monitoring, and database activity monitoring.

(For the disclosure record, we have no current business relationships with WhiteHat, Verizon, F5, or SecureWorks, but we have done work with Imperva).

–Rich

Tuesday, November 03, 2009

Myths Surrounding Databases in Virtual Environments

By Adrian Lane

Every now and again I run into an article that totally baffles me. It’s as if the author had a bunch of somewhat related quotes sitting around, and then stitched a Frankenstein article together. In this case the article was in the October 5th edition of eWeek, and the topic was “Databases: The next big virtualization thing”. The intention seems to be sketching out some hazy future projections about virtualized databases, and what wonderful things virtualization can do for you. But if you closely examine the assertions, not only are they based on bad assumptions, they are flat-out misleading. I am not sure there is a single point in the article I wholly agree with. Rather than wallow in this mess, I will offer you what I consider to be 7 myths surrounding databases in virtual environments:

Myth #1 - Virtualization makes database administration easier. No. Any time you place a database into an environment, virtual or not, the database needs to be tuned to operate efficiently within that environment. Virtualization abstracts the resources underneath the database; it does not relieve you from the administrative tasks of tuning and provisioning. While it is theoretically possible to reduce administrative tasks by standardizing an environment, history has shown we need to optimize database configuration to accommodate organic changes that occur over time.

Myth #2 - Virtualization improves database performance. Possibly, but not always. Improvements to database performance are more likely to result from tuning SQL and database structures. Generally speaking, improvements in database logic offer an order of magnitude greater improvement than any ‘external’ changes. Virtualization does provide an easier way to allocate more resources to a database, and is highly beneficial when a database is memory or CPU constrained. I/O constrained databases are as likely to suffer from distributed storage latency as to realize gains in performance, and are more likely to require some redesign to take advantage of virtual resources. Sure, you can throw twice as many resources at a database, but that does not mean it will automatically perform better!

Myth #3 - Virtualization lets you consolidate databases. Not really. Virtualization offers the ability to use a single central database installation, but you still normally use multiple database instances to support multiple applications. Effective consolidation of databases to take advantage of virtual environments requires some database re-engineering and does not magically (automatically) occur in a virtual environment.

Myth #4 - Virtualization will reduce your database licensing costs. This is not typically the case. Check with your vendor on this, because adding a virtual CPU is likely to cost you additional fees just as if you added a real CPU. Per database pricing may mean higher licensing costs, not lower. It will depend upon your vendor’s pricing model, so do not take it for granted.

Myth #5 - Virtualization provides better database security. I have never understood this claim. How exactly could virtualization make a database more secure? Through obscurity? Some giant VMotion shell game that hides the location of the data? Access to your data is still gated by access controls and governed by permissions. Security is largely dependent upon solid configuration of the database and current patches being applied, which may or may not be easier depending upon how you have your virtual environment set up. Virtualization provides no inherent advantage to security, and opens up additional vulnerabilities. I have never been a big fan of the concept of ‘threat surface’, but if data gets copied to multiple locations there are simply more chances to gain access to the raw data files, which is why we recommend transparent database encryption for databases in virtual environments.

Myth #6 - Virtualization enables all clustered databases to be active simultaneously. Nonsense! This is possible today without virtualization. SQL Server is a good example. It offers two basic models for database clustering: an active-passive setup designed for failover, and an active-active mode for distributed processing. Both require the data sets to be synchronized, often via shared disks. The former requires no special database design work – only the appropriate configuration. In the latter case you really need a data allocation strategy to minimize performance and data contention issues. Virtualization does provide the means to make physically separate disks appear as one, but it does not make synchronization issues go away.

Myth #7 - Virtualization helps abstract the database from applications. No, it doesn’t. Abstraction technologies like Hibernate can mask the underlying database usage from an application. Generalization of data types stored within a database, or even use of XML, allows data to be moved between heterogeneous databases and applications. There is nothing inherent to virtualization technology that abstracts database usage. The benefit virtualization provides, in cases of disaster recovery, is being able to easily spawn a new copy of the database should the existing copy no longer be available.

–Adrian Lane

Thursday, October 29, 2009

Friday Summary- October 30, 2009

By Rich

This week’s Friday Summary is sponsored by Evilsquirrel Enterprises, your World Domination Specialists.

My absolute favorite holiday of the year is Halloween. More than Christmas (possibly because I’m a non-practicing Jew), more than my birthday, and even more than Talk Like a Pirate Day.

Halloween is the ultimate geek holiday. It’s the one time of year we have an excuse to pull out our table saws, microcontrollers, and pneumatics as we build wonderful devices to soil the underwear of all the neighborhood children. I knew I was finally getting it right the first year a group of kids carefully approached our home, then ran off screaming as the motion sensor tripped and the effects kicked in. Between the business and the baby I haven’t really had time to build anything new this year, but I did finally invest in some commercial-grade fog machines. Fog, light, and sound are absolutely essential for setting a good scene, and go much further than any actual decorations.

I’ve previously used the cheap foggers from Party City or the Halloween stores, but never managed to get them to last more than 2 years in a row. I’m hoping this commercial unit will be a bit more reliable… and the 20,000 cubic feet per minute of fog it kicks out can’t hurt.

This is the 13th year, 4th location, and 2nd state for our annual Evilsquirrel party. It’s a bit smaller than the “Squirrel Wars” year where we had 300 people show up and 4 live bands, but that’s what happens when everyone runs off and starts careers and families. Needless to say, my friends and I are all tremendously amused that the whole “squirrel” meme is so big these days. Now we don’t seem quite as weird.

On to the Friday Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

  • Rich: This Wired article on the anti-vaccination movement. It’s an extremely important article, but here’s the money quote for us security folks: “Looking back over human history, rationality has been the anomaly. Being rational takes work, education, and a sober determination to avoid making hasty inferences, even when they appear to make perfect sense. Much like infectious diseases themselves – beaten back by decades of effort to vaccinate the populace – the irrational lingers just below the surface, waiting for us to let down our guard.”
  • Adrian: Jeremiah’s post on Black Box vs. White Box. QA professionals have used this ‘threshold of stability’ approach for years to gate software releases, but it seems counter-intuitive to security professionals.
  • Mortman: Detecting Malice Released. Only halfway through and it is completely awesome. Best tech book I’ve read in ages. (I second that -Rich). (Meier thirds it: “Anyone I bring it up to first complains about the $40 eBook, but it’s the best technical book I’ve bought in a while.”)
  • Meier: Amazon Lets Shoppers Pay With a Phrase. This is just dumb. First, we have a phrase that’s verifiably known to be taken; second, I bet if someone did research on any web authentication mechanism identified as a “PIN”, you could map the majority of those users’ bank PINs to their other PINs. I don’t get it. Oh, and to change your PayPhrase you have to log in anyway. Way to go, Amazon.
  • Rich (2): I can’t help myself, I had a tie this week. This article from Ivan Arce at Core Security is a month old, but well worth the read.

Special – Worst Link of the Week

In this study, I have tried to determine if IT security project management is a viable career choice for women. If so, do they have what it takes to be a successful IT Security Project Manager? I would like to emphasize that IT profession cannot be generalized based on gender. No conclusion has been drawn to indicate if one sex is better than the other in any of the subsets within IT field.

Isn’t it great how the author, Gurdeep Kaur, simultaneously tells us that she’s going to investigate whether one gender has the ability to do a job, and then claims that you can’t generalize on the basis of gender? You really shouldn’t read the paper, but if you do, it goes downhill from there. The analysis is shallow and suffers largely from citing lots of studies that demonstrate the problem while providing little in the way of solutions. The few suggestions provided are insulting to say the least. I’d quote more but I can’t bring myself to do it. I am amazed that SANS actually posted this to their reading room and granted the author a “Gold Certification”.

Top News and Posts

Blog Comment of the Week

This week’s best comment comes from Marc in response to Tokenization Will Become the Dominant Payment Transaction Architecture:

I always thought Chuck E. Cheese was a rat…not a mouse. That being said, I think your example of a video arcade is a good one. I have used the casino chip analogy when explaining tokenization to people. You trade the high value data (cash in the analogy and a CC# in the use case) for some lower value data (a casino chip and a piece of “tokenized” data). The problem I have with tokens though is that they still have value in a certain context. You haven’t sufficiently devalued the original data by making it a “token.” The token can still be used to perform functions, albeit in a more limited context than the original data. And I question the methodologies currently used to generate these tokens. I have yet to see any academic research that establishes that the tokens are truly random or that they are any better than hashed values. What we’ve done is traded one type of attack for one that has yet to emerge (an underground market in valid card data for one that will surely emerge trading valid token data in poorly implemented solutions). Now, coupling a token with a time-based signature or some other authentication value makes these solutions much more palatable because then I can prove the token is being properly used. There are numerous implementation issues in the different token solutions provided in the market today…and not enough discussion of provable security and standardization of those implementations…

–Rich

Wednesday, October 28, 2009

Penetration Testing Market Update, Part 2

By Rich

This is part 2 of a series; click here for Part 1.

Penetration testing solution and market changes

I’m not exactly sure when Core Security Technologies and Immunity started business, but before then there were no dedicated commercial penetration testing tools. There were a number of vulnerability scanners, and plenty of different “micro” tools to help with different parts of a pen test, but no dedicated exploitation tools. Metasploit also changed this on the non-commercial side. For those who aren’t experts in this area, it’s important to remember that a vulnerability assessment is not a penetration test – vulnerability assessment determines if a system may be vulnerable to an attack, while penetration testing determines if that vulnerability is exploitable.

Update- Ivan from Core emailed to say that they started as a consulting firm in 1996, and the first version of Impact was released in 2002.

Rather than repeating Nick Selby’s excellent market summary of the three penetration testing tools providers over at IANS, I’ll focus on the changes we’re seeing in the overall market.

  1. The market is still dominated by services, with quality ranging from excellent to absolute snake oil. Even using a tool like Core, by far the most user-friendly, you still need a certain skill level to perform a reasonable test.
  2. The tools market is increasing, as Core and Immunity have experienced reasonable growth, along with extensive growth of the Metasploit user community.
  3. Partnerships between vulnerability assessment vendors and penetration testing solution providers have grown. This was pretty much completely driven by Core until the Metasploit acquisition by Rapid7. Core partners with Tenable, Qualys, nCircle, IBM, Lumension, GFI, and eEye. Update- Immunity also partners with Tenable; I missed that in my initial research.
  4. Web application vulnerability assessment tools (and services) almost always include some level of penetration-testing capabilities. This is a technology requirement for effective results, since it is extremely difficult to accurately validate many web application vulnerability types without some degree of exploitation (see the sketch after this list). VA tools tend to restrict themselves to avoid damaging the application being tested, and (as with nearly any vulnerability assessment) can normally be run against non-production targets with fewer safety restrictions, in order to produce deeper and more accurate results.
  5. Any penetration test worth its salt includes web applications within the scope, and pen testing tools are increasing their support for web application testing.
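
As referenced in point 4 above, here is a rough sketch of the kind of benign validation a web application scanner can perform without damaging the target: a boolean-based SQL injection check that compares responses to a true and a false condition instead of extracting data. The URL and parameter are hypothetical, and no real product works exactly this way.

```python
# Illustrative only -- not any vendor's implementation. A boolean-based check
# validates a suspected SQL injection point without modifying or extracting
# data: if a "true" and a "false" condition produce measurably different
# responses, the parameter is almost certainly injectable.
import urllib.parse
import urllib.request

def fetch(base_url, param, value):
    query = urllib.parse.urlencode({param: value})
    with urllib.request.urlopen(base_url + "?" + query, timeout=10) as resp:
        return resp.read()

def looks_injectable(base_url, param, benign_value):
    baseline   = fetch(base_url, param, benign_value)
    true_case  = fetch(base_url, param, benign_value + "' AND '1'='1")
    false_case = fetch(base_url, param, benign_value + "' AND '1'='2")
    # A real scanner would normalize dynamic page content before comparing.
    return true_case == baseline and false_case != baseline

if __name__ == "__main__":
    # Hypothetical target -- only ever point this at systems you own.
    print(looks_injectable("http://test.example.com/item", "id", "42"))
```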

I expect to see greater blurring of the lines between vulnerability assessment and penetration testing in the web application area, which will spill over into the infrastructure assessment space. We’ll also see increasing demand for internal penetration testing, especially for web applications.

Core will increase its partnerships and integration on the VA side, and could see an acquisition if larger VA vendors (a small list) see growing customer demand for penetration testing – which I do not expect in the short term. The VA market is larger, and if those vendors see pen testing client demands, or greater competition from Rapid7, they can leverage their Core partnerships. Core’s Impact Essential tool is the first to target individuals who aren’t full-time security professionals or penetration testers, and can run on an automated schedule. While it doesn’t have nearly the depth of the Pro product, it could be interesting for continuous testing. The real question is whether customers perceive it either as reducing their process costs for vulnerability management (via prioritization and elimination of non-exploitable vulnerabilities), or as a replacement for an existing VA solution. If Impact Essential can’t be used to cut overall costs, it will be hard to justify in the current economic environment.

As Nick concluded, Immunity will need to improve their UI to increase adoption beyond organic growth… unless they plan to stay focused on dedicated penetration testers. They should also consider some VA partnerships, as they will be the only penetration testing tool not partnered or integrated with VA. Update- I was incorrect: Immunity also partners with Tenable; apologies for missing that in my initial research. I agree with Nick: Immunity is most at risk in the short term from the Metasploit commercialization. If the UI improves, Immunity could use cost to compete, and some VA vendors might add them as an additional partner.

Rapid7 just jumped from being one of the less-known VA players to a household name for anyone who pays attention to penetration testing. This is a huge opportunity, but not without risks. Metasploit is an awesome tool (I’ve used it since version 1… in the lab), but not yet enterprise class. The speed, usefulness, and usability of its integration will play a major role in its long-term success and ability to springboard off the large amount of press and additional name recognition associated with this acquisition. H D also needs to aggressively maintain the Metasploit community, or Rapid7 will lose a large fraction of Metasploit’s value and have to pay staff to replace those volunteers. Quality assurance, of the product as well as the exploits, will also be important to maintain; this could reduce the speed of releasing exploits which Metasploit is famous for.

Rapid7 also faces risks due to Metasploit’s BSD license. There is nothing to prevent any other vendor from taking and using the code base. This is a common risk when commercializing any free/open source software, and we’ve seen both successes and failures.

Conclusion

Here’s how I see things developing:

  1. For infrastructure/non-web applications we will see growing demand for exploit testing automation. The vulnerability assessment vendors will add native capabilities, and Core (and Immunity, if they choose) will add more native VA capabilities and find themselves competing more with VA vendors. My gut feel is that VA vendors (other than Rapid7) will only add the most basic of capabilities, leaving the pen testing vendors with a technical advantage until both markets completely merge. That might not matter to most organizations, which either won’t understand the technology differentiation, or won’t care.
  2. There will continue to be a need for in-depth tools to support professional penetration testers. This market will continue to grow, but will not offer the opportunities of the broader, ‘lights-out’ automated side of the market.
  3. Overall, the penetration testing tools market will continue to grow. This acquisition and other market trends validate the usefulness of this market, especially in assisting with remediation prioritization – not just problem identification.
  4. The greatest area of growth will be in web applications, and as I mentioned before the lines between pure VA and pure penetration testing will completely blur in this area.
  5. All the penetration testing vendors will benefit from the Metasploit acquisition. Immunity faces the greatest mid-term risk, Core the greatest potential for price pressure, and Rapid7 the risk of losing the Metasploit community and seeing their work appear in competing products. All three vendors can manage potential risks, but the answers aren’t necessarily easy.

A bit of disclosure – I haven’t been briefed formally by Immunity, so I could be missing part of their strategy. Although I’ve talked with most of the VA vendors, I haven’t specifically discussed their plans for exploit validation or penetration testing, and I base my conclusions more on conversations with end users.

–Rich

Penetration Testing Market Grows and Matures, but Faces Challenges

By Rich

With last week’s acquisition of Metasploit by Rapid7, I thought it might be a good time to do a review of the penetration testing market and the evolving role of pen testing in the security arsenal. We’ve seen a few different shifts over the past few years in how organizations use pen testing, and I believe this acquisition – combined with changes in enterprise infrastructure – indicates that pen testing is becoming more essential, more closely tied to vulnerability assessment, and generally more mature.

First, a bit of a disclaimer: I’m approaching this as an analyst, not a penetration tester. Although I’ve used many of the tools in demonstrations and the lab, I’ve never worked as a pen tester and don’t claim to have that skill set. I’m fairly sure my BBS hacking experience from the mid-80’s doesn’t really count.

There are two important issues we need to focus on when evaluating penetration testing – changes in need and value, and changes in delivery methods and tools.

The value of penetration testing

There is sometimes a debate on the value of penetration testing. Some question its usefulness, since a test by a competent practitioner is pretty much guaranteed to succeed, but highly unlikely to find every exploit path into the organization. More comprehensive tests will find more holes, but at a much higher cost. In some verticals (particularly financials and some types of government organizations) the risk is so high that this is an accepted cost, but for less-aware and less-targeted verticals, or small and mid-sized organizations, a basic vulnerability or program assessment can find more issues at lower cost.

That’s because, until fairly recently, penetration testing was dominated by external service organizations performing broad network and host based assessments. Tests were used to:

  1. Scare management into spending more on security.
  2. Get a general sense of how hardened the organization was.
  3. Find and fix any obvious holes that might stand out either in an untargeted scan/attack, or to an attacker willing to spend a little more time with limited resources.

Basically, a pen test would give you a good sense of how you’d withstand an attack by an opponent at the same skill level as your testing team, for the amount of time/effort you were willing to pay for. Obviously there are a lot of exceptions, and I’m only talking about general market trends. But at this stage, unless you were a big target, a vulnerability assessment (including an internal assessment) would provide sufficient value at a lower cost.

That’s still how many tests are used, but we’ve seen a shift in the past few years due to a few changes in the risk and threat landscape. Specifically:

  1. An increase in highly targeted attacks.
  2. Greater use of web applications, and more web application attacks (one of the biggest sources of losses in recent major reported incidents).
  3. A market and economic system for taking advantage of exploited data.
  4. Evolution of technologies & vulnerabilities, coupled with much shorter exploit creation/adoption cycles than in the past. For example, zero day attacks were extremely uncommon just 2-3 years ago, but now seem to appear monthly.

The bad guys are making serious money, are going after harder targets, and are taking advantage of our rapid adoption of web technologies. They really have to, since we’ve gotten a lot better at securing our networks and endpoints (yes, we really have, from an overall trends standpoint).

These factors change the focus and requirements for penetration testing. While this is merely one analyst’s opinion, and some of these are very early trends, here’s what I’m seeing:

  1. Organizations are increasing the frequency of vulnerability assessments and penetration testing, to reduce between-assessment risks. In some cases these are continuous programs.
  2. Penetration tests are being more closely tied to vulnerability assessments in order to determine risk and prioritize patches and other defenses.
  3. The line between a vulnerability assessment and a penetration test is almost completely blurred for web applications – especially custom web applications.
  4. There is greater use of, and need for, penetration testing during development and pre-production phases, since some testing is prohibitively risky on a production system.

Penetration testing is being more closely tied to vulnerability assessment on non-web systems to help prioritize. A VA doesn’t necessarily tell you how exploitable a target is, and it certainly won’t tell you what the bad guy can potentially gain. A penetration test helps validate the overall risk and determine the potential impact and losses (not in financial terms – that’s for another day). A vulnerability scan can tell you that system X is vulnerable to attack Y, but you often need to go a step further with a pen test to determine if data Z is at risk. This is especially true for web applications, but also important for other types of assets.

The overall focus is shifting away from “Can someone break in, and how long will it take them?” to “Where are we most exposed, and what are our potential losses?” Penetration testing is becoming more of a prioritization and secure development tool.

See Part 2 for how these factors change penetration testing solutions and the market.

–Rich

Tuesday, October 27, 2009

Add Anti Exploitation to Applications You Didn’t Write

By Rich

This morning Dan Goodin over at The Register dropped me a line to get my take on a new tool from Microsoft that lets you apply anti-exploitation controls to existing applications. Here’s Dan’s article with my quote, and more information directly from Microsoft.

This. Is. Awesome.

Here’s why EMET is so significant. Anti-exploitation technologies are incredibly powerful because they reduce the risk that any vulnerability – even a zero day – can actually be exploited to cause harm. They include techniques such as Data Execution Prevention (DEP, a software flag enforced at the hardware level), Address Space Layout Randomization (ASLR), and stack protection.

As powerful as these techniques are, the software developer needs to design and build their programs to take advantage of them. Most developers don’t do this yet, which makes their software a major potential weak point for any host security. This is especially problematic with web browser plugins that are leveraged by web-based client-side exploits.
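
To illustrate the opt-in problem, the sketch below shows a process enabling DEP for itself at runtime via the Win32 SetProcessDEPPolicy call (wrapped in Python’s ctypes purely for brevity). This is the kind of thing a developer has to remember to do, and what EMET effectively does on their behalf; it is an illustration of the API, not of how EMET itself is implemented.

```python
# Minimal sketch (Windows only): a process opting itself in to DEP at runtime.
# This is the sort of call a developer has to make (or the linker flag they
# have to set); EMET's value is applying protections to binaries that never did.
import ctypes

PROCESS_DEP_ENABLE = 0x00000001

def enable_dep_for_current_process():
    kernel32 = ctypes.windll.kernel32
    # SetProcessDEPPolicy exists on XP SP3 / Vista SP1 and later, and only
    # applies to 32-bit processes (64-bit processes always run with DEP).
    if not kernel32.SetProcessDEPPolicy(PROCESS_DEP_ENABLE):
        raise ctypes.WinError()

if __name__ == "__main__":
    enable_dep_for_current_process()
    print("DEP permanently enabled for this process")
```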

EMET allows anyone to add certain anti-exploitation protections to any program without requiring recompiling. You can now apply four anti-exploitation techniques to an existing application, no matter where you got it from or who programmed it (see Microsoft’s post for the list and explanation). Since this will break some applications, it’s not for the faint of heart, but EMET has per-process granularity which can help you lock something down, while leaving open the bits that break.

It’s very cool, and kudos to Microsoft. We still need to see how well it works in the real world, so hopefully we’ll get some field reports soon.

–Rich

Amazon RDS Announced

By Adrian Lane

Amazon announced a Relational Database Service today:

Amazon RDS gives you access to the full capabilities of a familiar MySQL database. This means the code, applications, and tools you already use today with your existing MySQL databases work seamlessly with Amazon RDS. Amazon RDS automatically patches the database software and backs up your database, storing the backups for a user-defined retention period.

It was natural to choose the most popular open source database, MySQL 5.1, at least in the short term. With this introduction they have effectively filled out their cloud offering for database infrastructure services. Combined with the existing capabilities of Amazon’s SimpleDB, and generic Amazon Machine Images that provide logical instances of any of the major database platforms, you have just about every option you could want as an application developer.

There is a list of pricing options based upon tiers of memory and computational capacity for your web service. Storage is equally flexible, with the ability to select from 5GB to 1TB of capacity. Snapshotting, rollbacks, resource monitoring, and automated backup are all included – pretty much everything needed for basic database setup and maintenance.

What Amazon is doing is very cool, but this is a security blog, so I need to make a few comments on security and not just act like an RDS fanboi – a role I sometimes hate, because I feel like the guy who’s yelling “Hey kid, stop running around with that sharp stick! You’ll poke your eye out!” With the AMI variants, Amazon takes care of patching and configuration, while the user takes care of access control and identity management. While the instances most likely have security patches applied on a consistent basis, there is a lot more to security than patching and IDM. I have no evidence that these database instances are insecure, but no one gets the benefit of the doubt in this case. For most relational database platforms I look at about 125 different database settings in an assessment sweep; most of these are to ensure the factory defaults have been changed. There is no reason to believe that Amazon is doing the same, so protection against SQL injection falls on the shoulders of client developers.

With MySQL databases for RDS, the situation appears to be a little different, as the user has some configuration options. The RDS Developer Guide shows that we can alter port settings and enforce SSL connections. But the API is limited and far more focused on programming than administration. The security guides don’t offer any details on usage of service accounts, default passwords, stored procedure access, networking agents, or other features that are not necessarily masked by the Amazon APIs. Many important security topics are simply not addressed. And odds are, if someone is going after your data, they are going to use SQL injection, default account access, or external stored procedures – all of which are your responsibility to secure. I would have a tough time putting any sensitive data out there until I could verify the security setup. Use caution or you might… oh, never mind.
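
For what it’s worth, here is a minimal sketch of the kind of hardening that stays on the customer’s side of the line: requiring SSL for an application account and connecting over SSL from the client. The endpoint, account names, passwords, and CA path are placeholders, and the GRANT syntax is standard MySQL 5.1 rather than anything RDS-specific.

```python
# Sketch of client-side hardening that remains the customer's job on RDS:
# require SSL for an application account and connect over SSL yourself.
# Endpoint, accounts, passwords, and CA path below are all placeholders.
import MySQLdb  # the MySQL-python driver of the era

conn = MySQLdb.connect(
    host="your-instance-endpoint.example.com",  # the endpoint Amazon gives you
    user="admin",
    passwd="not-the-default-password",
    db="mysql",
    ssl={"ca": "/etc/ssl/certs/ca-bundle.pem"},  # verify the server certificate
)

cur = conn.cursor()
# Give the application account only what it needs, and force it to use SSL.
cur.execute(
    "GRANT SELECT, INSERT, UPDATE ON appdb.* TO 'appuser'@'%' "
    "IDENTIFIED BY 'a-strong-password' REQUIRE SSL"
)
cur.execute("FLUSH PRIVILEGES")
conn.close()
```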

–Adrian Lane

Monday, October 26, 2009

IDM: Identity?

By David Mortman

For Adam, after harassing me on IRC:

Calling ‘accounts’ ‘identities’ is broken. Discuss.

–David Mortman

Thursday, October 22, 2009

Friday Summary - October 23, 2009

By Adrian Lane

The First 90 Days.

When you take a new position, what is it you will do in the first 90 days? What do you want to learn? What do you wish to accomplish? Is it enough to plan a course of action or do you immediately need to fix something? “What is your plan for your first 90 days?” is a common interview question for executives. The candidate’s answer tells the prospective employer a few things about the person’s grasp of the challenges ahead, how they operate typically, the efficiency of their approach, and how well their expectations align. Most candidates are under no illusion about taking a new role. In the best case they are filling a gap in a growing company, but more often than not they are there to fix something broken. The question cements in the mind of the candidate what is expected of them stepping in the door. And more than any other point during your tenure with a company, your first 90 days sets your boss’ and coworkers’ impressions of your effectiveness.

Never in my career has fixing security been in my top 3 challenges for the first 90 days. It’s always been quality of service, a failed process, a broken product, or a dysfunctional development team. I have never been a CISO or security officer, so in the context of security I don’t really know how I would answer the question “What would my first 90 days look like?” If you are a security practitioner, how would you answer the question? Or perhaps it is more interesting to ask non-security professionals what their 90-day plan for security is? What would you hope to accomplish? Do you think you could come up with a security program in that amount of time? I am interested in your thoughts on this subject. Is research on the establishment of a security program interesting to you? Let us know what you think.

On to the Friday Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

  • Rich: Amrit’s post on Gartner, and working for Gartner. For the record, analysts are very well insulated from financial considerations that could affect research. That said, people who pay to speak to analysts get more time with them, and that can subtly affect opinions.
  • Adrian: My favorite post was also Amrit’s, both for his honest quadrant diagram and for the commentary. To be honest, I felt for ZL as Gartner has the power to cut a company’s sales in half, but I agree with their assessments more than I disagree. My favorite tweet was from @securityincite: “@rmogull Would someone please give Rich some work to do? He’s loitering in shopping malls now. Next he’ll be upgrading to Windows Mobile”.
  • Mortman: @RSnakes on a Plane. (Mort sent this in Monday, he was so convinced).
  • Meier: Two out of five at risk from Wi-Fi Hijacking - Interesting that Talk Talk (the ISP in the UK) is taking this stance to protect end users from Lord Mandelson’s heavy-handed plans to tackle Internet piracy.
  • Chris Pepper: Time Warner Cable Exposes 65,000 Customer Routers to Remote Hacks.

Top News and Posts

Blog Comment of the Week

This week’s best comment comes from Erik Swan (a Splunk employee -Adrian) in response to Splunk and Unstructured Data:

Thanks for mentioning Splunk, and your post brings up interesting points.

We recommend that people dump “everything” into splunk and just keep it. I’d go further and say that i’d bet that far less than 1% of that data is ever looked-at/reported on/etc. As you point out, its likely harder and more risky to remove data than keep it. This clearly changes when you talk about multiple T per day ( average large system these days ), where even for a wealthy company, the IO required is very expensive and not sure the data has value/risk. My gut is that data generation growth is clearly outpacing the size/price curve per GB, and will likely do so until massively more scaleable and cost effective media is available.

For the time being, keeping everything is likely the best starting point.

At the same time, we have seen models that look a lot like email spam filtering, where “uninteresting” data is routed to different instances that have shorter retention policies. Summarization is used to capture and compress the data hopefully with no information loss. Not a great practice for compliance, but for trouble shooting and analytics can work. Longer term its an interesting area for research and something that due to the size of data we deal with needs to be solved.

–Adrian Lane

Wednesday, October 21, 2009

Hacking Envelopes

By David J. Meier

This story begins early last week with a phone call from a bank I hold accounts with. I didn’t actually answer the call, but a polite voice mail informed me of possible fraudulent activity and stated I should call them back as soon as possible. First and foremost I thought this part of my story was a social engineering exercise, but I quickly validated the phone number as legit – unless of course this was some fantastic setup where either the bank’s site was being man-in-the-middled (which would allow the attackers to publish the number as valid) or the number itself had been hijacked. Tinfoil hat aside, I called the bank.

A friendly fraud services representative handled my call, and in less than twenty minutes we had both come to the conclusion that my card for the account in question was finally compromised. By finally, I mean after roughly seven years as my primary vehicle for payment on a daily basis. But this, ladies and gentlemen, is not where the fun started. No, I had to wait for the mail for that.

Fast forward five to seven business days, when a replacement card showed up in my lockbox which, interestingly, is an often-ignored benefit of living in a high density residence. This particular day I received a rather thick stack of mail that included half a dozen similarly sized envelopes. Unfortunately, I quickly knew (without opening any of them) which one contained my new card – and it wasn’t based on feel.

One would think a financial institution might go to trivial lengths to protect card data within an envelope, but clearly not in this case. The problem I had was that four of the sixteen numbers were readable because, and I’m assuming here, some automatic feeding mechanism at the post office put enough pressure on the embossed card number to reprint the number on the outside of the envelope. It was like someone had run that part of the card through an old-school carbon card copy machine. At this point my mail turned into a pseudo scratch lottery game, and I quickly started trying household items to finish what had already been started. I was a winner on the second try (the Clinique “smoldering plum – blushing blush powder brush” was a failure – my fiancée was not impressed, and clearly I’ve watched too much CSI: Miami).

Turns out a simple brass key is all that is needed to reveal the rest of the numbers, name, and expiration. At this point I’m conflicted, with two different ideas:

  • Relief and confusion: The card security code isn’t embossed. So why must the rest of it be?
  • Social engineering: If obtaining card data like this was easy enough, I could devise a scheme where I called recipients of new cards, armed with enough data to sound like the bank, and many people would give me their security codes.

After considerable thought I feel it’s safe to say that the current method of card distribution poses a low but real level of risk, wherein a significant amount of card data can be discerned short of brute force on the envelope itself. Is it possible? Surely. Is it efficient? Not really. Would someone notice the card data on the back of the envelope? Maybe. But damn – now it really makes sense that folks just go after card data TJX-style, considering all the extra effort in this route.

–David J. Meier

Splunk and Unstructured Data

By Adrian Lane

“What the heck is up with Splunk?” It’s a question I have been getting a lot lately, from end users and SIEM vendors alike. Larry Walsh posted a nice article on how Splunk Disrupts Security Log Auditing. His post prodded me into getting off my butt and blogging about this question.

I wanted to follow up on Splunk after I wrote the post on Amazon’s SimpleDB as it relates to what I am calling the blob-ification of data: basically, creating so much data that we cannot possibly keep it in a structured environment. Mike Rothman more accurately called it “… the further decomposition of application architecture”. In this case we collect some type of data from some type of device, put it onto some type of storage, and then we use a Google-esque search tool to find what we are looking for. And the beauty of Google is that it does not care if it is a web page or a voice mail transcript – it will find what you are looking for if you give it reasonable search criteria. In essence that is the value Splunk provides: a tool to find information in a sea of data.

It is easy to locate information within a structured repository with known attributes and data types, where we know how certain pieces of information are stored. With unstructured data we may not know what we have or where it is located. For some time normalization techniques were used to introduce structure and reduce storage requirements, but that was a short-lived, low-performance approach. Adding attributes to raw data and just linking back to those attributes is far more efficient. Enter Splunk. Throw the data into flat files and index those files. Techniques of tokenization, tagging, and indexing help categorize data, with the ultimate goal of correlating events and reporting on unstructured data of differing types. Splunk is not the only vendor who does this – several SIEM and Log Management vendors do the same or something similar. My point is not that one vendor is better than another, but to point out the general trend. It is interesting that Splunk’s success in this area has even taken their competitors by surprise.
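
For readers who have not looked under the hood of this class of tools, here is a toy sketch of the tokenize-and-index idea – emphatically not Splunk’s implementation – showing how an inverted index over flat files lets you search events without a predefined schema. The log lines are made up.

```python
# Toy sketch of the tokenize/tag/index approach -- not Splunk's implementation.
# Raw events stay as flat text; an inverted index maps each token to the
# events containing it, so search needs no predefined schema.
import re
from collections import defaultdict

TOKEN = re.compile(r"[\w.@:/-]+")

def index_events(lines):
    index = defaultdict(set)
    events = []
    for offset, line in enumerate(lines):
        events.append(line)
        for token in TOKEN.findall(line.lower()):
            index[token].add(offset)
    return events, index

def search(events, index, *terms):
    hits = set.intersection(*(index.get(t.lower(), set()) for t in terms))
    return [events[i] for i in sorted(hits)]

# Made-up events of very different shapes, indexed the same way.
logs = [
    "Oct 21 10:01:02 fw01 DROP src=10.1.1.5 dst=192.168.0.9 dport=445",
    "Oct 21 10:01:07 web01 GET /login.jsp user=alice status=200",
    "Oct 21 10:02:13 web01 GET /login.jsp user=bob status=403",
]

events, index = index_events(logs)
print(search(events, index, "web01", "403"))  # the failed login from bob
```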

Larry’s point …

“The growth Splunk is achieving is due, in part, to penetrating deeper into the security marketplace and disrupting the conventional log management and auditing vendors.”

… is accurate. But they are able to do this because of the increased volume of data we are collecting. People are data pack-rats. From experience, less than 1% of the logged data I collect has any value. Far too often, organizations do not invest the time to determine what can be thrown away. Many are too chicken to throw useless data away. They don’t want to discard data, just in case it has value, just in case they need it, just in case it contains the needle in the haystack they need for a forensic investigation. I don’t want to be buried under the wash of useless data. My recommendation is to take the time to understand what data you have, determine what you need, and throw the rest away.

The pessimist in me knows that this is unlikely to happen. We are not going to start throwing data away. Storage and computing power are cheap, and we are going to store every possible piece of data we can. Amazon S3 will be the digital equivalent of those U-Haul Self Storage places where you keep your grandmother’s china and all the crap you really don’t want, but think has value. That means we must have Google-like search approaches and indexing strategies that vendors like Splunk provide just to navigate the stuff. Look for unstructured search techniques to be much sought after as the data volumes continue to grow out of control.

Hopefully the vendors will begin tagging data with an expiration date.

–Adrian Lane

Rapid7 Acquires Metasploit

By Adrian Lane

Rapid7 acquires Metasploit, the open source penetration testing platform. Wow. All I can say is ‘Wow’. I had been hearing rumors that Rapid7 was going to make an acquisition for weeks, but this was a surprise to both Rich and myself. Still coming to terms with what it means, and I have no clue what the financial terms look like, but almost certainly this is a cash+stock deal. On the surface, it is a very smart move for Rapid7.

Metasploit is considerably better known than Rapid7. Metasploit is a fixture in the security research world and there are far more people using Metasploit than Rapid7 has customers. If nothing else, this gets Rapid7 products in the hands of the people who are shaping web application security, and defining how penetration testing and vulnerability management will be conducted. In a quickly evolving market like pen testing, access to that community is invaluable for a commercial vendor. Plus they get H D Moore on staff, which is a huge benefit.

Metasploit is a well-architected framework that provides for easy extensibility and can be customized in innumerable ways. If you want to test anything from smart phones to databases, this platform will do it, from targeted exploits to fuzzing. Sure, there is work on your part and accessibility to people other than security researchers is low compared to commercial products like Core Security’s Impact, but it’s a solid platform and the integration of the two should not be difficult. It’s more a question of how best to allow Metasploit to continue its open source evolution while leveraging scans into meaningful vulnerability chaining, as well as risk scoring.

Neither is exactly an ‘enterprise ready’ product. That’s not a slam, as NeXpose performs its primary function as well as most. But Rapid7’s platform is just now breaking ground into larger companies. They have a long way to go in UI, ease of use, pragmatic analysis, integration of risk scoring, SaaS, exploit chaining, and back-end integration. That said, I am not sure they need to be an enterprise ready product, at least in the short term. It makes more sense to continue their mid-market penetration while they complete the integration. Breadth of function, which is what they now have, has proven to be a major factor in winning deals over the last couple years. They can worry about the advanced non-technical stuff later.

Identity in the market is an issue for Rapid7. They have waffled between general assessment, pen testing, and vulnerability management, without a clear identity or differentiator when going toe-to-toe with Qualys, nCircle, Tenable, Secunia, and the like. Sure, ‘compliance scoring’ is a useful marketing gimmick, but Metasploit gives them a unique identity and differentiation. Rather than scan-and-patch for known vulnerabilities, focusing mostly inside the network, they will now be able to go far deeper into externally facing custom applications. Taking a risk score across multiple applications and/or platforms is a better approach. If the two platforms are properly integrated, they’ll be useful to IT, security, and software development.

I am sure Rich will chime in with his own take later in the week. Wow.

–Adrian Lane

Tuesday, October 20, 2009

IDM: Roles, Authorization and Data Centric Security

By David Mortman

There were some great comments on my last post, which bring to light a serious problem with the way authorization is done today and how roles don’t help as much as we’d like. First we hear from LonerVamp:

And even if you get the authentication part down, very few apps that I’ve seen then tie back into whatever is in place for role management.

This is an important point that often gets glossed over by IDM vendors. It turns out that while many applications have support for third party authentication mechanisms, very few have support for third party authorization methods. This means that even if you can centralize your identities for the purposes of account creation/deletion, you still have to manage authorization inside each application. Furthermore, many of the applications that claim to support third party authorization really turn out to only support third party groups in LDAP or RADIUS, but you still have to map those groups onto roles within the applications.
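
A minimal sketch of that per-application glue layer may help: the directory can only hand back group memberships, and each application still has to maintain its own mapping of those groups onto internal roles and permissions. Every group, role, and permission name below is made up.

```python
# Sketch of the per-application glue layer: the directory hands back groups,
# but each application still maps those groups onto its own roles and
# permissions. All group, role, and permission names here are made up.

# What a central directory (LDAP/RADIUS) can tell us about a user.
DIRECTORY_GROUPS = {
    "alice": {"eng-dba", "corp-users"},
    "bob": {"corp-users"},
}

# What only the application knows: its roles, and which groups map to them.
GROUP_TO_ROLE = {
    "eng-dba": "db_admin",
    "corp-users": "report_viewer",
}
ROLE_PERMISSIONS = {
    "db_admin": {"read", "write", "alter_schema"},
    "report_viewer": {"read"},
}

def permissions_for(user):
    groups = DIRECTORY_GROUPS.get(user, set())
    roles = {GROUP_TO_ROLE[g] for g in groups if g in GROUP_TO_ROLE}
    perms = set()
    for role in roles:
        perms |= ROLE_PERMISSIONS[role]
    return roles, perms

print(permissions_for("alice"))  # alice ends up with db_admin + report_viewer
print(permissions_for("bob"))    # bob only gets read access
```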

Andrew Yeomans followed up with his own comment that shows that he’s been a dedicated Securosis reader for a while now:

I’m starting to think that a data-centric approach may be a way forward.

Today, authorizations are generally enforced by applications. Now firstly this leads to high complexity (as you describe) as there is no unifying set of “policy decision points” and “policy enforcement points”. Secondly, it allows for authorization restrictions to be bypassed by other applications that have access to the same data.

Andrew really hits the nail on the head here. We need to continue our shift towards Data-centric Security. The Data Security Lifecycle explicitly assumes that you can properly assign and control rights to who has what data, which is why IDM is so important. I’ve said it before and I’ll say it again: If you don’t know who is accessing the data, how can you possibly tell if it is being abused or misused?

Finally Omie asked:

I’ve been hearing too much about identity management recently and how the move to roles will solve our compliance problems. And I’ve been wondering and asking how we plan to keep the roles maintained over time. Of course I’ve also been under the impression that every other organization has figured that out except ours, but your post is making me rethink that assumption. If there are some best practices/examples of how to approach role maintenance, I would love to learn about them.

Roles can definitely help you out with compliance, but you are correct – role maintenance is a challenge. There is often an implicit assumption that roles, like the rest of the application configuration, are static, when in reality roles tend to be dynamic, so you absolutely need a process for adapting roles as necessary. Often the complexity of the application causes admins to add roles rather than edit the existing ones, because it is easier in the short term. But in the long run this causes extra complexity. I’ll go into more detail on this issue and how to deal with it in a later post, so stay tuned. In the meantime, NIST recently published some documents from their recent Privilege (Access) Management Workshop. In particular, you should check out A Survey of Access Control Models, to give you an idea of some ways that role based access control is problematic.

–David Mortman

Monday, October 19, 2009

The First Phishing Email I Almost Fell For

By Rich

Like many of you, I get a ton of spam/phishing email to my various accounts. Since my email is very public, I get a little more than most people. It’s so bad I use 3 layers of spam/virus filtering, and still have some messages slip through (1 cloud based filter [Postini, which will probably change soon], one on-premise UTM [Astaro], and SpamSieve on my Mac). If something gets through all of that, I still have some additional precautions I take on my desktop to (hopefully) help against targeted malware. Despite all that, I assume that someday I’ll be compromised, and it will probably be ugly.

This morning I got the first phishing email in a very long time that almost tricked me into clicking. It came from “Administrator” at one of my hosts and read:

Attention!

On October 22, 2009 server upgrade will take place. Due to this the system may be offline for approximately half an hour. The changes will concern security, reliability and performance of mail service and the system as a whole. For compatibility of your browsers and mail clients with upgraded server software you should run SSl certificates update procedure. This procedure is quite simple. All you have to do is just to click the link provided, to save the patch file and then to run it from your computer location. That’s all.

http://updates.[cut for safety]

Thank you in advance for your attention to this matter and sorry for possible inconveniences.

System Administrator

Three things tipped me off. First, that system is a private one administered by a friend. While he does send updates like this out, he always signs them with his name. Second, the URL is clearly not really that domain (but you have to read the entire thing). And finally, it leads to an Active Server Pages (.asp) URL, which that administrator never uses since our system is *nix based.
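
That second tip-off – a hostname that merely contains the expected domain without belonging to it – is mechanical enough to check in code. Here is a tiny sketch, with hypothetical domains standing in for the real ones:

```python
# Sketch of the "read the entire hostname" check: a URL such as
# http://updates.example.com.evil.net/fix.asp *contains* the expected domain
# but does not *belong* to it. All domains here are hypothetical.
from urllib.parse import urlparse

def belongs_to(url, expected_domain):
    host = (urlparse(url).hostname or "").lower()
    expected = expected_domain.lower()
    return host == expected or host.endswith("." + expected)

print(belongs_to("http://updates.example.com/fix.asp", "example.com"))           # True
print(belongs_to("http://updates.example.com.evil.net/fix.asp", "example.com"))  # False
```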

But it was early in the morning, I hadn’t had coffee yet, and we often need to upgrade our SSL after a system update on this server, so I still almost clicked on it.

According to Twitter this is a Zbot generated message:

SecBarbie: RT @mikkohypponen ZBot malware being spammed out right now in emails starting “On October 22, 2009 server upgrade will take place” Ignore it.

Thanks Erin!

It’s interesting that despite multiple obvious markers that this was malicious, and despite being very attuned to these sorts of things, I still almost clicked on it. It just goes to show you how easy it is to screw up and make a mistake, even when you’re a paranoid freak who really shouldn’t be let out of the house.

–Rich

Thursday, October 15, 2009

Friday Summary - October 16, 2009

By Rich

All last week I was out of the office on vacation down in Puerto Vallarta. It was a trip my wife and I won in a raffle at the Phoenix Zoo, which was pretty darn cool.

I managed to unplug far more than I can usually get away with these days. I had to bring the laptop due to an ongoing client project, but nothing hit and I never had to open it up. I did keep up with email, and that’s where things got interesting.

Before heading down I added the international plan to my iPhone, for about $7, which would bring my per-minute costs in Mexico down from $1 per minute to around $.69 a minute. Since we talked less than 21 minutes total on the phone down there, we lose.

For data, I signed up for the 20 MB plan at a wonderfully fair $25. You don’t want to know what a 50 MB plan costs. Since I’ve done these sorts of things before (like the Moscow trip where I could never bring myself to look at the bill), I made sure I reset my usage on the iPhone so I could carefully track how much I used.

The numbers were pretty interesting – checking my email ranged from about 500K to 1MB per check. I have a bunch of email accounts, and might have cut that down if I disabled all but my primary accounts. I tried to check email only about 2-3 times a day, only responding to the critical messages (1-4 a day). That ate through the bandwidth so quickly I couldn’t even conceive of checking the news, using Maps, or nearly any other online action. In 4 days I ran through about 14 MB, giving me a bit more space on the last day to occupy myself at the airport.

To put things in perspective, a satellite phone (which you can rent for trips – you don’t have to buy) is only $1 per minute, although the data is severely restricted (on Iridium, unless you go for a pricey BGAN). Since I was paying $3/minute on my Russia trip, next time I go out there I’ll be renting the sat phone.

So for those of you who travel internationally and want to stay in touch… good luck.

-rich

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

Other Securosis Posts

Favorite Outside Posts

Top News and Posts

Blog Comment of the Week

This week’s best comment comes from Rob in response to Which Bits are the Right Bits:

Perhaps it is not well understood that audit logs are generally not immutable. There may also be low awareness of the value of immutable logs: 1) to protect against anti-forensics tools; 2) in proving compliance due diligence, and; 3) in providing a deterrent against insider threats.

–Rich