Securosis

Research

Penetration Testing Market Update, Part 2

This is part 2 of a series; click here for Part 1.

Penetration testing solution and market changes

I’m not exactly sure when Core Security Technologies and Immunity started business, but before then there were no dedicated commercial penetration testing tools. There were a number of vulnerability scanners, and plenty of different “micro” tools to help with different parts of a pen test, but no dedicated exploitation tools. Metasploit also changed this on the non-commercial side. For those who aren’t experts in this area, it’s important to remember that a vulnerability assessment is not a penetration test: a vulnerability assessment determines whether a system may be vulnerable to an attack, while a penetration test determines whether that vulnerability is actually exploitable. (Update: Ivan from Core emailed that they started as a consultancy in 1996, and released the first version of Impact in 2002.)

Rather than repeating Nick Selby’s excellent market summary of the three penetration testing tool providers over at IANS, I’ll focus on the changes we’re seeing in the overall market:

  • The market is still dominated by services, with quality ranging from excellent to absolute snake oil. Even using a tool like Core’s, by far the most user-friendly, you still need a certain skill level to perform a reasonable test.
  • The tools market is growing: Core and Immunity have seen reasonable growth, and the Metasploit user community has expanded extensively.
  • Partnerships between vulnerability assessment vendors and penetration testing providers have grown. Until the Metasploit acquisition by Rapid7 this was driven almost entirely by Core, which partners with Tenable, Qualys, nCircle, IBM, Lumension, GFI, and eEye. (Update: Immunity also partners with Tenable; I missed that in my initial research.)

Web application vulnerability assessment tools (and services) almost always include some level of penetration testing capability.
This is a technology requirement for effective results, since it is extremely difficult to accurately validate many web application vulnerability types without some degree of exploitation. VA tools tend to restrict themselves to avoid damaging the application being tested, and (as with nearly any vulnerability assessment) can be run against non-production targets with fewer safety restrictions, to produce deeper and more accurate results. Any penetration test worth its salt includes web applications within its scope, and pen testing tools are increasing their support for web application testing.

I expect to see greater blurring of the lines between vulnerability assessment and penetration testing in the web application area, which will spill over into the infrastructure assessment space. We’ll also see increasing demand for internal penetration testing, especially for web applications.

Core will increase its partnerships and integration on the VA side, and could see an acquisition if larger VA vendors (a small list) see growing customer demand for penetration testing – which I do not expect in the short term. The VA market is larger, and if those vendors see pen testing demand from clients, or greater competition from Rapid7, they can leverage their Core partnerships.

Core’s Impact Essential is the first tool to target individuals who aren’t full-time security professionals or penetration testers, and it can run on an automated schedule. While it doesn’t have nearly the depth of the Pro product, it could be interesting for continuous testing. The real question is whether customers perceive it as reducing their process costs for vulnerability management (via prioritization and elimination of non-exploitable vulnerabilities), or as a replacement for an existing VA solution. If Impact Essential can’t be used to cut overall costs, it will be hard to justify in the current economic environment.
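That prioritization idea – filtering and ranking VA findings by whether exploitation was actually confirmed – can be sketched in a few lines. This is a hypothetical illustration, not any vendor's actual integration; the CVE IDs, hosts, and data structures are invented for the example.

```python
# Hypothetical sketch: rank vulnerability-assessment findings by whether
# a pen-testing step confirmed a working exploit. Confirmed-exploitable
# findings sort first; within each group, higher severity comes first.
# All identifiers and data below are made up for illustration.

def prioritize(findings, exploitable_cves):
    """Sort VA findings: confirmed-exploitable first, then by severity."""
    def rank(finding):
        confirmed = finding["cve"] in exploitable_cves
        # (0, ...) sorts ahead of (1, ...); negate severity so that a
        # higher CVSS-style score sorts earlier within each group.
        return (0 if confirmed else 1, -finding["severity"])
    return sorted(findings, key=rank)

if __name__ == "__main__":
    scan = [
        {"host": "10.0.0.5", "cve": "CVE-0000-0001", "severity": 9.8},
        {"host": "10.0.0.7", "cve": "CVE-0000-0002", "severity": 6.5},
        {"host": "10.0.0.9", "cve": "CVE-0000-0003", "severity": 4.3},
    ]
    # Suppose exploitation was confirmed only for this one:
    confirmed = {"CVE-0000-0002"}
    for f in prioritize(scan, confirmed):
        print(f["host"], f["cve"], f["severity"])
```

The point of the sketch is the ordering policy, not the data: a medium-severity but confirmed-exploitable finding jumps ahead of an unconfirmed critical, which is exactly the cost-cutting argument for tying pen testing to VA output.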
As Nick concluded, Immunity will need to improve its UI to increase adoption beyond organic growth… unless they plan to stay focused on dedicated penetration testers. They should also consider more VA partnerships. (Update: I originally wrote that Immunity was the only penetration testing tool not partnered or integrated with VA; that was incorrect – Immunity also partners with Tenable. Apologies for missing that in my initial research.) I agree with Nick: Immunity is most at risk in the short term from the Metasploit commercialization. If the UI improves, Immunity could compete on cost, and some VA vendors might add them as an additional partner.

Rapid7 just jumped from being one of the lesser-known VA players to a household name for anyone who pays attention to penetration testing. This is a huge opportunity, but not without risks. Metasploit is an awesome tool (I’ve used it since version 1… in the lab), but not yet enterprise class. The speed, usefulness, and usability of its integration will play a major role in its long-term success and its ability to springboard off the large amount of press and additional name recognition associated with this acquisition. HD Moore also needs to aggressively maintain the Metasploit community, or Rapid7 will lose a large fraction of Metasploit’s value and have to pay staff to replace those volunteers. Quality assurance, of the product as well as the exploits, will also be important to maintain; this could reduce the speed of exploit releases, which Metasploit is famous for. Rapid7 also faces risks due to Metasploit’s BSD license: there is nothing to prevent any other vendor from taking and using the code base. This is a common risk when commercializing any free/open source software, and we’ve seen both successes and failures.

Conclusion

Here’s how I see things developing: For infrastructure/non-web applications we will see growing demand for exploit testing automation.
The vulnerability assessment vendors will add native capabilities, and Core (and Immunity, if they choose) will add more native VA capabilities and find themselves competing more with VA vendors. My gut feel is that VA vendors (other than Rapid7) will only add the most basic capabilities, leaving the pen testing vendors with a technical advantage until both markets completely merge. That might not matter to most organizations, which either won’t understand the technology differentiation, or won’t care. There will continue to be a need for in-depth tools to support professional penetration testers. This market will continue to grow, but will not offer the opportunities of the broader, ‘lights-out’ automated side of the market.


Penetration Testing Market Grows and Matures, but Faces Challenges

With last week’s acquisition of Metasploit by Rapid7, I thought it might be a good time to review the penetration testing market and the evolving role of pen testing in the security arsenal. We’ve seen a few different shifts over the past few years in how organizations use pen testing, and I believe this acquisition – combined with changes in enterprise infrastructure – indicates that pen testing is becoming more essential, more closely tied to vulnerability assessment, and generally more mature.

First, a bit of a disclaimer: I’m approaching this as an analyst, not a penetration tester. Although I’ve used many of the tools in demonstrations and the lab, I’ve never worked as a pen tester and don’t claim to have that skill set. I’m fairly sure my BBS hacking experience from the mid-’80s doesn’t really count.

There are two important issues to focus on when evaluating penetration testing: changes in need and value, and changes in delivery methods and tools.

The value of penetration testing

There is sometimes a debate on the value of penetration testing. Some question its usefulness, since a test by a competent practitioner is pretty much guaranteed to succeed, but highly unlikely to find every exploit path into the organization. More comprehensive tests will find more holes, but at a much higher cost. In some verticals (particularly financials and some types of government organizations) the risk is so high that this is an accepted cost, but for less-aware and less-targeted verticals, or small and mid-sized organizations, a basic vulnerability or program assessment can find more issues at lower cost.

That’s because, until fairly recently, penetration testing was dominated by external service organizations performing broad network and host-based assessments. Tests were used to:

  • Scare management into spending more on security.
  • Get a general sense of how hardened the organization was.
  • Find and fix any obvious holes that might stand out either in an untargeted scan/attack, or to an attacker willing to spend a little more time with limited resources.

Basically, a pen test would give you a good sense of how you’d withstand an attack by an opponent at the same skill level as your testing team, for the amount of time/effort you were willing to pay for. Obviously there are a lot of exceptions, and I’m only talking about general market trends. But at this stage, unless you were a big target, a vulnerability assessment (including an internal assessment) would provide sufficient value at lower cost.

That’s still how many tests are used, but we’ve seen a shift in the past few years due to changes in the risk and threat landscape. Specifically:

  • An increase in highly targeted attacks.
  • Greater use of web applications, and more web application attacks (one of the biggest sources of losses in recent major reported incidents).
  • A market and economic system for taking advantage of exploited data.
  • Evolution of technologies and vulnerabilities, coupled with much shorter exploit creation/adoption cycles than in the past. For example, zero-day attacks were extremely uncommon just 2-3 years ago, but now seem to appear monthly.

The bad guys are making serious money, are going after harder targets, and are taking advantage of our rapid adoption of web technologies. They really have to, since we’ve gotten a lot better at securing our networks and endpoints (yes, we really have, from an overall trends standpoint). These factors change the focus and requirements for penetration testing. While this is merely one analyst’s opinion, and some of these are very early trends, here’s what I’m seeing:

  • Organizations are increasing the frequency of vulnerability assessments and penetration testing, to reduce between-assessment risks. In some cases these are continuous programs.
  • Penetration tests are being more closely tied to vulnerability assessments in order to determine risk and prioritize patches and other defenses.
  • The line between a vulnerability assessment and a penetration test is almost completely blurred for web applications – especially custom web applications.
  • There is greater use of, and need for, penetration testing during development and pre-production phases, since some testing is prohibitively risky on a production system.
  • Penetration testing is being more closely tied to vulnerability assessment on non-web systems to help with prioritization. A VA doesn’t necessarily tell you how exploitable a target is, and it certainly won’t tell you what the bad guy can potentially gain. A penetration test helps validate the overall risk and determine the potential impact and losses (not in financial terms – that’s for another day). A vulnerability scan can tell you that system X is vulnerable to attack Y, but you often need to go a step further with a pen test to determine if data Z is at risk. This is especially true for web applications, but also important for other types of assets.

The overall focus is shifting away from “Can someone break in, and how long will it take them?” to “Where are we most exposed, and what are our potential losses?” Penetration testing is becoming more of a prioritization and secure development tool.

See part 2 for how these factors change the solutions and the penetration testing market.


Name of the Game: Vested Interest

It seems as though lately a lot of heated conversations revolve around X.509. Whether it’s implementations using IPsec or SSL/TLS certificates, someone always ends up frustrated. Why? Because it really does suck when you think about it. There are many facets one could rant on and on about when the topic is X.509: the PKI that could have been, but isn’t and never will be. It’s a losing argument, and if I’ve already got your blood pressure on the rise (I’m lookin’ at you, registrars!) you know why it sucks, but there’s zero motivation to do anything about it. Well, there is some motivation, but it will be quickly squashed with FUD coming out of those corporations telling you how much you need them. You need the warm fuzzy feeling of having a Certificate Authority that’s WebTrust certified to create certificates to provide security and authenticity. But… didn’t someone break that?

Enter cheesy diagram:

I know, I know – that’s a work of art in and of itself. I can be hired for crappy vector art at the low low hourly rate of $29.95. There’s my pitch – now back to the story. So I bet at this point you’re telling yourself that I could have made this diagram much more readable had I arranged it differently. In reality I did it on purpose because, like X.509, stuff is there that doesn’t work quite right. That aside, I want to make sure you get two things out of this rant:

  • “Joe Schmoe” will never be able to make a decision at this level of complexity. Some people can; others cannot. Expecting everyone on the Internet to figure this stuff out is a recipe for failure and fraud.
  • The X.509 chain of trust is a big reason it sucks so much.

Let me explain. In the diagram “Joe” is visibly upset. Rightly so, because he’s at his local coffee shop doing a little social network stalking and banking.
Aside from all of the other possible attacks when using public WiFi today, he’s been had by a man-in-the-middle (MITM) attack built explicitly to steal his credentials, even though he’s careful to make sure the little lock icon says he’s good to go. There’s no way for him to validate this. So is this attack feasible today? That’s probably the wrong question to ask – the right question is: is it possible?

Let’s move on to the second item of interest: the chain of trust. X.509 is very rigid – if any certificate along the certificate chain is invalidated, you must re-sign and reissue all the certs below it. Think about what that means for thousands of computers using IPsec with X.509 for phase one authentication: if a mid-level signing server either expires or is compromised, you have to distribute and install all new certificates. Now think of the same situation as it applies to the certificate authorities you get your SSL/TLS certificates from (among other kinds, but that’s not the point). If a CA certificate is invalidated, what is the process to revoke it on the client side – meaning every browser installed on every computer across the Internet? That really sucks. Don’t even bring up CRL or OCSP, because neither works at this magnitude, nor was designed to manage it (let alone any decent-sized environment).

So let’s fix it! Let’s do something with DNSSEC to get around this rigidity, as Robert Hansen, Dan Kaminsky, and others have suggested. I’ve got bad news, my friends: vested interests. If we replace the existing rigid system with something more flexible and dynamic – say, something as distributed as DNS – we destroy the very lucrative choke point that currently creates a major revenue stream. That’s not to say this problem will never get fixed, but I expect major pressure to ensure that any replacement preserves the lucrative ‘sweet spot’ for CAs, rather than something more viable and open which might also be much cheaper.
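The revocation cascade described above can be modeled with a toy trust chain. To be clear, this is an illustration of the rigidity argument only; real X.509 path validation (per RFC 5280) also involves signature checks, expiry, name constraints, and more. The names are invented.

```python
# Toy model of an X.509-style chain of trust (illustration only).
# Each cert records its issuer; validation walks up to the trust anchor
# and fails if any link in the chain has been revoked. Note what happens
# when a mid-level CA goes bad: every cert beneath it fails at once.

issuers = {
    "root-ca": None,        # trust anchor
    "mid-ca": "root-ca",    # mid-level signing server
    "server-a": "mid-ca",
    "server-b": "mid-ca",
    "server-c": "root-ca",  # issued directly by the root
}
revoked = set()

def is_valid(cert):
    """A cert is valid only if it and every issuer above it are unrevoked."""
    while cert is not None:
        if cert in revoked:
            return False
        cert = issuers[cert]
    return True

revoked.add("mid-ca")        # the mid-level CA is compromised...
print(is_valid("server-a"))  # False: everything below mid-ca fails
print(is_valid("server-b"))  # False
print(is_valid("server-c"))  # True: chained directly to the root
```

One revocation near the top of the tree invalidates the entire subtree below it, which is exactly why a compromised intermediate means re-signing and redistributing every downstream certificate.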
As usual, it is unlikely any real progress will occur without a catastrophic event to kick-start the process, but if you’re even remotely cognizant of how things get fixed around these parts, you already knew that.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.