
The Future Of Application And Database Security: Part 2, Browser To WAF/Gateway

Since Friday is usually “trash” day (when you dump articles you don’t expect anyone to read) I don’t usually post anything major, but thanks to some unexpected work that hit yesterday, I wasn’t able to get Part 2 of this series out when I wanted to. If you can tear yourself away from those LOLCatz long enough, we’re going to talk about web browsers, WAFs, and web application gateways. These are the first two components of Application and Database Monitoring and Protection (ADMP), which I define as: Products that monitor all activity in a business application and database, identify and audit users and content, and, based on central policies, protect data based on content, context, and/or activity.

Browser Troubles

As we discussed in Part 1, one of the biggest problems in web application security is that the very model of web browsers and the World Wide Web is not conducive to current security needs. Browsers are the ultimate mashup tool, designed to take different bits from different places and seamlessly render them into a coherent whole. When I started serious web application programming (around 1995/96) this blew my mind; I was able to embed disparate systems in ways never before possible. And not only can we embed content within a browser, we can embed browsers within other content/applications. The main reason I converted from Netscape to IE as a developer was that Microsoft allowed IE to be embedded in other programs, which let us drop it into our thick VR application. Netscape was stand-alone only, seriously limiting its deployment potential.

This also makes life a royal pain on the security front, where we often need some level of isolation. Sure, we have the same-origin policy, but browsers and web programming have bloated well beyond what little security that provides. Same-origin isn’t worthless, and is still an important tool, but there are just too many ways around it, especially now that we all use tabbed browsers with a dozen windows open all the time. Browsers are also stateless by nature, no matter what AJAX trickery we use. XSS and CSRF, never mind some more sophisticated attacks, take full advantage of the weak browser/server trust models that result from these fundamental design issues. In short, we can’t trust the browser, the browser can’t trust the server, and individual windows/tabs/sessions in the browser can’t trust each other. Fun stuff!

WAF Troubles

I’ve talked about WAFs before, and their very model is also fundamentally flawed, at least as we use WAFs today. The goal of a WAF is, like a firewall, to drop known bad traffic or only allow known good traffic. We’re trying to shield our web applications from known vulnerabilities, just as we use a regular firewall to block ports, protocols, sources, and destinations. Actually, a WAF is closer to an IPS than to a stateful packet inspection firewall. But web apps are complex beasts, every single one a custom application with custom vulnerabilities. There’s no way a WAF can know all the ins and outs of the application behind it, even after it’s well tuned. WAFs also only protect against certain categories of attacks, mostly some XSS and SQL injection. They don’t handle logic flaws, CSRF, or even all XSS. Yesterday I was talking with a reference customer for one of the major WAFs, and he had no trouble slicing through it during their eval phase using some standard techniques.
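To make those “standard techniques” concrete, here’s a minimal sketch in Python of why blacklist-style filtering is so easy to slip past. The signatures are made up for illustration (they are not from any real product): the filter matches literal patterns, while the attacker has an essentially unlimited supply of equivalent encodings.

    import re
    import urllib.parse

    # Hypothetical blacklist in the spirit of early WAF signatures.
    # These patterns are illustrative only, not from any real product.
    SIGNATURES = [
        re.compile(r"<script", re.IGNORECASE),         # naive XSS check
        re.compile(r"union\s+select", re.IGNORECASE),  # naive SQL injection check
    ]

    def waf_allows(raw_query: str) -> bool:
        """Return True if the decoded query string matches no signature."""
        decoded = urllib.parse.unquote_plus(raw_query)
        return not any(sig.search(decoded) for sig in SIGNATURES)

    # The textbook attack is blocked...
    print(waf_allows("q=1 UNION SELECT password FROM users"))     # False
    # ...but trivial variations walk right through, because the signature
    # knows the literal form, not the SQL or HTML grammar behind it.
    print(waf_allows("q=1 UNION/**/SELECT password FROM users"))  # True
    print(waf_allows("q=%3Cscr%00ipt%3Ealert(1)%3C/script%3E"))   # True

Tuning helps, but the asymmetry remains: the defender has to enumerate bad forms, while the attacker only needs one variation the signatures miss.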
To combat this, we’re seeing some new approaches. F5 and WhiteHat have partnered to feed the WAF specific vulnerability information from the application vulnerability assessment. Imperva just announced a similar approach, with a bunch of different partners. These advances are great to see, but I think WAFs will also need to evolve in some different ways. I just don’t think the model of managing all this from the outside will work effectively enough.

Enter ADMP

The idea of ADMP is that we build a stack of interconnected security controls from the browser to the database. At all levels we both monitor activity and include enforcement controls. The goal is to start with browser session virtualization connected to a web application gateway/WAF. Then traffic hits the web server and web application server, both with internal instrumentation and anti-exploitation. Finally, transactions drop to the database, where they are again monitored and protected. All of the components for this model exist today, so it’s not science fiction. We have browser session virtualization, WAFs, SSL-VPNs (that will make sense in a minute), application security services and application activity monitoring, and database activity monitoring. In addition to the pure defensive elements, we’ll also tie into the applications at the design and code level through security services for adaptive authentication, transaction authentication, and other shared services (happy, Dre? 🙂 ). The key is that this will all be managed through a central console via consistent policies.

In my mind, this is the only approach that makes sense. We need to understand the applications and the databases that back them. We have to do something at the browser level, since even proper parameterization and server-side validation can’t meet all our needs. We have to start looking at transactions, business context, and content, rather than just packets and individual requests. Point solutions at any particular layer have limited effectiveness, but if we stop looking at our web applications as pieces, and instead design security that addresses them as a whole, we’ll be in much better shape. Not that anything is perfect; we’re looking at risk reduction, not risk elimination. A web application isn’t just a web server, some J2EE code, or a DB: it’s a collection of many elements working together to perform business transactions, and that’s how we need to look at them for effective security.

The Browser and Web Application Gateway

A little while back I wrote about the concept of browser session virtualization. To plagiarize myself and save a little writing time so I can get


Don’t Use chmod To Block Mac OS X ARDAgent Vulnerability

Just a quick note: if you used chmod to change the permissions of ARDAgent to block the privilege escalation vulnerability being used by the new trojans, you should still go compress or remove it. Repairing permissions restores ARDAgent and opens the vulnerability again. I suppose you could also make sure you don’t repair permissions, but it’s easiest to just remove it. I removed the chmod recommendation from the TidBITS article.
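If you want to script the cleanup, here’s a rough sketch of the idea in Python. It assumes the standard ARDAgent location and admin rights, and the archive name is my own invention; treat it as illustrative rather than a supported fix.

    import os
    import shutil

    # Standard location of ARDAgent on OS X 10.4/10.5; verify on your system.
    ARD_AGENT = "/System/Library/CoreServices/RemoteManagement/ARDAgent.app"
    ARCHIVE = "/System/Library/CoreServices/RemoteManagement/ARDAgent-disabled"

    if os.path.exists(ARD_AGENT):
        # Compress a copy first so the agent can be restored if needed later.
        shutil.make_archive(ARCHIVE, "zip",
                            root_dir=os.path.dirname(ARD_AGENT),
                            base_dir=os.path.basename(ARD_AGENT))
        # Then remove the live bundle. Unlike a chmod, this survives a Repair
        # Permissions run, which only resets permissions on files that still exist.
        shutil.rmtree(ARD_AGENT)
        print("ARDAgent archived to %s.zip and removed" % ARCHIVE)
    else:
        print("ARDAgent not found; nothing to do")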


The Future Of Application And Database Security: Part 1, Setting The Stage

I’ve been spending the past few weeks wandering around the country for various shows, speaking to some of the best and brightest in the world of application and database security. Heck, I even hired one of them. During some of my presentations I laid out my vision for where I believe application (especially web application) and database security are headed. I’ve hinted at it here on the blog, discussing the concepts of ADMP, the information-centric security lifecycle, and DAM, but it’s long past time I detailed the big picture.

I’m not going to mess around and write these posts so they are accessible to the non-geeks out there. If you don’t know what secure SDLC, DAM, SSL-VPN, WAF, and connection pooling mean, this isn’t the series for you. That’s not an insult; it’s just that this would drag out to 20+ pages if I didn’t assume a technical audience.

Will all of this play out exactly as I describe? No way in hell. If everything I predict is 100% correct, I’m just predicting common knowledge. I’m shooting for a base level of 80% accuracy, with hopes I’m closer to 90%. But rather than issuing some proclamation from the mount, I’ll detail why I think things are going where they are. You can make your own decisions as to my assumptions and the accuracy of the predictions that stem from them. Also, apologies to Dre’s friends and family. I know this will make his head explode, but that’s a cost I’m willing to pay. Special thanks to Chris Hoff and the work we’ve been doing on disruptive innovation, since that model drives most of what I’m about to describe. Finally, this is just my personal opinion as to where things will go. Adrian is also doing some research on the concept of ADMP, and may not agree with everything I say. Yes, we’re both Securosis, but when you’re predicting uncertain futures no one can speak with absolute authority. (And, as Hoff says, no one can tell you you’re wrong today.)

Forces and Assumptions

Based on the work I’ve been doing with Hoff, I’ve started to model future predictions by analyzing current trends and disruptive innovations: those innovations that force change, rather than ones that merely nudge us to steer slightly around some new curves. In the security world, these forces (disruptions) come from three angles: business innovation, threat innovation, and efficiency innovation. The businesses we support are innovating for competitive advantage, as are the bad guys. For both of them, it’s all about increasing the top line. The last category is more internal: efficiency innovation to increase the bottom line. Here’s how I see the forces we’re dealing with today, in no particular order:

  • Web browsers are inherently insecure. The very model of the World Wide Web is to pull different bits from different places, and render them all in a single view through the browser. Images from over here, text from over there, and, using iframes, entire sites from yet someplace else. It’s a powerful tool, and I’m not criticizing this model; it just is what it is. From a security standpoint, this makes our life more than a little difficult. Even with a strictly enforced same-origin policy, it’s impossible to completely prevent cross-site issues, especially when people keep multiple sessions to multiple sites open all at the same time. That’s why we have XSS, CSRF, and related attacks. We are trying to build a trust model where one end can never be fully trusted.
  • We have a massive repository of insecure code that grows daily. I’m not placing the blame on bad programmers; many of the current vulnerabilities weren’t well understood when much of this code was written. Even today, some of these issues are complex and not always easy to remediate. We are also discovering new vulnerability classes on a regular basis, requiring review and remediation of existing code. We’re talking millions of applications, never mind many millions of lines of code. Even the coding frameworks and tools themselves have vulnerabilities, as we just saw with the latest Ruby issues.
  • The volume of sensitive data that’s accessible online grows daily. The Internet and web applications are powerful business tools. It only makes sense that we connect more of our business operations online, and thus more of our sensitive data and business operations are Internet accessible.
  • The bad guys know technology. Just as it took time for us to learn and adopt new technologies, the bad guys had to get up to speed. That window is closed, and we have knowledgeable attackers.
  • The bad guys have an economic infrastructure. Not only can they steal things, but they have a market to convert the bits to bucks. Pure economics give them viable business models that depend on generating losses for us. Bad guys attack us to steal our assets (information) or hijack them to use against others (e.g., to launch a big XSS attack). They also sometimes attack us just to destroy our assets, but not often (less economic incentive, even for DoS blackmail).
  • Current security tools are not oriented to the right attack vectors. Even WAFs offer limited effectiveness, since they are tied more to our network security models than to our data/information-centric models.
  • We do not have the resources to clean up all existing code, and we can’t guarantee future code, even using a secure SDLC, won’t be vulnerable. This is probably my most contentious assumption, but most of the clients I work with just don’t have the resources to completely clean what they do have, and even the best programmers will still make mistakes that slip through to production. Code scanning tools and vulnerability analysis tools can’t catch everything, and can’t eliminate all false positives. They’ll never catch logic flaws, and even if we had a perfect tool, the second a new vulnerability appeared we’d have to go back and fix everything we’d built up to that point. We’re relying on more and more code and


Network Security Podcast, Episode 109

This week, Martin and I are joined by Adam Shostack, bandleader of the Emergent Chaos Jazz Combo of the Blogosphere and co-author of The New School of Information Security. (And he sorta works for a big software company, but that’s not important right now). You can get the show notes and episode over at netsecpodcast.com. We spend a lot of time talking about statistics and the New School concepts. I’m a big fan of the book, and Adam and I share a lot of positions on where we are as an industry, and where we need to go.


Improving OS X Security

There’s been a bunch of news on the Mac security front in the past couple of weeks, from the Safari carpet bombing attack to a couple of trojans popping up. Over the weekend I submitted an email response to a press interview where I outlined my recommended improvements to OS X to keep Macs safer than Windows. On the technical side they included elements like completing the implementation of library randomization (ASLR; see the sketch below), adding more stack protection to applications, enhancing and extending sandboxing to most major OS X applications, running fewer processes as root/system, and more extensive use of DEP. I’m not bothering to lay this out in any more depth, because Dino Dai Zovi did a much better job of describing them over on his blog. Dino’s one of the top Mac security researchers out there, so I highly suggest you read his post if you’re interested in OS X security.

There are a few additional things I’d like to see, outside of the OS-level changes:

  • A more deeply staffed Apple Security Response Center, with a public-facing side to better communicate security issues and engage the research community. Apple absolutely sucks at working with researchers and communicating on security issues. Improvements here will go a long way toward increasing confidence, managing security issues, and avoiding many of the kinds of flareups we’ve seen in the past few years.
  • Better policies on updating open source software included with OS X. In some cases, we’ve seen vulnerabilities in OS X due to included open source software, like Samba and Apache, that are unpatched for MONTHS after they are publicly known. These are fully exploitable on Macs and other Apple products until Apple issues an update. I realize this is a very tough issue, because Apple needs to run through extensive evaluation and testing before releasing updates, but they can mitigate this timeline by engaging deeply with those various open source teams to reduce the windows where users are exposed to the vulnerabilities.
  • An Apple CSO: someone who is both the internal leader and external face of Apple security. They need an evangelist with credibility in the security world (no, I’m not trolling for a job; I don’t want to move to California, even for that).
  • A secure development lifecycle for Apple products. The programmers there are amazing, but even great programmers need to follow secure coding practices that are enforced with tools and process.

I have suspicions we might see some of these technical issues fixed in Snow Leopard, but the process issues are just as important for building and maintaining a sustainable, secure platform.
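As promised, library randomization is easy to observe from user space. Here’s a small sketch, assuming a Unix-like system with Python’s ctypes available: run it more than once and compare the addresses. If they move between runs (or between reboots; some systems pick the randomization slide per boot rather than per process), the library is being loaded at randomized locations.

    import ctypes
    import ctypes.util

    # Resolve the C library and look up the address of a common symbol.
    # With library randomization in effect, this address should change
    # across runs (or at least across reboots); without it, it stays fixed.
    libc = ctypes.CDLL(ctypes.util.find_library("c"))
    addr = ctypes.cast(libc.printf, ctypes.c_void_p).value
    print("printf loaded at %#x" % addr)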


I’m With Ptacek- I Run My Mac As Admin

I’m still in New York for the FISD conference, listening to Team Cymru talk about the state of cybercrime as I wait for my turn at the podium (to talk about information-centric security and DLP). One problem with travel is keeping up with the news, so I pretty much missed the AppleScript vulnerability and now have to write it up for TidBITS on the plane before Monday. I was reading Thomas Ptacek’s post on the vulnerability, and I think it’s time I joined Tom and came out of the closet.

I run as admin on my Mac. All the time. And I’m not ashamed.

Why? As Ptacek said, even without root/admin there’s a ton of nasty things you can do on my system. In fact, you can pretty much get at anything I really worry about. I even once wrote some very basic AppleScript malware that ran on boot (after jailbreaking an improperly configured virtual machine). It didn’t need admin to work.

There. I feel better now. Glad to get that out there. (If you’re going to criticize this, go read Tom’s post and talk to him first. He’s smarter than me, and not on an airplane.)


I’m Not The Only Blogger Here!

I’ve been absolutely flattered by some of the positive comments on our posts this week, especially the database posts. But as much as I enjoy the credit for someone else’s work, I’d like to remind everyone that I’m not the only blogger here at Securosis anymore. Adrian Lane, our new Senior Security Strategist, has been putting up all the meat this week. Once I get back from this conference I’ll increase the font size on the writer tagline for the blog so it’s more obvious. We also occasionally have contributions from David Mortman and Chris Pepper, both of whom wrote posts I got the credit for. These are all brilliant guys, and I’m honored they contribute here. They’re probably smarter than I am…

… oh. Never mind. I write it all.


Speaking in Seattle And New York This Week

It’s a good thing Adrian joined when he did, because I’m slammed with speaking events this week and he gets to mind the blog. Tomorrow I head up to Bellevue to speak at the Association for Enterprise Integration’s Enterprise Security Management event. This is a mixed audience with mostly defense contractors and NSA types. Bit of a different venue for me, but I love talking with .gov/.mil/.nothingtoseehere types. Wednesday I shift over to the City (NYC) for the Financial Information Security Decisions conference put on by Information Security magazine. I’m presenting in the Data Security track, recording a session on virtualization security with Dino and Hoff, and squeezing in a few other things. I can’t speak for Dino, but Hoff and I are both battling travel-related colds, so the panel could end up as the Great Snot War of 2008.


Crime, Communication, and Statistics

I’m not sure if it’s the innate human desire to recognize patterns even when they don’t exist, or if the stars really do align on occasion, but sometimes a series of random events hits at just the right time to inspire a little thought. Or maybe I’m just fishing.

This week is an interesting one on the home front. It’s slowly emerging that we’re having some crime problems in the community: there has been a rash of vehicle break-ins and other light burglary. I found out about it when a board member of our HOA (and a former cop) posted in our community forums that we’ve hired an off-duty Phoenix police officer to patrol our neighborhood, on top of the security company we already have here. We’ve got a big community center with a pool, so we need a little more security than the average subdivision. Our community forums are starting to fill up with reports from throughout the community, and I strongly suspect this recent spree will be ending soon. All 900 homes now have access to suspect descriptions, targets, areas of concern, and so on. We’re all locking up tighter and keeping our eyes open. Already some activity was caught on camera and turned over to the police. We know the bad guys’ techniques, tactics, and operations. With this many eyeballs looking for them, the odds are low they’ll be working around here much longer.

We’ve had problems for months, and the private security was ineffective; there is just too much territory for them to cover effectively. This spree could have potentially gone on forever, but now that the community is engaged we’ve moved from relying on 2 people to nearly 900 for our monitoring and defense. We’ve gained the edge, just by sharing and talking.

In the security world, some interesting tidbits have popped up this week. First came Debix with their fraud numbers, and now Verizon with their data breach investigations report. On a private email list I was slightly critical of Verizon, but I realized I was just being greedy and wanted more detail. While it could be better, this is some great information to get out there (thanks for making me take a second look, Hoff). I shouldn’t have been critical, because when it comes to data breaches we should be thankful for any moderately reliable stats we can get our hands on.

Between these two reports, a couple of things jumped out at me. First, I think these finally debunk all the insider threat marketing garbage. No one ever really had those numbers; trust me, since I saw my “estimate” from Gartner quoted as a hard number for years. This now aligns with my gut feeling, which is that there are more bad guys on the outside than the inside, although inside attacks can be more devastating under the right circumstances. To further support this, the Verizon report also indicates that many attacks on the inside (or from partners) are really attacks from the outside that compromised an internal system. This supports my controversial positions on how we should treat the insider threat.

The second major point is that we rarely know where our data is, or whether our systems are really configured correctly. Both of these are cited in the report as major sources of breaches: unknown data, unknown systems, and misconfigured systems. This is strongly supported by the root cause analysis work I’ve done on data breaches (in my data breach presentation; I haven’t written it up in paper/blog form yet). People wonder why I’m such a big fan of DLP.
Just think about how much risk you can reduce by scanning your environment for sensitive data in the wrong places (see the sketch at the end of this post). Finally, it’s clear that web applications are a huge problem. Verizon claims web apps were involved in 34% of cases. Again, this supports my conclusion from data breach analysis that links more fraud to application compromises than to lost tapes or laptops. The Debix numbers also indicate no higher fraud levels for lost tapes than normal background levels of fraud.

We’re on the early edge of building our own neighborhood watch. We’re starting to see the first little nibs of hard breach data, and they’re already defying conventional wisdom. By communicating more and sharing, we are better able to make informed risk and security decisions. Without this information, the bad guys can keep cruising our neighborhoods with impunity, stealing whatever we accidentally leave in our cars overnight.
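Back to the DLP point above: here’s a toy sketch of the discovery side of DLP. It walks a directory tree and flags files containing strings that look like credit card numbers, using the Luhn checksum to weed out random digit runs. Real products add file cracking, proximity analysis, and far better validation, and the target path here is hypothetical.

    import os
    import re

    # Rough pattern for 13-16 digit card numbers, with optional separators.
    CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

    def luhn_ok(digits: str) -> bool:
        """Luhn checksum used by card numbers; filters out random digit runs."""
        total, parity = 0, len(digits) % 2
        for i, ch in enumerate(digits):
            d = int(ch)
            if i % 2 == parity:
                d *= 2
                if d > 9:
                    d -= 9
            total += d
        return total % 10 == 0

    def scan_tree(root):
        """Yield (path, match) pairs for card-like numbers found under root."""
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    with open(path, "r", errors="ignore") as f:
                        text = f.read()
                except OSError:
                    continue
                for m in CANDIDATE.finditer(text):
                    if luhn_ok(re.sub(r"\D", "", m.group())):
                        yield path, m.group()

    for path, hit in scan_tree("/tmp/example-share"):  # hypothetical target path
        print(path, hit)

Even something this crude, pointed at a file share, tends to turn up sensitive data in places nobody expected, which is exactly the unknown-data problem the breach reports keep flagging.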


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context or contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.