One of the more difficult aspects of the analyst gig is sorting through all the information you get and isolating any inherent biases. The kinds of inquiries we get from clients can all too easily skew our perceptions of the industry, since people tend to come to us for specific reasons, and those reasons don’t necessarily represent the mean of the industry. Aside from all the vendor updates (and customer references), our end user conversations usually involve helping someone with a specific problem – ranging from vendor selection, to basic technology education, to strategy development and problem solving. People call us when they need help, not when things are running well, so it’s all too easy to assume a particular technology is being used more widely than it really is, or that a problem is bigger or smaller than it really is, because everyone calling us is asking about it. Countering this takes a lot of outreach to find out what people are really doing, even when they aren’t calling us.
Over the past few weeks I’ve had a series of opportunities to work with end users outside the context of normal inbound inquiries, and it’s been fairly enlightening. These included direct client calls, executive roundtables such as one I participated in recently with IANS (with a mix from Fortune 50 to mid-size enterprises), and some outreach on our part. They reinforced some of what we’ve been thinking, while breaking other assumptions. I thought it would be good to compile these together into a “state of the industry” summary. Since I spend most of my time focused on web application and data security, I’ll only cover those areas:
When it comes to web application and data security, if there isn’t a compliance requirement, there isn’t budget – Nearly all of the security professionals we’ve spoken with recognize the importance of web application and data security, but they consistently tell us that unless there is a compliance requirement it’s very difficult for them to get budget. That’s not to say it’s impossible, but non-compliance projects (however important) are way down the priority list in most organizations. In a room of a dozen high-level security managers of (mostly) large enterprises, they all reinforced that compliance drove nearly all of their new projects, and there was little support for non-compliance-related web application or data security initiatives. I doubt this surprises any of you.
“Compliance” may mean more than compliance – Activities that are positioned as helping with compliance, even if they aren’t a direct requirement, are more likely to gain funding. This is especially true for projects that could reduce compliance costs. These projects tend to have a longer approval cycle, often 9 months or so, compared to the 3-6 months for directly-required compliance activities. Initiatives directly tied to limiting potential data breach notifications are the most-cited driver. Two technology examples are full disk encryption and portable device control.
PCI is the single biggest compliance driver for web application and data security – I may not be thrilled with PCI, but it’s driving more web application and data security improvements than anything else.
The term Data Loss Prevention has lost meaning – I discussed this in a post last week. Even those who have gone through a DLP tool selection process often use the term to encompass more than the narrow definition we prefer.
It’s easier to get resources to do some things manually than to buy a tool – Although tools would be much more efficient and effective for some projects, in terms of costs and results, manual projects using existing resources are easier to get approval for. As one manager put it, “I already have the bodies, and I won’t get any more money for new tools.” The most common example cited was content discovery (we’ll talk more about this a few points down).
Most people use DLP for network (primarily email) monitoring, not content discovery or endpoint protection – Even though we tend to think discovery offers equal or greater value, most organizations with DLP use it for network monitoring.
Interest in content discovery, especially DLP-based, is high, but resources are hard to get for discovery projects – Most security managers I talk with are very interested in content discovery, but they are less educated on the options and don’t have the resources. They tell me that finding the data is the easy part – getting resources to do anything about it is the limiting factor.
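To make the manual discovery work these managers described more concrete, here is a minimal sketch of the kind of scan a team might run with existing resources, assuming a simple regex-plus-Luhn check for card numbers. The path, pattern, and names are illustrative, not drawn from any client conversation:

```python
# Minimal content discovery sketch: walk a directory tree and flag files
# containing candidate credit card numbers (regex match + Luhn check).
# Paths and patterns here are illustrative assumptions, not a real product.
import os
import re

CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def scan_tree(root: str):
    """Yield (path, match) pairs for files containing likely card numbers."""
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, errors="ignore") as f:
                    text = f.read()
            except OSError:
                continue
            for m in CANDIDATE.finditer(text):
                if luhn_valid(m.group()):
                    yield path, m.group()

for path, hit in scan_tree("/shared/finance"):
    print(f"Possible PAN in {path}: {hit[:4]}...")  # never log full numbers
```

Finding the data really is the easy part; the script above is an afternoon of work, which is exactly why "I already have the bodies" wins out over a tool purchase.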
The Web Application Firewall (WAF) and Security Source Code Tools markets are nearly equal in size, with more clients on WAFs, and more money spent on source code tools per client – While it’s hard to fully quantify, we think source code tools cost more per implementation, but WAFs are in slightly wider use.
WAFs are a quicker hit for PCI compliance – Most organizations deploying WAFs do so for PCI compliance, and they’re seen as a quicker fix than secure source code projects.
Most WAF deployments are out of band, and false positives are a major problem for default deployments – Customers are installing WAFs for compliance, but are generally unable to deploy them inline (initially) due to the tuning requirements.
Full drive encryption is mature, and well deployed in the early mainstream – Full drive encryption, while not perfect, is deployable in even large enterprises. It’s now considered a level-setting best practice in financial services, and usage is growing in healthcare and insurance. Other asset recovery options, such as remote data destruction and phone home applications, are now seen as little more than snake oil. As one CISO told us, “I don’t care about the laptop, we just encrypt it and don’t worry about it when it goes missing”.
File and folder encryption is not in wide use – Very few organizations are performing any wide scale file/folder encryption, outside of some targeted encryption of PII for compliance requirements.
Database encryption is hard, and not widely used – Most organizations are dissatisfied with database encryption options, and do not deploy it widely. Within a large organization there is likely some DB encryption, with preference given to file/folder/media protection over column level encryption, but most organizations prefer to avoid it. Performance and key management are cited as the primary obstacles, even when using native tools. Current versions of database encryption (primarily native encryption) do perform better than older versions, but key management is still unsatisfactory. Large encryption projects, when initiated, take an average of 12-18 months.
Large enterprises prefer application-level encryption of credit card numbers, and tokenization – When it comes to credit card numbers, security managers prefer to encrypt them at the application level, or to consolidate numbers into a central source, using representative “tokens” throughout the rest of the application stack. These projects take a minimum of 12-18 months, similar to database encryption projects (the two are often tied together, with encryption used in the source database).
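As an illustration of the combined approach (application-level encryption plus a central token vault), here is a toy sketch. It assumes the third-party `cryptography` package, and the class and token format are hypothetical stand-ins for a real HSM-backed vault:

```python
# Toy sketch: the PAN is encrypted at the application level and kept in one
# central vault; everything downstream sees only a surrogate token.
# The in-memory store and generated key are stand-ins for a hardened
# datastore and managed (HSM-backed) key in a real deployment.
import secrets
from cryptography.fernet import Fernet

class TokenVault:
    def __init__(self):
        self._key = Fernet(Fernet.generate_key())  # stand-in for managed key
        self._store = {}  # token -> encrypted PAN

    def tokenize(self, pan: str) -> str:
        """Encrypt and store the card number; return a surrogate token."""
        # Keeping the last four digits visible mirrors common practice.
        token = f"tok_{secrets.token_hex(8)}_{pan[-4:]}"
        self._store[token] = self._key.encrypt(pan.encode())
        return token

    def detokenize(self, token: str) -> str:
        """Reverse a token; restricted to the payment service in practice."""
        return self._key.decrypt(self._store[token]).decode()

vault = TokenVault()
token = vault.tokenize("4111111111111111")
print(token)                    # safe to pass through the application stack
print(vault.detokenize(token))  # only the central vault can do this
```

The appeal is that only the vault (and whatever service fronts it) ever touches a real card number, which shrinks the footprint that compliance and encryption projects have to cover.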
Email encryption and DRM tend to be workgroup-specific deployments – Email encryption and DRM use is scattered throughout the industry, but is still generally limited to workgroup-level projects due to the complexity of management, or lack of demand/compliance from users.
Database Activity Monitoring usage continues to grow slowly, mostly for compliance, but not quickly enough to save lagging vendors – Many DAM deployments are still tied to SOX auditing, and it’s not as widely used for other data security initiatives. Performance is reasonable when you can use endpoint agents, which some DBAs still resist. Network monitoring is not seen as effective, but may still be used when local monitoring isn’t an option. Network requirements, depending on the tool, may also inhibit deployments.
My main takeaway is that security managers know what they need to do to protect information assets, but they lack the time, resources, and management support for many initiatives. There is also broad dissatisfaction with security tools and vendors in general, in large part due to poor expectation setting during the sales process, and deliberately confusing marketing. It’s not that the tools don’t work, but that they’re never quite as easy as promised.
It’s an interesting dilemma, since there is clear and broad recognition that data security (and by extension, web application security) is likely our most pressing overall issue in terms of security, but due to a variety of factors (many of which we covered in our Business Justification for Data Security paper), the resources just aren’t there to really tackle it head-on.
10 Replies to “The State of Web Application and Data Security—Mid 2009”
Great stuff. I’m particularly amused by the caption “There is also broad dissatisfaction with security tools and vendors in general, in large part due to *poor expectation setting* during the sales process, and deliberately confusing marketing.” I interpret “poor expectations” as buyers actually believing marketing BS. My goodness, all anyone needs to do is step back from the hype and ask themselves: Does security automation work 100% of the time? How much manpower does it demand? How much complexity does it add to operations? I believe once these realities set upon them, ex post facto, we have these less-than-happy sentiments. I also imagine that there IS A LOT OF MONEY associated with procuring these products, thus exposing one’s posterior elements to being chewed upon. (Promises+perceptions)-realities = less-than-happy assessment.
All industry IT initiatives relate to some specific technology. Compliance initiatives aside, let’s assume that a company simply buys the upper-right vendor from a leading analyst wave or sleight-of-hand square recommendation. Will they have achieved a secure, compliant environment as a result of buying one of everything? No, not unless they incorporate processes and procedures to monitor all this gear and ensure everything works properly. Will they have overpaid for these goods? Absolutely. The word “crutch” comes to mind. Bells, whistles, and analyst overhead are very expensive. Is this to say that tight firewall rules and close network monitoring negate the need for an IPS? Yep. IT people, and geeks in general (myself included), are enamored with shiny things. Who wants to delve into the downright banal crap unless they want to demonstrate some geek skilz? Flashy technology bits also look great on one’s resume. My point is that there are simpler, less costly alternatives to all the acronym solutions (SIEM, DLP, WAF, DRM, WTF?) that can produce similar if not better results. It may not be for everyone, but MacGyver has shown us that much can be accomplished with a few resources and a lot of ingenuity.
Iconoclast? You betcha. Conventional practices follow conventional paths. Conventionalists do whatever everyone else does, or they do what the experts tell them to do. There’s safety in numbers, and nobody incurs great risk by following the pack. However, innovation and creativity occur off the beaten path. How else does one know what’s possible unless they’re challenged to do without? This is a particularly salient conversation to have considering the severe beating suffered by IT department budgets. Can you get better results from existing systems without spending money? Can you improve processes and reduce costs? Can you reduce overhead?
@MikeA – Before I got my start in security, one of the first computer ‘hacks’ I ever saw was an injection attack. A co-worker, Tony Rems, figured out that the finger daemon did not examine the data returned from the named pipe (a ref. here). It assumed it was getting a text-based .plan file, but if you sent something like an executable or other commands, you could remotely execute code on the machine from which the user ‘fingered’ you. I learned of this back in 1994 and have seen a steady increase in exploitation on every platform since then. We have known about injection attacks for a very long time, and we will continue to see them forever, because there will always be programmers who trust the data they receive or perform incomplete cleansing. I wish I were wrong, because it is a solvable problem, but SQLi is not going away.
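To make the point concrete in its most common modern form, here is a hypothetical sketch (the table and values are mine) of the same trust failure: a query built by string concatenation trusts its input, while a parameterized query does not.

```python
# The same trust failure Adrian describes, in SQL form: concatenated
# queries trust their input; parameterized queries never let input
# become SQL syntax.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3kr1t')")

name = "x' OR '1'='1"  # attacker-supplied input

# Vulnerable: the input is spliced into the SQL, so it rewrites the query.
rows = conn.execute(
    f"SELECT secret FROM users WHERE name = '{name}'").fetchall()
print(rows)  # [('s3kr1t',)] -- every secret leaks

# Safe: the driver binds the value; it can never become SQL syntax.
rows = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()
print(rows)  # [] -- no user is literally named "x' OR '1'='1"
```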
-Adrian
@ Rich:
Yes, you are right in some ways. But remember that in webappsec, things are different than buffer overflows and classic attacks. Chained attacks work all too well.
The focus is too much on the vulnerabilities/exploits themselves and not enough on the specific security controls visibly implemented in the software design and implementation. It’s not a “new” or “different” flaw because RSnake found one XSS and Gareth Hayes found another in your webapp. It’s that Arshan found a general lack of proper output encoding, combined with a rich input validation API in your platform, and left unimplemented in a few places beyond just the straightforward presentation tier.
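For readers following along, here is a minimal, hypothetical sketch of what proper output encoding means in practice; the payload and variable names are mine, not from this thread.

```python
# Output encoding in its smallest form: untrusted data is encoded for
# the context it is emitted into (here, HTML) before it reaches the page.
import html

comment = '<script>alert("xss")</script>'  # attacker-supplied content

# Unencoded: the browser would execute this as script.
unsafe = f"<p>{comment}</p>"

# Encoded: the same bytes render as inert text.
safe = f"<p>{html.escape(comment)}</p>"
print(safe)  # <p>&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</p>
```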
@ MikeA:
I think we need to work on the low-hanging fruit and the harder problems simultaneously—and that we can. A door-stop may have its uses, but it’s just taking away the money to lock, bolt, and secure the door by wasting $110K on what amounts to a rock or a slab of wood (i.e., stuff you should get for free).
MSFT does not have their shit together because they can’t force Adobe, Apple, et al to use SafeSEH, ASLR, and DEP. MSFT is fine, but Google SDC and AppV are the future of applications… even if webapp and cloud aren’t already…
XSS and SQLi are not going away. The attacks and theory are building faster than the protections and defenses. XSS and SQLi may be low-hanging fruit, but they are not basic issues from an attack perspective. From a security control perspective, though, XSS and SQLi are easy—but nobody, not even trained, aware, and technology-driven departments, can clean up their XSS and SQLi acts properly. Who has done it?
Didn’t #3 already happen about 2 years ago?
Great conversation going on – I posted some of my initial thoughts here [http://www.mikeandrews.com/2009/06/02/the-state-of-web-application-and-data-security-securosis/] (as there are no referrers/linkbacks).
I think what people (dre) have to realize is that there’s a *lot* of low-hanging fruit to clear before we even start to think about the bigger picture. Rich’s next blog post on PCI (which I’m in the middle of reading – perhaps a comment there as well!) says that perfectly. I’ve used the “jack and the beanstalk vs silver bullets” analogy for a while [http://www.mikeandrews.com/2008/01/14/silver-bullets-or-magic-beans/] – there are very few things out there that are completely worthless – they might be overstated, but even a door-stop has its uses.
I think that there’s a tsunami coming that very few people want to talk about. IMO, there are 3 parts to this…
1. MSFT *really* gets their shit together and produces software with so few vulns, and great mitigations/defences for the ones they miss, that the ROI on breaking or writing malware for a Windows box just isn’t worth it. What happens then? I think the attention will go to Linux and Mac, and in my own personal view, they just won’t be able to cut it.
2. For whatever reason – training, awareness, tools, compliance, etc. – XSS and SQLi will go away. Once we’ve gotten rid of these basic issues that are *still* plaguing the webappsec scene, what’s left is *much* harder to address, and once again I’m not sure of the abilities of many companies to step up to that.
3. The end of viruses that can be detected by signatures (of whatever means – hashes, fragments, behavior, etc.). We’ll see very custom, very specific code, used for one purpose and that’s it. The ability of AV companies to move quickly enough even now is limited, and if this happens it’s going to be game over for that technology. Maybe combine this with drive-by downloads when visiting sites (although browsers seem to be getting better, they keep getting popped at every Pwn2Own) and the net becomes a lot more unsafe than it already is.
my $0.02
Dre,
I talk to users PRACTICALLY EVERY DAY who use WAFs successfully to block attacks – mostly SQL injection and XSS. I know EXACTLY how n-tiered web applications work, since I started developing them in 1997 or so, so don’t play that game.
I agree that many WAFs, as deployed, are expensive door stops, but the technology is far from useless. Inline, with a defined rule set built with knowledge of the underlying application, it works. I’m sorry, but it does.
You need to break out the problems of bad deployments (even though some vendors push those deployments) from how the technology can be used effectively. You seem to intermingle them and get all upset about it.
Seriously – if we pick an app, find a SQL injection flaw, and protect that specific flaw with a WAF, do you think you can still exploit that vulnerability? Maybe you’ll find a different flaw, but that’s not what I’m talking about when I say Shield then Patch.
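To show what such a targeted shield might look like, here is a hedged sketch written as WSGI middleware; the endpoint and parameter are hypothetical, and a real deployment would express the same rule in the WAF’s own language (e.g. ModSecurity syntax) rather than in application code.

```python
# A "virtual patch" in miniature: a targeted rule shielding one known
# SQL injection flaw (a numeric `id` parameter on one endpoint) until
# the underlying code is actually fixed.
import re
from urllib.parse import parse_qs

ID_IS_NUMERIC = re.compile(r"^\d+$")

class VirtualPatch:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        params = parse_qs(environ.get("QUERY_STRING", ""))
        # Rule scoped to the one vulnerable endpoint and parameter:
        # anything non-numeric in `id` is rejected before reaching the app.
        if environ.get("PATH_INFO") == "/report":
            for value in params.get("id", []):
                if not ID_IS_NUMERIC.match(value):
                    start_response("403 Forbidden",
                                   [("Content-Type", "text/plain")])
                    return [b"Blocked by virtual patch\n"]
        return self.app(environ, start_response)

# Usage: wrap the (hypothetical) vulnerable application.
# app = VirtualPatch(vulnerable_app)
```

The point is the scoping: a rule written against one known flaw in one application, which is a very different thing from dropping in a default rule set and hoping.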
@Rich:
“convinced they work when used properly”
Work for what? What do they prevent? What don’t they prevent? Do they “work”, but also cause new classes of vulnerabilities or open new channels of attacks?
“using them well”
Using what well to do what? What qualifies that a WAF is doing any better job than another WAF?
“Shield and Patch configuration we recommend”
So WAFs “shield” and “patch”? What do they shield against? What don’t they shield against? Do they “patch” or “virtual patch”? What is the difference?
“whole lot of people poorly managing them”
Who is managing WAFs? Who is installing them? Do they understand the applications? Did they even bother to speak with the developers of said applications? Are they placing a WAF in front of only one application, or multiple applications? Can you tune a single WAF (or WAF pair) for multiple applications all at once?
“put the effort in to tune them properly, they work well”
How much effort and, again, work well for what? What’s the purpose? What’s the problem that WAF is trying to solve? If WAF is trying to solve the web application vulnerability problem, then I have to tell you that it’s categorically impossible to do this and you should listen to me because I am an expert on the matter. Does anyone promoting WAF have any idea how a 3-Tier or Multi-Tier application works? Are you people even aware that web applications by default are 3 or Multi-Tier?
Gartner is about to release an MQ on WAFs, but what we’re looking at is technology that is the exact equivalent of paper weights and door stops. These things don’t do anything! They get bought to meet PCI-DSS Requirement 6.6 (or as compliance readiness to meet expectations for Req 6.6) and no other reason! Then the WAF is just left alone! What amazing functionality! I’m jumping in line to buy me a pair for every application I own!
I’m all for bashing Security Source Code Tools or other app/code scanners, but WAF in particular is a riot to bash on… it’s just so easy… I mean… out-of-band and zero tuning/optimization??? How crude! No wonder when I walk around the airport I see Barracuda Networks’ advertising next to the GEICO cavemen—they share a remarkable commonality!
LonerVamp,
I’m having problems getting the kinds of references I’d like on WAF. I’m convinced they work when used properly, but I’m not sure how many organizations are using them well. I think there’s probably a small contingent using them in the Shield and Patch configuration we recommend, and a whole lot of people poorly managing them.
My gut feel is most people drop them in out-of-band with a basic rule set, then detune them to cut off false positives, kind of like we used to do with IDS. But when someone puts the effort in to tune them properly, they work well.
I think I want to try and dig up some more non-vendor supplied references to get more of a feel for things. Usually people calling me are in the product selection process or have problems, not the happy users.
Excellent information in that post!
Rich, have you been encouraged by the tone of those you’ve talked to regarding their WAF setups? I am not surprised by the larger number of WAF deployments (dropping in an appliance certainly seems easier!), but I’m curious how many really think they’re being effective. I’m not as big a skeptic as dre (hi!), but I realistically think deployment out of band and lots of false positives leave them doing absolutely nothing. I also wonder how many are deployed with nothing but a handful of basic triggers that are just default examples.
This would be the equivalent of deploying an ANY/ANY firewall 15 years ago just to say you have a firewall. Technically, you do have one. Technically, you might even be set up to look at the alerts, but because it detects nothing, it does nothing.
Done properly, I’m wondering if source code reviews are actually less costly and involve less effort and less expert-level skill…
@Zac (and Dre),
As a former paramedic and physical security guy I firmly believe this is a deep-seated aspect of human nature. I had people die in front of me (or cleaned up after) because they kept putting things off until tomorrow. We are not wired to deal well with anything except imminent threats. I think we security types might be wired a little differently, or at least conditioned differently. I suspect it’s because we are more exposed and educated to the risks, so they seem more imminent to us than to others.
Anyway, I don’t really care. Just means I’ll never have trouble finding a job.
I am thinking that every ‘decision maker’ I’ve come across – you know, the ones who are actually able to commit resources/money – seems to have this “I’ve closed my eyes so I can’t see it and therefore it doesn’t exist” mentality.
“These problems are things that happen to other people.” “I don’t think we’re at risk right now.” Etc upon nauseating Etc. (I know… we all know all these idiotic quotes – too many of us from personal experience.)
Even when there is a compliance policy to act as a driver for committing resources/money there is no ‘pressure’ to complete the task well.
Example: I once did a contract where the entire national sales web site, the inventory system, and all the head office files/emails/etc/mp3s/etc/LOLCatz/etc/important-stuff servers resided in a single, solitary, lonely, overcrowded server room. (I am specifically avoiding using an infosec example here, btw.)
So… since they had no redundant power source (aka a UPS) and no redundant A/C chillers, they did what any reasonable company would do: they bought a DC-grade UPS setup and a redundant A/C chiller. However, they did not complete the install right away, since there was no pressure to complete this task and no resources were committed by those in power.
Guess what happened… no really, guess. That’s right, the A/C quit. Died. The temp in this DC was about 35 degrees C (~95 degrees F) at knee height. Over half of the sales web servers had shut down due to overheating by the time fans had been brought in and they started shutting off ‘non-essential’ servers. Disaster for this company (and the two brand names – one a major retailer in both Canada and the US) was averted by a lot of sweat and the narrowest of margins.
Now guess what happened the next day… no really, guess. You got it right. The redundant A/C was moved from the parking lot, where it had sat for months, to the roof and hooked up. It cost about 3 times what it should have, but that let them fix the old one.
All told, the compliance-policy-driven exercise (compliance with the rule of “we can’t stop selling”, that is) failed, ended up almost breaking the aforementioned rule, and cost almost 5 times what it should have.
The point of this little (but true) story: this is not just a case (as Andre Gironda states quite well) of the blind leading the blind. No, there is more to it than that. There is also the deep-seated belief that “this can’t happen to me” and “bad things only happen to others” that makes ‘decision makers’ not push for timely completion of such necessary projects.
I don’t know about you… but I’m not going to hold my breath waiting for the ‘decision makers’ to get a clue.