Securosis

Research

External Database Procedures

Just ran across this ‘new’ SQL Server vulnerability in my news feed. This should not be an issue, because you should not be using this set of functions. If you are using external stored procedures on a production database, stop. In fact, you want to stop using them altogether, by either locking them down or removing them entirely, and not just because of this reported instance. Exploits of external stored procedures are favorites of database hackers, and have been used to alter database functionality and to run arbitrary code in both externally and internally launched attacks. SQL Server has historically had issues with buffer overflow attacks against the pre-built procedures (see Microsoft Technical Bulletin MS02-020), and while known issues have been cleaned up, XPs are a complex and powerful extension ripe for exploits. The database vendors generally recommend, as a security best practice, restricting these to administrative use at a minimum. Even then, their use violates the segregation of OS and database functionality required by compliance and operational security. Use of external stored procedures is flagged by all of the database vulnerability assessment tools, as both a security and a compliance issue. And in case you think I am picking on SQL Server, many similar problems have been reported against Oracle's ExtProc as well. The DBA in me loves the ability to run native platform utilities to support database admin efforts. It's a really handy extension, and I know it is tempting to leave these on the database to make administration easier, but you would be relying on security through obscurity. It is a really big risk in a production environment, and one that every database hacker will have scripts to find and exploit.
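If you want a quick way to spot these before they reach production, even a crude audit of your SQL scripts helps. Below is a minimal Python sketch that flags references to dangerous extended stored procedures; the procedure list is illustrative (drawn from the sort of checks assessment tools perform), not exhaustive, and a real review would query the server's permissions directly rather than grep scripts.

```python
import re

# Extended stored procedures commonly flagged by database vulnerability
# assessment tools. Illustrative only -- real checklists are much longer.
RISKY_XPS = {"xp_cmdshell", "xp_regread", "xp_regwrite", "xp_dirtree", "xp_fileexist"}

def find_risky_xps(sql_text):
    """Return the set of risky extended procedure names referenced in a SQL script."""
    called = re.findall(r"\b(xp_\w+)\b", sql_text, flags=re.IGNORECASE)
    return {name.lower() for name in called} & RISKY_XPS

script = "EXEC master..xp_cmdshell 'dir c:\\'; SELECT * FROM orders;"
print(sorted(find_risky_xps(script)))   # ['xp_cmdshell']
```

Of course, the actual lockdown happens on the server itself (on SQL Server 2005 and later, `xp_cmdshell` can be disabled via `sp_configure`); the script above is only a cheap early-warning check for code headed toward production.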


Structured Security Program, meet Agile Process

Bryan Sullivan's thought-provoking post on Streamlining Security Practices for Agile Development caught my attention this morning. Reading it gave me the impression of a genuine generational divide: if you have ever witnessed a father and son talk about music, you know that even when they are discussing the same subject, there is little doubt the two are incompatible. The post is in line with what Rich and I have been discussing in the web application series, especially on why web apps are different, albeit at a slightly more granular level. The article is about process simplification and integration, and spells out a few of the things you need to consider when moving from a more formalized waterfall process into Agile with security. It offers two nuggets of valuable information: the risk-based inclusion of requirements, where higher-risk issues are placed into the sprints, and a way to account for lower-priority issues that require periodic inspection within a non-linear development methodology. The risk-based approach of addressing higher-risk security issues as code gets created in each sprint is very effective. It requires that issues and threats be classified in advance, but it makes the sprint requirements very clear while keeping security a core function of the product. It is also a strong motivator for code and test case re-use to reduce overhead during each sprint, especially in critical areas like input validation. Bryan also discusses the difficulties of fitting other, lower-priority security requirements extracted from the SDL into Agile for web development. In fact, he closes the post with the conclusion that retrofitting waterfall-based approaches to secure Agile development is not a good fit. Bravo to that! This is the heart of the issue: while the granular inclusion of high-risk issues into the sprint works, the rest of the 'mesh' is pretty much broken. Checks and certifications triggered upon completed milestones must be rethought.
The bucketing approach can work for you, but what you label the buckets and when you give them consideration will vary from team to team. You may decide to make them simple elements of the product and sprint backlogs. But that's the great thing about process: you get to change it to suit your purpose. Regardless, this post has some great food for thought and is worth a read.


Database Security, Statistics and You

Doing some research on business justification for several projects Rich and I are working on, I ran across a reference on the Imperva Blog to an Aberdeen Group research paper that talks about business justification for database security spending. You can download a copy for free. It's worth a read, but certainly needs to be kept in perspective. "Don't you know about the new fashion honey? All you need are looks and a whole lotta money." Best-in-Class companies are 2.4 times more likely to have DB encryption. Best-in-Class companies are more likely to employ data masking, monitoring, patch management, and encryption than Laggards. Hmmm, people who do more and spend more are leaders in security and compliance. Shocker! And this is a great quote: "… current study indicates that the majority of their data is maintained in their structured, back end systems." As opposed to what? Unstructured front-end systems? Perhaps I am being a bit unfair here, but valuable data is not stored on the perimeter. If the data has value, it is typically stored in a structured repository, because that makes it easier to query by a wider group for multiple purposes. I guess people steal data that has no value as well, but really, what's the point? "Well, duh." Saying it without saying it, I guess, the Imperva comments are spot on. You can do more for less. The statistics show what we have been saying about data security, specifically database security, for a long time. I have witnessed many large enterprises realize reduced compliance and security costs through changes in education, changes in process, and implementation of software and tools that automate their work. But these reductions came after a significant investment. How long it takes to pay off in terms of reduced manpower, costs, and efficiencies in productivity varies widely. And yes, you can screw it up. False starts are not uncommon. Success is not a given. Wrong tool, wrong process, lack of training, whatever.
Lots of expense, Best-in-Class, poor results. "But mom, everyone's doing it!" The paper provides some business justification for DB security, but raises as many questions as it answers. "Top Pressures Driving Investments" is baffling: if 'Security-related incidents' is its own category, what does 'Protect the organization' mean? Legal? Barbed wire and rent-a-cops? And how can 41% of the 'Best-in-Class' respondents be in three requirement areas? Is everything a top priority? If so, something is seriously wrong. "Best-in-Class companies are two times more likely than Laggards to collect, normalize, and correlate security and compliance information related to protecting the database." I read that as saying SIEM is kinda good for compliance and security stuff around the database, at least most of the time. According to my informal poll, this is 76.4% likely to confuse 100% of the people 50% of the time. "Does this make me look Phat?" If you quote these statistics to justify acquisition and deployment of database security, that's great. If you choose to implement a bunch of systems so that you are judged 'Best-in-Class', that's your decision. But if I do, call me on it. There is just not enough concrete information here for me to be comfortable creating an effective strategy, nor to cobble together enough data to really know what separates the effective strategies from the bad ones. Seriously, my intention here is not to trash the paper, because it contains some good general information on the database security market and some business justification. You are not going to find someone on this planet who promotes database security measures more than I do. But it is the antithesis of what I want to do and how I want to provide value. Jeez, I feel like I am scolding a puppy for peeing on the rug. It's so cute, but at the same time, it's just not appropriate. "I call Bu&@% on that!" I have been in and around security for a long time, but the analyst role is new to me.
Balancing the trifecta of raising general awareness, providing specific pragmatic advice, and laying out the justification for why you do it is a really tough set of objectives. This blog's readership comes from many different backgrounds, which further compounds the difficulty of addressing an audience; some posts are going to be overtly technical, while others are for general users. Sure, I want to raise awareness of available options, but providing clear, pragmatic advice on how to proceed with security and compliance programs is the focus. If Rich or I say 'implement these 20 tools and you will be fine', it is neither accurate nor helpful. If we recommend a tool, ask us why, ask us how, because people and process are at least as important as the technology being harnessed. If you do not feel we are giving the proper weight to various options, tell us. Post a comment on the blog. We are confident enough in our experience and abilities to offer direct advice, but not so arrogant as to think we know everything. The reason Rich and I are hammering on the whole Open Research angle is both so you know how and where our opinions were formed, and to give readers the ability to question our research as well as add value to it.


Friday Summary: 12-12-2008

When I was little, I remember seeing a television interview with a Chicago con man who made his living by conning people out of their money. Back when the term was in vogue, the con man was asked to define a 'hustle'. His reply: "Get as much as you can, as fast as you can, for as little as you can." December is the month when the hustlers come to my neighborhood. I live in a remote area where most of the roads are dirt and the houses are far apart, so we never see foot traffic unless it is December. And every year at this time the con men, hucksters, and thieves come around, claiming they are selling some item or collecting for some charity. Today's example was a con man collecting for a dubious-sounding college fund while dressed as a Mormon missionary, which was not a recipe for success. Rich had a visitor this week claiming to be a student from ASU, going door to door for bogus charity efforts. Last year's prize winner at my place was a guy with a greasy old spray bottle, half-filled with water and Pine-Sol, claiming he was selling a new miracle cleaning product. He was more interested in looking into the windows of the houses, and we guess he was casing places to rob during Christmas, as he had neither order forms nor actual product to sell. Kind of a tip-off, and one which gets my neighbors riled enough to point firearms. The good hustlers know all the angles, have a solid cover story and reasonable fake credentials, and dress for the part. And they are successful, as there are plenty of trusting people out there, and hustlers work hard at finding ways to exploit your trust. If you read this blog, you know most of the good hustlers are not walking door to door; they work the Internet, extending their reach, reducing their risk, and raising their payday. All they need are a few programming skills and a little creativity. I was not surprised by the McDonald's phish scam this week, for no other reason than that I expect it this time of year.
The implied legitimacy of a URL coupled with a logo is a powerful way to leverage recognition and trust. Sprinkle in the lure of an easy $75, and you have enough to convince some people to enter their credit card numbers for no good reason. This type of scam is not hard to pull off, as a mini how-to discussion on GNUCitizen shows: simple psychological sleight-of-hand, combined with a surfjacking attack, is an effective method of distracting even educated users from noticing what is going on. If you want to give your non-technical relatives an inexpensive gift this holiday season, help them stay safe online. On a positive note, I finally created a Twitter account this month. Yeah, yeah, keep the Luddite jokes to yourself. I was never really interested in talking about what I am doing at any given moment, but I confess I am actually enjoying it, both for meeting people and as an outlet to share some of the bizarre %!$@ I see in any given week. Here is the week's security summary: Webcasts, Podcasts, Outside Writing, and Conferences: On the Network Security Podcast this week, with Martin in absentia, Rich and Chris Hoff discuss CheckFree, Microsoft, and EMC, plus a few other topics of interest. Chris makes some great points about outbound proxies and security about halfway through; it would be great to have bookmarks into these podcasts so we can fast-forward when he goes off on some subject no one is interested in. Worth a listen! Favorite Securosis Posts: Rich: Is it too narcissistic to pick my own post? How the Cloud Destroys Everything I Love (About Web Application Security). Adrian: As it encapsulates the program we are working on, and I am happy with the content overall, Part 4: The Web Application Lifecycle. Favorite Outside Posts: Adrian: And not because the title was one of my favorite Monty Python skits, this discussion was a very interesting give and take on Pen Testing on RiskAnalys.is. Rich: A two-parter from me. First, Amrit on Amazon AWS security.
Then, Hoff on virtualized network security in the cloud. Top News and Posts: A 50 BILLION dollar Ponzi scheme? How does this go unnoticed? The automaker bailout dies in the Senate. Hack a Day provided nice coverage of the WordPress update. The Koobface worm targets MySpace and other social networking sites; this is the future of malware, folks. An Internet Explorer 7 0day on Windows XP is being exploited in the wild. Anton has a must-read short post on HIPAA. HP and Symantec lose unencrypted laptops; both companies are in the process of deploying encryption, but too late for these incidents. Blog Comment of the Week: Skott on our Building a Web Application Security Program series (too long for the entire comment, here's the best bit): Tools and plain old testing are going to run into the same void without risk analysis (showing what's valuable) and policy (defining what needs to be done for everything that's valuable). Without them, you're just locking the front door and leaving the windows, and oh, by the way, you probably forgot to put on the roof.


Building a Web Application Security Program, Part 5: Secure Development

Now that we've laid out the big picture for a web application security program, it's time to dig into the individual details. In this part (see also Part 1, Part 2, Part 3, and Part 4) we're going to discuss how to implement security during the development phases of the web application lifecycle, including which tools we recommend. In web application security, process modification, education, and development tool choices are all typically underserved. Security is frequently bolted on as an afterthought, rather than built in by design. The intention in this section is to illuminate your best options for integrating security during the pre-deployment phases of application development (i.e., requirements gathering, design, implementation, and QA).

Web Application Security: Training and the SDLC

Most web applications today were designed, built, and deployed before web application security was considered. Secure coding practices are just now entering the consciousness of most web development teams, and usually only after a security 'event'. Project management and assurance teams typically take on security only when a compliance requirement is dropped into their laps. The news may have raised awareness of SQL injection attacks, but many developers remain unaware of how reflected Cross-Site Scripting and Cross-Site Request Forgery attacks are conducted, much less what can be done to protect against them. Secure application development practices, which typically become part of a Secure Software Development Lifecycle, are in their infancy, in terms of both maturity and adoption. Regardless of what drives your requirements, education and process modification are important first steps toward producing secure web applications. Whether you are developing a new code base or retrofitting older applications, project managers, developers, and assurance personnel need to be educated about the security challenges to address, and about secure design and coding techniques.
The curriculum should cover both the general threats that need to be accounted for and the methods that hackers typically employ to subvert systems. Specialized training is necessary for each sub-discipline, including process modification options, security models for the deployment platform, security tools, and testing methodologies. Project management needs to be aware of what types of threats are relevant to the web application they are responsible for, and how to make trade-offs that minimize risk while still providing desired capabilities. Developers & QA need to understand how common exploits work, how to test for them, and how to address weaknesses. Whether your company creates its own internal training program, organizes peer educational events, or invests in third party classes, this is key for producing secure applications. Threat modeling, secure design principles, functional security requirements, secure coding practices, and security review/testing form the core of an effective secure SDLC, and are relatively straightforward to integrate into nearly all development processes. Process also plays an important role in code development, and affects security in much the same way it affects employee productivity and product quality. If the product’s specification lacks security requirements, you can’t expect it to be secure. A product that doesn’t undergo security testing, just like a product that skips functional testing, will suffer from flaws and errors. Modification to the Software Development Lifecycle to include security considerations is called Secure-SDLC, and includes simple sanity checks throughout the process to help discover problems early. While Secure-SDLC is far too involved for any real discussion in this post, our goal is instead to highlight the need for development organizations to consider security as a requirement during each phase of development. 
Tools and test cases, as we will discuss below, can be used to automate testing and assurance, but training and education are essential for taking advantage of them. Using them to augment the development and assurance process reduces overhead compared to ad hoc security adoption, and cuts down on vulnerabilities within the code. Team members educated on security issues are able to build libraries of tests that help catch typical flaws across all newer code. Extreme Programming techniques can be used to help certify that modules and components meet security requirements as part of unit testing, alongside the non-security functional testing and regression sweeps provided by assurance teams. Remember: you are the vendor, and your team should know your code better than anyone, including how to break it.

Static Analysis Tools

There are a number of third-party tools, built by organizations which understand the security challenges of web app development, to help with code review for security purposes. Static analysis examines the source code of a web application, looking for common vulnerabilities, errors, and omissions within the constructs of the language itself. This serves as an automated counterpart to peer review. Among other things, these tools generally scan for unhandled error conditions, object availability or scope problems, and potential buffer overflows. The technique is called "static analysis" because it examines the source code files, rather than the execution flow of a running program or executable object code. These products run during the development phase to catch problems prior to more formalized testing procedures. The earlier a problem is found, the easier (and cheaper) it is to fix. Static analysis supplements code review performed by developers, speeding up scans and finding bugs more quickly and cheaply than humans can.
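To make the idea concrete, here is a toy illustration of the kind of pattern a static analyzer hunts for, written against Python's own `ast` module. It flags any call to a method named `execute` whose first argument is built by string concatenation or %-formatting, a classic SQL injection smell. This is a deliberately simplistic sketch, nothing like a commercial analyzer, but it shows the principle of inspecting source structure rather than runtime behavior.

```python
import ast

class SqlConcatChecker(ast.NodeVisitor):
    """Toy static check: flag `execute(...)` calls whose query argument is
    assembled with + or % -- a common SQL injection smell."""
    def __init__(self):
        self.findings = []  # line numbers of suspicious calls

    def visit_Call(self, node):
        is_execute = isinstance(node.func, ast.Attribute) and node.func.attr == "execute"
        if is_execute and node.args:
            arg = node.args[0]
            # String built via concatenation (+) or %-interpolation is suspect;
            # a constant query with bound parameters is not.
            if isinstance(arg, ast.BinOp) and isinstance(arg.op, (ast.Add, ast.Mod)):
                self.findings.append(node.lineno)
        self.generic_visit(node)

source = '''
cursor.execute("SELECT * FROM users WHERE name = '" + name + "'")
cursor.execute("SELECT * FROM users WHERE name = %s", (name,))
'''
checker = SqlConcatChecker()
checker.visit(ast.parse(source))
print(checker.findings)  # [2] -- only the concatenated query is flagged
```

Real tools go far beyond this, tracking data flow across functions and files, which is exactly why they catch what simple pattern matching (and tired human reviewers) miss.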
The tools can hook into source code management for automated execution on a periodic basis, which again helps with early identification of issues. Static analysis is effective at discovering 'wetware' problems: problems in the code directly attributable to programmer error. The better tools integrate well with various development environments (providing educational feedback and suggesting corrective actions to programmers); can prioritize discovered vulnerabilities based on included or user-provided criteria; and include robust reporting to keep management informed, track trends, and engage the security team in the development process without requiring them to double as programmers. Static analysis tools are only moderately effective against buffer overruns, SQL injection, and code misuse. They do not account for all of the pathways within the code, and are blind to certain types of vulnerabilities and problems that are only apparent at runtime. To fill this gap, dynamic


WebAppSec: Part4, The Web Application Lifecycle

Just prior to this post, it dawned on us just how much ground we are covering. We're looking at business justification, people, process, tools and technology, training, security mindset, and more. Writing is an exercise in constraint, often pulling more content out than we are putting in. This hit home when we got lost within our own outline this morning. So before jumping into the technology discussion, we need to lay out our roadmap and show you the major pieces of a web application security program that we'll be digging into. Our goal moving forward is to recommend actionable steps that promote web application security and fit within your existing development and management frameworks. While web applications offer different challenges, as we discussed in the last post, the additional steps to address these issues aren't radical deviations from what you likely do today. With a loose mapping to the Software Development Lifecycle, we divide the program into three steps across seven coverage areas: Secure Development Process and Training – This section's focus is on placing security into the development lifecycle. We discuss general enhancements, for lack of a better word, to the people who work on delivering web applications and the processes used to guide their activity: security awareness training through education, and supportive process modifications, as precursors to making security a functional requirement of the application. We discuss tools that automate portions of the effort: static analysis tools that aid engineering in identifying vulnerable code, and dynamic analysis tools for detecting anomalous application behavior. Secure SDLC- Introducing secure development practices and software assurance to the web application programming process. Static Analysis- Tools that scan the source code of an application to look for security errors. Often called "white box" tools.
Dynamic Analysis- Tools that interact with a running application and attempt to 'break' it, but don't analyze the source code directly. Often called "black box" tools. Secure Deployment – When an application is code-complete, or ready for more rigorous testing and validation, it is time to confirm that it does not suffer from serious known security flaws, and is configured so that it is not subject to any known compromises. This is where we introduce vulnerability assessment and penetration testing tools, along with their respective approaches to configuration analysis, threat discovery, patch levels, and operational consistency checking. Vulnerability Assessment- Remote scanning of a web application, both with and without credentialed access, to find application vulnerabilities. Web application vulnerability assessments focus on the application itself, while standard vulnerability assessments focus on the host platform. May be a product, a service, or both. Penetration Testing- The process of actually breaking into an application to determine the full scope of security vulnerabilities and the risks they pose. While vulnerability assessments find security flaws, penetration tests exploit those holes to measure impact and categorize/prioritize them. May be a product, a service, or both. Secure Operation – In this section we move from preventative tools and processes to those that provide detection capabilities and can react to live events. The primary focus will be on web application firewalls' ability to shield the application from unwanted uses, and on monitoring tools that scan requests for inappropriate activity against the application or its associated components. Recent developments in detection tools promote enforcement of policies, react intelligently to events, and couple several services into a cooperative hybrid model.
Web Application Firewalls- Network tools that monitor web application traffic and alert on, or attempt to block, known attacks. Application and Database Activity Monitoring- Tools that monitor application and database activity (via a variety of techniques) for auditing, and to generate security alerts based on policy violations. Web application security is a field undergoing rapid advancement, almost as fast as the bad guys come up with new attacks. While we often spend time on this blog talking about leading-edge technologies and the future of the market, we want to keep this series grounded in what's practical and available today. For the rest of the series we're going to break down each of these areas and drill into an overview of how they fit into an overall program, as well as their respective advantages and disadvantages. Keep in mind that we could probably write a book, or two, on each of these tools, technologies, and processes, so in these posts we'll just focus on the highlights.
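To give a feel for what signature-based screening looks like at its simplest, here is a minimal Python sketch of a WAF-style request check. The patterns are purely illustrative stand-ins for the generic rules such tools ship out of the box; real rule sets are vastly larger, and this generic quality is precisely why an untuned WAF cannot catch flaws specific to your application's logic.

```python
import re
from urllib.parse import unquote_plus

# A handful of illustrative signature patterns, standing in for the generic
# rules a WAF ships by default. Real rule sets are far more extensive.
SIGNATURES = [
    ("sql_injection",  re.compile(r"(?i)\bunion\b.+\bselect\b|--|' *or *'1' *= *'1")),
    ("xss",            re.compile(r"(?i)<script\b|javascript:")),
    ("path_traversal", re.compile(r"\.\./")),
]

def screen_request(query_string):
    """Decode a query string and return the names of any matched signatures."""
    decoded = unquote_plus(query_string)
    return [name for name, pattern in SIGNATURES if pattern.search(decoded)]

print(screen_request("q=widgets&sort=price"))                  # []
print(screen_request("id=1+UNION+SELECT+password+FROM+users")) # ['sql_injection']
```

Note what this sketch cannot do: if your application's flaw is, say, letting user A fetch user B's account by changing an ID parameter, every one of those requests looks perfectly clean. That business-logic gap is why monitoring and a broader program matter alongside the firewall.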


Focus & Priorities

This scene, which I ran across last week, captures the essence of one of the points I want to make regarding security programs. This is a picture from a foreclosed home that I walked into on Friday. The view is from the throne room master bedroom door: you can see the shower stall off to the left, the bed to the right. It appears the owners spent a great deal of time buying tile at Home Depot and making 'improvements', with pretty much the entire house being self-expression in fired clay and strategically placed mood lights. Rather than focusing on the basics, like, say, paying the mortgage, they spent hundreds of hours and thousands of dollars in materials building a shrine to some toilet deity I am unfamiliar with. In data security and home improvement alike, focus on any one specific function or appliance will leave you exposed.


WebAppSec: Part 3, Why Web Applications Are Different

By now you've probably noticed that we're spending a lot of time discussing the non-technical issues of web application security. We felt we needed to start on the business side of the problem, since many organizations really struggle to get the support they need to build out a comprehensive program. We have many years invested in understanding network and host security issues, and have built nearly all of our security programs to focus on them. But as we've laid out, web application security is fundamentally different from host or network security, and requires a different approach. Web application security is also different from traditional software security, although it has far more in common with that discipline. In today's post we're going to get a little (just a little) more technical and talk about the specific technical and non-technical reasons web application security is different, before giving an overview of our take on the web application security lifecycle in the next post. (See also Part 1 and Part 2.)

Why web application security is different from host and network security

With network and host security, our focus is on locking down our custom implementations of someone else's software, devices, and systems. Even when we're securing our enterprise applications, that typically involves locking down the platform, securing communications, authenticating users, and implementing the security controls provided by the application platform. But with web applications we not only face all those issues- we are also dealing with custom code we've often developed ourselves. Whether internal-only or externally accessible, web application security differs from host and network security in major ways: Custom code equals custom vulnerabilities: With web applications you typically generate most of the application code yourself (even when using common frameworks and plugins). That means most vulnerabilities will be unique to your application.
It also means that unless you are constantly evaluating your own application, there's no one to tell you when a vulnerability is discovered. You are the vendor: When a vulnerability appears, you won't have an outside vendor providing a patch (you will, of course, have to install patches for whatever infrastructure components, frameworks, and scripting environments you use). If you provide external services to customers, you may need to meet whatever service level agreements you offer, and must be prepared to be viewed by your customers just as you view your own software vendors, even if software isn't your business. You have to patch your own vulnerabilities, deal with your own customer relations, and provide everything you expect from those who provide you with software and services. Firewalls/shielding alone can't protect web applications: When we experience software vulnerabilities in our enterprise software, from operating systems to desktop applications to databases and everything else, we use tools like firewalls and IPS to block attacks while we patch the vulnerable software. This shield-then-patch model has only limited effectiveness for web applications. A web application firewall (WAF) can't protect you from logic flaws. While WAFs can help with certain classes of attack, out of the box they don't know or understand your application, and thus can't protect against custom vulnerabilities they aren't tuned for. WAFs are an important part of web application security, but only as part of a comprehensive program, as we'll discuss. Eternal beta cycles: When we program a traditional stand-alone application, it's usually designed, developed, and tested in a controlled environment before being carefully rolled out to select users for additional testing, then general release. While we'd like all web applications to run through this cycle, as we discussed in the first post in this series, it's often not so clean.
Some applications are designated beta and treated as such by the development teams, but in reality they've quietly grown into full-bore essential enterprise applications. Other applications are under constant revision and don't even attempt to follow formal release cycles. Continually changing applications challenge both existing security controls (like WAFs) and response efforts. Reliance on frameworks/platforms: We rarely build our web applications from the ground up in shiny new C code. We use a mixture of different frameworks, development tools, platforms, and off-the-shelf components to piece them together. We are challenged to secure and deploy these pieces as well as the custom code we build with, and on top of, them. In many cases we create security issues through unintended uses of these components, or through interactions between the multiple layers due to the complexity of the underlying code. Heritage (legacy) code: Even if we were able to instantly create perfectly secure code from here forward, we would still have massive code bases full of old vulnerabilities to fix. If older code is in active use, it needs just as much security as anything new we develop. With links to legacy systems, modification of the older applications often ranges from impractical to impossible, placing the security burden on the newer web application. Dynamic content: Most web applications are extremely dynamic in nature, creating much of their content on the fly, not infrequently using elements (including code) provided by the user. Because of the structure of the web, where this kind of dynamically generated content would fail in a traditional application, our web browsers try their best to render it to the user- thus creating entire classes of security issues. New vulnerability classes: As with standard applications, researchers and bad guys are constantly discovering new classes of vulnerabilities.
In the case of the Web, these often effect nearly every web site on the face of the planet the moment they are discovered. Even if we write perfect code today, there’s nothing to guarantee it will be perfect tomorrow. We’ve listed a number of reasons we need to look at web applications differently, but the easiest way to think about it is that web applications have the scope of externally-facing network security issues, the complexity of custom application development security, and the implications of ubiquitous host security vulnerabilities. At this point we’ve finished laying out the background for
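The dynamic content problem above is easy to demonstrate in a few lines. This is a minimal sketch in plain Python using only the standard library; `render_comment` is a hypothetical illustration, not code from any particular framework. The point is that output encoding has to live in the application itself, where the code knows which user-supplied fields end up rendered as markup; a generic WAF in front of the application does not.

```python
import html

def render_comment(user_input: str) -> str:
    """Embed user-supplied text in an HTML fragment safely.

    html.escape() converts <, >, &, and quotes into HTML entities,
    so a script injection attempt is displayed as text instead of
    being executed by the browser. Only the application knows this
    field is destined for an HTML context.
    """
    return '<p class="comment">{}</p>'.format(html.escape(user_input))

# An injection attempt is rendered inert rather than executed:
print(render_comment('<script>alert("xss")</script>'))
```

The same principle applies to every output context (HTML, JavaScript, SQL, shell): encode or parameterize at the point of use, rather than trying to filter "dangerous" input at the perimeter.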

Share:
Read Post

Building A Web Application Security Program: Part 2, The Business Justification

In our last post in this series we introduced some of the key reasons web application security is typically underfunded in most organizations. The reality is that it's often difficult to convince management why they need additional protections for an application that seems to be up and running just fine, or to change a development process the developers themselves are happy with. While building a full business justification model for web application security is beyond the scope of this post (and worthy of its own series), we can't talk about building a program without providing at least some basic tools to determine how much you should invest, and how to convince management to support you. The following list isn't a comprehensive business justification model, but covers the typical drivers we commonly see used to justify web application security investments:

Compliance: Like it or not, sometimes security controls are mandated by government regulation, industry standards/requirements, or contractual agreements. We like to break compliance into three separate justifications: mandated controls (PCI web application security requirements), non-mandated controls that avoid other compliance violations (data protection to avoid a breach disclosure), and investments that reduce the costs of compliance (lower audit costs or TCO). The average organization uses all three factors to determine web application security investments.

Fraud Reduction: Depending on your ability to accurately measure fraud, it can be a powerful driver of, and justification for, security investments. In some cases you can directly measure fraud rates and show how they can be reduced with specific security investments. Keep in mind that you may not have the right infrastructure to detect and measure this fraud in the first place, which could provide sufficient justification by itself. Penetration tests are also useful in justifying investments to reduce fraud: a test may reveal previously unknown avenues for exploitation that could be under active attack, or open the door to future attacks. You can use this to estimate potential fraud and map it to the security controls that reduce losses to acceptable levels.

Cost Savings: As we mentioned in the compliance section, some web application security controls can reduce your cost of compliance (especially audit costs), but there are additional opportunities for savings. Web application security tools and processes used during the development and maintenance stages of the application can reduce the cost of manual processes or controls, cut costs associated with software defects and flaws, and drive general efficiency improvements. We can also include cost savings from incident reduction, including incident response and recovery costs.

Availability: When dealing with web applications, we look at both total availability (direct uptime) and service availability (loss of part of the application due to attack, or to repair a defect). For example, while it's somewhat rare to see a complete site outage due to a web application security issue (although it definitely happens), it's not at all rare to see an outage of a payment system or other functionality. We also see cases where, due to active attack, a site needs to shut down some of its own services to protect users, even if the attack didn't break those services directly.

User Protection: While this isn't quantifiable with a dollar amount, a major justification for investment in web security is to protect users from being compromised by their trust in you (yes, this has reputation implications, but not ones we can precisely measure). Attackers frequently compromise trusted sites not to steal from that site, but to use it to attack the site's users. Even if you aren't concerned with fraud resulting in direct losses to your organization, it's a problem if your web application is used to defraud your users.

Reputation Protection: While many models attempt to quantify a company's reputation and potential losses due to reputation damage, the reality is that all those models are bunk: there is no accurate way to measure the potential losses associated with a successful attack. Despite surveys indicating users will switch to competitors if you lose their information, or that you'll lose future business, real world statistics show that user behavior rarely aligns with survey responses. For example, TJX was the largest retail breach notification in history, yet sales went up after the incident. But just because we can't quantify reputation damage doesn't mean it isn't an important factor in justifying web application security. Just ask yourself (or management) how important the application is to the public image of your organization, and how willing you or they are to accept the risk of losses ranging from defacement, to lost customer information, to downtime.

Breach Notification Costs: Aside from fraud, we also have direct losses associated with breach notifications (if sensitive information is involved). Ignore all the fluffy reputation/lost business/market value estimates and focus on the hard dollar costs of making a list, sending a notification, and manning the call center for customer inquiries. You might also factor in the cost of credit monitoring, if you'd offer that to your customers.

You'll know which combination of these will work best for you based on your own organizational needs and management priorities, but the key takeaway is that you will likely need to mix quantitative and qualitative assessments to prioritize your investments. If you're dealing with private information (financial/retail/healthcare), compliance drivers and breach notification mixed with cost savings are your best options. For general web services, user protection and reputation, fraud reduction, and availability are likely at the top of your list. And let's not forget that many of these justifications are just as relevant for internal applications. Whatever your application, there is no shortage of business (not technical) reasons to invest in web application security.
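Several of the quantitative drivers above (fraud reduction, cost savings, breach notification costs) reduce to an expected-loss estimate, and the classic annualized loss expectancy calculation (ALE = annual rate of occurrence x single loss expectancy) is a reasonable starting point. Every figure in this sketch is an illustrative assumption, not a benchmark:

```python
def annual_expected_loss(incidents_per_year: float, loss_per_incident: float) -> float:
    """Annualized loss expectancy: ALE = ARO x SLE."""
    return incidents_per_year * loss_per_incident

# Hypothetical figures: two fraud incidents a year at $50,000 each,
# cut to one every two years by a control costing $30,000 annually.
ale_before = annual_expected_loss(2.0, 50_000)   # loss before the control
ale_after = annual_expected_loss(0.5, 50_000)    # loss after the control
control_cost = 30_000
net_annual_benefit = ale_before - ale_after - control_cost
print(f"Net annual benefit of the control: ${net_annual_benefit:,.0f}")
```

If the net benefit comes out positive you have a quantitative argument for the investment; if not, you fall back on the qualitative drivers (user protection, reputation) described above.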

Share:
Read Post

Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.