Securosis Research

Building a Web Application Security Program, Part 5: Secure Development

Now that we’ve laid out the big picture for a web application security program, it’s time to dig into the individual details. In this part (see also Part 1, Part 2, Part 3, and Part 4) we’re going to discuss how to implement security during the development phases of the web application lifecycle, including which tools we recommend. In web application security, process modification, education, and development tool choices are all typically underserved. Security is frequently bolted on as an afterthought, rather than built in by design. The intention in this section is to illuminate your best options for integrating security during the pre-deployment phases of application development (i.e., requirements gathering, design, implementation, and QA).

Web Application Security: Training and the SDLC

Most web applications today were designed, built, and deployed before web application security was considered. Secure coding practices are just now entering the consciousness of most web development teams, and usually only after a security ‘event’. Project management and assurance teams typically take on security only when a compliance requirement is dropped into their laps. News coverage may have raised awareness of SQL injection attacks, but many developers remain unaware of how reflected Cross Site Scripting and Cross Site Request Forgery attacks are conducted, much less what can be done to protect against them. Secure application development practices, and what typically becomes a Secure Software Development Lifecycle, are in their infancy- in terms of both maturity and adoption.

Regardless of what drives your requirements, education and process modification are important first steps toward producing secure web applications. Whether you are developing a new code base or retrofitting older applications, project managers, developers, and assurance personnel need to be educated about the security challenges they must address, and about secure design and coding techniques. The curriculum should cover both the general threats that need to be accounted for and the methods that hackers typically employ to subvert systems. Specialized training is necessary for each sub-discipline, including process modification options, security models for the deployment platform, security tools, and testing methodologies. Project management needs to be aware of which threats are relevant to the web application they are responsible for, and how to make trade-offs that minimize risk while still providing desired capabilities. Developers and QA need to understand how common exploits work, how to test for them, and how to address weaknesses. Whether your company creates its own internal training program, organizes peer educational events, or invests in third party classes, education is key to producing secure applications.

Threat modeling, secure design principles, functional security requirements, secure coding practices, and security review/testing form the core of an effective secure SDLC, and are relatively straightforward to integrate into nearly all development processes. Process also plays an important role in code development, and affects security in much the same way it affects employee productivity and product quality. If the product’s specification lacks security requirements, you can’t expect it to be secure. A product that doesn’t undergo security testing, just like a product that skips functional testing, will suffer from flaws and errors.
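To make the training point concrete, here is a minimal, self-contained illustration (ours, not taken from any particular curriculum) of the injection pattern developers need to recognize, using Python's built-in SQLite driver. The table and function names are hypothetical.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable pattern: user input concatenated directly into the SQL string.
    # Input such as  nobody' OR '1'='1  rewrites the query's logic.
    query = "SELECT id, username FROM users WHERE username = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver treats the input strictly as data, so the
    # attacker's quotes and keywords never reach the SQL parser as code.
    return conn.execute(
        "SELECT id, username FROM users WHERE username = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
    conn.executemany("INSERT INTO users (username) VALUES (?)", [("alice",), ("bob",)])

    attack = "nobody' OR '1'='1"
    print(find_user_unsafe(conn, attack))  # returns every row in the table
    print(find_user_safe(conn, attack))    # returns an empty list
```

The same habit of treating everything that arrives from the client as untrusted data underlies defenses against Cross Site Scripting and Cross Site Request Forgery as well, which is why it belongs in basic developer training.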
Modifying the Software Development Lifecycle to include security considerations is called Secure-SDLC, and it includes simple sanity checks throughout the process to help discover problems early. While Secure-SDLC is far too involved for any real discussion in this post, our goal is instead to highlight the need for development organizations to consider security as a requirement during each phase of development. Tools and test cases, as we will discuss below, can be used to automate testing and assurance, but training and education are essential for taking advantage of them. Using them to augment the development and assurance process reduces overhead compared to ad hoc security adoption, and cuts down on vulnerabilities within the code. Team members educated on security issues are able to build libraries of tests that help catch typical flaws across all newer code. Extreme Programming techniques can be used to help certify that modules and components meet security requirements as part of unit testing, alongside the non-security functional testing and regression sweeps provided by assurance teams. Remember- you are the vendor, and your team should know your code better than anyone, including how to break it.

Static Analysis Tools

There are a number of third party tools, built by organizations which understand the security challenges of web app development, to help with code review for security purposes. Static analysis examines the source code of a web application, looking for common vulnerabilities, errors, and omissions within the constructs of the language itself. This serves as an automated counterpart to peer review. Among other things, these tools generally scan for unhandled error conditions, object availability or scope, and potential buffer overflows. The concept is called "static analysis" because it examines the source code files, rather than the execution flow of a running program or the executable object code. These products run during the development phase to catch problems before more formalized testing procedures begin. The earlier a problem is found, the easier (and cheaper) it is to fix.

Static analysis supplements code review performed by developers, speeding up scans and finding bugs more quickly and cheaply than humans can. The tools can hook into source code management for automated execution on a periodic basis, which again helps with early identification of issues. Static analysis is effective at discovering 'wetware' problems- problems in the code that are directly attributable to programmer error. The better tools integrate well with various development environments (providing educational feedback and suggesting corrective actions to programmers); can prioritize discovered vulnerabilities based on included or user-provided criteria; and include robust reporting to keep management informed, track trends, and engage the security team in the development process without requiring them to double as programmers.

Static analysis tools are only moderately effective against buffer overruns, SQL injection, and code misuse. They do not account for all of the pathways within the code, and are blind to certain types of vulnerabilities and problems that are only apparent at runtime. To fill this gap, dynamic


Mortality, Integrity, and Risk Management

I despise the very concept of mortality. That everything we were, are, and can be comes to a crashing close at some arbitrary deadline. I’ve never been one to accept someone telling me to do something just because “that’s the way it is”, and I feel pretty much the same way about death. Having seen far more than my fair share of it, I consider it nothing but random and capricious.

For those who follow Twitter, yesterday afternoon mortality bitch slapped me upside the head. I found out that my cholesterol is two points shy of the thin black line that defines “high”. Being thirty-seven, a lifetime athlete, and a relatively healthy eater since my early twenties, my number shouldn’t even be on the same continent as “high”, never mind the same zip code. I clearly have my parents’ genes to blame, and since my father passed away many years ago of something other than heart disease, I get to have a long conversation with my mother this weekend about her poor gene selection. I might bring up the whole short thing while I’m at it (seriously, all I asked for was 5’9”).

I tend to look at situations like this as risk management problems. With potential mitigating actions, all of which come at a cost, and a potential negative consequence (well, negative for me), it slots nicely into a risk-based approach. It also highlights the single most important factor in any risk analysis- integrity. If you deceive yourself (or others) you can never make an effective risk decision. Let’s map it out:

Asset Valuation – Really fracking high for me personally, $2M to the insurance company (time limited to 20 years), and somewhere between zero and whatever for the rest of the world (and, I suspect, a few negative values circulating out there).

Risk Tolerance – Low. Oh sure, I’d like to say “none”, but the reality is if my risk tolerance were really 0, I’d mentally implode in a clash of irreconcilable risk factors as fear of my house burning around me conflicts with the danger of a meteor smashing open my skull like a ripe pumpkin when I walk outside. Since anything over 100 years old isn’t realistically quantifiable (and 80 is more reasonable), I’ll call 85 the low end of my tolerance, with no complaints if I can double that.

Risk/Threat Factors – Genetics, lifestyle, and medication. This one is pretty easy, since there are really only three factors that affect the outcome (in this dimension- I’m skipping cancer, accidents, and those freaky brain-eating bacteria found in certain lakes). I can only change two of the factors, each of which comes with both a financial cost and, for lack of a better word, a “pleasure” cost.

Risk Analysis – I’m going to build three scenarios:

Since some of my cholesterol is good to normal (HDL and triglycerides), and only part of it bad (LDL and total serum), I can deceive myself into thinking I don’t need to do anything today and ignore the possibility of slowly clogging my arteries until a piece of random plaque breaks off and kills me in excruciating pain at an inconvenient moment. Since that’s what everyone else tends to do, we’ll call this option “best practices”.

I can meet with my doctor, review the results, and determine which lifestyle changes and/or medication I can start today to reduce my long term risks. I can reduce my intake of certain foods, switch to things like Egg Beaters, and increase my intake of high fiber foods and veggies. I’ll pay an additional financial cost for higher quality food, a time cost for the extra workouts, and a “pleasure” cost for fewer chocolate chip cookies. In exchange for those french fries and gooey burritos I’ll be healthier overall and live a higher quality of life until I’m disemboweled by an irate ostrich while on safari in Africa.

I can immediately switch to a completely heart-healthy diet and disengage from any activity that increases my risk of premature death (and isn’t all death premature?). I’ll never eat another cookie or french fry, and I’ll move to a monastery in a meteor-free zone to eliminate all stress from my life as I engage in whatever the latest medical journals define as the optimum diet and exercise plan. I will lead a longer, lower quality life until I’m disemboweled by an irate monk who is sick of my self-righteous preaching and mid-chant calisthenics. We’ll call this option the “consultant/analyst” recommendations.

Risk Decision and Mitigation Plan – Those three scenarios represent the low, middle, and high options. In every case there is a cost- but the cost falls either in the short term or the long term. None of the scenarios guarantees success. This is where the integrity comes in- I’ve tried to qualify all the appropriate costs in each scenario, and I don’t try to fool myself into thinking I can avoid those costs to steer myself toward the easy decision. It would be easy to look at my various cholesterol levels and current lifestyle, then decide that maybe if I read the numbers from a certain angle nothing bad will happen. Or maybe I can just hang out without making changes until the numbers get worse, and fix things then. On the other end, I could completely deceive myself and decide that a bunch of extreme efforts will fix everything and I can completely control the end result, ignoring the cost and all the other factors out there.

But if I’m really honest with myself, I know that despite my low tolerance for an early death, I’m unwilling to pay the costs of extreme actions. Thus I’m going to make immediate changes to my diet that I know I can tolerate in the long term, I’ll meet with my doctor and start getting annual tests, and I’ll slip less on my fitness plan when work gets out of control. I’m putting metrics in place


The Biggest Difference Between Web Applications And Traditional Applications.

Adrian and I have been hard at work on our web application security overview series, and in a discussion we realized we left something out of Part 3 of the series, where we dug into the differences between web applications and traditional applications. In most applications we program the user display/interface. With web applications, we rely on an external viewer (the browser) that we can’t completely control, and which may be interacting with other applications at the same time. Which is stupid, because it’s the biggest, most obvious difference of them all.
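Since we can’t control the browser, anything it enforces (field lengths, JavaScript validation, hidden fields) can be bypassed simply by crafting the request by hand, so every check has to be repeated on the server. Here is a minimal sketch of the idea, with a hypothetical handler not tied to any particular framework:

```python
import re

USERNAME_RE = re.compile(r"^[a-z0-9_]{3,32}$")

def handle_update_profile(form):
    """Server-side validation of a submitted form.

    Even if the page enforces the same rule with HTML attributes or JavaScript,
    an attacker can skip the browser entirely and post a crafted request, so
    the server re-checks everything it receives.
    """
    username = form.get("username", "")
    if not USERNAME_RE.match(username):
        return {"status": 400, "error": "invalid username"}
    # ...proceed with the update using the validated value...
    return {"status": 200}

# A forged submission no well-behaved browser form would produce:
print(handle_update_profile({"username": "<script>alert(1)</script>"}))  # 400
```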


WebAppSec: Part 4, The Web Application Lifecycle

Just prior to this post, it dawned on us how much ground we are covering. We’re looking at business justification, people, process, tools and technology, training, security mindset, and more. Writing is an exercise in constraint- often pulling more content out than we are putting in. This hit home when we got lost within our own outline this morning. So before jumping into the technology discussion, we need to lay out our roadmap and show you the major pieces of a web application security program that we’ll be digging into.

Our goal moving forward is to recommend actionable steps that promote web application security and are in keeping with your existing development and management framework. While web applications offer different challenges, as we discussed in the last post, the additional steps to address these issues aren’t radical deviations from what you likely do today. With a loose mapping to the Software Development Lifecycle, we are dividing this into three steps across seven coverage areas that look like this:

Secure Development Process and Training – This section’s focus is on placing security into the development lifecycle. We discuss general enhancements, for lack of a better word, to the people who work on delivering web applications and the processes used to guide their activity: security awareness training through education, and supportive process modifications, as precursors to making security a functional requirement of the application. We also discuss tools that automate portions of the effort: static analysis tools that aid engineering in identifying vulnerable code, and dynamic analysis tools for detecting anomalous application behavior.

• Secure SDLC- Introducing secure development practices and software assurance to the web application programming process.
• Static Analysis- Tools that scan the source code of an application to look for security errors. Often called “white box” tools.
• Dynamic Analysis- Tools that interact with a running application and attempt to ‘break’ it, but don’t analyze the source code directly. Often called “black box” tools. (A toy sketch of the idea appears at the end of this post.)

Secure Deployment – Once an application is code-complete, or ready for more rigorous testing and validation, it is time to confirm that it does not suffer from serious known security flaws, and is configured in such a way that it is not subject to any known compromises. This is where we start introducing vulnerability assessment and penetration testing tools- along with their respective approaches to configuration analysis, threat discovery, patch levels, and operational consistency checking.

• Vulnerability Assessment- Remote scanning of a web application, both with and without credentialed access, to find application vulnerabilities. Web application vulnerability assessments focus on the application itself, while standard vulnerability assessments focus on the host platform. May be a product, a service, or both.
• Penetration Testing- The process of actually breaking into an application to determine the full scope of security vulnerabilities and the risks they pose. While vulnerability assessments find security flaws, penetration tests explore those holes to measure impact and categorize/prioritize them. May be a product, a service, or both.

Secure Operation – In this section we move from preventative tools and processes to those that provide detection capabilities and can react to live events. The primary focus will be on web application firewalls’ ability to screen the application from unwanted uses, and on monitoring tools that scan requests for inappropriate activity against the application or associated components. Recent developments in detection tools promote enforcement of policies, react intelligently to events, and couple several services into a cooperative hybrid model.

• Web Application Firewalls- Network tools that monitor web application traffic and alert on, or attempt to block, known attacks.
• Application and Database Activity Monitoring- Tools that monitor application and database activity (via a variety of techniques) for auditing, and to generate security alerts based on policy violations.

Web application security is a field undergoing rapid advancement- almost as fast as the bad guys come up with new attacks. While we often spend time on this blog talking about leading edge technologies and the future of the market, we want to keep this series grounded in what’s practical and available today. For the rest of the series we’re going to break down each of those areas and drill into an overview of how they fit into an overall program, as well as their respective advantages and disadvantages. Keep in mind that we could probably write a book, or two, on each of those tools, technologies, and processes, so for these posts we’ll just focus on the highlights.
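To make the “black box” idea concrete, here is a toy sketch of the simplest kind of dynamic test: send a harmless marker to a running application and see whether it comes back unencoded. The target URL and parameter names are placeholders; only run something like this against an instance you are authorized to test.

```python
from urllib.parse import urlencode
from urllib.request import urlopen

# Hypothetical test target; point this at your own staging instance.
TARGET = "http://localhost:8080/search"

# A harmless marker; if it is reflected back unencoded, the parameter is a
# candidate for closer cross-site scripting testing.
MARKER = "<zz_probe_12345>"

def reflects_marker(param):
    url = TARGET + "?" + urlencode({param: MARKER})
    body = urlopen(url, timeout=10).read().decode("utf-8", errors="replace")
    return MARKER in body

if __name__ == "__main__":
    for param in ("q", "category"):
        status = "reflected unencoded" if reflects_marker(param) else "looks ok"
        print(f"{param}: {status}")
```

Real dynamic analysis tools do far more (crawling, session handling, large attack libraries), but the underlying pattern- probe the running application and observe its behavior- is the same.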


Focus & Priorities

This scene, which I ran across last week, captures the essence of one of the points I want to make regarding security programs. This is a picture from a foreclosed home that I walked into on Friday. The view is from the throne room toward the master bedroom door; you can see the shower stall off to the left and the bed to the right. It appears that the owners spent a great deal of time buying tile at Home Depot and making ‘improvements’, what with pretty much the entire house being self-expression in fired clay and strategically placed mood lights. Rather than focusing on the basics, like, say, paying the mortgage, they spent hundreds of hours and thousands of dollars in materials building a shrine to some toilet deity I am unfamiliar with. In data security and home improvement alike, focusing on any single function or appliance while neglecting the basics will leave you exposed.


Friday Summary: 12-03-2008

Adrian and I are hard at work on our Building a Web Application Security Program series, and it led to an interesting discussion this morning on writing and writing styles. I’m fortunate that I’ve always been a pretty good writer, likely because I was a total bookworm as a kid. As with many things in life, if you are good at writing you often gain the opportunity to write more frequently. And the more you write, the better you write, and the more likely you are to develop and understand writing styles.

Today we talked about passive voice, passive language, and brevity. Brevity is something I always struggle with. Most professional writers I talk with agree that it is more difficult to write a shorter piece than a longer one. As a history student in college, I didn’t worry too much about that, since professors usually set minimum page counts and I wrote to fill that space as much as to cover the topic. At Gartner, we targeted 3-5 pages with a max of 14,000 words for a normal research note. When I write my online articles and columns for people like Macworld and Dark Reading, they typically ask for 500-800 words. It often takes me more time to write shorter than longer, since I’m forced to focus on the meat.

I’ve become fascinated with the use of language now that I get paid to put my words on the page- how word and grammar choices affect the interpretation of my work, and audience receptiveness, as much as or more than the content. For example, I find that passive voice makes you sound indecisive, confusing, and less authoritative. Passive voice is also closely tied to passive language in general- which, although grammatically correct, is inefficient for communicating. For example, the first time I wrote this post I started with, “Adrian and I have been hard at work”. Now it reads, “Adrian and I are hard at work”. The language acts; it’s not acted upon. An example of passive voice is “the risk of data loss is reduced by DLP”, as opposed to the active variant, “DLP reduces the risk of data loss”. One just sounds stronger and clearer.

I could spend all day talking about writing and writing styles. My personal goals in writing are to keep a conversational style, use active language, be direct, avoid bullshit, and focus on clarity and simplicity. Sometimes that means breaking traditional grammar rules, which can be too constraining and force sacrifices of effective style choices. I’m not perfect (just ask our editor Chris), but it seems to work well. Even in my “pontification” posts I try to focus on the main points and reduce extraneous language. Although Gartner left me free (in terms of style) to write how I wanted, I’m a bit more of a taskmaster here at Securosis and require contributors to follow those guiding principles. You pay us (not that most of you pay us) to save you time and money by providing insight and advice to help you do your job, not to write crap that’s hard to understand. And for those of you who write, and want to be successful: learn to say more with less, write to the correct audience, write with structure (don’t wander around), and always have a goal for each piece- be it an email, blog post, article, or novel. Develop your own writing style, rather than trying to channel someone else’s, and constantly critique your own work.

Now that I’ve wasted four paragraphs on writing with brevity, here is the week’s security summary:

Webcasts, Podcasts, Outside Writing, and Conferences: The print and online editions of Wired include a main feature article on Dan Kaminsky’s big DNS disclosure. I’m mentioned near the end of the article due to my involvement. Speaking of writing styles, Wired tends to focus on drama and personalities, and I was disappointed in how they portrayed some of what occurred. Dan comes across as some sort of mad/fringe hacker who almost decided to use the DNS vulnerability to take down banks, not a professional researcher who tried his best to handle an unusually sensitive bug. Anyway, you can judge for yourself, and I need to go buy another copy for my mom. I was interviewed in (IN)SECURE magazine. It’s a great publication, and I am excited to be included. On the Network Security Podcast this week, it’s just Martin and myself. At the end, we talk a fair bit about our home networks and my use of the Drobo. I wrote a TidBITS article on the Mac antivirus controversy this week. I was also interviewed about it by CNET and appeared in a hundred other articles, but my favorite take is by the Macalope. I’m happy to watch that game and drink that beer any time… I was interviewed on safe online holiday shopping for the Your Mac Life show. Yes, I was the total media whore this week. I also did a dozen interviews on the RSA/Microsoft partnership. Here’s Dark Reading, Information Week, CSO Magazine, and TechTarget/SearchSecurity.

Favorite Securosis Posts: Rich: I’d like to say my How to be An Analyst post, but for this week it has to be my take on the Microsoft/RSA deal. This one has serious long term implications. Adrian: The Business Justification for Web Application Security: It may not be sexy, but it is important.

Favorite Outside Posts: Adrian: This Rational Survivability post on ZoHo’s CloudSQL may not have been all that interesting to most, but after I read it, I must have spent half the day looking over the documentation, getting my API key, and testing it out. This has ramifications not only for how we might provide data and how we implement SOA, but, as Chris points out, for security as well. More to come on this topic. Rich: The EFF guide for security researchers. Anyone who engages in any primary research absolutely must read this article. Although I do very little


Analysis Of The Microsoft/RSA Data Loss Prevention Partnership

By the time I post this you won’t be able to find a tech news site that isn’t covering this one. I know, since my name was on the list of analysts the press could contact, and I spent a few hours talking to everyone covering the story yesterday. Rather than just reciting the press release, I’d like to add some analysis, put things into context, and speculate wildly. For the record, this is a big deal in the long term, and will likely benefit all of the major DLP vendors, even though there’s nothing earth shattering in the short term.

As you read this, Microsoft and RSA are announcing a partnership for Data Loss Prevention. Here are the nitty-gritty details, not all of which will be apparent from the press release:

This month, the RSA DLP product (Tablus for you old folks) will be able to assign Microsoft RMS (what Microsoft calls DRM) rights to stored data based on content discovery. The way this works is that the RMS administrator defines a data protection template (what rights are assigned to which users). The RSA DLP administrator then creates a content detection policy, which can apply the RMS rights automatically based on the content of files. The RSA DLP solution will then scan file repositories (including endpoints) and apply the RMS rights/controls to protect the content.

Microsoft has licensed the RSA DLP technology to embed into various Microsoft products. They aren’t offering much detail at this time, nor any timelines, but we do know a few specifics. Microsoft will slowly begin adding the RSA DLP content analysis engine to various products. The non-NDA slides hint at everything from SQL Server, Exchange, and SharePoint, to Windows and Office. Microsoft will also include basic DLP management in their other management tools. Policies will work across both Microsoft and RSA in the future as the products evolve. Microsoft will be limiting itself to its own environment, with RSA as the upgrade path for fuller DLP coverage.

And that’s it for now. RSA DLP 6.5 will link into RMS, with Microsoft licensing the technology for future use in their products. Now for the analysis:

This is an extremely significant development for the long term future of DLP. Actually, it’s a nail in the coffin of the term “DLP” and moves us clearly and directly to what we call “CMP”- Content Monitoring and Protection. It moves us closer and closer to the DLP engine being available everywhere (and somewhat commoditized), with the real value being in the central policy management, analysis, workflow, and incident management system. DLP/CMP vendors don’t go away- but their focus changes as the agent technology is built more broadly into the IT infrastructure (this definitely won’t be limited to just Microsoft).

It’s not very exciting in the short term. RSA isn’t the first to plug DLP into RMS (Workshare does it, but they aren’t nearly as big in the DLP market). RSA is only enabling this for content discovery (data at rest), and rights won’t be applied immediately as files are created/saved. It’s really the next stages of this that are interesting.

This is good for all the major DLP vendors, although a bit better for RSA. It’s big validation for the DLP/CMP market, and since Microsoft is licensing the technology to embed, it’s reasonable to assume that down the road it may be accessible to other DLP vendors (be aware- that’s major speculation on my part).

This partnership also highlights the tight relationship between DLP/CMP and identity management. Most of the DLP vendors plug into Microsoft Active Directory to determine users/groups/roles for the application of content protection policies. One of the biggest obstacles to a successful DLP deployment can be a poor directory infrastructure. If you don’t know which users have which roles, it’s awfully hard to create content-based policies that are enforced based on users and roles.

We don’t know how much cash is involved, but financially this is likely good for RSA (the licensing part). I don’t expect it to overly impact sales in the short term, and the other major DLP vendors shouldn’t be too worried for now. DLP deals will still be competitive based on the capabilities of current products, more than on what’s coming in an indeterminate future.

Now just imagine a world where you run a query on a SQL database, and any sensitive results are appropriately protected as you place them into an Excel spreadsheet. You then drop that spreadsheet into a PowerPoint presentation and email it to the sales team. It’s still quietly protected, and when one sales guy tries to email it to his Gmail account, it’s blocked. When he transfers it to a USB device, it’s encrypted using a company key so he can’t put it on his home computer. If he accidentally sends it to someone in the call center, they can’t read it. In the final PDF, he can’t cut the table out and put it in another document. That’s where we are headed- DLP/CMP enmeshed in the background, protecting content throughout its lifecycle based on central policies and content and context awareness.

In summary: it’s great in the long term, good but not exciting in the short term, and beneficial to the entire DLP market, with a slight edge for RSA. There are a ton of open questions and issues, and we’ll be watching and analyzing this one for a while. As always, feel free to email me if you have any questions.
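Finally, for readers who want to picture the content-discovery workflow described above, here is a rough sketch in Python. The policy structure, template names, and repository path are all hypothetical; they do not reflect the actual RSA DLP or RMS interfaces.

```python
import re
from pathlib import Path

# Hypothetical policies: each maps a content detection rule to a protection
# template, roughly "detect this content, apply these rights".
POLICIES = [
    {"name": "Payment card numbers",
     "pattern": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
     "template": "Confidential-Finance"},
    {"name": "US Social Security numbers",
     "pattern": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
     "template": "Confidential-HR"},
]

def apply_rights(path, template):
    # Placeholder for the call that would wrap the file with rights management.
    print(f"would apply template '{template}' to {path}")

def scan_repository(root):
    # Content discovery: walk the repository, inspect each file, and protect
    # anything that matches a policy.
    for path in Path(root).rglob("*.txt"):
        text = path.read_text(errors="replace")
        for policy in POLICIES:
            if policy["pattern"].search(text):
                apply_rights(path, policy["template"])
                break  # first matching policy wins in this toy example

if __name__ == "__main__":
    scan_repository("/tmp/file_share")  # hypothetical file repository
```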


WebAppSec: Part 3, Why Web Applications Are Different

By now you’ve probably noticed that we’re spending a lot of time discussing the non-technical issues of web application security. We felt we needed to start with the business side of the problem, since many organizations really struggle to get the support they need to build out a comprehensive program. We have many years invested in understanding network and host security issues, and have built nearly all of our security programs to focus on them. But as we’ve laid out, web application security is fundamentally different from host or network security, and requires a different approach. Web application security is also different from traditional software security, although it has far more in common with that discipline. In today’s post we’re going to get a little (just a little) more technical and talk about the specific technical and non-technical reasons web application security is different, before giving an overview of our take on the web application security lifecycle in the next post. (See also Part 1 and Part 2.)

Why web application security is different from host and network security

With network and host security our focus is on locking down our custom implementations of someone else’s software, devices, and systems. Even when we’re securing our enterprise applications, that typically involves locking down the platform, securing communications, authenticating users, and implementing security controls provided by the application platform. But with web applications we not only face all those issues- we are also dealing with custom code we’ve often developed ourselves. Whether internal-only or externally accessible, web application security differs from host and network security in major ways:

Custom code equals custom vulnerabilities: With web applications you typically generate most of the application code yourself (even when using common frameworks and plugins). That means most vulnerabilities will be unique to your application. It also means that, unless you are constantly evaluating your own application, there’s no one to tell you when a vulnerability is discovered in the first place.

You are the vendor: When a vulnerability appears, you won’t have an outside vendor providing a patch (you will, of course, still have to install patches for whatever infrastructure components, frameworks, and scripting environments you use). If you provide external services to customers, you may need to meet any service level agreements you provide, and must be prepared to be viewed by your customers just as you view your own software vendors, even if software isn’t your business. You have to patch your own vulnerabilities, deal with your own customer relations, and provide everything you expect from those who provide you with software and services.

Firewalls/shielding alone can’t protect web applications: When we experience software vulnerabilities in our enterprise software- from operating systems, to desktop applications, to databases and everything else- we use tools like firewalls and IPS to block attacks while we patch the vulnerable software. This shield-then-patch model has only limited effectiveness for web applications. A web application firewall (WAF) can’t protect you from logic flaws. While WAFs can help with certain classes of attack, out of the box they don’t know or understand your application, and thus can’t protect against custom vulnerabilities they aren’t tuned for. WAFs are an important part of web application security, but only as part of a comprehensive program, as we’ll discuss.
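As a concrete illustration of the logic-flaw point, consider an application that looks up an order by the ID supplied in the request. The request itself is perfectly well-formed, so a signature-based WAF has nothing to block; only the application can know that the order belongs to someone else. This is a simplified, hypothetical example of our own:

```python
# Toy data store: order ID -> record.
ORDERS = {
    1001: {"owner": "alice", "total": 49.00},
    1002: {"owner": "bob", "total": 125.00},
}

def get_order_vulnerable(session_user, order_id):
    # Logic flaw: any authenticated user can fetch any order simply by changing
    # the order_id in the request. Nothing about the request looks malicious.
    return ORDERS.get(order_id)

def get_order_fixed(session_user, order_id):
    # The fix lives in application logic: enforce ownership before returning data.
    order = ORDERS.get(order_id)
    if order is None or order["owner"] != session_user:
        return None
    return order

print(get_order_vulnerable("alice", 1002))  # leaks bob's order
print(get_order_fixed("alice", 1002))       # None
```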
Eternal Beta Cycles: When we program a traditional stand-alone application, it’s usually designed, developed, and tested in a controlled environment before being carefully rolled out to select users for additional testing, then general release. While we’d like all web applications to run through this cycle, as we discussed in our first post in this series it’s often not so clean. Some applications are designated beta and treated as such by the development teams, but in reality they’ve quietly grown into full-bore, essential enterprise applications. Other applications are under constant revision and don’t even attempt to follow formal release cycles. Continually changing applications challenge both existing security controls (like WAFs) and response efforts.

Reliance on frameworks/platforms: We rarely build our web applications from the ground up in shiny new C code. We use a mixture of different frameworks, development tools, platforms, and off-the-shelf components to piece them together. We are challenged to secure and deploy these pieces as well as the custom code we build with and on top of them. In many cases we create security issues through unintended uses of these components, or through interactions between the multiple layers due to the complexity of the underlying code.

Heritage (legacy) code: Even if we were able to instantly create perfectly secure code from here on forward, we would still have massive code bases full of old vulnerabilities to fix. If older code is in active use, it needs just as much security as anything new we develop. With links to legacy systems, modification of the older applications often ranges from impractical to impossible, placing the security burden on the newer web application.

Dynamic content: Most web applications are extremely dynamic in nature, creating much of their content on the fly, not infrequently using elements (including code) provided by the user. Because of the structure of the web, where this kind of dynamically generated content would fail in a traditional application, our web browsers try their best to render it to the user- thus creating entire classes of security issues.

New vulnerability classes: As with standard applications, researchers and bad guys are constantly discovering new classes of vulnerabilities. In the case of the web, these often affect nearly every web site on the face of the planet the moment they are discovered. Even if we write perfect code today, there’s nothing to guarantee it will be perfect tomorrow.

We’ve listed a number of reasons we need to look at web applications differently, but the easiest way to think about it is that web applications have the scope of externally-facing network security issues, the complexity of custom application development security, and the implications of ubiquitous host security vulnerabilities. At this point we’ve finished laying out the background for


Apple Antivirus Thing: Much Ado About Nothing

All right, people, here’s the deal. I just published my take on the whole “Apple he said/she said you do/don’t need antivirus” thing over at TidBITS. Here’s my interpretation of what happened: Back in 2007 some support guy posted a list of major AV products supported on the Mac. On November 21st, it was updated to reflect current version numbers. Whoever wrote it is a shitty writer, and didn’t realize how people would interpret it. The press found it and trumpeted it to the world. Apple management went, “WTF?!? We don’t tell people they should install three different AV programs all at once. Hell, we never tell them they need AV at all. Not that we’re going to tell them *not* to use it…” The support article was pulled and statements were issued. Some people called it a conspiracy, because they like that sort of thing. Somewhere deep in the bowels of 1 Infinite Loop, there is a pike, holding a bloody head, on prominent display.

So no, most of you don’t need antivirus. You can read my article on this from back in March if you want more help deciding whether you should take a look at AV on your Mac. Alan Shimel is one of a group of people who think it’s about time Mac users paid attention to security and installed AV. I like to break that argument into two parts. First, as I’ve learned since writing for TidBITS and Macworld, the average Mac user is definitely worried about security. But (second) this doesn’t mean desktop AV is the right answer. Right now, the risk of malware infection on the Mac is so low for the average user that AV really doesn’t make sense. That can change- heck, it probably will change- but that’s the situation today. Thus I recommend most people use mail filtering and browse safely rather than installing desktop AV. Not recommending AV isn’t Apple’s ego (and I don’t deny they have an ego); it’s a reflection of the risk to users in the current environment. Now the odds are us Mac security types will recommend AV long before Apple does, but that day definitely isn’t here yet.

Apple didn’t reverse their policies- something slipped out from the lower levels by accident, and all the hubbub is much ado about nothing. The day will likely come when Mac users need additional malware protection, but today isn’t that day, and even then, AV may not be the answer. Read my older article on this, and keep up with the news so you’ll know when the time comes.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.