Saying Goodbye

I never thought I would say this, but I am leaving Securosis. By the time you read this I will have started a new position with Bank of America. I have been asked to help out with application and cloud security efforts. I have been giving a lot of thought to what I like to do, what makes me happy, and what I want to do with the rest of my career, and I came to the realization it is time for a change. There are aspects of the practice of security which I can never explore with Securosis or DisruptOps. The bank offers many challenges – and operates at a scale – which I have never experienced. That, and I will get to work with a highly talented team already in place. I could not really have written a better job description for myself, so I am jumping at this opportunity.

12 years ago I sat down with Rich to discuss, “What comes next?” when the demise of my former company was imminent. When he asked me to consider joining him, I asked, “What exactly is it you do?” I really didn’t know what analysts actually do, nor how the profession works. Mike will tell you I still don’t, and he is probably right. When I joined, friends and associates asked, “What the hell are you doing?” and said, “That is not what you are good at!”, and told me Securosis would never survive – there was no way we could compete against the likes of Gartner, Forrester, and IDC.

But Securosis has been an amazing success. After a few years I think we proved that “No Bullshit” research, and our open Totally Transparent Research model, both work. We have been very lucky to have found such a workable niche, and had so much fun filling it. But more than anything I am very thankful for being able to work with Mike and Rich over the last decade. I simply could not ask for better business partners. Both are smart to the point of being prescient, tremendously supportive, and have an intuitive grasp of the security industry.
And when we get it wrong – which we have done more often than we like to admit – we learn from our mistakes and move on. I recently read a quote from Chuck Akre: “Good judgement comes from experience, and experience comes from bad judgement.” We always wondered how three guys with big egos and aspirations could coexist, but I think our ability to work as a team has been due in large part to learning from each other, learning from our mistakes, and constantly trying to improve the business. And we have constantly pushed this business to improve and move in new directions: from pure research, to training, to the research Nexus, to cloud security, security consulting, investment due diligence, and eventually launching DisruptOps. Each and every change was to address something in the industry we felt was not being served. Even the Disaster Recovery Breakfast stems from that ethos – another way we wanted to do things differently. And we have gained a lot of – ahem – experience along the way. It has been one hell of a ride! Thank you, Rich and Mike.

A hearty thank you to Chris Pepper for enduring my writing style and lack of grammar all these years. Early on one reader went so far as to compare my writing to Nazi atrocities against the English language, which was uncalled for, but perhaps not so far off the mark. Chris has helped me get a lot better, and for that I am very grateful.

Finally, I wanted to thank all the readers of the posts, Friday Summaries, Incites, and research papers we have produced over the years. Some 15,000 blog posts, hundreds of research papers, and more webcasts than I can count. The support of the security community has made this work so rewarding. Thank you all for participating and helping us make Securosis a success.

-Adrian


Understanding and Selecting RASP 2019: New Paper

Today we are launching our 2019 updated research paper from our recent series, Understanding and Selecting RASP (Runtime Application Self-Protection). RASP was part of the discussion on application security in just about every one of the hundreds of calls we have taken, and it’s clear that there is a lot of interest – and confusion – on the subject, so it was time to publish a new take on this category. And we would like to heartily thank Contrast Security for licensing this content. Without this type of support we could not bring this level of research to you, both free of charge and without requiring registration.

We think this research paper will help developers and security professionals who are tackling application security from within understand what other security measures are at their disposal to protect application stacks from attack. And to be honest, we were surprised by the volume of questions being asked. On each call, the team was either considering RASP or already engaged in a proof-of-concept with a RASP vendor. This was typically in response to difficulties with existing Web Application Firewalls (WAF), as those platforms have not fared well as development has gotten more agile. Since 2017 we have engaged in over 250 additional conversations on what has turned into a ‘DevSecOps’ phenomenon, with both security and development groups asking about RASP, how it deploys, and the realistic benefits it might provide. And make no mistake: it was not just IT security asking about WAF replacements, but security and development – facing a mountain of ‘technical debt’ with security defects – asking about monitoring/blocking malicious requests while in production.

In this paper we cover what RASP is, how it works, use cases, and how to differentiate one RASP product from another. We also address the perspectives and specific concerns of IT, application security, and developer audiences. Again, thank you to Contrast Security for licensing this research.
You can download the paper from the attachment link below, or from the research library. And you can tune in to our joint webcast on November 19 by registering here: Evaluating RASP Platforms. Understanding_RASP_2019_Final2.pdf


Enterprise DevSecOps: Security’s Role In DevOps

As we mentioned earlier, DevOps is not all about tools and technology – much of its success lies in how people work within the model. We have already gone into great detail about tools and process, and we approached much of this content from the perspective of security practitioners getting onboard with DevOps. This paper is geared toward helping security folks, so here we outline their role in a DevOps environment. We hope to help you work with other teams and reduce friction. And while we deliberately called this paper “Enterprise DevSecOps”, please keep in mind that your development and IT counterparts likely think there is no such thing. To them security becomes part of the operational process of integrating and delivering code. We call security out as a separate thing because, even woven into the DevOps framework, it’s substantially more difficult for security personnel to fit in. You need to consider how you can improve delivery of secure code without waste and without introducing bottlenecks in a development process you may not be intimately familiar with. The good news is that security fits nicely within a DevOps model, but you need to tailor things to work within your organization’s automation and orchestration to be successful.

The Security Pro’s Responsibilities

Learn the DevOps model: We have not even touched on the theory or practice of DevOps in this series. There is a lot for you to learn on base concepts and practices. To work in a DevOps environment you need to understand what it is and how it works. You need to understand the cultural and philosophical changes, as well as how they affect process, tooling, and priorities. You need to understand your organization’s approach to optimally integrate security tooling and metrics. Once you understand the mechanics of the development team, you’ll have a better idea of how to work with them in their process.
Learn how to be agile: Your participation in a DevOps team means you need to fit into DevOps – not the other way around. The goal of DevOps is fast, faster, fastest: small iterative changes which offer quick feedback, ultimately reducing Work In Progress (WIP). Small, iterative changes to improve security fit this model. Prioritize things which accelerate delivery of secure software over delivery of new features – keeping in mind that it is a huge undertaking to change culture away from “feature first”. You need to adjust requirements and recommendations so they fit into the process, often simplified into small steps, with enough information for tasks to be both automated and monitored. You can recommend manual code reviews or fuzz testing, so long as you understand where they fit within the process, and what can – and cannot – gate releases.

Educate: Our experience shows that one of the best ways to bring a development team up to speed in security is training: in-house explanations or demonstrations, third-party experts to help with application security or threat modeling, eLearning, or various commercial courses. The historical downside has been cost, with many classes costing thousands of dollars. You’ll need to evaluate how best to use your resources – the answer typically includes some eLearning for all employees, and select people attending classes and then teaching peers. On-site experts can be expensive, but an entire group can participate in training.

Grow your own support: There is simply no way for security teams to scale out their knowledge without help. This does not mean hundreds of new security employees – it means deputizing developers to help with product security. Security teams are typically small and outnumbered by developers 100:1. What’s more, security people are not present at most development meetings, so they lack visibility into day-to-day agile activities.
To help extend the reach of the security team, see if you can get someone on each development team – a “security champion” – to act as a security advocate. This not only extends the security team’s reach, but also expands security awareness.

Help DevOps teams understand threats: Most developers don’t fully grasp how attackers approach systems, or what it means when a code or SQL injection attack is possible. The depth and breadth of security threats is outside their experience, and most firms do not teach threat modeling. The OWASP Top Ten is a good guide to the types of code deficiencies which plague development teams, but you should map these threats back to real-world examples, show the damage that a SQL injection attack can cause, and explain how a Heartbleed-type vulnerability can completely expose customer credentials. You can use these real-world examples as “shock and awe” to help developers “get it”.

Advise on remediation practices: Your security program is inadequate if it simply says to “encrypt data” or “install a WAF”. All too often, developers and IT have a singular idea of what constitutes security, centered on a single tool they want to set and forget. Help build out the elements of the security program, including both in-code enhancements and supporting tools. Teach how each helps address specific threats, and offer help with deployment and policy setup. In the past, firms used to produce code style guides to teach younger developers what properly written code looked like. Typically these are not online. Consider a wiki page for security coding practices and other reference materials which are easily discovered and readable by people without a security background.

Help evaluate security tools: It is unusual for people outside security to fully understand what security tools do or how they work, so you can help in two ways. First, help developers select tools. Misconceptions are rampant, and not just because vendors over-promise.
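The damage a SQL injection attack can cause – and the fix – can be shown to developers in a few lines of code. This is a purely illustrative sketch using Python’s built-in sqlite3 module; the table, credentials, and payload are invented for the demonstration:

```python
import sqlite3

# In-memory database with a single user, purely for demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(name, password):
    # DANGEROUS: attacker-controlled input is concatenated into the SQL text.
    query = f"SELECT * FROM users WHERE name = '{name}' AND password = '{password}'"
    return conn.execute(query).fetchone() is not None

def login_safe(name, password):
    # Parameterized query: the driver treats input strictly as data.
    query = "SELECT * FROM users WHERE name = ? AND password = ?"
    return conn.execute(query, (name, password)).fetchone() is not None

# The classic injection payload bypasses the vulnerable check...
assert login_vulnerable("alice", "' OR '1'='1") is True
# ...but not the parameterized version.
assert login_safe("alice", "' OR '1'='1") is False
```

Walking developers through a payload like this – against their own code rather than a toy – is usually far more persuasive than citing the OWASP category by name.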
Additionally it is uncommon for developers to evaluate code scanners, activity monitors, or even patch management system effectiveness. In your role as advisor it is your responsibility to help DevOps understand what the tools can provide and what fits within your shared testing framework. Sure, you might not be able to evaluate the quality of the API, but


Enterprise DevSecOps: Security Test Integration and Tooling

In this section we show you how to weave security into the fabric of your DevOps automation framework. We are going to address the questions “We want to integrate security testing into the development pipeline, and are going to start with static analysis. How do we do this?”, “We understand ‘shift left’, but are the tools effective?” and “What tools do you recommend we start with, and how do we integrate them?”. As DevOps encourages testing in all phases of development and deployment, we will discuss what a build pipeline looks like, and the tooling appropriate for each stage. The security tests typically sit side by side with the functional and regression tests your quality assurance teams have likely already deployed. And beyond those typical post-build testing points, you can include testing on a developer’s desktop prior to check-in, in the code repositories before and after builds, and in pre-deployment staging areas.

Build Process

During a few of our calls, several senior security executives did not know what constituted a build process. This is not a condemnation – many people in security have not participated in software production and delivery – so we want to outline the process and the terminology used by developers. If you’re already familiar with this, skip forward to ‘Building a Security Toolchain’. Most of you reading this will be familiar with “nightly builds”, where all code checked in the previous day is compiled overnight. And you’re just as familiar with the morning ritual of sipping coffee while you read through the logs to see if the build failed and why. Most development teams have been doing this for a decade or more. The automated build is the first of many steps that companies go through on their way toward full automation of the processes that support code development. Over the last several years we have mashed our ‘foot to the floor’, leveraging more and more automation to accelerate the pace of software delivery.
The path to DevOps is typically taken in two phases: first with continuous integration, which manages the building and testing of code; then continuous deployment, which assembles the entire application stack into an executable environment. And at the same time, there are continuous improvements to all facets of the process, making it easier, faster, and more reliable. It takes a lot of work to get here, and the scripts and templates used often take months just to build out the basics, and years to mature them into reliable software delivery infrastructure.

Continuous Integration

The essence of Continuous Integration (CI) is that developers regularly check in small iterative code advances. For most teams this involves many updates to a shared source code repository, and one or more builds each day. The key is smaller, simpler additions, where we can more easily and quickly find code defects. These are essentially Agile concepts, implemented in processes which drive code, rather than processes that drive people (such as scrums and sprints). The definition of CI has evolved over the last decade, but in a DevOps context CI implies that code is not only built and integrated with supporting libraries, but also automatically dispatched for testing. Additionally, DevOps CI implies that code modifications are not applied to branches, but directly into the main body of the code, reducing the complexity and integration nightmares that can plague development teams. This sounds simple, but in practice it requires considerable supporting infrastructure. Builds must be fully scripted, and the build process occurs as code changes are made. With each successful build the application stack is bundled and passed along for testing. Test code is built before unit, functional, regression, and security testing, and is checked into repositories as part of the automated process.
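To make the CI flow concrete, here is a minimal sketch of a build pipeline in Python. The stage names and pass/fail results are invented for illustration – a real CI server (Jenkins, Bamboo, etc.) would invoke compilers, test runners, and security scanners at each step – but the control flow is the same: stages run in order, and the first failure stops the pipeline and kicks the build back to the team.

```python
# Toy continuous-integration pipeline showing where security tests sit
# alongside functional tests. All stage names and results are invented.

def run_pipeline(stages):
    """Run stages in order; stop at the first failure, as a CI server would."""
    completed = []
    for name, stage in stages:
        if not stage():
            return completed, name      # failed stage is kicked back to developers
        completed.append(name)
    return completed, None              # all stages passed

stages = [
    ("compile",          lambda: True),
    ("unit tests",       lambda: True),
    ("static analysis",  lambda: True),   # security test point (SAST)
    ("dependency check", lambda: False),  # e.g. a vulnerable library is flagged
    ("package & deploy", lambda: True),
]

completed, failed = run_pipeline(stages)
# completed == ["compile", "unit tests", "static analysis"]
# failed == "dependency check" – the build never reaches deployment
```

The point of the sketch is that security checks are just stages like any other: they gate progress automatically, on every build, with no one deciding whether to run them.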
Tests commence automatically whenever a new application bundle is available, which means new tests are applied with each new build as well. It also means that before tests can be launched, test systems must be automatically provisioned, configured, and seeded with the necessary data. Automation scripts must provide monitoring for each part of the process, and communication of success or failure back to Development and Operations teams as events occur. The creation of the scripts and tools to make all this possible requires Operations, Testing, and Development teams to work closely together. The following graphic shows an automated build pipeline, including security test points, for containers. Again, this level of orchestration does not happen overnight; rather it is an evolutionary process that takes months to establish the basics and years to mature. But that is the very essence of continuous improvement.

Continuous Deployment

Continuous Deployment looks very similar to CI, but focuses on releasing software to end users rather than building it. It involves similar packaging, testing, and monitoring tasks, with some additional wrinkles. Upon successful completion of a build cycle, the results feed the Continuous Deployment (CD) process. CD takes another giant step forward for automation and resiliency by automating release management, provisioning, and final configuration for the application stack, then launching the new application code. When we talk about CD, there are two ways people embrace this concept. Some teams simply launch the new version of their application into an existing production environment. The CD process automates the application layer, but not the server, data, or network layers. We find this common with on-premise applications and in private cloud deployments, and some public cloud deployments still use this model as well. A large percentage of the security teams we spoke with are genuinely scared of Continuous Deployment.
They state “How can you possibly allow code to go live without checks and oversight!”, missing the point that the code does not go live until all of the security tests have been passed. And some CI pipelines contain manual inspection points for some tests. In our experience, CD means more reliable and more secure releases. CD addresses dozens of issues that plague code deployment — particularly in the areas of error-prone manual changes, and discrepancies in


Enterprise DevSecOps: Security Planning

This post is intended to help security folks create an outline or structure for an application security program. We are going to answer such common questions as “How do we start building out an application security strategy?”, “How do I start incorporating DevSecOps?” and “What application security standards should I follow?”. I will discuss the Software Development Lifecycle (SDLC), introduce security items to consider as you put your plan in place, and reference some application security standards for use as guideposts for what to protect against. This post will help your strategy; the next one will cover tactical tool selection.

Security Planning and your SDLC

A Secure Software Development Lifecycle (S-SDLC) essentially describes how security fits into the different phases of a Software Development Lifecycle. We will look at each phase in an SDLC and discuss which security tools and techniques are appropriate. Note that an S-SDLC is typically drawn as a waterfall development process, with different phases in a linear progression, but that’s really just for clearer depiction – the actual SDLC being secured is as likely to be Agile, Extreme, or Spiral as Waterfall. There are good reasons to base an S-SDLC on a more modern SDLC, but the architecture, design, development, testing, and deployment phases all map well to development stages in any development process. They provide a good jumping-off point to adapt current models and processes into a DevOps framework. As in our previous post, we want you to think of the S-SDLC as a framework for building your security program, not a full step-by-step process. We recognize this is a departure from what is taught in classrooms and wikis, but it is better for planning security in each phase.
Define and Architect

Reference Security Architectures: Reference security architectures exist for different types of applications and services, including web applications, data processing applications, identity and access management services for applications, stream/event processing, messaging, and so on. The architectures are even more effective in public cloud environments, Kubernetes clusters, and service mesh environments – where we can tightly control via policy how each application operates and communicates. With cloud services we recommend you leverage service provider guidelines on deployment security; while they may not call them ‘reference security architectures’, they do offer them. Educate yourself on the application platforms and ask software designers and architects which methods they employ. Do not be surprised if they give you a blank stare for legacy applications. But new applications should include plans for process isolation, segregation, and data security, with a full IAM model to promote segregation of duties and data access control.

Operational Standards: Work with your development teams to define minimal security testing requirements, and critical and high priority issues. You will need to negotiate which security flaws will fail a build, and define the process in advance. You will probably need an agreement on timeframes for fixing issues, and some type of virtual patching to address hard-to-fix application security issues. You need to define these things up front and make sure your development and IT partners agree.

Security Requirements: Just as with the minimum functional tests which must run prior to code acceptance, you’ll have a set of security tests you run prior to deployment. These may be an agreed-upon battery of unit tests for specific threats which your team writes.
Or you may require all OWASP Top Ten vulnerabilities be mitigated in code or supporting products, mapping each threat to a specific security control for all web applications. Regardless of what you choose, your baseline requirements should account for new functionality as well as old. A growing body of tests requires more resources for validation and can slow your test and deployment cycle over time, so you have some decisions to make regarding which tests can block a release vs. what you scan for post-production.

Monitoring and Metrics: If you will make small iterative improvements with each release, what needs fixing? Which code modules are problematic for security? What is working, and how can you prove it? Metrics are key to answering all these questions. You need to think about what data you want to collect and build it into your CI/CD and production environments to measure how your scripts and tests perform. That means you need to engage developers and IT personnel in collecting data. You’ll continually evolve collection and use of metrics, but plan for basic collection and dissemination of data from the get-go.

Design

Security Design Principles: Some application security design and operational principles offer significant security improvement. Things like ephemeral instances to aid patching and reduce attacker persistence, immutable services to remove attack surface, configuration management to ensure servers and applications are properly set up, templating environments for consistent cloud deployment, automated patching, segregation of duties by locking development and QA personnel out of production resources, and so on. Just as important, these approaches are key to DevOps because they make delivery and management of software faster and easier. It sounds like a lot to tackle, but IT and development pitch in, as it makes their lives easier too.
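The “which security flaws fail a build” agreement negotiated under Operational Standards works best when it is encoded as data rather than tribal knowledge, so every pipeline applies the same rule. A minimal sketch – the severity labels and thresholds here are assumptions for illustration, not a standard:

```python
# Encode the negotiated release-gate policy as data, so the same rule
# runs identically in every pipeline. Thresholds are illustrative only.

POLICY = {"critical": 0, "high": 2}   # maximum findings allowed per severity

def release_blocked(findings, policy=POLICY):
    """Return the severities whose finding counts exceed the agreed limits."""
    counts = {}
    for finding in findings:
        sev = finding["severity"]
        counts[sev] = counts.get(sev, 0) + 1
    return sorted(sev for sev, limit in policy.items() if counts.get(sev, 0) > limit)

scan = [
    {"id": "SQLI-12", "severity": "critical"},
    {"id": "XSS-40",  "severity": "high"},
    {"id": "HDR-03",  "severity": "low"},   # severities outside the policy never block
]

blocked = release_blocked(scan)   # ["critical"] – one critical exceeds the limit of 0
```

Because the policy is a small data structure, renegotiating it with development is a one-line change with an audit trail, rather than a meeting about what the scanner “usually” fails.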
Secure the Deployment Pipeline: With both development and production environments more locked down, development and test servers become more attractive targets. Traditionally these environments run with little or no security. But the need for secure source code management, build servers, and deployment pipelines is growing. And as CI/CD pipelines offer an automated pathway into production, you’ll need, at minimum, stricter access controls for these systems – particularly build servers and code repositories. And given scripts running continuously in the background with minimal human oversight, you’ll need additional monitoring to catch errors and misuse. Many of the tools offer good security, with digital fingerprinting, 2FA, logging, role-based access control, and other security features. When deployed in cloud environments, where the management plane allows control of your entire environment, great care must be taken with access controls and segregation of duties.

Threat Modeling: Threat modeling remains one of the most productive exercises


Enterprise DevSecOps: How Security Works With Development

In our first paper on ‘Building Security Into DevOps’, given the ‘newness’ of DevOps for most of our readers, we included a discussion of the foundational principles and how DevOps is meant to help tackle numerous problems common to software delivery. Please refer to that paper if you want more detailed background information. For our purposes here we will discuss just a few principles that directly relate to integrating security teams and security testing with DevOps. These concepts lay the foundations for addressing the questions we raised in the first section, and readers will need to understand them as we discuss security tooling and approaches in a DevOps environment.

And before we dive in, let’s answer one of the most common questions from the previous section: “How do we get control over development?” The short answer is: you do not. The longer answer is that in DevOps you need to work alongside your partners, not “control” them. Yes, a small percentage of organizations we spoke with gate all software releases by having security run a battery of tests prior to release, and certify every release from a security standpoint. It is rare that security gets to control software releases in this way; it is anti-DevOps, but it can be a very effective security control, if not altogether efficient. That said, the remainder of this section should exemplify why your goal should not be to control development, and why a more helpful approach is to work with them.

DevOps and Security

Build Security In

It is a terrible truth, but wide use of application security techniques within the code development process is relatively new. Sure, the field of study is decades old, but application security was more often bolted on with network or application firewalls, not baked into the code itself. Security product vendors discovered that understanding application requests in order to detect and then block attacks is incredibly difficult to do outside the application.
It is far more effective to fix vulnerable code and close off attack vectors when possible. Add-on tools are getting better – and some work inside the application context – but it is better to address the issues in the code when possible. A central concept when building security in is ‘shift left’: the idea that we integrate security testing earlier within the Software Development Lifecycle (SDLC), whose phases are typically listed left to right as design, development, testing, pre-production, and production. Essentially we shift resources away from production on the extreme right, and put more into the design, development, and testing phases. Born out of lean manufacturing, Kaizen, and Deming’s principles, these ideas have proven effective, but were typically applied to the manufacture of physical goods. DevOps has promoted their use in software development, demonstrating that we can improve security at lower cost by shifting security defect detection earlier in the process.

Automation

Automation is one of the keys to success for most firms we speak with, to the point that engineering teams often treat DevOps and automation as synonymous. The reality is that the cultural and organizational changes that come with DevOps are equally important; it’s just that automation is often the most quantifiable benefit. Automation brings speed, consistency, and efficiency to all parties involved. DevOps, like Agile, is geared towards doing less, better, and faster. Software releases occur more regularly, with less code change between them. Less work means better focus and more clarity of purpose with each release, resulting in fewer mistakes. It also means it’s easier to roll back in the event of mistakes. Automation helps people get their jobs done with less hands-on work, but as automation software does exactly the same things every time, consistency is the most conspicuous benefit.
The place where automation is first applied, and where its benefits are most pronounced, is the application build server. Build servers (e.g. Bamboo, Jenkins), commonly called Continuous Integration (CI) servers, automatically construct an application – and possibly the entire application stack – as code is changed. Once the application is built, these platforms may also launch QA and security tests, kicking failed builds back to the development team. Automation benefits other facets of software production, including reporting, metrics, quality assurance, and release management, but security testing benefits are what we are focused on in this research. At the outset this may not seem like much: calling security testing tools instead of manually running the tests. That perspective misses the fundamental benefits of automated security testing. Automation is how we ensure that each update to software includes security tests, ensuring consistency. Automation is how we avoid the mistakes and omissions common with repetitive and – let’s be totally transparent here – boring manual tasks. But most importantly, as security teams are typically outnumbered by developers at a ratio of 100 to one, automation is the key ingredient to scaling security coverage without scaling security headcount.

One Team

A key DevOps principle is to break down silos and foster better cooperation between developers and the supporting QA, IT, security, and other teams. We have heard this idea so often that it sounds cliché, but the reality is that few in software development have actually made changes to implement it. Most DevOps-centric firms are changing development team composition to include representatives from all disciplines; that means every team has someone who knows a little security and/or represents security interests, even on small teams. And those that do realize the benefits not just of better communication, but of true alignment of goals and incentives.
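The consistency argument above can be sketched as a post-build hook: the same registered checks run against every build artifact, with no human deciding whether to bother. Everything here is invented for illustration – real CI servers provide such hooks natively, and the checks would invoke actual scanners rather than naive string matching:

```python
# Illustrative post-build hook: every build triggers the same security
# checks automatically. The checks are crude placeholders for real tools
# (static analysis, secrets scanning, etc.).

SECURITY_CHECKS = [
    ("static analysis", lambda artifact: "eval(" not in artifact["source"]),
    ("secrets scan",    lambda artifact: "PASSWORD=" not in artifact["source"]),
]

def post_build_hook(artifact):
    """Run every registered check against the build artifact; return failures."""
    return [name for name, check in SECURITY_CHECKS if not check(artifact)]

build = {"id": 1042, "source": "def handler(req): return render(req)"}
failures = post_build_hook(build)        # [] – this build passes both checks

bad_build = {"id": 1043, "source": "PASSWORD='hunter2'"}
# post_build_hook(bad_build) == ["secrets scan"] – the build is kicked back
```

The value is not the checks themselves but the guarantee: no build reaches testing or deployment without the same security gauntlet, every time.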
Development in isolation is incentivized to write new features. Quality Assurance in isolation is incentivized to get code coverage for various tests. When everyone on a team is responsible for the successful release of new software, priorities – and behavior – change. This item remains a bit of a problem for many of the firms we have interviewed. The majority of firms we have spoken with are large, with hundreds of development teams


Enterprise DevSecOps: New Series

DevOps is an operational framework which promotes software consistency and standardization through automation. It helps address many nightmare development issues around integration, testing, patching, and deployment – both by breaking down barriers between different development teams, and by prioritizing things which make software development faster and easier. DevSecOps is the integration of security teams and security tools directly into the software development lifecycle, leveraging the automation and efficiencies of DevOps to ensure application security testing occurs in every build cycle. This promotes security and consistency, and helps ensure that security is prioritized no lower than other quality metrics or features. Automated security testing, just like automated application build and deployment, must be assembled with the rest of the infrastructure.

And therein lies the problem. Software developers have traditionally not embraced security. It’s not that they do not care about security, but they have been incentivized to focus on delivering new features and functions. DevOps is raising the priority of automating build processes – making them faster, easier, and more consistent. But that does not mean developers are going out of their way to include security or security tooling. That’s often because security tools don’t integrate well with development tools and processes, tend to flood queues with unintelligible findings, and lack development-centric filters to help prioritize. Worse, security platforms – and the security professionals who recommend them – have been difficult to work with, or even failed to offer API support for integration.
On the other side of the equation are security teams, who fear automated software processes and commonly ask, “How can we get control over development?” This question misses the point of DevSecOps, and risks placing security in opposition to every other developer priority: improving velocity, efficiency, and consistency with each software release. The only way for security teams to cope with the changes within software development, and to scale their relatively small organizations, is to become just as agile as development teams by embracing automation. Why Did We Write This Paper? We discuss the motivation behind our research to help readers understand our goals and what we wish to convey. This is doubly relevant when we update a research paper, as it helps us spotlight recent changes in the industry which have made older papers inaccurate or inadequate to describe recent trends. DevOps has matured considerably in four years, so we have a lot to talk about. This will be a major rewrite of our 2015 research on Building Security into DevOps, with significant additions around common questions security teams ask about DevSecOps, and a thorough update on tooling and integration approaches. Much of this paper will reflect 400+ conversations since 2017 with 200+ security teams at Fortune 2000 firms, so we will include considerably more discussion derived from those conversations. But DevOps has now been around for years, so discussion of its nature and value is less necessary, and we can focus on the practicalities of how to put together a DevSecOps program. Now let’s shake things up a bit. Different Focus, Different Value A plethora of new surveys and research papers are available, and some of them are very good. And there are more conferences and online resources popping up than I can count. For example, Veracode recently released the latest iteration of its State of Software Security (SoSS) report, and it’s a monster, with loads of data and observations.
Their key takeaways are that the agility and automation employed by DevSecOps teams provide demonstrable security benefits, including faster patching cycles, shorter flaw persistence, faster reduction of technical debt, and ‘easier’ scanning – which leads to faster problem identification. Sonatype’s recently released 2019 State of the Software Supply Chain shows that “Exemplary Project Teams” who leverage DevOps principles drastically reduce code deployment failure rates, and remediate vulnerabilities in half the time of average groups. And we have events like All Day DevOps, where hundreds of DevOps practitioners share stories on cultural transformations, Continuous Integration / Continuous Deployment (CI/CD) techniques, site reliability engineering, and DevSecOps. All of which is great, and offers qualitative and quantitative data showing why DevOps works and how practitioners are evolving programs. So that’s not what this paper is about. Those resources do not address the questions I am asked each and every week. This paper is about putting together a comprehensive DevSecOps program. Overwhelmingly my questioners ask, “How do I put a DevSecOps program together?” and “How does security fit into DevOps?” They are not looking for justification or individual stories on nuances to address specific impediments. They want a security program in line with peer organizations, which embraces “security best practices”. These audiences are overwhelmingly comprised of security and IT practitioners, largely left behind by development teams who have at least embraced Agile concepts, if not DevOps outright. Their challenge is to understand what development is trying to accomplish, integrate with them in some fashion, and figure out how to leverage automated security testing to be at least as agile as development. DevOps vs. DevSecOps Which leads us to another controversial topic, and why this research is different: the name DevSecOps. 
We contend that calling out security – the ‘Sec’ in ‘DevSecOps’ – is still necessary, given the current level of maturity and understanding of this topic. Stated another way, practitioners who have fully embraced the DevOps movement will say there is no reason to add ‘Sec’ to DevOps, as security is just another ingredient. The DevOps ideal is to break down silos between individual teams (e.g., architecture, development, IT, security, and QA) to better promote teamwork and better incentivize each team member toward the same goals. If security is just another set of skills blended into the overall effort of building and delivering software, there is no reason to call it out any more than quality assurance. Philosophically they’re right. But in practice we are not there yet. Developers may embrace the idea, but they generally suck at facilitating team integration. Sure, security is welcome to participate, but it’s

Read Post

Understanding and Selecting RASP 2019: Selection Guide

We want to take a more formal look at the RASP selection process. For our 2016 version of this paper, the market was young enough that a simple list of features was enough to differentiate one platform from another. But the current level of platform maturity makes top-tier products more difficult to differentiate. In our previous section we discussed principal use cases, then delved into technical and business requirements. Depending upon who is driving your evaluation, your list of requirements may look like either of those. With those driving factors in mind – and we encourage you to refer back as you go through this list – here is our recommended process for evaluating RASP. We believe this process will help you identify which products and vendors fit your requirements, and avoid some pitfalls along the way. Define Needs Create a selection committee: Yes, we hate the term ‘committee’ as well, but the reality is that when RASP effectively replaces WAF (whether or not WAF is actually going away), RASP requirements come from multiple groups. RASP affects not only the security team, but also development, risk management, compliance, and operational teams. So it’s important to include someone from each of those teams (to the degree they exist in your organization) on the committee. Ensure that anyone who could say no, or subvert the selection at the 11th hour, is on board from the beginning. Define systems and platforms to monitor: Is your goal to monitor select business applications or all web-facing applications? Are you looking to block application security threats, or only for monitoring and instrumentation to find security issues in your code? These questions can help you refine and prioritize your functional needs. Most firms start small, figure out how best to deploy and manage RASP, then grow over time.
Legacy apps, Struts-based applications, and applications which process highly sensitive data may be your immediate priorities; you can monitor other applications later. Determine security requirements: The committee approach is incredibly beneficial for understanding true requirements. Sitting down with the entire selection team usually adjusts your perception of what a platform needs to deliver, and the priorities of each function. Everyone may agree that blocking threats is a top priority, but developers might feel that platform integration is the next highest priority, while IT wants trouble-ticket system integration but security wants language support for all platforms in use. Create lists of “must have”, “should have”, and “nice to have”. Define: Here the generic needs determined earlier are translated into specific technical features, and any additional requirements are considered. With this information in hand, you can document requirements to produce a coherent RFI. Evaluate and Test Products Issue the RFI: Larger organizations should issue an RFI through established channels and contact a few leading RASP vendors directly. If you are in a smaller organization start by sending your RFI to a trusted VAR and email a few RASP vendors which look appropriate. A Google search or brief contact with an industry analyst can help understand who the relevant vendors are. Define the short list: Before bringing anyone in, match any materials from vendors and other sources against your RFI and draft RFP. Your goal is to build a short list of 3 products which can satisfy most of your needs. Also use outside research sources (like Securosis) and product comparisons. Understand that you’ll likely need to compromise at some point in this process, as it’s unlikely any vendor can meet every requirement.
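Purely as an illustration, the “must have” / “should have” / “nice to have” tiers can be turned into a simple scoring sheet for comparing short-list candidates. The weights and the disqualify-on-missing-must rule below are assumptions for the example, not a Securosis methodology:

```python
# Hypothetical weighted scoring of vendors against tiered requirements.
# Weights are illustrative; a vendor missing any "must have" is disqualified.
WEIGHTS = {"must": 5, "should": 3, "nice": 1}

def score_vendor(requirements, vendor_support):
    """Score one vendor.

    requirements: {requirement_name: tier}, tier in WEIGHTS.
    vendor_support: set of requirement names the vendor satisfies.
    Returns None (disqualified) if any 'must' requirement is unmet,
    otherwise the sum of tier weights for satisfied requirements.
    """
    if any(tier == "must" and name not in vendor_support
           for name, tier in requirements.items()):
        return None
    return sum(WEIGHTS[tier] for name, tier in requirements.items()
               if name in vendor_support)

reqs = {"blocking": "must", "ci_integration": "should", "saml_sso": "nice"}
print(score_vendor(reqs, {"blocking", "ci_integration"}))  # 8
print(score_vendor(reqs, {"ci_integration", "saml_sso"}))  # None (no blocking)
```

A sheet like this keeps the committee honest: disagreements surface as weight arguments up front, rather than as eleventh-hour vetoes.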
The dog & pony show: Bring the vendors in, but instead of generic presentations and demonstrations, ask the vendors to walk you through specific use cases which match your expected needs. This is critical because they are very good at showing eye candy and presenting the depth of their capabilities, but having them attempt to deploy and solve your specific use cases will help narrow down the field and finalize your requirements. Finalize RFP: At this point you should completely understand your specific requirements, so you can issue a final formal RFP. Bring any remaining products in for in-house testing. In-house deployment testing: Set up several test applications if possible; we find public and private cloud resources effective for setting up private test environments to put tools through their paces. Additionally, this exercise will very quickly show you how easy or hard a product is to use. Try embedding the product into a build tool and see how much of the heavy lifting the vendor has done for you. Since this reflects day-to-day efforts required to manage a RASP solution, deployment testing is key to overall satisfaction. In-house effectiveness testing: You’ll want to replicate the key capabilities in house. Build a few basic policies to match your use cases, and then violate them. You need a real feel for monitoring, alerting, and workflow. Many firms replay known attacks, or use penetration testers or red teams to hammer test applications to ensure RASP detects and blocks the malicious requests they are most worried about. Many firms leverage OWASP testing tools to exercise all major attack vectors and verify that RASP provides broad coverage. Make sure to tailor some of their features to your environment to ensure customization, UI, and alerts work as you need. Are you getting too many alerts? Are some of their findings false positives? Do their alerts contain actionable information so a developer can do something with them? 
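A minimal sketch of the in-house effectiveness testing described above: replay known attack payloads against a protected test endpoint and record which ones are blocked. The endpoint, payloads, and block-with-403 convention are stand-ins for the example, not any real RASP interface:

```python
# Hypothetical attack-replay harness: send known-bad payloads to a
# RASP-protected test endpoint and tally which ones get blocked.
# send_request is a stand-in for a real HTTP client call.

ATTACK_PAYLOADS = [
    ("sqli", "' OR '1'='1"),
    ("xss", "<script>alert(1)</script>"),
    ("traversal", "../../etc/passwd"),
]

def run_effectiveness_tests(send_request, payloads=ATTACK_PAYLOADS):
    """Replay each payload; record True where the app blocked it."""
    results = {}
    for name, payload in payloads:
        status = send_request(payload)   # e.g. an HTTP status code
        results[name] = (status == 403)  # blocked
    return results

# Stand-in endpoint that only blocks script injection, to show how
# coverage gaps surface in the results.
def fake_endpoint(payload):
    return 403 if "<script>" in payload else 200

coverage = run_effectiveness_tests(fake_endpoint)
print(coverage)  # {'sqli': False, 'xss': True, 'traversal': False}
```

Run against a real test deployment, a result table like this answers the questions above directly: which attack classes are detected, which are blocked, and where false negatives remain.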
Put the product through its paces to make sure it meets your needs. Selection and Deployment Select, negotiate, and purchase: Once testing is complete, take the results to your full selection committee and begin negotiations with your top two choices – assuming more than one meets your needs. This takes more time but it is very useful to know you can walk away from a vendor if they won’t play ball on pricing, terms, or conditions. Pay close attention to pricing models – are they per application, per application instance, per server, or some hybrid? As you

Read Post

Understanding and Selecting RASP 2019: Integration

*Editor’s note:* We have been having VPN interruptions, so I apologize for the uneven cadence of delivery on these posts. We are working on the issue. In this section we will outline how RASP fits into the technology stack, in both production deployment and application build processes. We will show what that looks like, and why it’s important for newer application security technologies to fit into these steps. We will close with a discussion of how RASP differs from other security technologies, and discuss advantages and tradeoffs of differing approaches. As we mentioned in the introduction, our research into DevOps unearthed many questions on RASP. The questions came from non-traditional buyers of security products: application developers and product managers. Their teams, by and large, were running Agile development processes. They wanted to know whether RASP could effectively block attacks and fit within their existing processes. I analyzed hundreds of customer call notes from the last couple years, and the following are the top 7 RASP questions customers asked – roughly in order of how often they came up. We presently use static analysis in our build process, but we are looking for solutions that scan code more quickly, and we would like a ‘preventative’ option. Can RASP help? Development releases code twice daily, which is a little scary, because we only scan with static analysis once a week (or month). Is RASP suitable for providing protection between scans? We would like a solution that provides some 0-day protection at runtime, and sees application calls. Development is moving to a microservices architecture, but WAF only provides visibility at the edge. Can we embed monitoring and blocking into microservices? We have many applications with security technical debt, our in-house and third-party code is not fully scanned, and we need XSS/CSRF/Injection protection. Should we look at WAF or RASP?
We are looking at a “defense in depth” approach to application security, and want to know if we can run WAF alongside RASP. We want to “shift left”: move security as early as possible, and also embed security into the application development process. Can RASP help? These questions clearly illustrate how changes in application deployment, increasing speed of application development, and declining applicability of WAF are driving interest in RASP. Those changes are key to RASP’s increasing relevance. Build Integration The majority of firms we spoke with are leveraging automation to provide Continuous Integration – essentially automated building and testing of applications as new code is checked in. Some are farther down the DevOps path, and have reached Continuous Deployment (CD). To address this development-centric perspective, the diagram below illustrates a modern Continuous Deployment / DevOps application build environment. Each arrow could be a script automating some portion of source code control, building, packaging, testing, or deployment. This is the build pipeline. Each time application code is checked in, or a change is made in a configuration management tool (e.g. Chef, Puppet, Ansible, or Salt) the build server (e.g. Jenkins, Bamboo, MSBuild, CircleCI) grabs the most recent bundle of code with templates and configuration, and builds the product. This may result in creation of a machine image, a container, or an executable. If the build succeeds a test environment is automatically started up, and a battery of functional, regression, and security tests begin. If the new code passes these tests it is passed along to QA or put into pre-production to await final approval and rollout to production. This degree of automation in modern build and QA processes is what’s making development teams faster and more agile. Some firms release code into production ten times a day. The speed of Development automation is forcing Security to look for ways to keep pace. 
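The check-in-to-deployment flow described above can be sketched as a sequence of gated stages, where any failure halts the pipeline. The stage names and failure behavior here are illustrative assumptions, not any particular build server's API:

```python
# Minimal model of a CI/CD pipeline: run stages in order and stop at
# the first failure. A security test stage sits alongside functional
# and regression tests, so a blocking finding halts the release.

def run_pipeline(stages):
    """Run (name, step) pairs in order; step() returns True on success.

    Returns (completed_stage_names, failed_stage_or_None).
    """
    completed = []
    for name, step in stages:
        if not step():
            return completed, name  # first failure halts the pipeline
        completed.append(name)
    return completed, None

stages = [
    ("build",          lambda: True),
    ("security_tests", lambda: False),  # e.g. scan found a blocking issue
    ("deploy_to_qa",   lambda: True),
]
done, failed_at = run_pipeline(stages)
print(done, failed_at)  # ['build'] security_tests
```

The model also shows why slow security tools break this flow: every stage runs on every check-in, so a scan that takes hours cannot sit in the critical path of a team releasing several times a day.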
Such tools must be automated, and embedded into the Development pipeline. Production Integration The build pipeline gives us a mechanical view of development, but a process-centric view offers a different perspective on where security technologies can fit. The following diagram shows different logical phases in the process of code development, each staffed by people performing a different role (e.g. architects, developers, build managers, QA, release management, IT, and IT Security). The diagram’s step-by-step nature may imply waterfall development, but do not be misled – these phases apply to any development process, including spiral, waterfall, and agile. This graphic illustrates the major phases which teams go through. The callouts map common types of security tests at specific phases within Waterfall, Agile, CI, and DevOps frameworks. Keep in mind that we are still in the early days of automated deployment and DevOps. Many security tools were built before rapid and automated deployment existed or was well known. Older products are typically too slow, some cannot focus tests on new code, and others lack API support. So orchestration of security tools – basically what works where – is still maturing. The time each type of test takes to run, and the type of result it returns, drive where it fits into the phases above. RASP is designed to be bundled into applications, so it is part of the application delivery process. RASP components can be included as part of an application – typically installed and configured under a configuration management script, so RASP starts up as part of the application stack. RASP offers two distinct approaches for tackling application security. The first is in the pre-release / pre-deployment phase, while the second is in production. In pre-release it is used to instrument an application to detect penetration tests, red team tests, and other synthetic attacks launched during testing.
Production integrations perform monitoring and blocking. Either way, RASP deployment looks very similar. Pre-release testing: This is exactly what it sounds like: RASP is used when the application is fully constructed and going through final tests prior to launch. Here RASP can be deployed in several ways. It can be deployed to monitor only, using application tests and instrumenting runtime behavior to learn how to protect the application. Alternatively RASP can monitor security tests attempting to break

Read Post

Understanding and Selecting RASP 2019: Technology

It is time to discuss technical facets of RASP products – including how the technology works, how it integrates into an application environment, and the advantages of different integration options. We will also outline important considerations, such as platform support, which impact the selection process. And we will consider a couple aspects of RASP technology which we expect to evolve over the next couple years. How the Technology Works Over the last couple years the RASP market has settled on a couple basic approaches – with a few variations to enhance detection, reliability, or performance. Understanding the technology is important for understanding the strengths and weaknesses of different RASP offerings. Instrumentation: In this deployment model, the RASP system inserts sensors or callbacks at key junctions within the application stack to observe application behavior within and between custom code, application libraries, frameworks, and the underlying operating system. This approach is typically implemented using the native application profiler/instrumentation API to monitor runtime application behavior. When a sensor is hit, RASP gets a callback, and evaluates the request against the policies relevant to the request and application context. For example, database queries are examined for SQL Injection (SQLi). These products also provide request deserialization ‘sandboxing’ to detect malicious payloads, and what I call ‘checkpointing’ – a request which hits checkpoint A but bypasses checkpoint B can be confidently considered hostile. These approaches provide far more advanced application monitoring than WAF, with nuanced detection of attacks and misuse. But full visibility requires monitoring of all relevant interfaces, with a cost to performance and scalability. Customers need to balance thorough coverage against performance.
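As a conceptual toy only, the instrumentation model can be illustrated with a Python decorator standing in for a sensor at a key junction, invoking a policy check before the call proceeds. Real RASP products hook native profiler/instrumentation APIs rather than decorators, and the policy here is deliberately trivial:

```python
# Toy sketch of the instrumentation approach: a 'sensor' wraps a key
# junction (here, a database query function) and consults a policy
# callback before letting the call proceed.
import functools

def sensor(policy):
    """Wrap a function so each call is checked against a policy first."""
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if not policy(*args, **kwargs):
                raise PermissionError(f"blocked call to {fn.__name__}")
            return fn(*args, **kwargs)
        return wrapper
    return decorate

def no_sqli(query):
    # Trivial policy: reject queries containing a classic tautology.
    return "' OR '1'='1" not in query

@sensor(no_sqli)
def run_query(query):
    return f"executed: {query}"

print(run_query("SELECT * FROM users WHERE id = 7"))
# run_query("SELECT * FROM users WHERE name = '' OR '1'='1'") raises PermissionError
```

The point is the placement: because the check runs at the junction itself, with the actual arguments about to be executed, it sees context a perimeter device never gets.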
Servlet Filters & Plugins: Some RASP platforms are implemented as web server plugins or Java Servlets, typically installed in Apache Tomcat, JBoss, or Microsoft .NET, to process requests. Plugins filter requests before they execute functions such as database queries or transactions, applying detection rules to each request on receipt. Requests which match known attack signatures are blocked. This is effectively the same functionality as a WAF blacklist, with added protections such as lexical analysis of inbound request structures. It is a simple way to retrofit protection into an application environment, effective at blocking malicious requests without the deep application understanding possible with other integration approaches. Library or JVM Replacement: Some RASP products are installed by replacing standard application libraries and/or JAR files, and at least one vendor offers a full replacement Java Virtual Machine. This method basically hijacks calls to the underlying platform and routes them through custom code. The RASP platform passively ‘sees’ application calls to supporting functions, applying rules as requests are intercepted. For example, in the case of JVM replacement, RASP can alter classes as they are loaded into memory, augmenting or patching the application and its stack. Like instrumentation, this approach provides complete visibility into application behavior and analyzes user requests. Some customers prefer this option as a form of automated platform patching, but most customers we speak with are uncomfortable with dynamic alteration of the production application stack. Instrumentation & Static Hybrid: Like many firewalls, some RASP platforms can deploy as a reverse proxy; several vendors offer this as an option. In one case a novel variant couples a proxy, an instrumentation module, and parts of a static analysis scan.
Essentially it generates a Code Property Graph – like a static analysis tool – to build custom security controls for all application and open source functions. This approach requires full integration into the application build pipeline to scan all source code. It then bundles the scan result into the RASP engine as the application is deployed, effectively providing an application-specific functionality whitelist. The security controls are tailored to the application with excellent code coverage – at the expense of full build integration, the need to regularly rebuild the CPG profile, and some added latency for security checks. Several small companies have come and gone over the last couple years, offering a mixture of application logic crawler (DAST) rule sets, application virtualization to mimic the replacement model above, and runtime mirroring in a cloud service. The full virtualization approach was interesting, but being too early to market and being dead wrong in approach are virtually indistinguishable. Still, over time I expect to see new RASP detection variations, possibly in the area of AI, and new cloud services for additional support layers. Detection RASP attack detection is complicated, with multiple techniques employed depending on request type. Most products examine both the request and its parameters, inspecting each component in multiple ways. The good news is that RASP is far more effective at detecting application attacks. Unlike other technologies which use signature-based detection, RASP fully decodes parameters and external references, maps application functions and third-party code usage, maps execution sequences, deserializes payloads, and applies policies accordingly. This not only enables more accurate detection, but also improves performance by optimizing which checks are performed, based on request context and code execution path. Enforcing rules at the point of use makes it much easier to both understand proper usage and detect misuse.
Most RASP platforms employ structural analysis as well. They understand which framework is in use, and which common vulnerabilities it is susceptible to. Because RASP understands the entire application stack, it can detect version variations in third-party code libraries – roughly comparable to a vulnerability scan of an open source library – to determine when outdated code is in use. RASP can also quickly vet incoming requests and detect injection attacks. There are several approaches – one uses a form of tokenization (replacing parameters with tokens) to quickly verify that a request matches its intended structure. For example, tokenizing clauses and parameters in a SQL query can quickly detect when a ‘FROM’ or ‘WHERE’ clause has more tokens than it should, indicating the query has been altered. Blocking When an attack is detected, RASP – running within the application – can throw an application error. This prevents the malicious request from being processed further, with the protected application responsible for a graceful response and maintenance of application state.
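The tokenization check described above can be sketched in a few lines: compare the token count of the intended query structure against the query actually issued, and flag requests where a parameter added tokens. The naive whitespace tokenizer below is an illustrative stand-in for the full SQL lexers real products use:

```python
# Sketch of structure verification via tokenization: a parameter that
# alters the query's shape (e.g. appending OR 1=1 to a WHERE clause)
# changes the token count relative to the intended template.

def tokenize(sql):
    # Naive tokenizer for illustration; real products use SQL lexers.
    return sql.replace("=", " = ").split()

def structure_changed(template, actual):
    """True when the issued query's token count differs from the template's."""
    return len(tokenize(template)) != len(tokenize(actual))

template = "SELECT * FROM users WHERE id = ?"
benign   = "SELECT * FROM users WHERE id = 7"
attack   = "SELECT * FROM users WHERE id = 7 OR 1 = 1"

print(structure_changed(template, benign))  # False
print(structure_changed(template, attack))  # True
```

When a check like this fires, the blocking behavior described above applies: the RASP component throws an application error before the query executes, leaving the application to respond gracefully.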

Read Post

Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.