Securosis

Research

Enterprise DevSecOps: How Security Works With Development

In our first paper on ‘Building Security Into DevOps’, given the ‘newness’ of DevOps for most of our readers, we included a discussion of the foundational principles and how DevOps is meant to help tackle numerous problems common to software delivery. Please refer to that paper if you want more detailed background information. For our purposes here we will discuss just a few principles that directly relate to the integration of security teams and testing with DevOps principles. These concepts lay the foundation for addressing the questions we raised in the first section, and readers will need to understand them as we discuss security tooling and approaches in a DevOps environment.

Before we dive in, let’s answer one of the most common questions from the previous section: “How do we get control over development?” The short answer is: you do not. The longer answer is that in DevOps you need to work alongside your partner, not “control” them. Yes, a small percentage of organizations we spoke with gate all software releases by having security run a battery of tests prior to release, certifying every release from a security standpoint. It is rare for security to control software releases this way; it is anti-DevOps, but it can be a very effective security control, if not an efficient one. That said, the remainder of this section should illustrate why your goal should not be to control development, but to find a helpful way to work with them.

DevOps and Security: Build Security In

It is a terrible truth, but wide use of application security techniques within the code development process is relatively new. Sure, the field of study is decades old, but application security was more often bolted on with network or application firewalls, not baked into the code itself. Security product vendors discovered that understanding application requests in order to detect and then block attacks is incredibly difficult to do outside the application.
It is far more effective to fix vulnerable code and close off attack vectors when possible. Add-on tools are getting better – and some work inside the application context – but it is better to address issues in the code itself. A central concept of building security in is ‘shift left’: integrating security testing earlier within the Software Development Lifecycle (SDLC), whose phases are typically listed left to right as design, development, testing, pre-production, and production. Essentially we shift resources away from production on the extreme right, and put more into the design, development, and testing phases. Born out of lean manufacturing, Kaizen, and Deming’s principles, these ideas have proven effective, but were typically applied to the manufacture of physical goods. DevOps has promoted their use in software development, demonstrating that we can improve security at lower cost by shifting security defect detection earlier in the process.

Automation

Automation is one of the keys to success for most firms we speak with, to the point that engineering teams often treat DevOps and automation as synonymous. The reality is that the cultural and organizational changes which come with DevOps are equally important – automation is just the most quantifiable benefit. Automation brings speed, consistency, and efficiency to all parties involved. DevOps, like Agile, is geared toward doing less, better, and faster. Software releases occur more regularly, with less code change between them. Less work means better focus and more clarity of purpose with each release, resulting in fewer mistakes. It also means it’s easier to roll back in the event of a mistake. Automation helps people get their jobs done with less hands-on work, but because automation software does exactly the same thing every time, consistency is its most conspicuous benefit.
The place where automation is first applied, and where its benefits are most pronounced, is the application build server. Build servers (e.g., Bamboo, Jenkins), commonly called Continuous Integration (CI) servers, automatically construct an application – and possibly the entire application stack – as code is changed. Once the application is built, these platforms may also launch QA and security tests, kicking failed builds back to the development team. Automation benefits other facets of software production, including reporting, metrics, quality assurance, and release management, but security testing benefits are our focus in this research. At the outset this may not seem like much: calling security testing tools instead of manually running the tests. That perspective misses the fundamental benefits of automated security testing. Automation is how we ensure that each update to software includes security tests, ensuring consistency. Automation is how we avoid the mistakes and omissions common with repetitive and – let’s be totally transparent here – boring manual tasks. But most importantly, as security teams are typically outnumbered by developers at a ratio of 100 to one, automation is the key ingredient for scaling security coverage without scaling security personnel headcount.

One Team

A key DevOps principle is to break down silos and foster better cooperation between developers and supporting QA, IT, security, and other teams. We have heard this idea so often that it sounds cliché, but the reality is that few in software development have actually made changes to implement it. Most DevOps-centric firms are changing development team composition to include representatives from all disciplines; that means every team has someone who knows a little security and/or represents security interests, even on small teams. And those that do realize the benefits not just of better communication, but of true alignment of goals and incentives.
Development in isolation is incentivized to write new features. Quality Assurance in isolation is incentivized to achieve code coverage across various tests. When everyone on a team is responsible for the successful release of new software, priorities and behavior change. This remains a bit of a problem for many of the firms we have interviewed. The majority of firms we have spoken with are large, with hundreds of development teams


Enterprise DevSecOps: New Series

DevOps is an operational framework which promotes software consistency and standardization through automation. It helps address many nightmare development issues around integration, testing, patching, and deployment – both by breaking down barriers between different development teams, and by prioritizing things which make software development faster and easier. DevSecOps is the integration of security teams and security tools directly into the software development lifecycle, leveraging the automation and efficiencies of DevOps to ensure application security testing occurs in every build cycle. This promotes security and consistency, and helps ensure that security is prioritized no lower than other quality metrics or features. Automated security testing, just like automated application build and deployment, must be assembled with the rest of the infrastructure.

And there lies the problem. Software developers have traditionally not embraced security. It’s not because they do not care about security, but because they were incentivized to focus on delivery of new features and functions. DevOps is raising the priority of automating build processes – making them faster, easier, and more consistent. But that does not mean developers are going out of their way to include security or security tooling. That’s often because security tools don’t integrate well with development tools and processes, tend to flood queues with unintelligible findings, and lack development-centric filters to help prioritize. Worse, security platforms – and the security professionals who recommend them – have been difficult to work with, or even failed to offer API support for integration.
On the other side of the equation are security teams, who fear automated software processes and commonly ask, “How can we get control over development?” This question misses the point of DevSecOps, and risks placing security in opposition to all other developer priorities: improving velocity, efficiency, and consistency with each software release. The only way for security teams to cope with the changes within software development, and to scale their relatively small organizations, is to become just as agile as development teams by embracing automation.

Why Did We Write This Paper?

We discuss the motivation behind our research to help readers understand our goals and what we wish to convey. This is doubly relevant when we update a research paper, as it helps us spotlight recent changes in the industry which have made older papers inaccurate or inadequate to describe recent trends. DevOps has matured considerably in four years, so we have a lot to talk about. This will be a major rewrite of our 2015 research on Building Security into DevOps, with significant additions around common questions security teams ask about DevSecOps and a thorough update on tooling and integration approaches. Much of this paper will reflect 400+ conversations since 2017 across 200+ security teams at Fortune 2000 firms, so we will include considerably more discussion derived from those conversations. But DevOps has now been around for years, so discussion of its nature and value is less necessary, and we can focus on the practicalities of putting together a DevSecOps program. Now let’s shake things up a bit.

Different Focus, Different Value

A plethora of new surveys and research papers are available, and some of them are very good. And there are more conferences and online resources popping up than I can count. For example, Veracode recently released the latest iteration of its State of Software Security (SoSS) report, and it’s a monster, with loads of data and observations.
Their key takeaways are that the agility and automation employed by DevSecOps teams provide demonstrable security benefits, including faster patching cycles, shorter flaw persistence, faster reduction of technical debt, and ‘easier’ scanning – which leads to faster problem identification. Sonatype’s recently released 2019 State of the Software Supply Chain shows that “Exemplary Project Teams” who leverage DevOps principles drastically reduce code deployment failure rates, and remediate vulnerabilities in half the time of average groups. And we have events like All Day DevOps, where hundreds of DevOps practitioners share stories on cultural transformations, Continuous Integration / Continuous Deployment (CI/CD) techniques, site reliability engineering, and DevSecOps. All of which is great, and offers qualitative and quantitative data showing why DevOps works and how practitioners are evolving their programs. So that’s not what this paper is about. Those resources do not address the questions I am asked each and every week. This paper is about putting together a comprehensive DevSecOps program. Overwhelmingly my questioners ask, “How do I put a DevSecOps program together?” and “How does security fit into DevOps?” They are not looking for justification or individual stories on nuances to address specific impediments. They want a security program in line with peer organizations, which embraces “security best practices”. These audiences are overwhelmingly comprised of security and IT practitioners, largely left behind by development teams who have at least embraced Agile concepts, if not DevOps outright. Their challenge is to understand what development is trying to accomplish, integrate with them in some fashion, and figure out how to leverage automated security testing to be at least as agile as development.

DevOps vs. DevSecOps

Which leads us to another controversial topic, and why this research is different: the name DevSecOps.
We contend that calling out security – the ‘Sec’ in ‘DevSecOps’ – is needed given the current maturity and understanding of this topic. Stated another way, practitioners of DevOps who have fully embraced the movement will say there is no reason to add ‘Sec’ to DevOps, as security is just another ingredient. The DevOps ideal is to break down silos between individual teams (e.g., architecture, development, IT, security, and QA) to better promote teamwork and better incentivize each team member toward the same goals. If security is just another set of skills blended into the overall effort of building and delivering software, there is no reason to call it out any more than quality assurance. Philosophically they’re right. But in practice we are not there yet. Developers may embrace the idea, but they generally suck at facilitating team integration. Sure, security is welcome to participate, but it’s


Understanding and Selecting RASP 2019: Selection Guide

We want to take a more formal look at the RASP selection process. For the 2016 version of this paper, the market was young enough that a simple list of features was enough to differentiate one platform from another. But the current level of platform maturity makes top-tier products more difficult to differentiate. In our previous section we discussed principal use cases, then delved into technical and business requirements. Depending upon who is driving your evaluation, your list of requirements may look like either of those. With those driving factors in mind – and we encourage you to refer back as you go through this list – here is our recommended process for evaluating RASP. We believe this process will help you identify which products and vendors fit your requirements, and avoid some pitfalls along the way.

Define Needs

Create a selection committee: Yes, we hate the term ‘committee’ as well, but the reality is that when RASP effectively replaces WAF (whether or not WAF is actually going away), RASP requirements come from multiple groups. RASP affects not only the security team, but also development, risk management, compliance, and operations teams. So it’s important to include someone from each of those teams (to the degree they exist in your organization) on the committee. Ensure that anyone who could say no, or subvert the selection at the 11th hour, is on board from the beginning.

Define systems and platforms to monitor: Is your goal to monitor select business applications or all web-facing applications? Are you looking to block application security threats, or only for monitoring and instrumentation to find security issues in your code? These questions can help you refine and prioritize your functional needs. Most firms start small, figure out how best to deploy and manage RASP, then grow over time.
Legacy apps, Struts-based applications, and applications which process highly sensitive data may be your immediate priorities; you can monitor other applications later.

Determine security requirements: The committee approach is incredibly beneficial for understanding true requirements. Sitting down with the entire selection team usually adjusts your perception of what a platform needs to deliver, and the priorities of each function. Everyone may agree that blocking threats is a top priority, but developers might feel that platform integration is the next highest priority, while IT wants trouble-ticket system integration and security wants language support for all platforms in use. Create lists of “must have”, “should have”, and “nice to have”.

Define: Here the generic needs determined earlier are translated into specific technical features, and any additional requirements are considered. With this information in hand, you can document requirements to produce a coherent RFI.

Evaluate and Test Products

Issue the RFI: Larger organizations should issue an RFI through established channels and contact a few leading RASP vendors directly. If you are in a smaller organization, start by sending your RFI to a trusted VAR and emailing a few RASP vendors which look appropriate. A Google search or brief contact with an industry analyst can help you understand who the relevant vendors are.

Define the short list: Before bringing anyone in, match any materials from vendors and other sources against your RFI and draft RFP. Your goal is to build a short list of three products which can satisfy most of your needs. Also use outside research sources (like Securosis) and product comparisons. Understand that you’ll likely need to compromise at some point in this process, as it’s unlikely any vendor can meet every requirement.
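One lightweight way to keep the “must have” / “should have” / “nice to have” lists honest while building a short list is a simple weighted scorecard. The sketch below is our own illustration, not part of any vendor’s tooling; the requirement names and weights are hypothetical, and your committee would substitute its own.

```python
# Hypothetical weighted scorecard for comparing RASP vendors against
# "must have" / "should have" / "nice to have" requirement tiers.
WEIGHTS = {"must": 10, "should": 5, "nice": 1}

# Example requirements; replace with your committee's actual lists.
REQUIREMENTS = {
    "blocks OWASP Top 10": "must",
    "Java and .NET support": "must",
    "CI server integration": "should",
    "trouble-ticket integration": "should",
    "cloud-hosted console": "nice",
}

def score_vendor(capabilities: set) -> tuple:
    """Return (score, missing_must_haves) for a vendor's capability set."""
    score = sum(WEIGHTS[tier] for req, tier in REQUIREMENTS.items()
                if req in capabilities)
    missing = [req for req, tier in REQUIREMENTS.items()
               if tier == "must" and req not in capabilities]
    return score, missing

vendor_a = {"blocks OWASP Top 10", "Java and .NET support", "CI server integration"}
vendor_b = {"blocks OWASP Top 10", "cloud-hosted console", "trouble-ticket integration"}

print(score_vendor(vendor_a))  # vendor A meets both must-haves
print(score_vendor(vendor_b))  # vendor B misses a must-have, whatever its score
```

Tracking the missing must-haves separately from the score keeps a vendor with many “nice to have” extras from outranking one that actually satisfies the hard requirements.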
The dog & pony show: Bring the vendors in, but instead of generic presentations and demonstrations, ask the vendors to walk you through specific use cases which match your expected needs. This is critical because they are very good at showing eye candy and presenting the depth of their capabilities, but having them attempt to deploy and solve your specific use cases will help narrow down the field and finalize your requirements.

Finalize the RFP: At this point you should completely understand your specific requirements, so you can issue a final formal RFP. Bring any remaining products in for in-house testing.

In-house deployment testing: Set up several test applications if possible; we find public and private cloud resources effective for setting up private test environments to put tools through their paces. This exercise will also very quickly show you how easy or hard a product is to use. Try embedding the product into a build tool and see how much of the heavy lifting the vendor has done for you. Since this reflects the day-to-day effort required to manage a RASP solution, deployment testing is key to overall satisfaction.

In-house effectiveness testing: You’ll want to replicate the key capabilities in house. Build a few basic policies to match your use cases, and then violate them. You need a real feel for monitoring, alerting, and workflow. Many firms replay known attacks, or use penetration testers or red teams to hammer test applications to ensure RASP detects and blocks the malicious requests they are most worried about. Many firms leverage OWASP testing tools to exercise all major attack vectors and verify that RASP provides broad coverage. Make sure to tailor features to your environment to verify that customization, UI, and alerts work as you need. Are you getting too many alerts? Are some of the findings false positives? Do the alerts contain actionable information so a developer can do something with them?
Put the product through its paces to make sure it meets your needs.

Selection and Deployment

Select, negotiate, and purchase: Once testing is complete, take the results to your full selection committee and begin negotiations with your top two choices – assuming more than one meets your needs. This takes more time, but it is very useful to know you can walk away from a vendor if they won’t play ball on pricing, terms, or conditions. Pay close attention to pricing models – are they per application, per application instance, per server, or some hybrid? As you


Understanding and Selecting RASP 2019: Integration

Editor’s note: We have been having VPN interruptions, so I apologize for the uneven cadence of delivery on these posts. We are working on the issue.

In this section we will outline how RASP fits into the technology stack, in both production deployment and application build processes. We will show what that looks like, and why it’s important for newer application security technologies to fit into these steps. We will close with a discussion of how RASP differs from other security technologies, and the advantages and tradeoffs of differing approaches. As we mentioned in the introduction, our research into DevOps unearthed many questions about RASP. The questions came from non-traditional buyers of security products: application developers and product managers. Their teams, by and large, were running Agile development processes. They wanted to know whether RASP could effectively block attacks and fit within their existing processes. I analyzed hundreds of customer call notes from the last couple of years; the following are the top 7 RASP questions customers asked, roughly in order of how often they came up:

1. We presently use static analysis in our build process, but we are looking for solutions that scan code more quickly, and we would like a ‘preventative’ option. Can RASP help?
2. Development releases code twice daily, which is a little scary, because we only scan with static analysis once a week (or month). Is RASP suitable for providing protection between scans?
3. We would like a solution that provides some 0-day protection at runtime, and sees application calls.
4. Development is moving to a microservices architecture, but WAF only provides visibility at the edge. Can we embed monitoring and blocking into microservices?
5. We have many applications with security technical debt, our in-house and third-party code is not fully scanned, and we need XSS/CSRF/Injection protection. Should we look at WAF or RASP?
6. We are looking at a “defense in depth” approach to application security, and want to know if we can run WAF alongside RASP.
7. We want to “shift left”: move security as early as possible, and also embed security into the application development process. Can RASP help?

These questions clearly illustrate how changes in application deployment, the increasing speed of application development, and the declining applicability of WAF are driving interest in RASP. Those changes are key to RASP’s increasing relevance.

Build Integration

The majority of firms we spoke with are leveraging automation to provide Continuous Integration – essentially automated building and testing of applications as new code is checked in. Some are farther down the DevOps path, and have reached Continuous Deployment (CD). To address this development-centric perspective, the diagram below illustrates a modern Continuous Deployment / DevOps application build environment. Each arrow could be a script automating some portion of source code control, building, packaging, testing, or deployment. This is the build pipeline. Each time application code is checked in, or a change is made in a configuration management tool (e.g., Chef, Puppet, Ansible, or Salt), the build server (e.g., Jenkins, Bamboo, MSBuild, CircleCI) grabs the most recent bundle of code with templates and configuration, and builds the product. This may result in creation of a machine image, a container, or an executable. If the build succeeds, a test environment is automatically started up, and a battery of functional, regression, and security tests begins. If the new code passes these tests it is passed along to QA or put into pre-production to await final approval and rollout to production. This degree of automation in modern build and QA processes is what’s making development teams faster and more agile. Some firms release code into production ten times a day. The speed of Development automation is forcing Security to look for ways to keep pace.
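The build-server gating described above – run tests on every check-in, kick failed builds back to development – can be sketched in a few lines. The Python below is our own illustration of a CI-style security gate; the findings format, IDs, and file names are invented, and a real pipeline would invoke actual scanners and parse their reports instead.

```python
# Hypothetical CI gate: fail the build if security findings reach a threshold.
SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def gate(findings: list, fail_at: str = "high") -> bool:
    """Return True if the build should pass, False if it should be kicked back."""
    threshold = SEVERITY_RANK[fail_at]
    blockers = [f for f in findings
                if SEVERITY_RANK[f["severity"]] >= threshold]
    for f in blockers:
        # In a real CI server this would annotate the build log or ticket queue.
        print(f"BLOCKER: {f['id']} ({f['severity']}) in {f['file']}")
    return not blockers

# Example findings as a scanner might report them (format is invented).
findings = [
    {"id": "SQLI-101", "severity": "critical", "file": "orders.py"},
    {"id": "HDR-007",  "severity": "low",      "file": "views.py"},
]

print(gate(findings))  # the critical finding kicks this build back
```

Because the gate runs on every check-in, the security check is applied with the same consistency as the functional test suite, which is the point of embedding it in the pipeline.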
Such tools must be automated, and embedded into the development pipeline.

Production Integration

The build pipeline gives us a mechanical view of development, but a process-centric view offers a different perspective on where security technologies can fit. The following diagram shows the logical phases of code development, each staffed by people performing a different role (e.g., architects, developers, build managers, QA, release management, IT, and IT security). The diagram’s step-by-step nature may imply waterfall development, but do not be misled – these phases apply to any development process, including spiral, waterfall, and agile. This graphic illustrates the major phases teams go through. The callouts map common types of security tests to specific phases within Waterfall, Agile, CI, and DevOps frameworks. Keep in mind that we are still in the early days of automated deployment and DevOps. Many security tools were built before rapid and automated deployment existed or was well known. Older products are typically too slow, some cannot focus tests on new code, and others lack API support. So orchestration of security tools – basically what works where – is still maturing. The time each type of test takes to run, and the type of result it returns, drive where it fits into the phases above. RASP is designed to be bundled into applications, so it is part of the application delivery process. RASP components can be included as part of an application – typically installed and configured by a configuration management script, so RASP starts up as part of the application stack. RASP offers two distinct approaches to tackling application security: the first in the pre-release / pre-deployment phase, the second in production. In pre-release it is used to instrument an application to detect penetration tests, red team tests, and other synthetic attacks launched during testing.
Production integrations perform monitoring and blocking. Either way, RASP deployment looks very similar.

Pre-release testing: This is exactly what it sounds like: RASP is used when the application is fully constructed and going through final tests prior to launch. Here RASP can be deployed in several ways. It can be deployed to monitor only, using application tests and instrumenting runtime behavior to learn how to protect the application. Alternatively RASP can monitor security tests attempting to break


Understanding and Selecting RASP 2019: Technology

It is time to discuss the technical facets of RASP products – including how the technology works, how it integrates into an application environment, and the advantages of different integration options. We will also outline important considerations, such as platform support, which impact the selection process, and consider a couple aspects of RASP technology which we expect to evolve over the next couple of years.

How the Technology Works

Over the last couple of years the RASP market has settled on a few basic approaches, with variations to enhance detection, reliability, or performance. Understanding the technology is important for understanding the strengths and weaknesses of different RASP offerings.

Instrumentation: In this deployment model, the RASP system inserts sensors or callbacks at key junctions within the application stack to observe application behavior within and between custom code, application libraries, frameworks, and the underlying operating system. This approach is typically implemented using a native application profiler/instrumentation API to monitor runtime application behavior. When a sensor is hit, RASP gets a callback, and then evaluates the request against the policies relevant to the request and application context. For example, database queries are examined for SQL Injection (SQLi). These platforms also provide request deserialization ‘sandboxing’ to detect malicious payloads, and what I call ‘checkpointing’ – a request that hits checkpoint A but bypasses checkpoint B can be confidently considered hostile. These approaches provide far more advanced application monitoring than WAF, with nuanced detection of attacks and misuse. But full visibility requires monitoring of all relevant interfaces, at a cost to performance and scalability. Customers need to balance thorough coverage against performance.
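To make the sensor-and-callback model concrete, here is a minimal Python sketch of the instrumentation idea: a sensor wraps a sensitive function (a hypothetical database query helper) so that a policy check runs on every call before the original code executes. Real RASP products hook the runtime via profiler/instrumentation APIs rather than simple function wrapping, and their policies decode and tokenize input rather than pattern-match; this shows the control flow only.

```python
import functools

def sqli_policy(query: str) -> bool:
    """Toy policy: flag queries containing a classic tautology pattern.
    Real RASP engines decode and tokenize requests; this is illustrative."""
    return "' or '1'='1" in query.lower()

def sensor(policy):
    """Wrap a sensitive function so the policy is evaluated on every call."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(query, *args, **kwargs):
            if policy(query):
                # Block: raise instead of executing the dangerous call.
                raise PermissionError(f"RASP blocked suspicious query: {query!r}")
            return func(query, *args, **kwargs)
        return wrapper
    return decorator

@sensor(sqli_policy)
def run_query(query: str):
    # Stand-in for a real database call.
    return f"executed: {query}"

print(run_query("SELECT * FROM users WHERE id = 7"))
```

Because the check runs at the point where the query is about to execute, the sensor sees the fully assembled statement in application context – the visibility advantage the section above describes, and also the source of the performance cost of instrumenting every relevant interface.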
Servlet Filters & Plugins: Some RASP platforms are implemented as web server plugins or Java Servlets, typically installed in Apache Tomcat, JBoss, or Microsoft .NET to process requests. Plugins filter requests before they reach functions such as database queries or transactions, applying detection rules to each request on receipt. Requests which match known attack signatures are blocked. This is effectively the same functionality as a WAF blacklist, with added protections such as lexical analysis of inbound request structures. It is a simple way to retrofit protection into an application environment, and effective at blocking malicious requests, but without the deep application understanding possible with other integration approaches.

Library or JVM Replacement: Some RASP products are installed by replacing standard application libraries and/or JAR files, and at least one vendor offers a full replacement Java Virtual Machine. This method essentially redirects calls to the underlying platform through custom code. The RASP platform passively ‘sees’ application calls to supporting functions, applying rules as requests are intercepted. In the case of JVM replacement, for example, RASP can alter classes as they are loaded into memory, augmenting or patching the application and its stack. Like instrumentation, this approach provides complete visibility into application behavior and analyzes user requests. Some customers like this option as a form of automated platform patching, but most customers we speak with are uncomfortable with dynamic alteration of the production application stack.

Instrumentation & Static Hybrid: Like many firewalls, some RASP platforms can deploy as a reverse proxy; several vendors offer this as an option. In one case a novel variant couples a proxy, an instrumentation module, and parts of a static analysis scan.
Essentially it generates a Code Property Graph (CPG) – like a static analysis tool – to build custom security controls for all application and open source functions. This approach requires full integration into the application build pipeline to scan all source code. It then bundles the scan result into the RASP engine as the application is deployed, effectively providing an application-specific functionality whitelist. The security controls are tailored to the application, with excellent code coverage – at the expense of full build integration, the need to regularly rebuild the CPG profile, and some added latency for security checks. Several small companies have come and gone over the last couple of years, offering a mixture of application logic crawler (DAST) rule sets, application virtualization mimicking the replacement model above, and runtime mirroring in a cloud service. The full virtualization approach was interesting, but being too early to market and being dead wrong in approach are virtually indistinguishable. Still, over time I expect to see new RASP detection variations, possibly in the area of AI, and new cloud services for additional support layers.

Detection

RASP attack detection is complicated, with multiple techniques employed depending on request type. Most products examine both the request and its parameters, inspecting each component in multiple ways. The good news is that RASP is far more effective at detecting application attacks. Unlike other technologies which rely on signature-based detection, RASP fully decodes parameters and external references, maps application functions and third-party code usage, maps execution sequences, deserializes payloads, and applies policies accordingly. This not only enables more accurate detection, but improves performance by optimizing which checks are performed based on request context and code execution path. Enforcing rules at the point of use makes it much easier to both understand proper usage and detect misuse.
Most RASP platforms employ structural analysis as well. They understand which framework is in use, and which common vulnerabilities affect it. As RASP understands the entire application stack, it can detect variations in third-party code libraries – roughly comparable to a vulnerability scan of an open source library – to determine when outdated code is used. RASP can also quickly vet incoming requests and detect injection attacks. There are several approaches; one uses a form of tokenization (replacing parameters with tokens) to quickly verify that a request matches its intended structure. For example, tokenizing the clauses and parameters of a SQL query can quickly detect when a ‘FROM’ or ‘WHERE’ clause has more tokens than it should, indicating the query has been altered.

Blocking

When an attack is detected, RASP – running within the application – can throw an application error. This prevents the malicious request from being further processed, with the protected application responsible for a graceful response and maintenance of application state.
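Returning to the tokenization approach described above, the token-count check can be illustrated in a few lines of Python. This sketch is our own simplification – real engines use full SQL lexers and compare structure, not just counts – but it shows the principle: the intended query shape is recorded as a token template, and a runtime query whose WHERE clause has grown extra tokens is flagged.

```python
import re

def tokenize(sql: str) -> list:
    """Crude SQL tokenizer: quoted strings, words, operators, placeholders.
    A real RASP engine uses a full SQL lexer; this is for illustration only."""
    return re.findall(r"'[^']*'|\w+|[=<>!]+|\?", sql)

def altered(template: str, actual: str) -> bool:
    """Flag the query if it produces more tokens than its intended template."""
    return len(tokenize(actual)) > len(tokenize(template))

template = "SELECT name FROM users WHERE id = ?"
benign   = "SELECT name FROM users WHERE id = '42'"
injected = "SELECT name FROM users WHERE id = '42' OR '1' = '1'"

print(altered(template, benign))    # False: the parameter fills the placeholder
print(altered(template, injected))  # True: the OR clause adds tokens
```

The check is cheap – two token counts and a comparison – which is why this style of structural validation can run inline on every request without the latency of deeper analysis.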


Understanding and Selecting RASP 2019: Use Cases

Updated 9-13 to include business requirements

The primary function of RASP is to protect web applications against known and emerging threats. In some cases it is deployed to block attacks at the application layer before vulnerabilities can be exploited, but in many cases RASP tools process a request until they detect an attack, and then block the action. Astute readers will notice that these are basically the classic use cases for Intrusion Detection Systems (IDS) and Web Application Firewalls (WAF). So why look for something new, if other tools on the market already provide the same application security benefits? The answer lies not in what RASP does, but in how it works, which makes it more effective across a wide range of scenarios. Let's delve into what clients are asking for, to bring this into focus.

Primary Market Drivers

RASP is a relatively new technology, so current market drivers are tightly focused on addressing the security needs of two distinct "buying centers" which have been largely unaddressed by existing security products. We discovered this important change since our last report in 2017 through hundreds of conversations with buyers, who expressed remarkably consistent requirements. The two buying centers are security and application development teams. Security teams are looking for a reliable WAF replacement without burdensome management requirements, while development teams ask for a security technology that protects applications within the framework of existing development processes. The security team requirement is controversial, so let's start with some background on WAF functions and usability. It is essential to understand the problems driving firms toward RASP. Web Application Firewalls typically employ two methods of threat detection: blacklisting and whitelisting. Blacklisting is detection – and often blocking – of known attack patterns spotted within incoming application requests.
SQL injection is a prime example. Blacklisting is useful for screening out many basic attacks against applications, but new attack variations keep showing up, so blacklists cannot stay current, and attackers keep finding ways to bypass them – SQL injection and its many variants are the best illustration. But whitelisting is where WAFs provide their real value. A whitelist is created by watching and learning acceptable application behaviors, recording legitimate behaviors over time, and preventing any requests which do not match the approved behavior list. This approach offers substantial advantages over blacklisting: the list is specific to the application monitored, which makes it feasible to enumerate good functions – instead of trying to catalog every possible malicious request – and therefore easier (and faster) to spot undesirable behavior. Unfortunately, developers complain that in the normal course of application delivery, a WAF can never complete whitelist creation – 'learning' – before the next version of the application is ready to ship. The argument is that WAFs are inherently too slow to keep up with modern software development, so they devolve to blacklist enforcement. Developers and IT teams alike complain that WAF is not fully API-enabled, and that setup requires major manual effort. Security teams complain they need full-time personnel to manage and tweak rules. And both groups complain that, when they try to deploy into Infrastructure as a Service (IaaS) public clouds, the lack of API support is a deal-breaker. Customers also complain of deficient vendor support beyond basic "virtual appliance" scenarios – including a lack of support for cloud-native constructs like application auto-scaling, ephemeral application stacks, templating, and scripting/deployment support for the cloud. As application teams become more agile, and as firms expand their cloud footprint, traditional WAF becomes less useful.
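To make the blacklist/whitelist contrast concrete, here is a deliberately simplified sketch. The patterns and 'learned' request shapes are invented for illustration, not production rules:

```python
# Toy contrast of the two WAF detection modes described above.
# Patterns and "learned" behavior are invented for illustration.
import re

# Blacklist: known-bad patterns matched against any incoming request.
# Novel attack variants that don't match simply sail through.
BLACKLIST = [
    re.compile(r"union\s+select", re.IGNORECASE),  # SQL injection
    re.compile(r"<script\b", re.IGNORECASE),       # reflected XSS
]

def blacklist_blocks(request_body: str) -> bool:
    return any(p.search(request_body) for p in BLACKLIST)

# Whitelist: legitimate (path, parameter-name) pairs recorded during a
# learning period; anything unseen is rejected -- but the list is stale
# the moment the application ships a new version.
LEARNED = {("/search", "q"), ("/login", "user"), ("/login", "password")}

def whitelist_blocks(path: str, params: dict) -> bool:
    return any((path, name) not in LEARNED for name in params)
```

The whitelist rejects anything outside recorded behavior, which is exactly why it stops working when releases arrive faster than the learning period completes.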
To be clear, WAF can provide real value – especially commercial WAF "Security as a Service" offerings, which focus on blacklisting and additional protections such as DDoS mitigation. These are commonly run in the cloud as a proxy service, often filtering requests "in the cloud" before they pass into your application and/or RASP solution. But they are limited to a 'Half-a-WAF' role, without the sophistication or integration to leverage whitelisting. Traditional WAF platforms continue to work for on-premises applications with slower deployment cycles, where the WAF has time to build and leverage a whitelist. So existing WAF is generally not being "ripped and replaced", but it goes largely unused in the cloud and by more agile development teams. Security teams are therefore looking for an effective application security tool to replace WAF – one which is easier to manage. They need to cover application defects and technical debt, because not every defect can be fixed in code in a timely fashion. Developer requirements are more nuanced: they cite the same end goal, but tend to ask which solutions can be fully embedded into existing application build and certification processes. To work within development pipelines, security tools need to go the extra mile, protecting against attacks while accommodating the disruption underway in the developer community. A solution must be as agile as application development, which often starts with compatible automation capabilities. It needs to scale with the application, typically by being bundled with the application stack at build time. It should 'understand' the application and tailor its protection to the application runtime. A security tool should not require that developers be security experts. Development teams working to "shift left" – getting security metrics and instrumentation earlier in their process – want tools which work in pre-production as well as production.
RASP offers a distinct blend of capabilities and usability options which make it a good fit for these use cases. This is why, over the last three years, we have been fielding several calls each week to discuss it.

Functional Requirements

The market drivers mentioned above change traditional functional requirements – the features buyers are looking for. Effectiveness: This seems like an odd buyer requirement. Why buy a product which does not actually work? The short answer is 'false positives' which waste time and effort. The longer answer is that many security tools don't work well, produce too many false positives to be usable, or require so much maintenance that building your own bespoke tool seems like a better option.


Understanding and Selecting RASP: 2019

During our 2015 DevOps research conversations, developers consistently turned the tables on us, asking dozens of questions about embedding security into their development process. We were surprised to discover how much developers and IT teams are taking larger roles in selecting security solutions, working to embed security products into tooling and build processes. Just as they use automation to build and test product functionality, they automate security too. But the biggest surprise was that every team asked about RASP: Runtime Application Self-Protection. Each team was either considering RASP or already engaged in a proof-of-concept with a RASP vendor. This was typically in response to difficulties with existing Web Application Firewalls (WAF) – most teams still carry significant "technical debt", which requires runtime application protection. Since 2017 we have engaged in over 200 additional conversations on what gradually evolved into 'DevSecOps' – with both security and development groups asking about RASP, how it deploys, and the benefits it can realistically provide. These conversations solidified the requirement for more developer-centric security tools which offer the agility developers demand, provide metrics prior to deployment, and either monitor or block malicious requests in production.

Research Update

Our previous RASP research was published in the summer of 2016. Since then, Continuous Integration for application builds has become the norm, and DevOps is no longer considered a wild idea. Developers and IT folks have embraced it as a viable and popular approach for producing more reliable application deployments. But it has raised the bar for security solutions, which now need to be as agile and embeddable as developers' other tools to be taken seriously. The rise of DevOps has also raised expectations for integration of security monitoring and metrics.
We have witnessed the disruptive innovation of cloud services, with companies pivoting from "We are not going to the cloud." to "We are building out our multi-cloud strategy." in three short years. These disruptive changes have spotlighted the deficiencies of WAF platforms: both their lack of agility and their inability to go "cloud native". Similarly, we have observed advancements in RASP technologies and deployment models. With all these changes it has become increasingly difficult to differentiate one RASP platform from another. So we are kicking off a refresh of our RASP research. We will dive into the new approaches, deployment models, and revised selection criteria for buyers.

Defining RASP

Runtime Application Self-Protection (RASP) is an application security technology which embeds into an application or application runtime environment, examining requests at the application layer to detect attacks and misuse in real time. RASP products typically provide the following capabilities:

  • Unpack and inspect requests in the application context, rather than at the network or HTTP layer
  • Monitor and block application requests; products can sometimes alter requests to strip out malicious content
  • Fully functional through RESTful APIs
  • Protect against all classes of application attacks, and detect whether an attack would succeed
  • Pinpoint the module, and possibly the specific line of code, where a vulnerability resides
  • Instrument application functions and report on usage

As with all our research, we welcome public participation in comments to augment or discuss our content. Securosis is known for research positions which often disagree with vendors, analyst firms, and other researchers, so we encourage civil debate and contribution. The more you add to the discussion, the better the research! Next we will discuss RASP use cases and how they have changed over the last few years.
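As a rough sketch of what 'embedding into the application runtime' can look like, consider a toy middleware that monitors or blocks requests in-process. This is illustrative only – the patterns are invented, and real RASP agents instrument the runtime and frameworks far more deeply than a request wrapper:

```python
# Illustrative sketch of the embedded deployment model, shown as a WSGI
# middleware. Real RASP agents hook the runtime and frameworks directly;
# this toy only inspects the query string with invented demo patterns.
SUSPICIOUS = ("union select", "<script")  # invented demo patterns

class ToyRASP:
    def __init__(self, app, block=True):
        self.app = app
        self.block = block  # monitor-only mode when False

    def __call__(self, environ, start_response):
        query = environ.get("QUERY_STRING", "").lower()
        if any(s in query for s in SUSPICIOUS):
            if self.block:
                start_response("400 Bad Request",
                               [("Content-Type", "text/plain")])
                return [b"request blocked"]
            print("ALERT: suspicious request:", query)  # monitor mode
        return self.app(environ, start_response)

def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

protected = ToyRASP(app)  # wrap the application at startup
```

Because the check runs inside the application process, the same wrapper can be flipped between monitoring and blocking without any network appliance in the path – the deployment property the definition above describes.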


Firestarter: Multicloud Deployment Structures and Blast Radius

In this, our second Firestarter on multicloud deployments, we begin digging into the technological differences between the cloud providers. We start with how to organize your account(s). Each provider uses different terminology, but all support similar hierarchies. From the overlay of AWS organizations to the org-chart-from-the-start of an Azure tenant, we dig into the details and make specific recommendations. We also discuss the inherent security barriers and cover a wee bit of IAM. Watch or listen:


DisruptOps: Breaking Attacker Kill Chains in AWS: IAM Roles

Over the past year I've seen a huge uptick in interest in concrete advice on handling security incidents inside the cloud with cloud-native techniques. As organizations move their production workloads to the cloud, it doesn't take long for security professionals to realize that the fundamentals, while conceptually similar, are quite different in practice. One of those core concepts is the kill chain, a term first coined by Lockheed Martin to describe the attacker's process. Break any link and you break the attack, so this maps well to combining defense in depth with the active components of incident response. Read the full post at DisruptOps.


Firestarter: So you want to multicloud?

This is the first in a series of Firestarters covering multicloud. Using more than one IaaS cloud service provider is, well, a bit of a nightmare. Although this is widely recognized by anyone with hands-on cloud experience, that doesn't mean reality always matches our desires. From executives worried about lock-in to M&A activity, we are finding that most organizations are being pulled into multicloud deployments. In this first episode we lay out the top-level problems and recommend some strategies for approaching them. Watch or listen:


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.