Summary: June 3, 2016

Adrian here. Unlike my business partners, who have been logging thousands of air miles speaking at conferences and with clients around the country, I have been at home. And with the mildest spring in Phoenix’s recorded history, it’s been a blessing: we’re 45 days past the point we typically encounter 100-degree days. Bike rides. Hiking. Running. That is, when I get a chance to sneak outdoors and enjoy it. With our pivot there is _even more_ writing and research going on than normal, if that’s even possible. You will begin to see the results of this work within the next couple of weeks, and we are looking forward to putting a fresh face on the business. That launch will coincide with us posting lots more hands-on advice for cloud security and migrations. And as a heads-up, I’m going to be talking Big Data security over at SC Magazine on the 20th. I’ll tweet out a link (follow @AdrianLane) next week if you’re interested. If you want to subscribe directly to the Friday Summary only list, just [click here](http://eepurl.com/bQfTPH).

## Top Posts for the Week

* [Salesforce to Piggyback on Amazon’s Growing Cloud](http://www.morningstar.com/news/dow-jones/TDJNDN_2016052511417/in-400-million-deal-salesforce-to-piggyback-on-amazons-growing-cloud.html)
* [Ex-VMware CEO now EVP of GCP](http://techcrunch.com/2016/05/30/diane-greene-wants-to-put-the-enterprise-front-and-center-of-google-cloud-strategy/)
* [Insights on Container Security with Azure Container Service (ACS)](https://blogs.msdn.microsoft.com/azuresecurity/2016/05/26/insights-on-container-security-with-azure-container-service-acs/)
* [Comparing IaaS providers](http://fortycloud.com/iaas-security-state-of-the-industry/)
* In ‘not cloud’ news, [Oracle accused of ‘improper accounting’ in an attempt to pump up cloud sales](http://www.computerworld.com/article/3078156/cloud-computing/oracle-employee-says-she-was-fired-for-refusing-to-fiddle-with-cloud-accounts.html)
* [The Business Value of DevOps](http://devops.com/2016/06/02/devops-business-value/)

## Tool of the Week

“Server-less computing? What do you mean?” Rich and I were discussing cloud deployment options with one of the smartest engineering managers I know, and he was totally unaware of server-less cloud computing architectures. If he was unaware of this capability, odds are lots of people are as well. So this week we will not discuss a single tool, but rather a functional paradigm offered by multiple cloud service vendors.

What are they? Stealing from Google’s GCP page on the subject, as they best capture the idea, it’s essentially a “lightweight, event-based, asynchronous solution that allows you to create small, single-purpose functions that respond to Cloud events without the need to manage a server or a runtime environment.” What Google did not mention is that these functions tend to be very fast, and you can run multiple copies in parallel to scale up capacity. It’s really the embodiment of micro-services. You can, in fact, construct an entire application from these functions. For example, take a stream of data and run it through a series of functions to process it. It could be audio or image file processing, real-time event data inspection, data transformation, data enrichment, data comparison, or any combination you can think of. The best part? There is _no server_. There is no OS to set up. No CPU or disk capacity to specify. No configuration files. No network ports to manage. It’s simply a logical function running out there in the ‘ether’ of your public cloud.

Google’s version on GCP is called [Cloud Functions](https://cloud.google.com/functions/docs/). Amazon’s version on AWS is called [Lambda functions](http://docs.aws.amazon.com/lambda/latest/dg/welcome.html). Microsoft’s version on Azure is simply called [Functions](https://azure.microsoft.com/en-us/services/functions/). Check the API documents, as they all work slightly differently, and some have specific storage requirements to act as endpoints, but the idea is the same. And pricing for these services is quite low: with Lambda, for example, the first million requests are free, and it’s 20 cents for every million requests thereafter. This feature is one of the many reasons we tell companies to reconsider application architectures when moving to cloud services. We’ll post some tidbits on security for these services in future blog posts. For now, we recommend you check it out!
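To make this concrete, here is a minimal sketch of what one of these functions looks like in Python, written as an AWS Lambda handler. The event shape (an S3 upload notification) and the bucket contents are assumptions for illustration – the point is that there is nothing to deploy or manage except the function itself:

```python
import json
import urllib.parse

import boto3  # AWS SDK, available by default in the Lambda runtime

s3 = boto3.client("s3")

def handler(event, context):
    """Invoked automatically when a file lands in a bucket.

    No server, no OS, no ports – just this function, triggered by an event.
    """
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        # Inspect the uploaded object's metadata: one small, single-purpose step.
        head = s3.head_object(Bucket=bucket, Key=key)

        # Emit results for the next function in the chain to pick up.
        print(json.dumps({"bucket": bucket, "key": key,
                          "size": head["ContentLength"]}))
    return {"status": "ok"}
```

Chain a few of these together, one per transformation step, and you have the stream-processing pipeline described above, scaled out automatically by the provider.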
## Securosis Blog Posts this Week

* [Incident Response in the Cloud Age: In Action](https://securosis.com/blog/incident-response-in-the-cloud-age-in-action)
* [Understanding and Selecting RASP: Integration](https://securosis.com/blog/understanding-and-selecting-rasp-integration)
* [Firestarter: Where to Start](https://securosis.com/blog/firestarter-where-to-start)
* [Incident Response in the Cloud Age: Addressing the Skills Gap](https://securosis.com/blog/incident-response-in-the-cloud-age-addressing-the-skills-gap)

## Training and Events

We are running two classes at Black Hat USA:

* [Black Hat USA 2016 | Cloud Security Hands-On (CCSK-Plus)](https://www.blackhat.com/us-16/training/cloud-security-hands-on-ccsk-plus.html)
* [Black Hat USA 2016 | Advanced Cloud Security and Applied SecDevOps](https://www.blackhat.com/us-16/training/advanced-cloud-security-and-applied-secdevops.html)


Understanding and Selecting RASP: Integration

This post will offer examples of how to integrate RASP into a development pipeline. We’ll cover both how RASP fits into the technology stack and the development processes used to deliver applications. We will close with a detailed discussion of how RASP differs from other security technologies, including the advantages and tradeoffs of each.

As we mentioned in our introduction, our research into DevOps produced many questions on how RASP works, and whether it is an effective security technology. The questions came from non-traditional buyers of security products: application developers and product managers. Their teams, by and large, were running Agile development processes. The majority were leveraging automation to provide Continuous Integration – essentially rebuilding and retesting the application repeatedly and automatically as new code was checked in. Some had gone as far as Continuous Deployment (CD) and DevOps. To address this development-centric perspective, we offer the diagram below to illustrate a modern Continuous Deployment / DevOps application build environment. Consider each arrow a script automating some portion of source code control, building, packaging, testing, or deployment of an application. Security tools that fit this model are actively being sought by development teams. They need granular API access to functions, quick production of test results, and delivery of status back to supporting services.

## Application Integration

**Installation:** As we mentioned in the technology overview, RASP products differ in how they embed within applications. They all offer APIs to script configuration and runtime policies, but how and where they fit differs slightly between products. Servlet filters, plugins, and library replacement are performed as the application stack is assembled. These approaches augment an application or application ‘stack’ to perform detection and blocking. Virtualization and JVM replacement approaches augment runtime environments, modifying the subsystems that run your application to handle monitoring and detection. In all of these cases, whether on-premise or as a cloud service, the process of installing RASP is pretty much identical to the build or deployment sequence you currently use.

**Rules & Policies:** We found the majority of RASP offerings include canned rules to detect or block most known attacks. Typically this blacklist of attack profiles maps closely to the OWASP Top Ten application vulnerability classes. Protection against common variants of standard attacks, such as SQL injection and session mismanagement, is included. Once these rules are installed they are immediately enforced. You can enable or disable individual rules as you see fit. Some vendors offer specific packages for critical attacks, mapped to specific CVEs such as Heartbleed. Bundles for specific threats, rather than generic attack classes, help security and risk teams demonstrate policy compliance, and make it easier to understand which threats have been addressed. But when shopping for RASP technologies you need to evaluate the provided rules carefully. There are many ways to attack a site with SQL injection, and many ways to detect and block such attacks, so you need to verify the included rules cover most of the known attack variants you are concerned with. You will also want to verify that you can augment or add rules as you see fit – rule management is a challenge for most security products, and RASP is no different.
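To make the filter-style integration and canned-rule blocking concrete, here is a minimal Python sketch – a WSGI middleware analogue of a servlet filter – which screens each inbound request against a toy blacklist before it reaches application code. The signatures and blocking behavior are simplified assumptions, not any vendor’s product:

```python
import re
from urllib.parse import unquote_plus

# Toy blacklist: a couple of SQL injection patterns (real rule sets are far larger).
ATTACK_SIGNATURES = [
    re.compile(r"(?i)union\s+select"),
    re.compile(r"(?i)or\s+1\s*=\s*1"),
]

class RaspStyleFilter:
    """WSGI middleware that inspects requests before they reach the application,
    analogous to a servlet filter installed as the stack is assembled."""

    def __init__(self, app):
        self.app = app  # the wrapped application

    def __call__(self, environ, start_response):
        raw = unquote_plus(environ.get("QUERY_STRING", ""))
        if any(sig.search(raw) for sig in ATTACK_SIGNATURES):
            # Matched a known attack profile: block instead of forwarding.
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Request blocked"]
        return self.app(environ, start_response)

# Wrapping an application is a one-line change to the deployment sequence:
# application = RaspStyleFilter(application)
```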
**Learning the application:** Not all RASP technologies can learn how an application behaves, or offer whitelisting of application behaviors. Those that do vary greatly in how they function. Some behave like their WAF cousins, and need time to learn each application – whether by watching normal traffic over time, or by generating their own traffic to ‘crawl’ each application in a non-production environment. Some function similarly to white-box scanners, using application source code to learn.

**Coverage capabilities:** During our research we found uneven RASP coverage of common platforms. Some started with Java or .NET, and are iterating to cover Python, Ruby, Node.js, and others. Your search for RASP technologies may be strongly influenced by available platform support. More and more, we find applications built as collections of microservices across distributed architectures, with developers mixing and matching languages, choosing what works best in each scenario. If your application is built on Java you’ll have no trouble finding RASP technology to meet your needs. But for mixed environments you will need to carefully evaluate each product’s platform coverage.

## Development Process Integration

Software development teams leverage many different tools to promote security within their overarching application development and delivery processes. The graphic below illustrates the major phases teams go through, with callouts mapping the common types of security tests to specific phases within Agile, CI, and DevOps frameworks. Keep in mind that it is still early days for automated deployment and DevOps. Many security tools were built before rapid, automated deployment existed or was well known. Older products are typically too slow, some cannot focus their tests on new code, and others do not offer API support. So orchestration of security tools – basically what works where – is far from settled territory. The time each type of test takes to run, and the type of result it returns, drive where it fits best into the phases below.

RASP is designed to be bundled into applications, so it is part of the application delivery process. RASP offers two distinct approaches to application security: the first in the pre-release or pre-deployment phase, the second in production. Either way deployment looks very similar, but usage can vary considerably depending on which is chosen.

**Pre-release testing:** This is exactly what it sounds like: RASP is used when the application is fully constructed and going through final tests prior to launch. Here RASP can be deployed in several ways. It can be deployed to monitor only, using application tests and instrumenting runtime behavior to learn how to protect the application. Alternatively RASP can monitor while security tests are invoked in an attempt to break the application, with RASP performing security analysis and transmitting its results. Development and testing teams can learn whether


Understanding and Selecting RASP: Use Cases

As you might expect, the primary function of RASP is to protect web applications against known and emerging threats; it is typically deployed to block attacks at the application layer, before vulnerabilities can be exploited. There is no question that the industry needs application security platforms – major new vulnerabilities are disclosed just about every week. And there are good reasons companies look to outside security vendors to help protect their applications. Most often we hear that firms simply have too many critical vulnerabilities to fix in a timely manner, with many reporting a backlog that would take years to clear. In many cases the issue is legacy applications – ones which probably should never have been put on the Internet. These applications are often unsupported, with the engineers who developed them no longer available, or the platforms so fragile that they become unstable if security fixes are applied. And in many cases it is simply economics: securing the application itself is financially unfeasible, so companies accept the risk and instead address threats externally as best they can.

But if these were the only reasons, organizations could simply apply one of the many older technologies for application security, rather than needing RASP. Astute readers will notice that these are, by and large, the classic use cases for Intrusion Detection Systems (IDS) and Web Application Firewalls (WAFs). So why do people select RASP in lieu of more mature – and in many cases already purchased and deployed – technologies like IDS or WAF? The simple answer is that the use cases are different enough to justify a different solution. RASP moves security one large step from “security bolted on” toward “security from within”. But to understand the differences between use cases, you first need to understand how user requirements differ, and where they are not adequately addressed by those older technologies. The core requirements above are givens, but the differences in how RASP is employed are best illustrated by a handful of use cases.

## Use Cases

**APIs & Automation:** Most of our readers know what Application Programming Interfaces (APIs) are, and how they are used. Less clear is the rapidly expanding need for programmatic interfaces in security products, thanks to the application delivery disruptions caused by cloud computing. Cloud service models – whether deployment is private, public, or hybrid – enable much greater efficiency, as networks, servers, and applications can all be constructed and tested as software. APIs are how we orchestrate building, testing, and deployment of applications. Security products like RASP – unlike IDS and most WAFs – offer their full platform functionality via APIs, enabling software engineers to work with RASP in their native metaphor (see the sketch at the end of this post).

**Development Processes:** As more application development teams tackle application vulnerabilities within the development cycle, they bring different product requirements than IT or security teams applying security controls post-deployment. It’s not enough for security products to identify and address vulnerabilities – they need to fit the development model. Software development processes are evolving (notably via continuous integration, continuous deployment, and DevOps) to leverage the advantages of virtualization and cloud services. Speed is imperative, so RASP embedded within the application stack, providing real-time monitoring and blocking, supports more agile approaches.
**Application Awareness:** As attackers continue to move up the stack, from networks to servers and then to applications, it is becoming harder to distinguish attacks from normal usage. RASP is differentiated by its ability to include application context in security policies. Many WAFs offer ‘positive’ security capabilities (particularly whitelisting valid application requests), but being embedded within applications gives RASP deployments additional application knowledge and instrumentation capabilities. Further, some RASP platforms help developers by specifically referencing the modules or lines of suspect code. For many development teams, potentially better detection capabilities are less valuable than having RASP pinpoint vulnerable code.

**Pre-Deployment Validation:** For cars, pacemakers, and software, decades of experience have proven that the earlier in the production cycle errors are discovered, the easier – and cheaper – they are to fix. This means testing in general, and security testing specifically, works better earlier in the development process. Rather than relying on vulnerability scanners and penetration testers after an application has been launched, we see more and more application security testing performed prior to deployment. Again, this is not impossible with other application-centric tools, but RASP is easier to build into automated testing.

Our next post will talk about deployment, and working RASP into development pipelines.
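As promised above, here is a short sketch of what API-driven RASP management can look like. The endpoint, rule names, and token are hypothetical – we are not showing any specific vendor’s API – but the pattern of scripting policy through a RESTful interface is the point:

```python
import requests  # third-party HTTP library: pip install requests

# Hypothetical RASP management endpoint and credentials, for illustration only.
RASP_API = "https://rasp.example.com/api/v1"
HEADERS = {"Authorization": "Bearer <token>"}

def enable_rule(app_id: str, rule: str, mode: str = "block") -> None:
    """Turn on a detection rule for an application, in monitor or block mode."""
    resp = requests.put(
        f"{RASP_API}/apps/{app_id}/rules/{rule}",
        json={"enabled": True, "mode": mode},
        headers=HEADERS,
        timeout=10,
    )
    resp.raise_for_status()

# A deployment script might flip protections on as part of a release:
if __name__ == "__main__":
    for rule in ("sql-injection", "session-misuse", "xss"):
        enable_rule("payments-app", rule, mode="monitor")
```

Because the whole platform is scriptable, the same calls can run from a build server, which is exactly the development-process fit described above.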


Understanding and Selecting RASP: Technology Overview

This post will discuss the technical facets of RASP products, including how the technology works, how it integrates into an application environment, and the advantages and disadvantages of each approach. We will also spend some time on which application platforms are supported today, as this is one area where every provider is limited and working to expand, so it will impact your selection process. Finally, we will consider a couple of aspects of RASP technology which we expect to evolve over the next couple of years.

## Integration

RASP works at the application layer, so each product needs to integrate with applications somehow. To monitor application requests and make sense of them, a RASP solution must have access to incoming calls. There are several methods for monitoring either application usage (calls) or execution (runtime), each deployed slightly differently, and each gathering a slightly different picture of how the application functions. Solutions are installed into the code production path, or monitor execution at runtime. To block and protect applications from malicious requests, a RASP solution must be inline.

**Servlet Filters & Plugins:** Some RASP platforms are implemented as web server plugins or Java Servlets, typically installed into either Apache Tomcat or Microsoft .NET to process inbound HTTP requests. Plugins filter requests before they reach application code, applying detection rules to each inbound request received. Requests that match known attack signatures are blocked. This is a relatively simple approach for retrofitting protection into the application environment, and can be effective at blocking malicious requests, but it doesn’t offer the in-depth application mapping possible with other types of integration.

**Library/JVM Replacement:** Some RASP products are installed by replacing the standard application libraries, JAR files, or even the Java Virtual Machine. This method basically hijacks calls to the underlying platform, whether library calls or the operating system. The RASP platform passively ‘sees’ application calls to supporting functions, applying rules as requests are intercepted. Under this model the RASP tool has a comprehensive view of application code paths and system calls, and can even learn state machine or sequence behaviors. The deeper analysis provides context, allowing for more granular detection rules.

**Virtualization or Replication:** This integration effectively creates a replica of an application, usually as either a virtualized container or a cloud instance, and instruments application behavior at runtime. By monitoring – and essentially learning – application code pathways, all dynamic or non-static code is mimicked in the cloud. Learning and detection take place in this copy. As with replacement, application paths, request structure, parameters, and I/O behaviors can be ‘learned’. Once learning is complete, rules are applied to application requests, and malicious or malformed requests are blocked.

## Language Support

The biggest divide between RASP providers today is their platform support. For each vendor we spoke with during our research, language support was a large part of their product roadmap. Most provide full support for Java; beyond that, support is hit and miss. .NET support is increasingly common. Some vendors support Python, PHP, Node.js, and Ruby as well. If your application doesn’t run on Java you will need to discuss platform support with vendors. Within the next year or two we expect this issue to largely go away, but for now it is a key decision factor.
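As a rough Python illustration of the library replacement idea, the sketch below wraps a call so the ‘RASP’ layer sees every invocation before the real function runs. The helper function and the inspection rule are deliberately trivial assumptions – the point is the interception pattern, where supporting functions are hijacked transparently to the application:

```python
import functools

def run_query(sql: str) -> list:
    """Stand-in for a database library call (hypothetical application helper)."""
    print(f"executing: {sql}")
    return []

def rasp_wrap(func):
    """Return an instrumented version of a library call: the monitoring layer
    sees every invocation before the real function runs."""
    @functools.wraps(func)
    def wrapper(sql, *args, **kwargs):
        # Toy rule: block stacked queries, a common injection artifact.
        if ";" in sql.strip().rstrip(";"):
            raise PermissionError(f"blocked suspicious statement: {sql!r}")
        return func(sql, *args, **kwargs)
    return wrapper

# 'Library replacement': swap the real call for the instrumented one.
# Application code keeps calling run_query(), unchanged and unaware.
run_query = rasp_wrap(run_query)

run_query("SELECT * FROM users WHERE id = 1")   # passes through
# run_query("SELECT 1; DROP TABLE users")       # would be blocked
```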
## Deployment Models

Most RASP products are deployed as software within an application stack. These products work equally well on-premise and in cloud environments. Some solutions operate fully in a cloud replica of the application, as in the virtualization and replication models mentioned above. Still others leverage a cloud component, essentially sending data from an application instance to a cloud service for request filtering. What generally doesn’t happen is dropping an appliance into a rack, or spinning up a virtual machine and re-routing network traffic.

## Detection Rules

During our interviews with vendors it became clear that most are still focused on negative security: detecting known malicious behavior patterns. These vendors research and develop attack signatures for customers. Each signature explicitly describes one attack, such as SQL injection or a buffer overflow. For example, most products include policies focused on the OWASP Top Ten critical web application vulnerabilities, commonly with multiple policies to detect variations of each threat vector, which makes their rules harder for attackers to evade. And many platforms include specific rules for various Common Vulnerabilities and Exposures, providing the RASP platform with signatures to block known exploits.

## Active vs. Passive Learning

Most RASP platforms learn about the application they are protecting. In some cases this helps refine detection rules, adapting generic rules to match specific application requests. In other cases this adds fraud detection capabilities, as the RASP learns to ‘understand’ application state or recognize an appropriate sequence of steps within the application. Understanding state is a prerequisite for detecting business logic attacks and multi-part transactions. Other RASP vendors are just starting to leverage a positive (whitelisting) security model. These RASP solutions learn how API calls are exercised or what certain lines of code should look like, and block unknown patterns.

To do more than filter known attacks, a RASP tool needs to build a baseline of application behaviors, reflecting the way an application is supposed to work. There are two approaches: passive and active learning. A passive approach builds a behavioral profile as users exercise the application. By monitoring application requests over time, cataloging each request, linking the progression of requests to understand valid sequences of events, and logging request parameters, a RASP system can recognize normal usage. The other baselining approach is similar to what Dynamic Application Security Testing (DAST) platforms use: crawling through all available code paths to map the scope of application features. By generating traffic to exercise new code as it is deployed, application code paths can be synthetically enumerated to produce a complete mapping predictably and more quickly.

Note that RASP’s positive security capabilities are nascent. We see threat intelligence and machine learning as natural fits for RASP, but these capabilities have not yet fully arrived; compared to competing platforms they lack maturity and functionality. But RASP is still relatively new, and


Understanding and Selecting RASP [New Series]

In 2015 we researched Putting Security Into DevOps, with a close look at how automated continuous deployment and DevOps impact IT and application security. We found DevOps provides a very real path to improve application security, using continuous automated testing run each time new code is checked in. We were surprised to discover developers and IT teams taking a larger role in selecting security solutions, bringing a new set of buying criteria to the table. Security products must do more than address application security issues; they need to mesh with continuous integration and deployment approaches, with automated capabilities and better integration with developer tools.

But the biggest surprise was that every team which had moved past Continuous Integration and on to Continuous Deployment (CD) or DevOps asked us about RASP: Runtime Application Self-Protection. Each team was either considering RASP, or already engaged in a proof-of-concept with a RASP vendor. We understand we had a small sample size, and the number of firms who have embraced either CD or DevOps application delivery is a very small subset of the larger market. But we found that once they started continuous deployment, each firm hit the same issues: the ability to automate security, the ability to test in pre-production, configuration skew between pre-production and production, and the ability of security products to identify where in the code issues were detected. In fact it was our DevOps research which placed RASP at the top of our research calendar, thanks to these perceived synergies.

There is no lack of data showing that applications are vulnerable to attack. Many applications are old and simply contain too many flaws to fix. You know – that back-office application that should never have been allowed on the Internet to begin with. In most cases it would be cheaper to rewrite the application from scratch than to patch all the issues, but economics seldom justify (or even permit) the effort. Other application platforms, even those considered ‘secure’, are frequently found to contain vulnerabilities after decades of use. Heartbleed, anyone? New classes of attacks, and even new use cases, have a disturbing ability to unearth previously unknown application flaws. We see two types of applications: those with known vulnerabilities today, and those which will have known vulnerabilities in the future. So tools to protect against these attacks, which mesh well with the disruptive changes occurring in the development community, deserve a closer look.

## Defining RASP

Runtime Application Self-Protection (RASP) is an application security technology which embeds into an application or application runtime environment, examining requests at the application layer to detect attacks and misuse in real time. RASP products typically provide the following capabilities:

* Monitor and block application requests; in some cases they can alter requests to strip out malicious content
* Full functionality through RESTful APIs
* Integration with the application or application instance (virtualization)
* Unpacking and inspecting requests in the application, rather than at the network layer
* Detecting whether an attack would succeed
* Pinpointing the module, and possibly the specific line of code, where a vulnerability resides
* Instrumenting application usage

These capabilities overlap with white-box and black-box scanners, web application firewalls (WAF), next-generation firewalls (NGFW), and even application intelligence platforms.
And RASP can be used in coordination with any or all of those other security tools. So you may be asking yourself, “Why would we need another technology that does some of the same stuff?” The answer lies in how it is used and how it is integrated.

## Differing Market Drivers

As RASP is a (relatively) new technology, current market drivers are tightly focused on addressing the security needs of one or two distinct buying centers. But RASP offers a distinct blend of capabilities and usability options which makes it unique in the market:

* Demand for security approaches focused on development, enabling pre-production and production application instances to provide real-time telemetry back to development tools
* Need for fully automated application security, deployed in tandem with new application code
* Technical debt, where essential applications contain known vulnerabilities which must be protected, either while defects are addressed or permanently if they cannot be fixed for any of various reasons
* Application security supporting development and operations teams who are not all security experts

The remainder of this series will go into more detail on RASP technology, use cases, and deployment options:

* **Technical Overview:** This post will discuss technical aspects of RASP products – including what the technology does; how it works; and how it integrates into libraries, runtime code, or web application services. We will discuss the various deployment models, including on-premise, cloud, and hybrid. We will discuss some of the application platforms supported today, and how we expect the technology to evolve over the next couple of years.
* **Use Cases:** Why and how firms use this technology, with medium and large firm use case profiles. We will consider the advantages of this technology, as well as the problems firms are trying to solve with it – especially deficiencies in code security and development processes. We will provide some detail on monitoring vs. blocking threats, and discuss applicability to security and compliance requirements.
* **Deploying RASP:** This post will focus on how to integrate RASP into development and release management processes. We will also jump into a more detailed discussion of how RASP differs from adjacent technologies like WAF, NGFW, and IDS, as well as static and dynamic application testing. We will walk through the advantages of each technology, with a special focus on operational considerations and keeping detection/blocking up to date, and discuss the tradeoffs compared to other relevant security technologies. This post will close with an example of a development pipeline, and how RASP technology fits in.
* **Buyer’s Guide:** This is a new market segment, so we will offer a basic analysis process for buyers to evaluate products. We will consider integration with existing development processes and rule management.


Friday Summary: April 21, 2016

Adrian here. Starting with the 2008 RSA Conference, Rich and Chris Hoff presented each year on the then-current state of cloud services, and predicted where they felt cloud computing was going. This year Mike helped Rich conclude the series with some new predictions, but more importantly they went back to assess the accuracy of previous prognostications. My takeaway is that their predictions for what cloud services would do, and the value they would provide, were pretty well spot on. And in most cases, when a specific tool or product was identified as being critical, they totally missed the mark. Wildly. Trying to follow cloud services – Azure, AWS, and GCP, to name a few – is like running toward a rapidly expanding explosion’s blast radius. Capabilities grow so fast in depth and breadth that you cannot keep pace. Part of our latest change to the Friday Summary is a weekly tools discussion, to help you with the best of what we see. Ask IT folks or any development team: tools are a critical part of getting the job done. Adoption rates of Docker show how critical tools are to productivity. Keep in mind that we are not making predictions here – we are keenly aware that we don’t know what tools will be used a year from now, much less three. But we do know many old-school platforms don’t work in more agile situations. That’s why it’s essential to share some of the cool stuff we see, so you are aware of what’s available, and can be more productive. You can subscribe to only the Friday Summary.

## Top Posts for the Week

* Amazon Seeing ‘Momentous’ Change of Guard as Public Cloud ‘Booms,’ Says JP Morgan
* 6 roadblocks to cloud security, and how to move past them
* GitLab-Digital Ocean partnership to provide free hosting for continuous online code testing
* How Continuous Delivery Challenges Security
* Google Reimburses Cloud Clients After Massive Google Compute Engine Outage
* What does shared responsibility in the cloud mean?
* Microsoft takes an ‘apps for every platform’ approach with host of DevOps tools
* Internals of App Service Certificate
* Announcing the Preview of Storage Service Encryption for Data at Rest

## Tool of the Week

This week’s tool is DynamoDB, a NoSQL database service native to Amazon AWS. A couple of years ago, when we were architecting our own services on Amazon, we began by comparing RDS and PostgreSQL, and even considered MySQL. After some initial prototyping we realized that a relational – or quasi-relational, for some MySQL variants – platform really did not offer us any advantages, but did come with some limitations, most notably around the use of JSON data elements and the need to store very small records very quickly. We settled on DynamoDB. Call it confirmation bias, but the more we used it the more we appreciated its capabilities, especially around security. A full discussion of DynamoDB’s security capabilities – much less a full platform review – is way beyond the scope of this post, but the following are some security features we found attractive:

**Query Filters:** The ability to use filter expressions to simulate masking and label-based controls over what data is returned to the user. The DynamoDB query can be consistent for all users, with return values filtered after it runs. Filters can work by comparing elements returned from the query, but there is nothing stopping you from filtering on other values associated with the user running the query. And as with relational platforms, you can build your own functions to modify data or behavior depending on the results.
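As a small sketch of the idea – the table and attribute names here are invented for illustration – a filter expression via the boto3 SDK looks like this:

```python
import boto3
from boto3.dynamodb.conditions import Attr, Key

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
table = dynamodb.Table("research_notes")  # hypothetical table

def notes_for_user(customer_id: str, user: str, clearance: str) -> list:
    """Same query for every user; the filter trims what each caller gets back."""
    return table.query(
        KeyConditionExpression=Key("customer_id").eq(customer_id),
        # Masking/label-style control: rows matching the caller's label or owned
        # by the caller, filtered after the query runs.
        FilterExpression=Attr("sensitivity").eq(clearance) | Attr("owner").eq(user),
    )["Items"]
```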
**Fine-grained Authorization:** Access policies can be used to restrict access to specific functions, and limit the impact of those functions to specific tables by user. As with query filters, you can limit the results a specific user sees, or take into account things like geolocation to dynamically alter which data a user can access, using a set of access policies defined outside the application.

**IAM:** The AWS platform offers a solid set of identity and access management capabilities, with group/role-based authorization as you’d expect, which all integrates with DynamoDB. But it also offers temporary credentials for delegation of roles, along with hooks to leverage Google and Facebook identities to control mobile access to the database. Again, we can choose to punt on some IAM responsibilities for speed of deployment, or ratchet down access depending on how the user authenticated.

**JSON Support:** You can both store and query JSON documents in DynamoDB. Programmers reading this will understand why this is important: it enables you to store everything from data to code snippets in a semi-structured way. For security or DevOps folks, consider storing different versions of initialization and configuration scripts for different AMIs, or a list of defining characteristics (e.g., known email addresses, MAC addresses, IP addresses, mobile device IDs, corporate and personal accounts) for specific users.

**Logging:** As with most NoSQL platforms, logging is integrated. You can set what to log conditionally, and even alert based on log conditions. We are just now prototyping log parsing with Lambda functions, streaming the results to S3 for cheap storage. This should be an easy way to enrich logs as well.

DynamoDB has security feature parity with relational databases. As someone who has used relational platforms for the better part of 20 years, I can say that most of the features I want are available in one form or another on the major relational platforms, or can be simulated. The real difference is the ease and speed with which I can leverage these functions.

## Securosis Blog Posts this Week

* How iMessage distributes security to block “phantom devices”
* Summary: April 14, 2016

## Other Securosis News and Quotes

* Rich quoted in Fortune
* Adrian quoted on WAF management

## Training and Events

We are running two classes at Black Hat USA:

* Black Hat USA 2016 | Cloud Security Hands-On (CCSK-Plus)
* Black Hat USA 2016 | Advanced Cloud Security and Applied SecDevOps


Maximizing WAF Value: Management

As described in the last post, deploying a WAF requires knowledge of both application security and your specific application(s). Managing a WAF requires an ongoing effort to keep it current with emerging attacks and frequent application changes. Your organization likely adds new applications and changes network architectures at least a couple of times a year, and we see more and more organizations embracing continuous deployment, which means application functions and usage are constantly changing as well. So you need to adjust your defenses regularly to keep pace.

## Test & Tune

The deployment process is about putting rules in place to protect applications. Managing the WAF involves monitoring it to figure out how well your rules are actually working, which requires spending a bunch of time examining logs to learn what is working and what isn’t. Tuning policies for both protection and performance is not easy. As we have mentioned, you need someone who understands the rule ‘grammars’ and how web protocols work. That person must also understand the applications, the types of data customers should have access to within them, what constitutes bad behavior or application misuse, and the risks the web applications pose to the business. An application security professional needs to balance security, application, and business skills, because the WAF policies they write and manage touch all three disciplines.

The tuning process involves a lot of trial and error: figuring out which changes have unintended consequences, like adding attack surface or breaking application functionality, and which are useful for protecting applications from emerging attacks. You need dual tuning efforts: one for positive rules, which must be updated when new application functionality is introduced, and another for negative rules, which protect applications against emerging attacks. By the time a WAF is deployed, customers should be comfortable creating whitelists for applications, having gained a decent handle on application functionality and leveraged the WAF’s automated learning capabilities. It’s fairly easy to observe these policies in monitor-only mode, but there is still a bit of nail-biting as new capabilities are rolled out. You’ll be waiting for users to exercise a function before you know whether things really work, after which reviewing positive rules gets considerably easier.

Tuning and keeping negative security policies current still relies heavily on WAF vendor and third-party assistance. Most enterprises don’t have research groups studying emerging attack vectors every day. These knowledge gaps, regarding how attackers work and cutting-edge attack techniques, create challenges when writing specific blacklist policies. So you will depend on your vendor for as long as you use a WAF, which is why we stress finding a vendor who acts as a partner, and building support into your contract.

As difficult as WAF management is, there is hope on the horizon, as firms embrace continuous deployment and DevOps, accepting daily updates and reconfiguration. These security teams have no choice but to build and test WAF policies as part of their delivery processes. WAF policies must be generated in tandem with new application features, which requires security and development teams to work shoulder-to-shoulder, integrating security into release management.
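A minimal sketch of what “testing WAF policies as part of delivery” can look like: a CI step that replays known-bad requests against a staging environment behind the WAF, and fails the build if any get through. The staging URL and payloads are assumptions for illustration:

```python
import requests  # pip install requests

STAGING = "https://staging.example.com"  # hypothetical app behind the WAF

# Known-bad requests the negative rules are expected to block.
ATTACK_CASES = [
    {"params": {"q": "' OR 1=1 --"}},                # SQL injection probe
    {"params": {"q": "<script>alert(1)</script>"}},  # reflected XSS probe
]

def test_waf_blocks_attacks():
    """Fail the build if the WAF lets any canned attack through."""
    for case in ATTACK_CASES:
        resp = requests.get(f"{STAGING}/search", params=case["params"], timeout=10)
        # A blocking WAF should return 403 (or similar), never 200 with content.
        assert resp.status_code == 403, f"WAF passed attack: {case['params']}"
```

Run under pytest in the pipeline, this gives immediate feedback when a rule change, or an application change, quietly opens a hole.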
New application code goes through several layers of functional testing, and WAF rules get tested as code moves into a production environment, but before exposure to the general public. This integrated release process is called Blue-Green deployment testing. In this model both current (Blue) and new (Green) application code run on their own servers, in parallel, in a fully functional production environment, ensuring applications run as intended in their ‘real’ environment. The new code is gated at the perimeter firewall or routers, limiting access to in-house testers. This way in-house security and application teams can verify that both the application and the WAF rules function effectively and efficiently. If either fails, the Green deployment is rolled back and Blue continues on. If Green works, it becomes the new public production copy, and Blue is retired. It’s early days for DevOps, but this approach enables daily WAF rule tuning, with immediate feedback on iterative changes. More importantly, there are no surprises when updated code goes into production behind the WAF.

WAF management is an ongoing process – especially in light of the dynamic attack space blacklists address, false-positive alerts which require ruleset tuning, and application changes driving your whitelist. Your WAF management process needs to continually learn and catalog user and application behaviors, collecting metrics along the way. Which metrics are meaningful, and which activities you need to monitor closely, differ between customers. The only consistency is that you cannot measure success without logs and performance metrics. Reviewing what has happened over time, and integrating that knowledge into your policies, is key to success.

## Machine Learning

At this point we need to bring “machine learning” into the discussion. This topic generates confusion, so let’s first discuss what it means to us. In its simplest form, machine learning is looking at application usage metrics to predict bad behavior. These algorithms examine data including stateful user sessions, user behavior, application attack heuristics, function misuse, and high error rates. Additional data sources include geolocation, IP address, and known attacker device fingerprints (IoC). The goal is to detect subtler forms of application misuse, and to catch attacks quickly and accurately. Think of it as a form of 0-day detection: you want to spot behavior you know is bad, even if you haven’t seen that particular kind of badness before.

Machine learning is a useful technique. Detecting attacks as they occur is the ideal we strive for, and automation is critical because you cannot manually review all application activity to figure out what’s an attack. So you’ll need some level of automation, both to scale scarce resources and to fit into new continuous deployment models. But it is still early days for this technology – this class of protection has a way to go for maturity and effectiveness. We see varied success: some types of attacks are spotted, but false positive rates can be high. And we are not fans of the term “machine learning” for this functionality, because it’s too generic. We’ve seen some vendors


Maximizing WAF Value: Deployment

Now we will dig into the myriad ways to deploy a Web Application Firewall (WAF), including where to position it and the pros and cons of on-premise devices versus WAF services. A key part of the deployment process is training the WAF for specific applications and setting up the initial rulesets. We will also highlight effective practices for moving from visibility (getting alerts) to control (blocking attacks). Finally, we will present a Quick Wins scenario, because it’s critical for any security technology to score a ‘win’ early in deployment to prove its value.

## Deployment Models

The first major challenge for anyone using a WAF is getting it set up and effectively protecting applications. Your process starts with deciding where you want the WAF to work: on-premise, cloud-hosted, or a combination. On-premise means installing multiple appliances or virtual instances to balance incoming traffic and ensure they don’t degrade the user experience. With cloud services you have the option of scaling up or down with traffic as needed. We’ll go into the benefits and tradeoffs of each later in this series.

Next you will need to determine how you want the WAF to work: inline or out-of-band. Inline entails installing the WAF ‘in front of’ a web application, so all traffic to and from the application runs through it. This blocks attacks directly as they come in, and in some cases before content is returned to users. Both on-premise WAF devices and cloud WAF services provide this option. Alternatively, some vendors offer an out-of-band option that assesses application traffic via a network tap or spanning port, using indirect methods (TCP resets, network device integration, etc.) to shut down attack sessions. This approach has no side effects on application operation, because traffic still flows directly to the application. Obviously there are both advantages and disadvantages to having a WAF inline, and we don’t judge folks who opt for out-of-band rather than risking the application impact of inline deployment. But out-of-band enforcement can be evaded via tactics like command injection, SQL injection, and stored cross-site scripting (XSS) attacks which don’t require responses from the application. Another issue with out-of-band deployment is that attacks can make it through to applications, which puts them at risk. It’s not always a clear-cut choice, but balancing risks is why you get paid the big bucks, right? When possible we recommend inline deployment, because this model gives you the flexibility to enforce as many or as few blocking rules as you want – while carefully avoiding blocking legitimate traffic to your applications. Out-of-band deployment offers few reliable blocking options.

## Rule Creation

Once the device is deployed, you need to figure out which rules you’ll run on it. Rules embody what you choose to block, and what you let pass through to applications. The creation and maintenance of these rules is where you will spend the vast majority of your time, so we will spend quite a bit of time on it. The first step in rule creation is understanding how rules are built and employed. The two major categories are negative and positive security rules: the former are geared toward blocking known attacks, the latter toward listing acceptable actions for each application. Let’s go into why each is important.

## Negative Security

‘Negative security’ policies essentially block known attacks. The model works by detecting patterns of known malicious behavior, or ‘signatures’.
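Here is a deliberately tiny Python sketch of the signature idea – and of its fragility. The patterns are invented toy examples; note how a trivially obfuscated variant slips past, which is exactly the weakness discussed below:

```python
import re

# One attack class, several variants, each explicitly described (toy examples).
SQLI_SIGNATURES = [
    re.compile(r"(?i)\bunion\s+select\b"),
    re.compile(r"(?i)\bor\s+1\s*=\s*1\b"),
]

def matches_known_attack(payload: str) -> bool:
    """Negative security: block only what a signature explicitly describes."""
    return any(sig.search(payload) for sig in SQLI_SIGNATURES)

assert matches_known_attack("1 UNION SELECT password FROM users")
# The same attack with inline-comment evasion fails to match -> bypasses the WAF.
assert not matches_known_attack("1 UNION/**/SELECT password FROM users")
```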
Things like content scraping, injection attacks, XML attacks, cross-site request forgeries, suspected botnets, Tor nodes, and even blog spam are universal application attacks that affect all sites. Most negative policies come “out of the box” from vendors’ internal teams, who research and develop signatures for customers. Each signature explicitly describes one attack or several variants; these rules typically detect attacks such as SQL injection and buffer overflows. The downside of this method is its fragility: the signature will fail to match any unrecognized variation, which will thus bypass the WAF. If you think “this sounds like traditional endpoint AV”, you’re right. Signatures are only suitable when you can reliably and deterministically describe an attack, and don’t expect them to be immediately bypassed by simple evasion.

WAFs usually provide a myriad of other detection options to compensate for the limitations of static signatures: heuristics, reputation scoring, detection of evasion techniques, and proprietary methods for qualitatively detecting attacks. Each method has its own strengths and weaknesses, and use cases for which it is better or worse suited. These techniques can be combined to produce a risk score for each incoming request, with flexible blocking options based on the severity of the attack or your confidence level. This is similar to the “spam cocktail” approach email security gateways have used for years. But the devil is in the details: there are thousands of attack variations, and figuring out how to apply policies to detect and stop attacks is difficult.

Finally, there are rules you’ll need specifically to protect your web applications from a class of attacks designed to find flaws in the way application developers code, targeting gaps in how they enforce process and transaction state. These include rules to detect fraud, business logic attacks, content scraping, and data leakage, which cannot be detected using generic signatures or heuristics. Examples of such attacks include issuing orders and cancellation requests in rapid succession to confuse the web server or database into revealing or altering shopping cart information, replaying legitimate transactions, and changing the order of events to attack transaction integrity. These application-specific rules are constructed using the same analytic techniques, but rather than focusing on the structure and use of HTTP and XML grammars, a fraud detection policy examines user behavior as it relates to the type of transaction being performed. These policies require a detailed understanding of both how attacks work and how your web applications work.

## Positive Security

The other side of this coin is the positive security model: ‘whitelisting’. Positive security only allows known and authorized web requests, and blocks all others. Old-school network security professionals will recall the term “default deny”; this is the web application analogue. It works


Maximizing Value From Your WAF [New Series]

Web Application Firewalls (WAFs) have been in production use for well over a decade, maturing from point solutions primarily blocking SQL injection into mature application security tools. In most mature security product categories, such as anti-virus, there hasn’t been much to talk about, aside from complaining that not much has changed over the last decade. WAFs are different: they have continued to evolve in response to new threats, new deployment models, and a more demanding clientele’s need to solve more complicated problems. From SQL injection to cross-site scripting (XSS), from PCI compliance to DDoS protection, and from cross-site request forgeries (CSRF) to 0-day protection, WAFs have continued to add capabilities to address emerging use cases. But WAF’s greatest evolution has taken place in areas undergoing heavy disruption, notably cloud computing and threat analytics.

WAFs are back at the top of our research agenda because users continue to struggle with managing WAF platforms as threats evolve. The first challenge is that attacks targeting the application layer require more than simple analysis of individual HTTP requests – they demand systemic analysis of the entire web application session. Detection of typical modern attack vectors, including automated bots, content scraping, fraud, and other types of misuse, requires more information and deeper analysis. Second, as the larger IT industry flails to find security talent to manage WAFs, customers struggle to keep existing devices up and running; they have no choice but to emphasize ease of setup and management during product selection.

So we are updating our WAF research. This brief series will discuss the continuing need for Web Application Firewall technologies, and address the ongoing struggles of organizations to run WAFs. We will also focus on decreasing the time to value for WAF, by updating our recommendations for standing up a WAF for the first time, discussing what it takes to get a basic set of policies up and running, and covering the new capabilities and challenges customers face.

## WAF’s Continued Popularity

The reason WAF emerged in the first place, and still one of the most common reasons customers use it, is that no other product really provides protection at the application layer. Cross-site scripting, request forgeries, SQL injection, and many other attacks which specifically target application stacks tend to go undetected. Intrusion Detection Systems (IDS) and general-purpose network firewalls are poorly suited to protecting the application layer, and remain largely ineffective for that use case. To detect application misuse and fraud, a security solution must understand the dialogue between application and end user. WAFs were designed for this: to understand application protocols so they can identify applications under attack. For most organizations, WAF is still the only way to get some measure of protection for applications.

For many years WAF sales were driven by compliance, specifically a mandate from the Payment Card Industry’s Data Security Standard (PCI-DSS). The standard gave firms the option to either build security into their applications (very hard) or protect them with WAF (easier). The validation requirements for WAF deployments are far less rigorous than for secure code development, so most companies opted for WAF. Shocking! You basically plug one in and get a certification.
WAF has long been the fastest and most cost-effective way to satisfy Requirement 6 of the PCI-DSS standard, but it turns out there is long-term value as well. Users now realize that leveraging a WAF is both faster and cheaper than fixing bug-ridden legacy applications. The need has morphed from “get compliant fast!” to “secure legacy apps for less!”

## WAF Limitations

The value of WAF is muted by difficulties in deployment and ongoing platform management. A tool cannot provide sustainable value if it cannot be effectively deployed and managed. The last thing organizations need is yet another piece of software sitting on a shelf – or, even worse, an out-of-date WAF providing a false sense of security. Our research highlighted the following issues which contribute to insecure WAF implementations, allowing penetration testers and attackers to evade WAFs and target applications directly:

* **Ineffective Policies:** Most firms complain about maintaining WAF policies. Some complaints are about policies falling behind new application features; others about policies failing to keep pace with emerging threats. Equally troubling is the lack of information on which policies are effective, leaving security professionals flying blind. Better metrics and analytics are needed to tell users what’s working and how to improve.
* **Breaking Apps:** Security policies – the rules that determine what a WAF blocks and what passes through to applications – can and do sometimes block legitimate traffic. Web application developers are incentivized to push new code as often as possible. Code changes and new functionality often violate existing policies, so unless someone updates the whitelist of approved application requests for every application change, a WAF will block legitimate requests. Predictably, this pisses off customers and operations folks alike. Firms trying to “ratchet up” security by tightening policies may also break applications, or generate too many false positives for the SOC to handle, leading to legitimate attacks going ignored and unaddressed in a flood of irrelevant alerts.
* **Skills Gap:** As we all know, application security is non-trivial. The skills to understand spoofing, fraud, non-repudiation, denial of service attacks, and application misuse within the context of an application are rarely all found in any one individual. But they are all needed to be an effective WAF administrator. Many firms – especially those in retail – complain that “we are not in the business of security”: they want to outsource WAF management to someone with the necessary skills. Still others find their WAF in purgatory after their administrator is offered more money elsewhere, leaving behind no one who understands the policies. But outsourcing is no panacea – even a third-party service provider needs the configuration to be somewhat stable and burned in before they can accept managerial responsibility. Without in-house talent for configuration, you are hiring professional services teams to get up and running, then scrambling to find budget for this unplanned expense.
* **Cloud Deployments:** Your on-premise applications are covered


Securing Hadoop: Security Recommendations for Hadoop [New Paper]

We are pleased to release our updated white paper on big data security: Securing Hadoop: Security Recommendations for Hadoop Environments. Just about everything has changed in the four years since we published the original. Hadoop has solidified its position as the dominant big data platform by constantly advancing in function and scale. While the ability to customize a Hadoop cluster to suit diverse needs has been its main driver, the security advances make Hadoop viable for enterprises. Whether embedded directly into Hadoop or deployed as add-on modules, services like identity, encryption, log analysis, key management, cluster validation, and fine-grained authorization are all available.

Our goal for this research paper is first to introduce these technologies to IT and security teams, and then to help them assemble these technologies into a coherent security strategy. The paper provides a high-level overview of security challenges for big data environments. From there we discuss the security technologies available for the Hadoop ecosystem, and then sketch out a set of recommendations to secure big data clusters. Our recommendations map threats and compliance requirements directly to supporting technologies, to facilitate your selection process. We outline how these tactical responses work within the security architectures firms employ, tailoring their approaches to the tools and technical talent on hand.

Finally, we would like to thank Hortonworks and Vormetric for licensing this research. Without firms who appreciate our work enough to license our content, we could not bring you quality research free! We hope you find this research helpful in understanding big data and its associated security challenges. You can download a free copy of the white paper from our research library, or grab a copy directly: Securing Hadoop: Security Recommendations for Hadoop Environments (PDF).


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context or contrast, or to make a point (which is very, very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited to a series of blog posts, the content will be chunked up and posted at or before release of the paper, to solicit public feedback and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.