
Incident Response in the Cloud Age: Shifting Foundations

Since we published our React Faster and Better research and Incident Response Fundamentals, quite a bit has changed about responding to incidents. First and foremost, incident response is a thing now. Not that it wasn’t a discipline mature security organizations focused on before 2012, but since then a lot more resources and funding have shifted away from ineffective prevention towards detection and response. Which we think is awesome. Of course, now that I/R is a thing and some organizations may actually have decent response processes, the foundation underneath is shifting. But that shouldn’t be a surprise – if you wanted a static existence, technology probably isn’t the best industry for you, and security is arguably the most dynamic part of technology. We see the cloud revolution taking root, promising to upend and disrupt almost every aspect of building, deploying, and operating applications. We continue to see network speeds increase, putting scaling pressure on every aspect of your security program, including response. The advent of threat intelligence, as a means to get smarter and leverage the experiences of other organizations, is also having a dramatic impact on the security business, particularly incident response. Finally, the security industry faces an immense skills gap, which is far more acute in specialized areas such as incident response. So whatever response process you roll out needs to leverage technological assistance – otherwise you have little chance of scaling it to keep pace with accelerating attacks.

This new series, which we are calling “Incident Response in the Cloud Age”, will discuss these changes and how your I/R process needs to evolve to keep up. As always, we will conduct this research using our Totally Transparent Research methodology, which means we’ll post everything to the blog first, and solicit feedback to ensure our positions are on point. We’d also like to thank SS8 for being a potential licensee of the content. One of the unique aspects of how we do research is that we call them a potential licensee because they have no commitment to license, nor do they have any more influence over our research than you do. This approach enables us to write the kind of impactful research you need to make better and faster decisions in your day-to-day security activities.

Entering the Cloud Age

Evidently there is this thing called the ‘cloud’, which you may have heard of. As we have described for our own business, we are seeing cloud computing change everything. That means existing I/R processes now need to factor in the cloud, which changes both architecture and visibility. There are two key impacts on your I/R process from the cloud. The first is governance, as your data now resides in a variety of locations and with different service providers. Various parties are required to participate as you try to investigate an attack, and the process integration of a multi-organization response is… um… challenging. The other big difference in cloud investigation is visibility, or the lack thereof. You don’t have access to the network packets in an Infrastructure as a Service (IaaS) environment, nor can you see into a Platform as a Service (PaaS) offering to determine what happened. That means you need to be a lot more creative about gathering telemetry on an ongoing basis, and figuring out how to access what you need during an investigation.

Speed Kills

We have also seen a substantial increase in the speed of networks over the past 5 years, especially in data centers.
So if network forensics is part of your I/R toolkit (as it should be), how you architect your collection environment, and whether you actually capture and store full packets, are key decisions. Meanwhile data center virtualization is making it harder to know which servers are where, which makes investigation a bit more challenging.

Getting Smarter via Threat Intelligence

Sharing attack data between organizations still feels a bit strange for long-time security professionals like us. The security industry resisted admitting that successful attacks happen (yes, that ego thing got in the way), and held the entirely reasonable concern that sharing company-specific data could provide adversaries with information to facilitate future attacks. The good news is that security folks got over their ego challenges, and finally accept that they cannot stand alone and expect to understand the full extent of the attacks that come at them every day. So sharing external threat data is now common, and both open source and commercial offerings are available to provide insight, which is improving incident response. We documented how the I/R process needs to change to leverage threat intelligence, and you can refer to that paper for detail on how that works.

Facing down the Skills Gap

If incident response wasn’t already complicated enough because of the changes described above, there just aren’t enough skilled computer forensics specialists (who we call forensicators) to meet industry demand. You cannot just throw people at the problem, because they don’t exist. So your team needs to work smarter and more efficiently. That means using technology more for gathering and analyzing data, structuring investigations, and automating what you can. We will dig into emerging technologies in detail later in this series.

Evolving Incident Response

Like everything else in security, incident response is changing. The rest of this series will discuss exactly how. First we’ll dig into the impacts of the cloud, faster and virtualized networks, and threat intelligence on your incident response process. Then we’ll explore how to streamline the response process to address the lack of people available to do the heavy lifting of incident response. Finally we’ll bring everything together with a scenario that illuminates the concepts in a far more tangible fashion. So buckle up – it’s time to evolve incident response for the next era in technology: the Cloud Age.
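To make the telemetry point above a bit more concrete, here is a minimal sketch of automated evidence gathering in an IaaS environment. It assumes AWS, working credentials, and boto3; the event names are illustrative picks rather than a complete triage playbook, so adapt the idea to your own provider and threat model.

```python
# A minimal sketch, assuming AWS as the IaaS provider, working credentials,
# and boto3 installed. The event names below are illustrative picks for triage,
# not a complete incident response playbook.
import json
from datetime import datetime, timedelta, timezone

import boto3

SUSPICIOUS_EVENTS = ["ConsoleLogin", "DeleteTrail", "StopLogging"]  # example choices

def recent_cloudtrail_findings(hours=24):
    """Pull recent CloudTrail events worth a second look during an investigation."""
    cloudtrail = boto3.client("cloudtrail")
    start = datetime.now(timezone.utc) - timedelta(hours=hours)
    findings = []
    for event_name in SUSPICIOUS_EVENTS:
        response = cloudtrail.lookup_events(
            LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": event_name}],
            StartTime=start,
            MaxResults=50,
        )
        for event in response.get("Events", []):
            detail = json.loads(event["CloudTrailEvent"])
            findings.append({
                "time": str(event["EventTime"]),
                "event": event["EventName"],
                "user": event.get("Username", "unknown"),
                "source_ip": detail.get("sourceIPAddress"),
            })
    return findings

if __name__ == "__main__":
    for finding in recent_cloudtrail_findings():
        print(finding)
```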


Summary: May 19, 2016

Rich here. Not a lot of news from us this week, because we’ve mostly been traveling, and for Mike and me the kids’ school year is coming to a close. Last week I was at the Rocky Mountain Information Security Conference in Denver. The Denver ISSA puts on a great show, but due to some family scheduling I didn’t get to see as many sessions as I hoped. I presented my usual pragmatic cloud pitch, a modification of my RSA session from this year. It seems one of the big issues organizations are still facing is figuring out where to get started on cloud/DevOps, along with understanding and implementing the fundamentals. For example, one person in my session mentioned his team thought they were doing DevOps, but actually mashed some tools together without understanding the philosophy or building a continuous integration pipeline. Needless to say, it didn’t go well. In other news, our advanced Black Hat class sold out, but there are still openings in our main class. I highlighted the course differences in a post. You can subscribe to only the Friday Summary.

Top Posts for the Week

  • Another great post from the Signal Sciences team. This one highlights a session from DevOps Days Austin by Dan Glass of American Airlines. AA has some issues unique to their industry, but Dan’s concepts map well to any existing enterprise struggling to transition to DevOps while maintaining existing operations. Not everyone has the luxury of building everything from scratch. Avoiding the Dystopian Road in Software.
  • One of the most popular informal talks I give clients and teach is how AWS networking works. It is completely based on this session, which I first saw a couple of years ago at the re:Invent conference – I just cram it into 10-15 minutes and skip a lot of the details. While AWS-specific, this is mandatory for anyone using any kind of cloud. The particulars of your situation or provider will differ, but not the issues. Here is the latest, with additional details on service endpoints: AWS Summit Series 2016 | Chicago – Another Day, Another Billion Packets.
  • In a fascinating move, Jenkins is linking up with Azure, and Microsoft is tossing in a lot of support. I am actually a fan of running CI servers in the cloud for security, so you can tie them into cloud controls that are hard to implement locally, such as IAM. Announcing collaboration with the Jenkins project.
  • Speaking of CI in the cloud, this is a practical example from Flux7 of adding security to Git and Jenkins using Amazon’s CodeDeploy. TL;DR: you can leverage IAM and Roles for more secure access than you could achieve normally: Improved Security with AWS CodeCommit.
  • Netflix releases a serverless Open Source SSH Certificate Authority. It runs on AWS Lambda, and is definitely one to keep an eye on: Netflix/bless.
  • AirBnB talks about how they integrated syslog into AWS Kinesis using osquery (a Facebook tool I think I will highlight as tool of the week): Introducing Syslog to AWS Kinesis via Osquery – Airbnb Engineering & Data Science.

Tool of the Week

osquery by Facebook is a nifty Open Source tool to expose low-level operating system information as a real-time relational database. What does that mean? Here’s an example that finds every process running on a system where the binary is no longer on disk (a direct example from the documentation, and common malware behavior): SELECT name, path, pid FROM processes WHERE on_disk = 0; This is useful for operations, but it’s positioned as a security tool.
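If you want to script that kind of query as part of routine monitoring, here is a minimal sketch that shells out to osqueryi and parses the JSON output. It assumes osquery is installed and on the PATH; scheduled queries via osqueryd would be the more production-ready route.

```python
# A minimal sketch, assuming osquery is installed and osqueryi is on the PATH.
# For production use you would schedule queries with osqueryd instead of
# shelling out on demand.
import json
import subprocess

QUERY = "SELECT name, path, pid FROM processes WHERE on_disk = 0;"

def processes_missing_from_disk():
    """Return processes whose backing binary has been deleted from disk."""
    result = subprocess.run(
        ["osqueryi", "--json", QUERY],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

if __name__ == "__main__":
    for proc in processes_missing_from_disk():
        print(f"{proc['pid']}: {proc['name']} ({proc['path']})")
```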
You can use it for File Integrity Monitoring, real-time alerting, and a whole lot more. The site even includes ‘packs’ for common needs including OS X attacks, compliance, and vulnerability management.

Securosis Blog Posts this Week

  • Incident Response in the Cloud Age: Shifting Foundations
  • SIEM Kung Fu [New Paper]
  • Updates to Our Black Hat Cloud Security Training Classes
  • Understanding and Selecting RASP: Technology Overview
  • Understanding and Selecting RASP edited [New Series]
  • Shining a Light on Shadow Devices: Seeing into the Shadows
  • Shining a Light on Shadow Devices: Attacks

Other Securosis News and Quotes

Another quiet week…

Training and Events

We are running two classes at Black Hat USA. Early bird pricing ends in a month, just a warning:

  • Black Hat USA 2016 | Cloud Security Hands-On (CCSK-Plus)
  • Black Hat USA 2016 | Advanced Cloud Security and Applied SecDevOps


Understanding and Selecting RASP: Technology Overview

This post will discuss technical facets of RASP products, including how the technology works, how it integrates into an application environment, and the advantages or disadvantages of each approach. We will also spend some time on which application platforms are supported today, as this is one area where each provider is limited and working to expand, and it will impact your selection process. We will also consider a couple of aspects of RASP technology which we expect to evolve over the next couple of years.

Integration

RASP works at the application layer, so each product needs to integrate with applications somehow. To monitor application requests and make sense of them, a RASP solution must have access to incoming calls. There are several methods for monitoring either application usage (calls) or execution (runtime), each deployed slightly differently, gathering a slightly different picture of how the application functions. Solutions are installed into the code production path, or monitor execution at runtime. To block and protect applications from malicious requests, a RASP solution must be inline.

  • Servlet Filters & Plugins: Some RASP platforms are implemented as web server plug-ins or Java Servlets, typically installed into either Apache Tomcat or Microsoft .NET to process inbound HTTP requests. Plugins filter requests before they reach application code, applying detection rules to each inbound request. Requests that match known attack signatures are blocked. This is a relatively simple approach for retrofitting protection into the application environment, and can be effective at blocking malicious requests, but it doesn’t offer the in-depth application mapping possible with other types of integration.
  • Library/JVM Replacement: Some RASP products are installed by replacing the standard application libraries, JAR files, or even the Java Virtual Machine. This method basically hijacks calls to the underlying platform, whether library calls or the operating system. The RASP platform passively ‘sees’ application calls to supporting functions, applying rules as requests are intercepted. Under this model the RASP tool has a comprehensive view of application code paths and system calls, and can even learn state machine or sequence behaviors. The deeper analysis provides context, allowing for more granular detection rules.
  • Virtualization or Replication: This integration effectively creates a replica of an application, usually as either a virtualized container or a cloud instance, and instruments application behavior at runtime. By monitoring – and essentially learning – application code pathways, all dynamic (non-static) code is mimicked in the cloud. Learning and detection take place in this copy. As with replacement, application paths, request structure, parameters, and I/O behaviors can be ‘learned’. Once learning is complete, rules are applied to application requests, and malicious or malformed requests are blocked.

Language Support

The biggest divide between RASP providers today is their platform support. For each vendor we spoke with during our research, language support was a large part of their product roadmap. Most provide full support for Java; beyond that, support is hit and miss. .NET support is increasingly common. Some vendors support Python, PHP, Node.js, and Ruby as well. If your application doesn’t run on Java you will need to discuss platform support with vendors. Within the next year or two we expect this issue to largely go away, but for now it is a key decision factor.
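Before moving on to deployment, it may help to see the servlet filter integration style in miniature. Here is a hedged sketch written as Python WSGI middleware, chosen only because it is compact; real RASP products hook Java or .NET runtimes and apply far richer rules than the simplistic placeholder signatures below.

```python
# A minimal sketch of the servlet-filter integration style, approximated as
# Python WSGI middleware. Real RASP products hook Java or .NET runtimes and
# apply much richer detection; the signatures here are deliberately simplistic
# placeholders.
import re

SIGNATURES = [
    re.compile(r"(?i)union\s+select"),   # crude SQL injection pattern
    re.compile(r"(?i)<script[^>]*>"),    # crude reflected XSS pattern
]

class RequestFilter:
    """Inspect each request before the wrapped application ever sees it."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        query_string = environ.get("QUERY_STRING", "")
        if any(sig.search(query_string) for sig in SIGNATURES):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Request blocked"]
        return self.app(environ, start_response)

# Usage: wrap any WSGI application, e.g. app = RequestFilter(app)
```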
Deployment Models

Most RASP products are deployed as software, within an application software stack. These products work equally well on-premise and in cloud environments. Some solutions operate fully in a cloud replica of the application, as in the virtualization and replication models mentioned above. Still others leverage a cloud component, essentially sending data from an application instance to a cloud service for request filtering. What generally doesn’t happen is dropping an appliance into a rack, or spinning up a virtual machine and re-routing network traffic.

Detection Rules

During our interviews with vendors it became clear that most are still focused on negative security: they detect known malicious behavior patterns. These vendors research and develop attack signatures for customers. Each signature explicitly describes one attack, such as SQL injection or a buffer overflow. For example, most products include policies focused on the OWASP Top Ten critical web application vulnerabilities, commonly with multiple policies to detect variations of the top ten threat vectors. This makes their rules harder for attackers to evade. And many platforms include specific rules for various Common Vulnerabilities and Exposures, providing the RASP platform with signatures to block known exploits.

Active vs. Passive Learning

Most RASP platforms learn about the application they are protecting. In some cases this helps to refine detection rules, adapting generic rules to match specific application requests. In other cases this adds fraud detection capabilities, as the RASP learns to ‘understand’ application state or recognize an appropriate set of steps within the application. Understanding state is a prerequisite for detecting business logic attacks and multi-part transactions. Other RASP vendors are just starting to leverage a positive (whitelisting) security model. These RASP solutions learn how API calls are exercised or what certain lines of code should look like, and block unknown patterns. To do more than filter known attacks, a RASP tool needs to build a baseline of application behaviors, reflecting the way an application is supposed to work. There are two approaches: passive and active learning. A passive approach builds a behavioral profile as users use the application. By monitoring application requests over time and cataloging each request, linking the progression of requests to understand valid sequences of events, and logging request parameters, a RASP system can recognize normal usage. The other baselining approach is similar to what Dynamic Application Security Testing (DAST) platforms use: by crawling through all available code paths, the scope of application features can be mapped. By generating traffic to exercise new code as it is deployed, application code paths can be synthetically enumerated to produce a complete mapping predictably and more quickly. Note that RASP’s positive security capabilities are nascent. We see threat intelligence and machine learning capabilities as a natural fit for RASP, but these capabilities have not yet fully arrived. Compared to competing platforms, they lack maturity and functionality. But RASP is still relatively new, and


Shining a Light on Shadow Devices: Seeing into the Shadows

Throughout this Shadow Devices series we have discussed the millions (likely billions) of new devices which will be connecting to networks over the coming decade. Clearly many of them won’t be traditional computer devices, which can be scanned and assessed for security issues. We called these other devices shadow devices because this is about more than the “Internet of Things” – any networked device which can be used to steal information – whether directly or by providing a stepping stone to targeted information – needs to be considered. Our last post explained how peripherals, medical devices, and control systems can be attacked. We showed that although malware attacks on traditional computing and mobile devices get most of the attention in IT security circles, these other devices shouldn’t be ignored. As with most things, it’s not a matter of if but when these lower-profile devices will be used to perpetrate a major attack. So now what? How can you figure out your real attack surface, and then move to protect the systems and devices providing access to your critical data? It’s back to Security 101, which pretty much always starts with visibility, and then moves to control once you figure out what you have and how it is exposed.

Risk Profiling

Your first step is to shine a light into the ‘shadows’ on your network to gain a picture of all devices. You have a couple of options to gain this visibility:

  • Active Scanning: You can run a scan across your entire IP address space to find out what’s there. This can be a serious task for a large address space, consuming resources while you run your scanner(s). This process can only happen periodically, because it wouldn’t be wise to run a scanner continuously on internal networks. Keep in mind that some devices, especially ancient control systems, were not built with resilience in mind, so even a simple vulnerability scan can knock them over.
  • Passive Monitoring: The other alternative is basically to listen for new devices by monitoring network traffic. This assumes that you have access to all traffic on all networks in your environment, and that new devices will communicate with something. Pitfalls of this approach include needing access to the entire network, and that new devices can spoof other network devices to evade detection. On the plus side, you won’t knock anything over by listening.

But we don’t see this as an either/or question for gaining full visibility into all devices on the network. There is a time and place for active scanning, but care must be taken to not take brittle systems down or consume undue network resources. We have also seen many scenarios where passive monitoring is needed to find new devices quickly once they show up on the network. Once you have full visibility, the next step is to identify devices. You can certainly look for indicators of what type of device you found during an active scan. This is harder when passively scanning, but devices can be identified by traffic patterns and other indicators within packets. A critical selection criterion for passive monitoring technology is the vendor’s ability to identify the bulk of devices likely to show up on your network. Obviously in a very dynamic environment a fraction of devices cannot be identified through scanning or monitoring network traffic. But you want these devices to be a small minority, because anything you can’t identify through scanning requires manual intervention.
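As a tiny taste of the passive approach, here is a hedged sketch that watches ARP traffic to notice devices announcing themselves on the local segment. It assumes scapy is installed and the script has sniffing privileges; commercial passive monitoring products add fingerprinting, broader protocol coverage, and actual device identification.

```python
# A minimal sketch, assuming scapy is installed and the script has privileges
# to sniff traffic. Commercial passive monitoring adds fingerprinting, broader
# protocol coverage, and real device identification.
from scapy.all import ARP, sniff

seen = {}

def note_device(packet):
    """Record each new MAC/IP pair as it announces itself via ARP."""
    if packet.haslayer(ARP):
        mac, ip = packet[ARP].hwsrc, packet[ARP].psrc
        if mac not in seen:
            seen[mac] = ip
            print(f"new device: {mac} at {ip}")

if __name__ == "__main__":
    sniff(filter="arp", prn=note_device, store=0)
```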
Once you know what kind of device you are dealing with, you need to start evaluating risk, a combination of the device’s vulnerability and exploitability. Vulnerability is a question of what could possibly happen. An attacker can do certain things with a printer which are impossible with an infusion pump, and vice-versa. So device type is key context. You also need to assess security vulnerabilities within the device, which may warrant an active scan upon identification for more granular information. As we warned above, be careful with active scanning to avoid risking device availability. You can glean some information about vulnerabilities through passive scanning, but it requires quite a bit more interpretation, and is subject to higher false positive rates. Exploitability depends on the security controls and/or configurations already in place on the device. A warehouse picker robot may run embedded Windows XP, but if the robot also runs application whitelisting, malicious code cannot execute, so it might show up as vulnerable but not exploitable. The other main aspect of exploitability is attack path. If an external entity cannot access the warehouse system because it has no Internet-facing networks, even the vulnerable picker robot poses relatively little risk unless the physical location is attacked. The final aspect of determining risk to a device is looking at what it has access to. If a device has no access to anything sensitive, then again it poses little risk. Of course that assumes your networks are sufficiently isolated. Determining risk is all about prioritization. You only have so many resources, so you need to choose what to fix wisely, and evaluating risk is the best way to allocate those scarce resources.

Controls

Once you know what’s out there in the shadows, your next step is to figure out whether, and perhaps how, to protect those devices. This again comes back to the risk profiles discussed above. It doesn’t make much sense to spend a lot of time and money protecting devices which don’t present much risk to the organization. But when a device does present sufficient risk, how will you go about protecting it? First things first: you should be making sure the device is configured in the most secure fashion. Yeah, yeah, that sounds trite and simple, but we mention it anyway because it’s shocking how many devices can be exploited due to open services that can easily be turned off. Once you have basic device hygiene taken care of, here are some other ways to protect it:

  • Active Controls: The first and most direct way to protect a shadow device is by implementing an active control on it. The available controls vary depending on the kind of device


SIEM Kung Fu [New Paper]

In the SIEM Kung Fu paper, we tell you what you need to know to get the most out of your SIEM, and solve the problems you face today by increasing your capabilities (the promised Kung Fu). We would like to thank Intel Security for licensing the content in this paper. Our unique Totally Transparent Research model allows us to do objective and useful research and still pay our bills, so we’re thankful to all of the companies that license our research. Check out the page in the research library or download the paper directly (PDF).


Updates to Our Black Hat Cloud Security Training Classes

We have been getting questions on our training classes this year, so I thought I should fill everyone in on the major updates to our ‘old’ class, and what to expect from our new ‘advanced’ class. The short version is that we are adding new material to our basic class, to align with upcoming Cloud Security Alliance changes and cover DevOps. It will still include some advanced material, but we are assuming the top 10% (in terms of technical skills) of students will move to our new advanced class instead, enabling us to focus the basic class on the meaty part of the bell curve. Over the past few years our Black Hat Cloud Security Hands-On class became so popular that we kept adding instructors and seats to keep up with demand. Last year we sold out both classes and increased the size to 60 students, then still sold out the weekday class. That’s a lot of students, but the course is tightly structured with well-supported labs to ensure we can still provide a high-quality experience. We even added a bunch of self-paced advanced labs for people with stronger skills who wanted to move faster. The problem with that structure is that it really limits how well we can support more advanced students, especially because we get a much wider range of technical skills than we expected at a Black Hat branded training. Every year we get sizable contingents from both extremes: people who no longer use their technical skills (managers/auditors/etc.), and students actively working in technology with hands-on cloud experience. When we started this training 6 years ago, nearly none of our students had ever launched a cloud instance. Self-paced labs work reasonably well, but don’t really let you dig in the same way as focused training. There are also many major cloud advances we simply cannot cover in a class which has to appeal to such a wide range of students. So this year we launched a new class (which has already sold out, and expanded), and are updating the main class. Here are some details, with guidance on which is likely to fit best: Cloud Security Hands-On (CCSK-Plus) is our introductory 2-day class for those with a background in security, but who haven’t worked much in the cloud yet. It is fully aligned with the Cloud Security Alliance CCSK curriculum: this is where we test out new material and course designs to roll out throughout the rest of the CSA. This year we will use a mixed lecture/lab structure, instead of one day of lecture with labs the second day. We have started introducing material to align with the impending CSA Guidance 4.0 release, which we are writing. We still need to align with the current exam, because the class includes a token to take the test for the certificate, but we also wrote the test, so we should be able to balance that. This class still includes extra advanced material not normally in the CSA training, plus the self-paced advanced labs. Time permitting, we will also add an intro to DevOps. But if you are more advanced you should really take Advanced Cloud Security and Applied SecDevOps instead. This 2-day class assumes you already know all the technical content in the Hands-On class and are comfortable with basic administration skills, launching instances in AWS, and scripting or programming.
I am working on the labs now, and they cover everything from setting up accounts and VPCs usable for production application deployments, to building a continuous deployment pipeline and integrating security controls, to integrating PaaS services like S3, SQS, and SNS, to security automation through code (both serverless with Lambda functions and server-based). If you don’t understand any of that, take the Hands-On class instead. The advanced class is nearly all labs, and even most lecture will be whiteboards instead of slides. The labs aren’t as tightly scripted, and there is a lot more room to experiment (and thus more margin for error). They do, however, all interlock to build a semblance of a real production deployment with integrated security controls and automation. I was pretty excited when I figured out how to build them up and tie them together, instead of having everything come out of a bucket of unrelated tasks. Hopefully that clears things up, and we look forward to seeing some of you in August. Oh, and if you work for BIGCORP and can’t make it, we also provide private trainings these days. Here are the signup links:

  • Black Hat USA 2016 | Cloud Security Hands-On (CCSK-Plus)
  • Black Hat USA 2016 | Advanced Cloud Security and Applied SecDevOps


Shining a Light on Shadow Devices: Attacks

What is the real risk of the Shadow Devices we described back in our first post? It is clear that most organizations don’t really take these risks seriously. They certainly don’t have workarounds in place, or proactively segment their environments, to ensure that compromising these devices doesn’t give attackers an opportunity to gain a foothold. Let’s dig into three broad device categories to understand what attacks look like.

Peripherals

Do you remember how cool it was when the office printer got a WiFi connection? Suddenly you could put it wherever you wanted, preserving the Feng Shui of your office, instead of having it tethered to the network drop. And when the printer makers started calling their products image servers, not just printers? Yeah, that was when they started becoming more intelligent, and also tempting targets. But what is the risk of taking over a printer? It turns out that even in our paperless offices of the future, organizations still print out some pretty sensitive stuff, and stuff they don’t want to keep may be scanned for storage/archival. Whether going in or out, sensitive content is hitting imaging servers. Many of them store the documents they print and scan until their memory (or embedded hard drive) is written over. So sensitive documents persist on devices, accessible to anyone with access to the device, either physical or remote. Even better, many printers are vulnerable to common wireless attacks like the evil twin, where a fake device with a stronger wireless signal impersonates the real printer. So devices connect to – and print through – the evil twin instead of the real printer. The same attack works with routers too, but the risk there is much broader. Nice. But that’s not all! The devices typically use some kind of stripped-down UNIX variant at the core, and many organizations don’t change the default passwords on their image servers, enabling attackers to trigger remote firmware updates and install compromised versions of the printer OS. Another attack vector is that these imaging devices now connect to cloud-based services to email documents, so they have all the plumbing to act as a spam relay. Most printers use similar open source technologies to provide connectivity, so generic attacks generally work against a variety of manufacturers’ devices. These peripherals can be used to steal content, attack other devices, and provide a foothold inside your network perimeter. That makes them both direct and indirect targets. These attacks aren’t just theoretical. We have seen printers hijacked to spread inflammatory propaganda on college campuses, and Chris Vickery showed proof of concept code to access a printer’s hard drive remotely. Our last question is what kind of security controls run on imaging servers. The answer is: not much. To be fair, vendors have started looking at this more seriously, and were reasonably responsive in patching the attacks mentioned above. That said, these products do not get the same scrutiny as PCs, or even some of the other connected devices we will discuss below. Imaging servers see relatively minimal security assessment before coming to market. We aren’t just picking on printers here. Pretty much every intelligent peripheral is similarly vulnerable, because they all have operating systems and network stacks which can be attacked. It’s just that offices tend to have dozens of printers, which are frequently overlooked during risk assessment.
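If you want a quick sense of how exposed your own printers are, here is a hedged sketch that simply checks which well-known printer services a device answers on. The host below is a placeholder, and a real assessment would go much further: default credentials, firmware versions, SNMP configuration, and so on.

```python
# A minimal sketch: check which well-known printer services a device exposes.
# The address below is a placeholder; a real assessment would also look at
# default credentials, firmware versions, SNMP configuration, and so on.
import socket

PRINTER_PORTS = {
    9100: "raw print (JetDirect)",
    631: "IPP",
    80: "embedded web UI",
    443: "embedded web UI (TLS)",
}

def exposed_services(host, timeout=1.0):
    """Return the well-known printer ports that accept TCP connections."""
    open_ports = {}
    for port, label in PRINTER_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:
                open_ports[port] = label
    return open_ports

if __name__ == "__main__":
    print(exposed_services("192.0.2.10"))  # TEST-NET placeholder address
```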
Medical Devices

If printers and other peripherals seem like low-value targets, let’s discuss something a bit higher-value: medical devices. In our era of increasingly connected medical devices – including monitors, pumps, pacemakers, and pretty much everything else – there hasn’t been much focus on product security, except in the few cases where external pressure is applied by regulators. These devices either have IP network stacks or can be configured via Bluetooth – neither of which is particularly well protected. The most disturbing attacks threaten patient health. There are all too many examples of security researchers compromising infusion and insulin pumps, jackpotting drug dispensaries, and even the legendary Barnaby Jack messing with a pacemaker. We know one large medical facility that took it upon itself to hack all its devices in use, and deliver a list of issues to the manufacturers. But there has been no public disclosure of results, or whether device manufacturers have made changes to make their devices safe. Despite the very real risk of medical devices being targeted to attack patient health, we believe most of the current risk involves information. User data is much easier for attackers to monetize; medical profiles have a much longer shelf-life and much higher value than typical financial information. So ensuring that Protected Health Information is adequately protected remains a key concern in healthcare. That means making sure there aren’t any leakages in these devices, which is not easy without a full penetration test. On the positive front, many of these devices have purpose-built operating systems, so they cannot really be used as pivot points for lateral movement within the network. Yet few have any embedded security controls to ensure data does not leak. Further complicating matters, some devices still use deprecated operating systems such as Windows XP and even Windows 2000 (yes, seriously), and outdated compliance mandates often mean they cannot be patched without recertification. So administrators often don’t update the devices, and hope for the best. We can all agree that hope isn’t a sufficient strategy. With lives at stake, medical device makers are starting to talk about more proactive security testing. Just as a major SaaS breach could prove an existential threat to the SaaS market, medical device makers should understand what is at risk, especially in terms of liability – but that doesn’t mean they understand how to solve the problem. So the burden lands on customers to manage their medical device inventories, and ensure they are not misused to steal data or harm patients.

Industrial Control Systems

The last category of shadow devices we will consider is control systems. These devices range from SCADA systems running power grids, to warehousing systems ensuring the right merchandise is


Understanding and Selecting RASP *edited* [New Series]

In 2015 we researched Putting Security Into DevOps, with a close look at how automated continuous deployment and DevOps impacted IT and application security. We found DevOps provided a very real path to improve application security using continuous automated testing, run each time new code was checked in. We were surprised to discover developers and IT teams taking a larger role in selecting security solutions, and bringing a new set of buying criteria to the table. Security products must do more than address application security issues; they need to mesh with continuous integration and deployment approaches, with automated capabilities and better integration with developer tools. But the biggest surprise was that every team which had moved past Continuous Integration and onto Continuous Deployment (CD) or DevOps asked us about RASP, Runtime Application Self-Protection. Each team was either considering RASP, or already engaged in a proof-of-concept with a RASP vendor. We understand we had a small sample size, and the number of firms who have embraced either CD or DevOps application delivery is a very small subset of the larger market. But we found that once they started continuous deployment, each firm hit the same issues: the ability to automate security, the ability to test in pre-production, configuration skew between pre-production and production, and the ability of security products to identify where issues were detected in the code. In fact it was our DevOps research which placed RASP at the top of our research calendar, thanks to perceived synergies. There is no lack of data showing that applications are vulnerable to attack. Many applications are old and simply contain too many flaws to fix. You know, that back office application that should never have been allowed on the Internet to begin with. In most cases it would be cheaper to re-write the application from scratch than patch all the issues, but economics seldom justify (or even permit) the effort. Other application platforms, even those considered ‘secure’, are frequently found to contain vulnerabilities after decades of use. Heartbleed, anyone? New classes of attacks, and even new use cases, have a disturbing ability to unearth previously unknown application flaws. We see two types of applications: those with known vulnerabilities today, and those which will have known vulnerabilities in the future. So tools to protect against these attacks, which mesh well with the disruptive changes occurring in the development community, deserve a closer look.

Defining RASP

Runtime Application Self-Protection (RASP) is an application security technology which embeds into an application or application runtime environment, examining requests at the application layer to detect attacks and misuse in real time. RASP products typically contain the following capabilities:

  • Monitor and block application requests; in some cases they can alter requests to strip out malicious content
  • Full functionality through RESTful APIs
  • Integration with the application or application instance (virtualization)
  • Unpack and inspect requests in the application, rather than at the network layer
  • Detect whether an attack would succeed
  • Pinpoint the module, and possibly the specific line of code, where a vulnerability resides
  • Instrument application usage

These capabilities overlap with white box & black box scanners, web application firewalls (WAF), next-generation firewalls (NGFW), and even application intelligence platforms.
And RASP can be used in coordination with any or all of those other security tools. So the question you may be asking yourself is “Why would we need another technology that does some of the same stuff?” It has to do with the way it is used and how it is integrated.

Differing Market Drivers

As RASP is a (relatively) new technology, current market drivers are tightly focused on addressing the security needs of one or two distinct buying centers. But RASP offers a distinct blend of capabilities and usability options which makes it unique in the market. The current drivers include:

  • Demand for security approaches focused on development, enabling pre-production and production application instances to provide real-time telemetry back to development tools
  • Need for fully automated application security, deployed in tandem with new application code
  • Technical debt, where essential applications contain known vulnerabilities which must be protected, either while defects are addressed or permanently if they cannot be fixed for any of various reasons
  • Application security supporting development and operations teams who are not all security experts

The remainder of this series will go into more detail on RASP technology, use cases, and deployment options:

  • Technical Overview: This post will discuss technical aspects of RASP products – including what the technology does; how it works; and how it integrates into libraries, runtime code, or web application services. We will discuss the various deployment models including on-premise, cloud, and hybrid. We will discuss some of the application platforms supported today, and how we expect the technology to evolve over the next couple of years.
  • Use Cases: Why and how firms use this technology, with medium and large firm use case profiles. We will consider advantages of this technology, as well as the problems firms are trying to solve using it – especially around deficiencies in code security and development processes. We will provide some detail on monitoring vs. blocking threats, and discuss applicability to security and compliance requirements.
  • Deploying RASP: This post will focus on how to integrate RASP into development and release management processes. We will also jump into a more detailed discussion of how RASP differs from adjacent technologies like WAF, NGFW, and IDS, as well as static and dynamic application testing. We will walk through the advantages and tradeoffs of each technology compared to other relevant security tools, with a special focus on operational considerations and keeping detection/blocking up to date. This post will close with an example of a development pipeline, and how RASP technology fits in.
  • Buyers Guide: This is a new market segment, so we will offer a basic analysis process for buyers to evaluate products. We will consider integration with existing development processes and rule management.
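Going back to the capability list above, the ‘pinpoint the line of code’ item is easier to grasp with a toy example. Here is a hedged sketch of a guarded database call that records exactly where in the application a suspicious query originated; real RASP products do this by instrumenting the runtime itself rather than wrapping individual calls, and with far better detection than the placeholder pattern used here.

```python
# A minimal conceptual sketch of flagging the call site of a suspicious query.
# Real RASP products instrument the runtime itself rather than wrapping calls,
# and use far better detection than this placeholder pattern.
import re
import traceback

SQLI_PATTERN = re.compile(r"(?i)union\s+select|--|/\*")

def guarded_execute(cursor, query, params=()):
    """Run a query, reporting the application call site if it looks malicious."""
    if SQLI_PATTERN.search(query):
        # The second-to-last stack frame is the application code that called us.
        caller = traceback.extract_stack()[-2]
        print(f"suspicious query from {caller.filename}:{caller.lineno} in {caller.name}()")
        raise ValueError("query blocked by runtime guard")
    return cursor.execute(query, params)
```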


Summary: May 5, 2016

Rich here. It’s been a busy couple of weeks, and the pace is only ramping up. This week I gave a presentation and a workshop at Interop. It seemed to go well, and the networking-focused audience was very receptive. Next week I’m out at the Rocky Mountain Infosec Conference, which is really just an excuse to spend a few more days back near my old home in Colorado. I get home just in time for my wife to take a trip, then even before she’s back I’m off to Atlanta to keynote an IBM Cybersecurity Seminar (free, if you are in the area). I’m kind of psyched for that one because it’s at the aquarium, and I’ve been begging Mike to take me for years. Not that I’ve been to Atlanta in years. Then some client gigs, and (hopefully) things will slow down a little until Black Hat. I’m updating our existing (now ‘basic’) cloud security class, and building the content for our Advanced Cloud Security and Applied SecDevOps class. It looks like it will be nearly all labs and whiteboarding, without too many lecture slides, which is how I prefer to learn. This week’s stories are wide-ranging, and we are nearly at the end of our series highlighting continuous integration security testing tools. Please drop me a line if you think we should include commercial tools. We work with some of those companies, so I generally try to avoid specific product mentions. Just email. You can subscribe to only the Friday Summary.

Top Posts for the Week

  • Leaking tokens in code is something I’m somewhat familiar with, and it doesn’t seem to be slacking off. Slack bot token leakage exposing business critical information. Oh, and also GitHub. Definitely GitHub. Avoid security credentials on GitHub.
  • Full disclosure: I’ve done some work with Box, and knew this was coming. They now let you use AWS as a storage provider, to give you more control over the location of your data. Pretty interesting approach. Box Zones – Giving Enterprises Control Over Data Location Using AWS.
  • Docker networking and sockets are definitely something you need to look at closely. Docker security is totally manageable, but the defaults can be risky if you don’t pay attention: The Dangers of Docker.sock.
  • When working with clients we always end up spending a lot of time on cloud logging and alerting. This is just a sample of one of the approaches (I know, I need to post something soon). I’m starting to lean hard toward Lambda to filter and forward events to a SIEM/whatever, because set up properly it’s much faster than reading CloudTrail logs directly (as in 10-15 seconds vs. 10-20 minutes); there is a minimal sketch of that pattern at the end of this summary. Sending Amazon CloudWatch Logs to Loggly With AWS Lambda.

Tool of the Week

It’s time to finish off our series on integrating security testing tools into deployment pipelines with Mittn, which is maintained by F-Secure. Mittn is like Gauntlt and BDD-Security in that it wraps other security testing tools, allowing you to script automated tests into your CI server. Each of these tools defaults to a slightly different set of integrated security tools, and there’s no reason you can’t combine multiple tools in a build process. Basically, when you define a series of tests in your build process, you tie one of these into your CI server as a plugin or use scripted execution. You pass in security tests using the template for your particular tool, and it runs your automated tests. You can even spin up a full virtual network environment to test just like production. I am currently building this out myself, both for our training classes and our new securosis.com platform.
For the most part it’s pretty straightforward… I have Jenkins pulling updates from Git, and am working on integrating Packer and Ansible to build new server images. Then I’ll mix in the security tests (probably using Gauntlt to start). It isn’t rocket science or magic, but it does take a little research and practice.

Securosis Blog Posts this Week

  • Updating and Pruning our Mailing Lists
  • Firestarter: What the hell is a cloud anyway?

Other Securosis News and Quotes

Another quiet week.

Training and Events

I’m keynoting a free seminar for IBM at the Georgia Aquarium May 18th. I’m also presenting at the Rocky Mountain Information Security Conference in Denver May 11-12. Although I live in Phoenix these days, Boulder is still my home town, so I’m psyched any time I can get back there. Message me privately if you get in early and want to meet up. We are running two classes at Black Hat USA. Early bird pricing ends in a month – just a warning.

  • Black Hat USA 2016 | Cloud Security Hands-On (CCSK-Plus)
  • Black Hat USA 2016 | Advanced Cloud Security and Applied SecDevOps
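And here is the minimal sketch promised above: a Lambda handler that unpacks a CloudWatch Logs subscription payload and forwards selected events to a collector. The SIEM_URL and the filtering logic are placeholders; in practice you would match on whatever your SIEM cares about and add batching, retries, and authentication.

```python
# A minimal sketch of the CloudWatch Logs -> Lambda -> SIEM pattern. CloudWatch
# delivers records as base64-encoded, gzipped JSON; SIEM_URL and the filter
# below are placeholders, and a real handler would add batching, retries, and
# authentication.
import base64
import gzip
import json
import urllib.request

SIEM_URL = "https://siem.example.com/ingest"  # hypothetical collector endpoint

def handler(event, context):
    payload = json.loads(gzip.decompress(base64.b64decode(event["awslogs"]["data"])))
    interesting = [
        {"group": payload["logGroup"], "ts": record["timestamp"], "message": record["message"]}
        for record in payload["logEvents"]
        if "ERROR" in record["message"] or "Unauthorized" in record["message"]
    ]
    if interesting:
        request = urllib.request.Request(
            SIEM_URL,
            data=json.dumps(interesting).encode(),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(request, timeout=5)
    return {"forwarded": len(interesting)}
```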


Updating and Pruning our Mailing Lists

As part of updating All Things Securosis, the time has come to migrate our mailing lists to a new provider (MailChimp, for the curious). The CAPTCHA at our old provider wasn’t working properly, so people couldn’t sign up. I’m not sure if that’s technically irony for a security company, but it was certainly unfortunate. So… if you weren’t expecting this, it’s because for some reason our old provider had you listed as active. If so, we are really sorry – please click the unsubscribe link at the bottom of the email (yes, some of you are just reading this on the blog). We did our best to prune the list and only migrated active subscriptions (our lists were always self-subscribe to start), but the numbers look a little funny, and let’s just say there is a reason we switched providers. Really, we don’t want to spam you – we hate spam – and if this shows up in your inbox unwanted, the unsubscribe link will work, and feel free to email us or reply directly. I’m hoping it’s only a few people who unsubscribed during the transition. If you want to be added, we have two different lists: one for the Friday Summary (which is all cloud, security automation, and DevOps focused), and the Daily Digest of all posts from the previous day. We only use these lists to send out email feeds from the blog, which is why I’m posting this on the site and not sending directly. We take our promises seriously: those lists are never shared/sold/whatever, and we don’t even send anything to them outside blog posts. Here are the signup forms:

  • Daily Digest
  • Friday Summary

Now if you received this in email, and sign up again, that’s very meta of you and some hipster is probably smugly proud. Thanks for sticking with us, and hopefully we will have a shiny new website to go with our shiny new email system soon. But the problem with hiring designers who live in another state is that flogging becomes logistically complex, and even the cookie bribes don’t work that well (especially since their office is, literally, right above a Ben and Jerry’s). And again, apologies if you didn’t expect or want this in your inbox; we spent hours trying to pull only active subscribers and then clean everything up, but I have to assume mistakes still happened.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.