By Adrian Lane
As you might expect, the primary function of RASP is to protect web applications against known and emerging threats; it is typically deployed to block attacks at the application layer, before vulnerabilities can be exploited. There is no question that the industry needs application security platforms – major new vulnerabilities are disclosed just about every week. And there are good reasons companies look to outside security vendors to help protect their applications. Most often we hear that firms simply have too many critical vulnerabilities to fix in a timely manner, with many reporting backlogs that would take years to clear. In many cases the issue is legacy applications – ones which probably should never have been put on the Internet. These applications are often unsupported, with the engineers who developed them no longer available, or the platforms so fragile that they become unstable if security fixes are applied. And in many cases it is simply economics: the cost of securing the application itself is financially unfeasible, so companies are willing to accept the risk, instead choosing to address threats externally as best they can.
But if these were the only reasons, organizations could simply use one of the many older technologies for application security, rather than needing RASP. Astute readers will notice that these are, by and large, the classic use cases for Intrusion Detection Systems (IDS) and Web Application Firewalls (WAFs). So why do people select RASP in lieu of more mature – and in many cases already purchased and deployed – technologies like IDS or WAF?
The simple answer is that the use cases are different enough to justify a different solution. RASP moves security one large step from “security bolted on” toward “security from within”. But to understand the differences between use cases, you first need to understand how user requirements differ, and where they are not adequately addressed by those older technologies. The core requirements above are givens, but the differences in how RASP is employed are best illustrated by a handful of use cases.
- APIs & Automation: Most of our readers know what Application Programming Interfaces (APIs) are, and how they are used. Less clear is the greatly expanding need for programmatic interfaces in security products, thanks to application delivery disruptions caused by cloud computing. Cloud service models – whether deployment is private, public, or hybrid – enable much greater efficiencies as networks, servers, and applications can all be constructed and tested as software. APIs are how we orchestrate building, testing, and deployment of applications. Security products like RASP – unlike IDS and most WAFs – offer their full platform functionality via APIs, enabling software engineers to work with RASP in their native metaphor.
- Development Processes: As more application development teams tackle application vulnerabilities within the development cycle, they bring different product requirements than IT or security teams applying security controls post-deployment. It’s not enough for security products to identify and address vulnerabilities – they need to fit the development model. Software development processes are evolving (notably via continuous integration, continuous deployment, and DevOps) to leverage advantages of virtualization and cloud services. Speed is imperative, so RASP embedded within the application stack, providing real-time monitoring and blocking, supports more agile approaches.
- Application Awareness: As attackers continue to move up the stack, from networks to servers and then to applications, it is becoming more difficult to distinguish attacks from normal usage. RASP is differentiated by its ability to include application context in security policies. Many WAFs offer ‘positive’ security capabilities (particularly whitelisting valid application requests), but being embedded within applications provides RASP deployments with additional application knowledge and instrumentation capabilities. Further, some RASP platforms help developers by specifically referencing modules or lines of suspect code. For many development teams, having RASP pinpoint vulnerable code is more valuable than potentially better detection capabilities.
- Pre-Deployment Validation: For cars, pacemakers, and software, it has been proven over decades that the earlier in the production cycle errors are discovered, the easier – and cheaper – they are to fix. This means testing in general, and security testing specifically, works better earlier in the development process. Rather than relying on vulnerability scanners and penetration testers after an application has been launched, we see more and more application security testing performed prior to deployment. Again, this is not impossible with other application-centric tools, but RASP is easier to build into automated testing.
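These use cases come together in the build pipeline. Since every RASP vendor exposes a different API, the finding format below ("severity", "module", "line", "rule") is purely hypothetical, but a pre-deployment gate driven by RASP findings might look roughly like this sketch:

```python
# Hypothetical CI gate driven by RASP findings. The finding fields
# ("severity", "module", "line", "rule") are an assumption for this
# sketch, not any vendor's actual API schema.

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def gate_release(findings, threshold="high"):
    """Return (ok, report): fail the build if any finding meets the threshold."""
    floor = SEVERITY_RANK[threshold]
    blocking = [f for f in findings if SEVERITY_RANK[f["severity"]] >= floor]
    report = [
        f"{f['severity'].upper()}: {f['module']}:{f['line']} - {f['rule']}"
        for f in blocking
    ]
    return (len(blocking) == 0, report)
```

A CI step would pull findings from the RASP console over whatever API it offers, then exit non-zero when the gate fails, blocking the deploy while pointing the developer at the suspect module and line.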
Our next post will talk about deployment, and working RASP into development pipelines.
Posted at Wednesday 25th May 2016 12:03 am
By Mike Rothman
As we discussed in the first post of this series, incident response needs to change, given disruptions such as cloud computing and the availability of new data sources, including external threat intelligence. We wrote a paper called Leveraging Threat Intelligence in Incident Response (TI+IR) back in 2014 to update our existing I/R process map. Here is what we came up with:
So what has changed in the two years since we published that paper? Back then the cloud was nascent and we didn’t know if DevOps was going to work. Today both the cloud and DevOps are widely acknowledged as the future of computing and how applications will be developed and deployed. Of course we will take a while to get there, but they are clearly real already, and upending pretty much all the existing ways security currently works, including incident response.
The good news is that our process map still shows how I/R can leverage additional data sources and the other functions involved in performing a complete and thorough investigation, although it is hard to get sufficient staff to fill all the functions described on the map. But we’ll deal with that in our next post. For now let’s focus on integrating additional data sources including external threat intelligence, and handling emerging cloud architectures.
More Data (Threat Intel)
We explained why threat intelligence matters to incident response in our TI+IR paper:
To really respond faster you need to streamline investigations and make the most of your resources, a message we’ve been delivering for years. This starts with an understanding of what information would interest attackers. From there you can identify potential adversaries and gather threat intelligence to anticipate their targets and tactics. With that information you can protect yourself, monitor for indicators of compromise, and streamline your response when an attack is (inevitably) successful.
You need to figure out the right threat intelligence sources, and how to aggregate the data and run the analytics. We don’t want to rehash a lot of what’s in the TI+IR paper, but the most useful information sources include:
- Compromised Devices: This data source provides external notification that a device is acting suspiciously by communicating with known bad sites or participating in botnet-like activities. Services are emerging to mine large volumes of Internet traffic to identify such devices.
- Malware Indicators: Malware analysis continues to mature rapidly, getting better and better at understanding exactly what malicious code does to devices. This enables you to define both technical and behavioral indicators, across all platforms and devices to search for within your environment, as described in gory detail in Malware Analysis Quant.
- IP Reputation: The most common reputation data is based on IP addresses and provides a dynamic list of known bad and/or suspicious addresses based on data such as spam sources, torrent usage, DDoS traffic indicators, and web attack origins. IP reputation has evolved since its introduction, and now features scores comparing the relative maliciousness of different addresses, factoring in additional context such as Tor nodes/anonymous proxies, geolocation, and device ID to further refine reputation.
- Malicious Infrastructure: One specialized type of reputation often packaged as a separate feed is intelligence on Command and Control (C&C) networks and other servers/sources of malicious activity. These feeds track global C&C traffic and pinpoint malware originators, botnet controllers, compromised proxies, and other IP addresses and sites to watch for as you monitor your environment.
- Phishing Messages: Most advanced attacks seem to start with a simple email. Given the ubiquity of email and the ease of adding links to messages, attackers typically find email the path of least resistance to a foothold in your environment. Isolating and analyzing phishing email can yield valuable information about attackers and tactics.
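To illustrate the kind of context-weighted scoring described for IP reputation above, here is a toy scorer. The weights and flags are invented for the sketch; real feeds use their own (proprietary) models:

```python
# Toy IP reputation scorer: a 0-100 base maliciousness score is adjusted
# by contextual signals. The weights are invented for illustration only.

def reputation_score(base, is_tor=False, is_anon_proxy=False, geo_risk=0.0):
    """Combine a base score with context flags, capped at 100."""
    score = base
    if is_tor:
        score += 15          # Tor exit nodes warrant extra suspicion
    if is_anon_proxy:
        score += 10
    score += 20 * geo_risk   # geo_risk in [0, 1] from geolocation data
    return min(round(score), 100)
```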
As depicted in the process map above, you integrate both external and internal security data sources, then perform analytics to isolate the root cause of the attacks and figure out the damage and extent of the compromise. Critical success factors in dealing with all this data are the ability to aggregate it somewhere, and then to perform the necessary analysis.
This aggregation happens at multiple layers of the I/R process, so you’ll need to store and integrate all the I/R-relevant data. Physical integration puts all your data into a single store, which then serves as a central repository for response. Logical integration uses valuable pieces of threat intelligence to search for issues within your environment, using separate systems for internal and external data. We are not religious about how you handle it – there are advantages to centralizing all data in one place, but as long as you can do your job – collecting TI and using it to focus investigation – either way works. Vendors providing big data security all want to be your physical aggregation point, but results are what matters, not where you store data.
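The logical integration model can be sketched in a few lines: use externally supplied indicators to sweep internally collected log records. The record and indicator formats here are invented for the sketch; in practice they come from your log store and your TI feeds:

```python
# Sweep internal log records for matches against external threat intel
# indicators. Record and indicator formats are invented for illustration;
# real deployments pull these from a SIEM/log store and TI feeds.

def sweep_logs(log_records, indicators):
    """Return records whose destination IP or domain appears in the TI set."""
    bad_ips = set(indicators.get("ips", []))
    bad_domains = set(indicators.get("domains", []))
    hits = []
    for rec in log_records:
        if rec.get("dst_ip") in bad_ips or rec.get("domain") in bad_domains:
            hits.append(rec)
    return hits
```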
Of course we are talking about a huge amount of data, so your choices for both data sources and I/R aggregation platform are critical parts of building an effective response process.
No Data (Cloud)
So what happens to response now that you don’t control a lot of the data used by your corporate systems? The data may reside with a Software as a Service (SaaS) provider, or your application may be deployed in a cloud computing service. In data centers with traditional networks it’s pretty straightforward to run traffic through inspection points, capture data as needed, and then perform forensic investigation. In the cloud, not so much.
To be clear, moving your computing to the cloud doesn’t totally eliminate your ability to monitor and investigate your systems, but your visibility into what’s happening on those systems using traditional technologies is dramatically limited.
So the first step for I/R in the cloud has nothing to do with technology. It’s all about governance. Ugh. I know most security professionals just felt a wave of nausea hit. The G word is not what anyone wants to hear. But it’s pretty much the only way to establish the rules of engagement with cloud service providers. What kinds of things need to be defined?
- SLAs: One of the first things we teach in our cloud security classes is the need to have strong Service Level Agreements (SLAs) with cloud providers. And these SLAs need to be established before you sign a deal. You don’t have much leverage during negotiations, but you have none after you signed. The kinds of SLAs include response time, access to specific data types, proactive alerts (them telling you when they had an issue), etc. We suggest you refer to the Cloud Security Alliance Guidance for specifics about proper governance structures for cloud computing.
- Hand-offs and Escalations: At some point there will be an issue, and you’ll need access to data the cloud provider has. How will that happen? The time to work through these issues is not while your cloud technology stack is crawling with attackers. Like all aspects of I/R, practice makes pretty good – there is no such thing as perfect. That means you need to practice your data gathering and hand-off processes with your cloud providers. The escalation process within the service provider also needs to be very well defined to make sure you can get adequate response under duress.
Once the proper governance structure is in place, you need to figure out what data is available to you in the various cloud computing models. In a SaaS offering you are pretty much restricted to logs (mostly activity, access, and identity logs) and information about access to the SaaS provider’s APIs. This data is quite limited, but can help figure out whether an employee’s account has been compromised, and what actions the account performed. Depending on the nature of the attack and the agreement with your SaaS provider, you may also be able to get some internal telemetry, but don’t count on that.
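As a concrete (if simplified) example of what you can do with just SaaS access logs, the sketch below flags accounts that appear from an unusual number of distinct locations. The field names and threshold are assumptions for illustration:

```python
# Hypothetical SaaS access-log triage: flag accounts that log in from an
# unusual number of distinct locations. Field names ("user", "geo") and
# the threshold are assumptions, not any provider's log schema.

from collections import defaultdict

def flag_suspect_accounts(events, max_locations=2):
    """events: [{'user': ..., 'geo': ...}] -> users seen from too many geos."""
    geos = defaultdict(set)
    for e in events:
        geos[e["user"]].add(e["geo"])
    return sorted(u for u, g in geos.items() if len(g) > max_locations)
```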
If you run your applications in an Infrastructure as a Service (IaaS) environment you will have access to logs (activity, access, and identity) of your cloud infrastructure activity at a granular level. Obviously a huge difference from SaaS is that you control the servers and networks running in your IaaS environment, so you can instrument your application stacks to provide granular activity logging, and route network traffic through an inspection/capture point to gather network forensics. Additionally many of the IaaS providers have fairly sophisticated offerings to provide configuration change data and provide light security assessments to pinpoint potential security issues, both of which are useful during incident response.
Those running private or hybrid clouds connecting to cloud environments at an IaaS provider, as well as your own data center, will also have access to logs generated by virtualization infrastructure. As we alluded before, regardless of where the application runs, you can (and should) be instrumenting the application itself to provide granular logging and activity monitoring to detect misuse. With the limited visibility in the cloud, you really don’t have a choice but to both build security into your cloud technology stacks, and make sure you are able to generate application logs to provide sufficient data to support an investigation.
Capture the Flag
In the cloud, whether it’s SaaS, IaaS, or hybrid cloud, you are unlikely to get access to the full network packet stream. You will have access to the specific instances running in the cloud (whether SaaS or hybrid cloud), but obviously the type of telemetry you can gather will vary. So how much forensics information is enough?
- Full Network Packet Capture: Packets are useful for knowing exactly what happened and being able to reconstruct and play back sessions. To capture packets you need either virtual taps to redirect network traffic to capture points, or to run network traffic through sensors in the cloud. But faster networks and less visibility are making full packet capture less feasible.
- Capture and Release: This approach involves capturing the packet stream and deriving metadata about network traffic dynamics, and content in the stream as well. It’s more efficient because you aren’t necessarily keeping the full data stream, but get a lot more information than can be gleaned from network flows. This still requires inline sensors or virtual taps to capture traffic before releasing it.
- Triggered Capture: When a suspicious alert happens you may want to capture the traffic and logs before and after the alert on the devices/networks in question. That requires at least a capture and release approach (to get the data), and provides flexibility to only capture when you think something is important, so it’s more efficient than full network packet capture.
- Network Flows: It will be increasingly common to get network flow data, which provides source and destination information for network traffic through your cloud environment, and enables you to see if there was some kind of anomalous activity prior to the compromise.
- Instance Logs: The closest analogy is the increasingly common endpoint detection and forensics offerings. If you deploy them within your cloud instances, you can figure out what happened, but may lack context on who and why unless you are also fully capturing device activity. Also understand that these tools will need to be updated to handle the nuances of working in the cloud, including autoscaling and virtual networking.
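As a minimal example of the network flow option above, the sketch below surfaces destinations seen in the window before a compromise that never appeared in the baseline. Flow records are reduced here to (source, destination, bytes) tuples, a simplification of real NetFlow or cloud flow log records:

```python
# Flow-based triage sketch: compare flows from the window before a
# compromise against a baseline, and surface destinations never seen
# before. (src, dst, bytes) tuples are a simplification of real flow logs.

def novel_destinations(baseline_flows, window_flows):
    """Return destinations in the window that never appear in the baseline."""
    known = {dst for _, dst, _ in baseline_flows}
    return sorted({dst for _, dst, _ in window_flows if dst not in known})
```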
We’ve always been fans of more rather than less data. But as we move into the Cloud Age practitioners need to be much more strategic and efficient about how and where to get data to drive incident response. It will come from external sources, as well as some logical sensors and capture points within the clouds (both public and private) in use. The increasing speed of networks and telemetry available from instances/servers, especially in data centers, will continue to challenge the scale of data collection infrastructure, so scale is a key consideration for I/R in the Cloud Age.
All this I/R data now requires technology that can actually analyze it within a reasonable timeframe. We hear a lot about “big data” for security monitoring these days. Regardless of what it’s called by the industry hype machine, you need technologies to index, search through, and find patterns within data – even when you don’t know exactly what you’re looking for, to start. Fortunately other industries – including retail – have been analyzing data to detect unseen and unknown patterns for years (they call it “business intelligence”), and many of their analytic techniques are available to security.
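One of the simplest such techniques is scoring events by rarity, so the things you have almost never seen bubble up for human review even when you don't know what you're looking for in advance. Real platforms use far richer models, but a toy version looks like this:

```python
# "Unknown unknowns" analytics, minimal form: score events by rarity.
# Events that rarely occur in the corpus score highest and get reviewed
# first. Real security analytics use far richer statistical models.

from collections import Counter

def rarity_scores(events):
    """Map each distinct event to 1 - frequency; rarer events score higher."""
    counts = Counter(events)
    total = len(events)
    return {e: 1 - counts[e] / total for e in counts}
```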
This scale issue is compounded by cloud usage requiring highly distributed collection infrastructure, which makes I/R collection more art than science, so you need to be constantly learning how much data is enough. The process feedback loop is absolutely critical to make sure that when the right data is not captured, the process evolves to collect the necessary infrastructure telemetry, and instrument applications to ensure sufficient visibility for thorough investigation.
But in the end, incident response always depends on people to some degree. That’s the problem nowadays, so our next post will tackle talent for incident response, and the potential shifts as cloud computing continues to take root.
Posted at Tuesday 24th May 2016 8:48 pm
This is the first in a four-part series on evolving encryption key management best practices. This research is also posted at GitHub for public review and feedback. My thanks to Hewlett Packard Enterprise for licensing this research, in accordance with our strict Totally Transparent Research policy, which enables us to release our independent and objective research for free.
Data centers and applications are changing; so is key management.
Cloud. DevOps. Microservices. Containers. Big Data. NoSQL.
We are in the midst of an IT transformation wave which is likely the most disruptive since we built the first data centers. One that’s even more disruptive than the first days of the Internet, due to the convergence of multiple vectors of change. From the architectural disruptions of the cloud, to the underlying process changes of DevOps, to evolving Big Data storage practices, through NoSQL databases and the new applications they enable.
These have all changed how we use a foundational data security control: encryption. While encryption algorithms continue their steady evolution, encryption system architectures are being forced to change much faster due to rapid changes in the underlying infrastructure and the applications themselves. Security teams face the challenge of supporting all these new technologies and architectures, while maintaining and protecting existing systems.
Within the practice of data-at-rest encryption, key management is often the focus of this change. Keys must be managed and distributed in ever-more-complex scenarios, at the same time there is also increasing demand for encryption throughout our data centers (including cloud) and our application stacks.
This research highlights emerging best practices for managing encryption keys for protecting data at rest in the face of these new challenges. It also presents updated use cases and architectures for the areas where we get the most implementation questions. It is focused on data at rest, including application data; transport encryption is an entirely different issue, as is protecting data on employee computers and devices.
How technology evolution affects key management
Technology is always changing, but there is a reasonable consensus that the changes we are experiencing now are coming faster than even the early days of the Internet. This is mostly because we see a mix of both architectural and process changes within data centers and applications. The cloud, increased segregation, containers, and microservices all change architectures, while DevOps and other emerging development and operations practices are shifting development and management practices. Better yet (or worse, depending on your perspective), all these changes mix and reinforce each other.
Enough generalities. Here are the top trends we see impacting data-at-rest encryption:
Cloud Computing: The cloud is the single most disruptive force affecting encryption today. It is driving very large increases in encryption usage, as organizations shift to leverage shared infrastructure. We also see increased internal use of encryption due to increased awareness, hybrid cloud deployments, and in preparation for moving data into the cloud.
The cloud doesn’t only affect encryption adoption – it also fundamentally influences architecture. You cannot simply move applications into the cloud without re-architecting (at least not without seriously breaking things – and trust us, we see this every day). This is especially true for encryption systems and key management, where integration, performance, and compliance all intersect to affect practice.
- Increased Segmentation: We are far past the days when flat data center architectures were acceptable. The cloud is massively segregated by default, and existing data centers are increasingly adding internal barriers. This affects key management architectures, which now need to support different distribution models without adding management complexity.
- Microservice architectures: Application architectures themselves are also becoming more compartmentalized and distributed as we move away from monolithic designs into increasingly distributed, and sometimes ephemeral, services. This again increases demand to distribute and manage keys at wider scale without compromising security.
- Big Data and NoSQL: Big data isn’t just a catchphrase – it encompasses a variety of very real new data storage and processing technologies. NoSQL isn’t necessarily big data, but has influenced other data storage and processing as well. For example, we are now moving massive amounts of data out of relational databases into distributed file-system-based repositories. This further complicates key management, because we need to support distributed data storage and processing on larger data repositories than ever before.
- Containers: Containers continue the trend of distributing processing and storage (noticing a theme?), on an even more ephemeral basis, where containers might appear in microseconds and disappear in minutes, in response to application and infrastructure demands.
- DevOps: To leverage these new changes and increase effectiveness and resiliency, DevOps continues to emerge as a dominant development and operational framework – not that there is any single definition of DevOps. It is a philosophy and collection of practices that support extremely rapid change and extensive automation. This makes it essential for key management practices to integrate, or teams will simply move forward without support.
These technologies and practices aren’t mutually exclusive. It is extremely common today to build a microservices-based application inside containers running at a cloud provider, leveraging NoSQL and Big Data, all managed using DevOps. Encryption may need to support individual application services, containers, virtual machines, and underlying storage, which might connect back to an existing enterprise data center via a hybrid cloud connection.
It isn’t always this complex, but sometimes it is, so key management practices are changing to keep pace: providing the right key, at the right time, to the right location, without compromising security, while still supporting traditional technologies.
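To make “the right key, at the right time, to the right location” slightly more concrete, here is a deliberately simplified key-hierarchy sketch: a central manager holds a single master key and derives a per-service key from each service’s identity using HMAC-SHA256 (an HKDF-like construction), so keys can be re-derived on demand and the master key never leaves the manager. Production systems keep the master key in an HSM or key management service; this is illustration only.

```python
# Simplified key-hierarchy sketch: derive per-service data keys from one
# master key with HMAC-SHA256 (an HKDF-like construction). Illustrative
# only -- production systems keep the master key in an HSM or KMS.

import hashlib
import hmac
import secrets

MASTER_KEY = secrets.token_bytes(32)  # held only by the key manager

def derive_service_key(master_key, service_id, key_version=1):
    """Deterministically derive a 256-bit key for a named service."""
    info = f"{service_id}:v{key_version}".encode()
    return hmac.new(master_key, info, hashlib.sha256).digest()
```

Because derivation is deterministic, the manager can hand an ephemeral container its key at launch and simply re-derive it later, rather than persisting thousands of per-instance keys; rotating a key is a matter of bumping the version.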
Posted at Monday 23rd May 2016 7:48 pm
By Mike Rothman
Perception of time is a funny thing. As we wind down the school year in Atlanta, it’s hard to believe how quickly this year has flown by. It seems like yesterday XX1 was starting high school and the twins were starting middle school. I was talking to XX1 last week as she was driving herself to school (yes, that’s a surreal statement) and she mentioned that she couldn’t believe the school year was over. I tried to explain that as you get older, time seems to move more quickly.
The following day I was getting a haircut with the Boy and our stylist was making conversation. She asked him if the school year seemed to fly by. He said, “Nope! It was sooooo slow.” They are only 3 years apart, but clearly the perception of time changes as tweens become teens.
The end of the school year always means dance recitals. For over 10 years now I’ve been going to recitals to watch my girls perform. From when they were little munchies in their tiny tutus watching the teacher on the side of the stage pantomiming the routine, to now when they both are advanced dancers doing 7-8 routines each year, of all disciplines. Ballet (including pointe), Jazz, Modern, Tap, Lyrical. You name it and my girls do it.
A lot of folks complain about having to go to recitals. I went to all 3 this year. There is no place I’d rather be. Watching my girls dance is one of the great joys of my life. Seeing them grow from barely being able to do a pirouette to full-fledged dancers has been incredible. I get choked up seeing how they get immersed in performance, and how happy it makes them to be on stage.
Although this year represents a bit of a turning point. XX2 decided to stop dancing and focus on competitive cheerleading. There were lots of reasons, but it mostly came down to passion. She was serious about improving her cheerleading skills, constantly stretching and working on core strength to improve her performance. She was ecstatic when she made the 7th grade competitive cheer team at her school. But when it came time for dance she said, “meh.” So the choice was clear, although I got a little nostalgic watching her last dance recital. It’s been a good run and I look forward to seeing her compete in cheer.
I’m the first to embrace change and chase passions. When something isn’t working, you make changes, knowing full well that it requires courage – lots of people resist change. Her dance company gave her a bit of a hard time and the teachers weren’t very kind during her last few months at the studio. But it’s OK – people show themselves at some point, and we learned a lot about those people. Some are keepers, and XX2 will likely maintain those relationships as others fade away.
It’s just like life. You realize who your real friends are when you make changes. Savor those friendships and let all the others go. We have precious few moments – don’t waste them on people who don’t matter.
Photo credit: “Korean Modern Dance” from Republic of Korea
Security is changing. So is Securosis. Check out Rich’s post on how we are evolving our business.
We’ve published this year’s Securosis Guide to the RSA Conference. It’s our take on the key themes of this year’s conference (which is really a proxy for the industry), as well as deep dives on cloud security, threat protection, and data security. And there is a ton of meme goodness… Check out the blog post or download the guide directly (PDF).
The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour. Your emails, alerts, and Twitter timeline will be there when you get back.
Have you checked out our video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.
We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.
Incident Response in the Cloud Age
Understanding and Selecting RASP
Maximizing WAF Value
Resilient Cloud Network Architectures
Building a Vendor IT Risk Management Program
Recently Published Papers
Incite 4 U
The Weakest Link: Huge financial institutions spend a ton of money on security. They buy and try one of everything, and have thousands of security professionals to protect their critical information. And they still get hacked, but it’s a major effort for their adversaries. The attackers just don’t want to work that hard, so mostly they don’t. They find the weakest link, and it turns out that to steal huge sums from banks, you check banks without sophisticated security controls, but with access to the SWIFT fund transfer network. So if you were curious whether Bank of Bangladesh has strong security, now you know. They don’t. That bank was the entry point for an $81 million fraud involving SWIFT and the Federal Reserve Bank of NY. Everything looked legit, so the big shops thought they were making a proper fund transfer. And then the money was gone. Poof. With such interconnected systems running the global financial networks, this kind of thing is bound to happen. Probably a lot. – MR
Racist in the machine: It shouldn’t be funny, but it is: Microsoft turned the Tay chatbot – a machine learning version of Microsoft Bob – loose on the Internet. Within hours it became a plausible, creative racist ass***. Creative in that it learned, mostly pulling from cached Google articles, to evolve its own racism. Yes, all those Internet comment trolls taught the bot to spew irrational hatred, so well that it could pass for a Philadelphia sports fan (kidding). Some friends have pointed out other examples of chatbots on message boards claiming to be $DIETY as their learning engines did exactly what they were programmed to do. Some call it a reflection on society, as Tay learned people’s real behaviors, but it’s more likely its learning mode was skewed toward the wrong sources, with no ethics or logical validation. This is a good example of how easily things can go wrong in automated security event detection, heuristics, lexical analysis, metadata analysis, and machine learning. People can steer learning engines the wrong way, so don’t allow unfiltered user input, just like with any other application platform. – AL
Double edged sword: The thing about technology is that almost every innovation can be used for good. Or bad. Take, for instance, PowerShell, the powerful Microsoft scripting language. As security becomes more software-defined by the day, scripting tools like PowerShell (and Python) are going to be the way much of security gets implemented in continuous deployment pipelines. But as the CarbonBlack folks discuss in this NetworkWorld article, they are also powerful tools for automating a bunch of malware functions. So you need to get back to the basics of security: defining normal behavior and then looking for anomalies, because the tools can be used for good and not-so-good. – MR
Mastering the irrelevant: Visa stated that some merchants are seeing a dip in fraud due to chipped cards: 5 of the top 25 victims of forged cards saw an 18.3% reduction in counterfeit transactions, while non-compliant merchants saw an 11% increase. They also say over 70% of credit cards in US circulation now have chips, up from 20% at the October 2015 deadline. That’s great, but Visa is tap dancing around the real issue: why there is a measly 20% adoption rate among the top candidates for EMV fraud reduction. We understand that the majority of the 25 merchants referenced above have EMV terminals in place, but continue to point fingers at Visa and Mastercard’s failure to certify as the reason EMV is not fully deployed. Think about it this way: EMV does not stop a card cloner from using a non-chipped clone, because US terminals accept both card types. This is clearly not about security or fraud detection, but instead a self-promotional pat on the back to quiet their critics. If you’re impacted by EMV, you do want to migrate to enable mobile payments, which legitimately offer better customer affinity and security, and possibly lower fees. The rest is just noise. – AL
More Weakest Link: Speaking of weak links, it turns out call centers are rife with fraud today. At least according to a research report from Pindrop, who really really wants phone fraud to increase. The tactics in this weak link are different than the Bangladesh attack above: call centers are being gamed using social engineering. But in both cases big money is at stake. One of their conclusions is that it’s getting hard for fraudsters to clone credit cards (with Chip and PIN), so they are looking for a weaker link. They found the folks in these call centers. And the beat goes on. – MR
Posted at Friday 20th May 2016 2:16 pm
Not a lot of news from us this week, because we’ve mostly been traveling, and for Mike and me the kids’ school year is coming to a close.
Last week I was at the Rocky Mountain Information Security Conference in Denver. The Denver ISSA puts on a great show, but due to some family scheduling I didn’t get to see as many sessions as I hoped. I presented my usual pragmatic cloud pitch, a modification of my RSA session from this year. It seems one of the big issues organizations still face is a mix of figuring out where to get started on cloud/DevOps, and switching over to understanding and implementing the fundamentals.
For example, one person in my session mentioned his team thought they were doing DevOps, but actually mashed some tools together without understanding the philosophy or building a continuous integration pipeline. Needless to say, it didn’t go well.
In other news, our advanced Black Hat class sold out, but there are still openings in our main class. I highlighted the course differences in a post.
You can subscribe to only the Friday Summary.
Top Posts for the Week
- Another great post from the Signal Sciences team. This one highlights a session from DevOps Days Austin by Dan Glass of American Airlines. AA has some issues unique to their industry, but Dan’s concepts map well to any existing enterprise struggling to transition to DevOps while maintaining existing operations. Not everyone has the luxury of building everything from scratch. Avoiding the Dystopian Road in Software.
- One of the most popular informal talks I give clients and teach is how AWS networking works. It is completely based on this session, which I first saw a couple years ago at the re:Invent conference – I just cram it into 10-15 minutes and skip a lot of the details. While AWS-specific, this is mandatory for anyone using any kind of cloud. The particulars of your situation or provider will differ, but not the issues. Here is the latest, with additional details on service endpoints: AWS Summit Series 2016 | Chicago – Another Day, Another Billion Packets.
- In a fascinating move, Jenkins is linking up with Azure, and Microsoft is tossing in a lot of support. I am actually a fan of running CI servers in the cloud for security, so you can tie them into cloud controls that are hard to implement locally, such as IAM. Announcing collaboration with the Jenkins project.
- Speaking of CI in the cloud, this is a practical example from Flux7 of adding security to Git and Jenkins using Amazon’s CodeDeploy. TL;DR: you can leverage IAM and Roles for more secure access than you could achieve normally: Improved Security with AWS CodeCommit.
- Netflix releases a serverless Open Source SSH Certificate Authority. It runs on AWS Lambda, and is definitely one to keep an eye on: Netflix/bless.
- AirBnB talks about how they integrated syslog into AWS Kinesis using osquery (a Facebook tool I think I will highlight as tool of the week): Introducing Syslog to AWS Kinesis via Osquery – Airbnb Engineering & Data Science.
Tool of the Week
osquery by Facebook is a nifty Open Source tool to expose low-level operating system information as a real-time relational database. What does that mean? Here’s an example that finds every process running on a system where the binary is no longer on disk (a direct example from the documentation, and common malware behavior):
SELECT name, path, pid FROM processes WHERE on_disk = 0;
This is useful for operations but it’s positioned as a security tool. You can use it for File Integrity Monitoring, real-time alerting, and a whole lot more. The site even includes ‘packs’ for common needs including OS X attacks, compliance, and vulnerability management.
Securosis Blog Posts this Week
Other Securosis News and Quotes
Another quiet week…
Training and Events
- We are running two classes at Black Hat USA. Early bird pricing ends in a month, just a warning:
Posted at Friday 20th May 2016 5:13 am
By Mike Rothman
Since we published our React Faster and Better research and Incident Response Fundamentals, quite a bit has changed relative to responding to incidents. First and foremost, incident response is a thing now. Not that it wasn’t a discipline mature security organizations focused on before 2012, but since then a lot more resources and funding have shifted away from ineffective prevention towards detection and response. Which we think is awesome.
Of course, now that I/R is a thing and some organizations may actually have decent response processes, the foundation under us is shifting. But that shouldn’t be a surprise – if you wanted a static existence, technology probably isn’t the best industry for you, and security is arguably the most dynamic part of technology. We see the cloud revolution taking root, promising to upend and disrupt almost every aspect of building, deploying and operating applications. We continue to see network speeds increase, putting scaling pressure on every aspect of your security program, including response.
The advent of threat intelligence, as a means to get smarter and leverage the experiences of other organizations, is also having a dramatic impact on the security business, particularly incident response. Finally, the security industry faces an immense skills gap, which is far more acute in specialized areas such as incident response. So whatever response process you roll out needs to leverage technological assistance – otherwise you have little chance of scaling it to keep pace with accelerating attacks.
This new series, which we are calling “Incident Response in the Cloud Age”, will discuss these changes and how your I/R process needs to evolve to keep up. As always, we will conduct this research using our Totally Transparent Research methodology, which means we’ll post everything to the blog first, and solicit feedback to ensure our positions are on point.
We’d also like to thank SS8 for being a potential licensee of the content. One of the unique aspects of how we do research is that we call them a potential licensee because they have no commitment to license, nor do they have any more influence over our research than you. This approach enables us to write the kind of impactful research you need to make better and faster decisions in your day to day security activities.
Entering the Cloud Age
Evidently there is this thing called the ‘cloud’, which you may have heard of. As we have described for our own business, we are seeing cloud computing change everything. That means existing I/R processes need to now factor in the cloud, which is changing both architecture and visibility.
There are two key impacts on your I/R process from the cloud. The first is governance, as your data now resides in a variety of locations, with different service providers. Various parties are required to participate as you try to investigate an attack. The process integration of a multi-organization response is… um… challenging.
The other big difference in cloud investigation is visibility, or the lack thereof. You don’t have access to the network packets in an Infrastructure as a Service (IaaS) environment, nor can you see inside a Platform as a Service (PaaS) offering to determine what happened. That means you need to be a lot more creative about gathering telemetry on an ongoing basis, and figuring out how to access what you need during an investigation.
We have also seen a substantial increase in the speed of networks over the past 5 years, especially in data centers. So if network forensics is part of your I/R toolkit (as it should be) how you architect your collection environment, and whether you actually capture and store full packets, are key decisions. Meanwhile data center virtualization is making it harder to know which servers are where, which makes investigation a bit more challenging.
Getting Smarter via Threat Intelligence
Sharing attack data between organizations still feels a bit strange for long-time security professionals like us. The security industry resisted admitting that successful attacks happen (yes, that ego thing got in the way), and held the entirely reasonable concern that sharing company-specific data could provide adversaries with information to facilitate future attacks.
The good news is that security folks got over their ego challenges, and also finally understand they cannot stand alone and expect to understand the extent of the attacks that come at them every day. So sharing external threat data is now common, and both open source and commercial offerings are available to provide insight, which is improving incident response. We documented how the I/R process needs to change to leverage threat intelligence, and you can refer to that paper for detail on how that works.
Facing down the Skills Gap
If incident response wasn’t already complicated enough because of the changes described above, there just aren’t enough skilled computer forensics specialists (who we call forensicators) to meet industry demand. You cannot just throw people at the problem, because they don’t exist. So your team needs to work smarter and more efficiently. That means using technology more for gathering and analyzing data, structuring investigations, and automating what you can. We will dig into emerging technologies in detail later in this series.
Evolving Incident Response
Like everything else in security, incident response is changing. The rest of this series will discuss exactly how. First we’ll dig into the impacts of the cloud, faster and virtualized networks, and threat intelligence on your incident response process. Then we’ll dig into how to streamline a response process to address the lack of people available to do the heavy lifting of incident response. Finally we’ll bring everything together with a scenario that illuminates the concepts in a far more tangible fashion. So buckle up – it’s time to evolve incident response for the next era in technology: the Cloud Age.
Posted at Thursday 19th May 2016 3:53 pm
By Adrian Lane
This post will discuss technical facets of RASP products, including how the technology works, how it integrates into an application environment, and the advantages or disadvantages of each approach. We will also spend some time on which application platforms are supported today, as this is one area where each provider is limited and working to expand, so it will impact your selection process. Finally, we will consider a couple aspects of RASP technology which we expect to evolve over the next couple of years.
RASP works at the application layer, so each product needs to integrate with applications somehow. To monitor application requests and make sense of them, a RASP solution must have access to incoming calls. There are several methods for monitoring either application usage (calls) or execution (runtime), each deployed slightly differently, gathering a slightly different picture of how the application functions. Solutions are installed into the code production path, or monitor execution at runtime. To block and protect applications from malicious requests, a RASP solution must be inline.
- Servlet Filters & Plugins: Some RASP platforms are implemented as web server plug-ins or Java Servlets, typically installed into either Apache Tomcat or Microsoft .NET to process inbound HTTP requests. Plugins filter requests before they reach application code, applying detection rules to each inbound request received. Requests that match known attack signatures are blocked. This is a relatively simple approach for retrofitting protection into the application environment, and can be effective at blocking malicious requests, but it doesn’t offer the in-depth application mapping possible with other types of integration.
- Library/JVM Replacement: Some RASP products are installed by replacing the standard application libraries, JAR files, or even the Java Virtual Machine. This method basically hijacks calls to the underlying platform, whether library calls or the operating system. The RASP platform passively ‘sees’ application calls to supporting functions, applying rules as requests are intercepted. Under this model the RASP tool has a comprehensive view of application code paths and system calls, and can even learn state machine or sequence behaviors. The deeper analysis provides context, allowing for more granular detection rules.
- Virtualization or Replication: This integration effectively creates a replica of an application, usually as either a virtualized container or a cloud instance, and instruments application behavior at runtime. By monitoring – and essentially learning – application code pathways, all dynamic or non-static code is mimicked in the cloud. Learning and detection take place in this copy. As with replacement, application paths, request structure, parameters, and I/O behaviors can be ‘learned’. Once learning is complete rules are applied to application requests, and malicious or malformed requests are blocked.
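To make the first of those integration models concrete, here is a minimal sketch of the filter idea – expressed in Python/WSGI terms rather than an actual product’s Java Servlet or .NET implementation, with a trivial stand-in rule list – showing how requests can be inspected and blocked before they ever reach application code:

```python
# Illustrative sketch only: real RASP filters ship vendor-researched rule
# sets and integrate as servlet filters or web server plug-ins.
BLOCKED_PATTERNS = ["<script", "' or 1=1"]  # stand-ins for attack signatures

class SecurityFilter:
    def __init__(self, app):
        self.app = app  # the protected WSGI application

    def __call__(self, environ, start_response):
        query = environ.get("QUERY_STRING", "")
        # Apply detection rules before the request reaches application code
        if any(p in query.lower() for p in BLOCKED_PATTERNS):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Request blocked"]
        return self.app(environ, start_response)
```

Deployment is just wrapping the existing application object (`app = SecurityFilter(app)`), which is what makes this approach easy to retrofit – and also why it sees only the request, not the deeper code paths the other integration models can observe.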
The biggest divide between RASP providers today is their platform support. For each vendor we spoke with during our research, language support was a large part of their product roadmap. Most provide full support for Java; beyond that support is hit and miss. .NET support is increasingly common. Some vendors support Python, PHP, Node.js, and Ruby as well. If your application doesn’t run on Java you will need to discuss platform support with vendors. Within the next year or two we expect this issue to largely go away, but for now it is a key decision factor.
Most RASP products are deployed as software, within an application software stack. These products work equally well on-premise and in cloud environments. Some solutions operate fully in a cloud replica of the application, as in the virtualization and replicated models mentioned above. Still others leverage a cloud component, essentially sending data from an application instance to a cloud service for request filtering. What generally doesn’t happen is dropping an appliance into a rack, or spinning up a virtual machine and re-routing network traffic.
During our interviews with vendors it became clear that most are still focused on negative security: they detect known malicious behavior patterns. These vendors research and develop attack signatures for customers. Each signature explicitly describes one attack, such as SQL injection or a buffer overflow. For example most products include policies focused on the OWASP Top Ten critical web application vulnerabilities, commonly with multiple policies to detect variations of the top ten threat vectors. This makes their rules harder for attackers to evade. And many platforms include specific rules for various Common Vulnerabilities and Exposures, providing the RASP platform with signatures to block known exploits.
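As a drastically simplified sketch of what negative security looks like in practice – with two hypothetical patterns standing in for a vendor’s researched signature library – each inbound value is checked against explicit attack descriptions:

```python
import re

# Illustrative signatures only; real products maintain many variants per
# threat vector (which is what makes their rules harder to evade).
SIGNATURES = {
    "sql_injection": re.compile(r"('|\")\s*(or|and)\s+\d+\s*=\s*\d+", re.IGNORECASE),
    "path_traversal": re.compile(r"\.\./"),
}

def match_signatures(param_value):
    """Return the names of any attack signatures the value matches."""
    return [name for name, pattern in SIGNATURES.items()
            if pattern.search(param_value)]

print(match_signatures("user' OR 1=1"))  # -> ['sql_injection']
print(match_signatures("alice"))         # -> []
```

The strength of this model is precision against known attacks; the weakness, as the next section discusses, is everything the signature authors have not yet described.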
Active vs. Passive Learning
Most RASP platforms learn about the application they are protecting. In some cases this helps to refine detection rules, adapting generic rules to match specific application requests. In other cases this adds fraud detection capabilities, as the RASP learns to ‘understand’ application state or recognize an appropriate set of steps within the application. Understanding state is a prerequisite for detecting business logic attacks and multi-part transactions. Other RASP vendors are just starting to leverage a positive (whitelisting) security model. These RASP solutions learn how API calls are exercised or what certain lines of code should look like, and block unknown patterns.
To do more than filter known attacks, a RASP tool needs to build a baseline of application behaviors, reflecting the way an application is supposed to work. There are two approaches: passive and active learning. A passive approach builds a behavioral profile as users use the application. By monitoring application requests over time and cataloging each request, linking the progression of requests to understand valid sequences of events, and logging request parameters, a RASP system can recognize normal usage. The other baselining approach is similar to what Dynamic Application Security Testing (DAST) platforms use: by crawling through all available code paths, the scope of application features can be mapped. By generating traffic to exercise new code as it is deployed, application code paths can be synthetically enumerated to produce a complete mapping predictably and more quickly.
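The passive learning idea can be sketched as a toy model – the `RequestBaseline` structure below is a hypothetical illustration, and real products also model sequences, state, and parameter types:

```python
from collections import defaultdict

# Toy sketch of passive baselining: learn which parameters each path
# accepts, then flag anything outside the learned profile.
class RequestBaseline:
    def __init__(self):
        self.seen = defaultdict(set)  # path -> set of parameter names

    def learn(self, path, params):
        """Learning mode: record observed parameters for each path."""
        self.seen[path].update(params)

    def is_anomalous(self, path, params):
        """Enforcement mode: flag unknown paths or unknown parameters."""
        if path not in self.seen:
            return True
        return not set(params).issubset(self.seen[path])

baseline = RequestBaseline()
baseline.learn("/login", ["user", "password"])
print(baseline.is_anomalous("/login", ["user", "password"]))  # False
print(baseline.is_anomalous("/login", ["user", "cmd"]))       # True
print(baseline.is_anomalous("/admin", []))                    # True
```

The hard part in production is knowing when learning is "complete" – enforce too early and legitimate but rare requests get blocked, which is exactly the tension between passive observation and active (DAST-style) enumeration.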
Note that RASP’s positive security capabilities are nascent. We see threat intelligence and machine learning capabilities as a natural fit for RASP, but these capabilities have not yet fully arrived. Compared to competing platforms, they lack maturity and functionality. But RASP is still relatively new, and we expect the gaps to close over time. On the bright side, RASP addresses application security use cases which competitive technologies cannot.
We have done our best to provide a detailed look at RASP technology, both to help you understand how it works and to differentiate it from other security products which sound similar. If you have questions, or some aspect of this technology is confusing, please comment below, and we will work to address your questions. A wide variety of platforms – including cloud WAF, signal intelligence, attribute-based fraud detection, malware detection, and network oriented intelligence services – all market value propositions which overlap with RASP. But unless the product can work in the application layer, it’s not RASP.
Next we will discuss emerging use cases, and why firms are looking for alternatives to what they have today.
Posted at Wednesday 18th May 2016 12:36 am
By Mike Rothman
As we have posted this Shadow Devices series, we have discussed the millions (likely billions) of new devices which will be connecting to networks over the coming decade. Clearly many of them won’t be traditional computer devices, which can be scanned and assessed for security issues. We called these other devices shadow devices because this is about more than the “Internet of Things” – any networked device which can be used to steal information – whether directly or by providing a stepping stone to targeted information – needs to be considered.
Our last post explained how peripherals, medical devices, and control systems can be attacked. We showed that although traditional malware attacks on traditional computing and mobile get most of the attention in IT security circles, these other devices shouldn’t be ignored. As with most things, it’s not a matter of if but when these lower-profile devices will be used to perpetrate a major attack.
So now what? How can you figure out your real attack surface, and then move to protect the systems and devices providing access to your critical data? It’s back to Security 101, which pretty much always starts with visibility, and then moves to control once you figure out what you have and how it is exposed.
Your first step is to shine a light into the ‘shadows’ on your network to gain a picture of all devices. You have a couple options to gain this visibility:
- Active Scanning: You can run a scan across your entire IP address space to find out what’s there. This can be a serious task for a large address space, consuming resources while you run your scanner(s). This process can only happen periodically, because it wouldn’t be wise to run a scanner continuously on internal networks. Keep in mind that some devices, especially ancient control systems, were not built with resilience in mind, so even a simple vulnerability scan can knock them over.
- Passive Monitoring: The other alternative is basically to listen for new devices by monitoring network traffic. This assumes that you have access to all traffic on all networks in your environment, and that new devices will communicate to something. Pitfalls of this approach include needing access to the entire network, and that new devices can spoof other network devices to evade detection. On the plus side, you won’t knock anything over by listening.
But we don’t see this as an either/or choice for gaining full visibility into all devices on the network. There is a time and place for active scanning, but care must be taken not to take brittle systems down or consume undue network resources. We have also seen many scenarios where passive monitoring is needed to find new devices quickly once they show up on the network.
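At the gentle end of active scanning, a discovery pass can be as light as a single TCP connect attempt per port with a short timeout – far less likely to knock over a fragile device than a full vulnerability scan. A minimal sketch (the address range and port list below are hypothetical examples):

```python
import socket

def port_open(host, port, timeout=0.5):
    """Single TCP connect attempt with a short timeout, keeping the
    load on fragile embedded devices as light as possible."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# Example sweep (hypothetical subnet; port 9100 is raw printing):
# for i in range(1, 255):
#     for port in (22, 23, 80, 443, 9100):
#         if port_open(f"192.168.10.{i}", port):
#             print(f"192.168.10.{i}:{port} open")
```

Even this minimal probe generates traffic and log noise, which is why it belongs in a periodic schedule rather than a continuous loop.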
Once you have full visibility, the next step is to identify devices. You can certainly look for indicators of what type of device you found during an active scan. This is harder when passively scanning, but devices can be identified by traffic patterns and other indicators within packets. A critical selection criterion for passive monitoring technology is the vendor’s ability to identify the bulk of devices likely to show up on your network. Obviously in a very dynamic environment a fraction of devices cannot be identified through scanning or monitoring network traffic. But you want these devices to be a small minority, because anything you can’t identify through scanning requires manual intervention.
Once you know what kind of device you are dealing with, you need to start evaluating risk, a combination of the device’s vulnerability and exploitability. Vulnerability is a question of what could possibly happen. An attacker can do certain things with a printer which are impossible with an infusion pump, and vice-versa. So device type is key context. You also need to assess security vulnerabilities within the device. Some devices may warrant an active scan upon identification to gather more granular information. As we warned above, be careful with active scanning to avoid risking device availability. You can glean some information about vulnerabilities through passive scanning, but it requires quite a bit more interpretation, and is subject to higher false positive rates.
Exploitability depends on the security controls and/or configurations already in place on the device. A warehouse picker robot may run embedded Windows XP, but if the robot also runs a whitelist, malicious code cannot execute, so it might show up as vulnerable but not exploitable. The other main aspect of exploitability is attack path. If an external entity cannot access the warehouse system because it has no Internet-facing networks, even the vulnerable picker robot poses relatively little risk unless the physical location is attacked.
The final aspect of determining risk to a device is looking at what it has access to. If a device has no access to anything sensitive, then again it poses little risk. Of course that assumes your networks are sufficiently isolated. Determining risk is all about prioritization. You only have so many resources, so you need to choose what to fix wisely, and evaluating risk is the best way to allocate those scarce resources.
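This risk evaluation can be illustrated with a toy scoring function. Everything here – the weights, the factor names, the data access categories – is an assumption for illustration, not any vendor’s actual model, but it shows how vulnerability, exploitability, and data access combine for prioritization:

```python
# Toy risk model: all weights and categories are illustrative assumptions.
def device_risk(vulnerabilities, has_whitelisting, internet_facing, data_access):
    """Combine vulnerability, exploitability, and data access into a score."""
    vuln = min(len(vulnerabilities), 10)  # what could possibly happen
    exploit = 1.0
    if has_whitelisting:
        exploit *= 0.2  # vulnerable but hard to exploit
    if not internet_facing:
        exploit *= 0.3  # no obvious attack path
    reach = {"none": 0.1, "internal": 0.5, "sensitive": 1.0}[data_access]
    return round(vuln * exploit * reach, 2)

# The warehouse picker robot from above: known XP vulnerabilities, but
# whitelisting in place and no Internet-facing network -> low score
print(device_risk(["MS08-067", "MS17-010"], True, False, "internal"))  # 0.06
```

The absolute numbers are meaningless; what matters is the ranking they produce, which tells you where to spend scarce remediation resources first.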
Once you know what’s out there in the shadows, your next step is to figure out whether and perhaps how to protect those devices. This again comes back to the risk profiles discussed above. It doesn’t make much sense to spend a lot of time and money protecting devices which don’t present much risk to the organization. But in case a device does present sufficient risk, how will you go about protecting it?
First things first: you should be making sure the device is configured in the most secure fashion. Yeah, yeah, that sounds trite and simple, but we mention it anyway because it’s shocking how many devices can be exploited due to open services that can easily be turned off. Once you have basic device hygiene taken care of, here are some other ways to protect it:
- Active Controls: The first and most direct way to protect a shadow device is by implementing an active control on it. The available controls vary depending on the kind of device and its embedded operating system. The most reliable means of protection is to stop execution of non-authorized programs. Yes, we are referring to whitelisting. There are commercial products available for embedded Windows devices, and a few options (some commercial, some open source) for other operating environments. There are also other defenses, including malware detection and even some forensic offerings, to really determine what’s happening on devices. Just keep in mind that active controls consume resources and can impair device stability. So you’ll want to exhaustively test any controls to be implemented on these devices to ensure you understand their impact.
- Network Isolation: These devices are on the network, which means traditional network security defenses can (and should) be used in conjunction with active controls. This involves using firewalls and/or routers & switches to provide a measure of isolation between these device networks and your data stores.
- Egress Filtering: We also recommend egress filtering on networks with these shadow devices. You can detect command and control, as well as remote access, by looking at outbound network traffic. Remember, these devices aren’t inherently malicious, so if something bad is happening it’s because an adversary is controlling the device, which means it’s communicating to a bot network or other controller outside your network, and that can be tracked.
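One common egress-filtering heuristic is beacon detection: flagging outbound connections that recur at suspiciously regular intervals, a typical sign of command and control. A minimal sketch, with an illustrative jitter threshold rather than a production-tuned one:

```python
from statistics import mean, pstdev

def looks_like_beacon(timestamps, max_jitter=0.1):
    """Flag outbound connections whose inter-connection gaps are nearly
    uniform (low variance relative to the mean gap). Threshold is illustrative."""
    if len(timestamps) < 4:
        return False  # not enough samples to judge regularity
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return pstdev(gaps) <= max_jitter * mean(gaps)

# A device checking in almost exactly every 60 seconds
print(looks_like_beacon([0, 60.1, 119.9, 180.0, 240.2]))  # True
# Irregular, human-looking traffic
print(looks_like_beacon([0, 5, 190, 260, 262]))           # False
```

Real detection tools add randomized-jitter handling, destination reputation, and volume analysis on top of this basic regularity check.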
An increasing limitation for network-oriented controls is encrypted traffic. We have published research on dealing with encrypted networks – there are ways to peek into encrypted traffic streams. When deciding between network and active controls for a device, you need to consider encryption – network-based controls are easiest because they don’t require device interaction, but they may introduce blind spots, particularly thanks to encryption.
Finally we should mention the challenges of implementing fixes and workarounds – not only when dealing with shadow devices, but for all devices on networks now. There just isn’t enough staff to address all requirements, which means organizations need to think creatively about how to make limited staff more productive. One emerging way of magnifying staff efforts is automation. You have existing controls, which can be reconfigured based on specific rules. Of course automation of existing technologies requires integration with the existing security controls, but lately we have seen considerable innovation in this area, born out of necessity.
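Rule-driven automation of existing controls can be sketched like this – the event names and control actions below are hypothetical stand-ins for integrations with your actual security infrastructure, and the sketch defaults to dry-run mode given how easily automation can make things worse:

```python
# Hypothetical control actions; real ones would call firewall/switch APIs.
def quarantine(device):
    return f"moved {device} to quarantine VLAN"

def block_egress(device):
    return f"blocked outbound traffic from {device}"

# Hypothetical event-to-action rules
RULES = {
    "c2_beacon_detected": block_egress,
    "unknown_device_on_scada_segment": quarantine,
}

def respond(event_type, device, dry_run=True):
    """Dry-run by default: automation is a double-edged sword, so test
    the rule's effect before letting it enforce anything."""
    action = RULES.get(event_type)
    if action is None:
        return f"no rule for {event_type}; escalate to analyst"
    if dry_run:
        return f"would run {action.__name__} on {device}"
    return action(device)

print(respond("c2_beacon_detected", "printer-12"))
# -> would run block_egress on printer-12
```

Keeping the rule table small and explicit makes it testable, which is the point: you only promote a rule from dry-run to enforcement once you trust its blast radius.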
Our last step in figuring out your strategy to provide visibility and control over shadow devices is to determine which fixes can be automated, and establishing a strategy for that automation. We caution again that automation is a double-edged sword, which must be used carefully to ensure the ‘cure’ isn’t much worse than the symptoms. So ensure sufficient testing, and have reasonable expectations for what you can automate – in both the short and longer terms.
With that we wrap up our series on “Shining a Light on Shadow Devices”. As we described, there will be an explosion of devices connecting to networks you are tasked to protect, and without sufficient visibility over what is connecting to them you are blind to potential attacks. There are a variety of ways to find these devices, and to implement controls protecting them. But you can’t protect what you can’t see, so make sure to shine a light on all the dark places in your networks so you are fully aware of your attack surface.
Posted at Monday 16th May 2016 8:49 pm
By Mike Rothman
In the SIEM Kung Fu paper, we tell you what you need to know to get the most out of your SIEM, and solve the problems you face today by increasing your capabilities (the promised Kung Fu).
We would like to thank Intel Security for licensing the content in this paper. Our unique Totally Transparent Research model allows us to do objective and useful research and still pay our bills, so we’re thankful to all of the companies that license our research.
Check out the page in the research library or download the paper directly (PDF).
Posted at Tuesday 10th May 2016 7:39 pm
By Adrian Lane
In 2015 we researched Putting Security Into DevOps, with a close look at how automated continuous deployment and DevOps impacted IT and application security. We found DevOps provided a very real path to improve application security using continuous automated testing, run each time new code was checked in. We were surprised to discover developers and IT teams taking a larger role in selecting security solutions, and bringing a new set of buying criteria to the table. Security products must do more than address application security issues; they need to mesh with continuous integration and deployment approaches, with automated capabilities and better integration with developer tools.
But the biggest surprise was that every team which had moved past Continuous Integration and onto Continuous Deployment (CD) or DevOps asked us about RASP, Runtime Application Self-Protection. Each team was either considering RASP, or already engaged in a proof-of-concept with a RASP vendor. We understand we had a small sample size, and the number of firms who have embraced either CD or DevOps application delivery is a very small subset of the larger market. But we found that once they started continuous deployment, each firm hit the same issues: the ability to automate security, the ability to test in pre-production, configuration skew between pre-production and production, and the ability of security products to identify where issues were detected in the code. In fact it was our DevOps research which placed RASP at the top of our research calendar, thanks to perceived synergies.
There is no lack of data showing that applications are vulnerable to attack. Many applications are old and simply contain too many flaws to fix. You know, that back office application that should never have been allowed on the Internet to begin with. In most cases it would be cheaper to re-write the application from scratch than patch all the issues, but economics seldom justify (or even permit) the effort. Other application platforms, even those considered ‘secure’, are frequently found to contain vulnerabilities after decades of use. Heartbleed, anyone? New classes of attacks, and even new use cases, have a disturbing ability to unearth previously unknown application flaws. We see two types of applications: those with known vulnerabilities today, and those which will have known vulnerabilities in the future. So tools to protect against these attacks, which mesh well with the disruptive changes occurring in the development community, deserve a closer look.
Runtime Application Self-Protection (RASP) is an application security technology which embeds into an application or application runtime environment, examining requests at the application layer to detect attacks and misuse in real time. RASP products typically contain the following capabilities:
- Monitor and block application requests; in some cases they can alter requests to strip malicious content
- Full functionality through RESTful APIs
- Integration with the application or application instance (virtualization)
- Unpack and inspect requests in the application, rather than at the network layer
- Detect whether an attack would succeed
- Pinpoint the module, and possibly the specific line of code, where a vulnerability resides
- Instrument application usage
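The first of those capabilities is easiest to picture in code. Below is a deliberately naive sketch, assuming a Python WSGI application; the patterns, class name, and blocking logic are all illustrative, and real RASP products hook far deeper into the runtime (instrumenting libraries and tracking data flow) than a simple request filter:

```python
# Illustrative sketch only: a minimal WSGI middleware showing the kind of
# in-application request inspection RASP products perform. All names and
# patterns here are hypothetical.
import re

SUSPICIOUS = [
    re.compile(r"(?i)<script"),                # naive XSS probe
    re.compile(r"(?i)\bunion\b.+\bselect\b"),  # naive SQL injection probe
]

class NaiveSelfProtection:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        query = environ.get("QUERY_STRING", "")
        if any(p.search(query) for p in SUSPICIOUS):
            # Block at the application layer, before the request reaches
            # potentially vulnerable code -- the "self-protection" in RASP.
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Request blocked"]
        # Otherwise pass the request through to the wrapped application.
        return self.app(environ, start_response)
```

Because the check runs inside the application, it sees the request after the web server has unpacked it, which is what lets real products reason about whether an attack would actually succeed.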
These capabilities overlap with white box & black box scanners, web application firewalls (WAF), next-generation firewalls (NGFW), and even application intelligence platforms. And RASP can be used in coordination with any or all of those other security tools. So the question you may be asking yourself is “Why would we need another technology that does some of the same stuff?” It has to do with the way it is used and how it is integrated.
Differing Market Drivers
As RASP is a relatively new technology, current market drivers are tightly focused on the security needs of one or two distinct buying centers. But RASP offers a distinct blend of capabilities and usability options which makes it unique in the market. Current drivers include:
- Demand for security approaches focused on development, enabling pre-production and production application instances to provide real-time telemetry back to development tools
- Need for fully automated application security, deployed in tandem with new application code
- Technical debt, where essential applications contain known vulnerabilities which must be protected, either while defects are addressed or permanently if they cannot be fixed for any of various reasons
- Application security supporting development and operations teams who are not all security experts
The remainder of this series will go into more detail on RASP technology, use cases, and deployment options:
- Technical Overview: This post will discuss technical aspects of RASP products – including what the technology does; how it works; and how it integrates into libraries, runtime code, or web application services. We will discuss the various deployment models including on-premise, cloud, and hybrid. We will discuss some of the application platforms supported today, and how we expect the technology to evolve over the next couple years.
- Use Cases: Why and how firms use this technology, with medium and large firm use case profiles. We will consider advantages of this technology, as well as the problems firms are trying to solve using it – especially around deficiencies in code security and development processes. We will provide some detail on monitoring vs. blocking threats, and discuss applicability to security and compliance requirements.
- Deploying RASP: This post will focus on how to integrate RASP into development and release management processes. We will also jump into a more detailed discussion of how RASP differs from adjacent technologies like WAF, NGFW, and IDS, as well as static and dynamic application testing. We will walk through the advantages and tradeoffs of each technology, with a special focus on operational considerations and keeping detection/blocking up to date. This post will close with an example of a development pipeline, and how RASP technology fits in.
- Buyer’s Guide: This is a new market segment, so we will offer a basic analysis process for buyers to evaluate products. We will consider integration with existing development processes and rule management.
Posted at Tuesday 10th May 2016 12:51 am
We have been getting questions on our training classes this year, so I thought I should update everyone on major updates to our ‘old’ class, and what to expect from our ‘advanced’ class. The short version is that we are adding new material to our basic class, to align with upcoming Cloud Security Alliance changes and cover DevOps. It will still include some advanced material, but we are assuming the top 10% (in terms of technical skills) of students will move to our new advanced class instead, enabling us to focus the basic class on the meaty part of the bell curve.
Over the past few years our Black Hat Cloud Security Hands On class became so popular that we kept adding instructors and seats to keep up with demand. Last year we sold out both classes and increased the size to 60 students, then still sold out the weekday class. That’s a lot of students, but the course is tightly structured with well-supported labs to ensure we can still provide a high-quality experience. We even added a bunch of self-paced advanced labs for people with stronger skills who wanted to move faster.
The problem with that structure is that it really limits how well we can support more advanced students, especially because we get a much wider range of technical skills than we expected at a Black Hat branded training. Every year we get sizable contingents from both extremes: people who no longer use their technical skills (managers/auditors/etc.), and students actively working in technology with hands-on cloud experience. When we started this training 6 years ago, nearly none of our students had ever launched a cloud instance.
Self-paced labs work reasonably well, but don’t really let you dig in the same way as focused training. There are also many major cloud advances we simply cannot cover in a class which has to appeal to such a wide range of students. So this year we launched a new class (which has already sold out, and expanded), and are updating the main class. Here are some details, with guidance on which is likely to fit best:
Cloud Security Hands-On (CCSK-Plus) is our introductory 2-day class for those with a background in security, but who haven’t worked much in the cloud yet. It is fully aligned with the Cloud Security Alliance CCSK curriculum: this is where we test out new material and course designs to roll out throughout the rest of the CSA. This year we will use a mixed lecture/lab structure, instead of one day of lecture with labs the second day.
We have started introducing material to align with the impending CSA Guidance 4.0 release, which we are writing. We still need to align with the current exam, because the class includes a token to take the test for the certificate, but we also wrote the test, so we should be able to balance that. This class still includes extra advanced material (labs) not normally in the CSA training and the self-paced advanced labs. Time permitting, we will also add an intro to DevOps.
But if you are more advanced you should really take Advanced Cloud Security and Applied SecDevOps instead. This 2-day class assumes you already know all the technical content in the Hands-On class and are comfortable with basic administration skills, launching instances in AWS, and scripting or programming. I am working on the labs now, and they cover everything from setting up accounts and VPCs suitable for production application deployments, to building a continuous deployment pipeline with integrated security controls, to integrating PaaS services like S3, SQS, and SNS, to security automation through code (both serverless with Lambda functions and server-based).
If you don’t understand any of that, take the Hands-On class instead.
The advanced class is nearly all labs, and even most of the lecture will be whiteboard work instead of slides. The labs aren’t as tightly scripted, and there is a lot more room to experiment (and thus more margin for error). They do, however, all interlock to build a semblance of a real production deployment with integrated security controls and automation. I was pretty excited when I figured out how to build them up and tie them together, instead of having everything come out of a bucket of unrelated tasks.
Hopefully that clears things up, and we look forward to seeing some of you in August.
Oh, and if you work for BIGCORP and can’t make it, we also provide private trainings these days.
Here are the signup links:
Posted at Monday 9th May 2016 7:44 pm
By Mike Rothman
What is the real risk of the Shadow Devices we described back in our first post? It is clear that most organizations don’t really take these risks seriously. They certainly don’t have workarounds in place, or proactively segment their environments to ensure that compromising these devices doesn’t give attackers a foothold. Let’s dig into three broad device categories to understand what attacks look like.
Do you remember how cool it was when the office printer got a WiFi connection? Suddenly you could put it wherever you wanted, preserving the Feng Shui of your office, instead of having it tethered to the network drop. And when the printer makers started calling their products image servers, not just printers? Yeah, that was when they started becoming more intelligent, and also tempting targets.
But what is the risk of taking over a printer? It turns out that even in our paperless offices of the future, organizations still print out some pretty sensitive stuff, and documents they want to keep are often scanned for storage/archival. Whether going in or out, sensitive content is hitting imaging servers. Many of them store the documents they print and scan until their memory (or embedded hard drive) is overwritten. So sensitive documents persist on devices, accessible to anyone with access to the device, either physical or remote.
Even better, many printers are vulnerable to common wireless attacks like the evil twin, where a fake device with a stronger wireless signal impersonates the real printer. So devices connect (and print) documents to the evil twin and not the real printer – the same attack works with routers too, but the risk is much broader. Nice. But that’s not all! The devices typically use some kind of stripped-down UNIX variant at the core, and many organizations don’t change the default passwords on their image servers, enabling attackers to trigger remote firmware updates and install compromised versions of the printer OS. Another attack vector is that these imaging devices now connect to cloud-based services to email documents, so they have all the plumbing to act as a spam relay.
Most printers use similar open source technologies to provide connectivity, so generic attacks generally work against a variety of manufacturers’ devices. These peripherals can be used to steal content, attack other devices, and provide a foothold inside your network perimeter. That makes these both direct and indirect targets.
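The default-password problem, at least, is cheap to check for. Here is a small illustrative sketch of sweeping a device inventory for factory-default logins; the credential list, field names, and function are hypothetical, not from any real product:

```python
# Hypothetical hygiene check: compare a device inventory's recorded admin
# credentials against a list of common factory defaults. Inventory entries
# are assumed to be dicts with "host", "user", and "password" keys.
KNOWN_DEFAULTS = {
    ("admin", "admin"),
    ("admin", "1234"),
    ("root", ""),  # some embedded devices ship with a blank root password
}

def flag_default_credentials(inventory):
    """Return the hosts still using a known factory-default login."""
    return [
        dev["host"]
        for dev in inventory
        if (dev.get("user"), dev.get("password")) in KNOWN_DEFAULTS
    ]
```

Even a crude sweep like this would catch the image servers described above before an attacker uses them to push compromised firmware.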
These attacks aren’t just theoretical. We have seen printers hijacked to spread inflammatory propaganda on college campuses, and Chris Vickery showed proof-of-concept code to access a printer’s hard drive remotely.
Our last question is what kind of security controls run on imaging servers. The answer is: not much. To be fair, vendors have started looking at this more seriously, and were reasonably responsive in patching the attacks mentioned above. But that said, these products do not get the same scrutiny as other PC devices, or even some other connected devices we will discuss below. Imaging servers see relatively minimal security assessment before coming to market.
We aren’t just picking on printers here. Pretty much every intelligent peripheral is similarly vulnerable, because they all have operating systems and network stacks which can be attacked. It’s just that offices tend to have dozens of printers, which are frequently overlooked during risk assessment.
If printers and other peripherals seem like low-value targets, let’s discuss something a bit higher-value: medical devices. In our era of increasingly connected medical devices – including monitors, pumps, pacemakers, and pretty much everything else – there hasn’t been much focus on product security, except in the few cases where external pressure is applied by regulators. These devices either have IP network stacks or can be configured via Bluetooth – neither of which is particularly well protected.
The most disturbing attacks threaten patient health. There are all too many examples of security researchers compromising infusion and insulin pumps, jackpotting drug dispensaries, and even the legendary Barnaby Jack messing with a pacemaker. We know one large medical facility that took it upon itself to hack all its devices in use, and deliver a list of issues to the manufacturers. But there has been no public disclosure of results, or whether device manufacturers have made changes to make their devices safe.
Despite the very real risk of medical devices being targeted to attack patient health, we believe most of the current risk involves information. User data is much easier for attackers to monetize; medical profiles have a much longer shelf-life and much higher value than typical financial information. So ensuring that Protected Health Information is adequately protected remains a key concern in healthcare.
That means making sure there aren’t any leakages in these devices, which is not easy without a full penetration test. On the positive front, many of these devices have purpose-built operating systems, so they cannot really be used as pivot points for lateral movement within the network. Yet few have any embedded security controls to ensure data does not leak. Further complicating matters, some devices still use deprecated operating systems such as Windows XP and even Windows 2000 (yes, seriously), and outdated compliance mandates often mean they cannot be patched without recertification. So administrators often don’t update the devices, and hope for the best. We can all agree that hope isn’t a sufficient strategy.
With lives at stake, medical device makers are starting to talk about more proactive security testing. Similarly to the way a major SaaS breach could prove an existential threat to the SaaS market, medical device makers should understand what is at risk, especially in terms of liability, but that doesn’t mean they understand how to solve the problem. So the burden lands on customers to manage their medical device inventories, and ensure they are not misused to steal data or harm patients.
Industrial Control Systems
The last category of shadow devices we will consider is control systems. These devices range from SCADA systems running power grids, to warehousing systems ensuring the right merchandise is picked and shipped, to manufacturing systems running robotics and heavy building machinery. All these devices are networked (whether directly or indirectly) in today’s advanced factories, so there is an attack surface to monitor and protect.
We know these systems can be attacked. Stuxnet was a very advanced attack on nuclear centrifuges. Once within the nuclear facility’s network, the adversaries compromised a number of different types of control systems to access the centrifuges and break them. In a recent attack on a German blast furnace, the control systems were compromised and the failsafes were rendered inoperable; the facility went offline while they cleaned the systems up.
In both cases, and likely many others that aren’t publicized, the adversaries are very advanced. They need to be – to attack a centrifuge like Stuxnet you need your own centrifuges to test on, and they aren’t exactly easy to find on eBay. You cannot just load a blast furnace into the pick-up Saturday morning for a pen-test.
That may comfort some people, but it shouldn’t. The implication is that control system defenders aren’t dealing with the Metasploit crowd, but instead trying to repel well-funded and capable adversaries. They need a very clear idea of what their attack surface looks like, and some way of monitoring their devices; they cannot rely on compliance mandates to require advanced security on their control systems.
Another consideration with control systems is the brittle nature of many of them. They are hard to test because you could bring down the system while trying to figure out whether it’s vulnerable. Most organizations don’t like that trade-off, so they don’t test directly. This means you need indirect techniques – definitely to figure out how vulnerable they are, and probably to discover and monitor them as well.
Which makes a good segue to our next post, where we will dig into two aspects of protecting shadow devices: Visibility and Control. First and foremost you need to figure out where these devices really are. Then you can worry about how to ensure they are not being attacked or misused.
Posted at Monday 9th May 2016 3:56 pm
It’s been a busy couple weeks, and the pace is only ramping up. This week I gave a presentation and a workshop at Interop. It seemed to go well, and the networking-focused audience was very receptive. Next week I’m out at the Rocky Mountain Infosec Conference, which is really just an excuse to spend a few more days back near my old home in Colorado. I get home just in time for my wife to take a trip, then even before she’s back I’m off to Atlanta to keynote an IBM Cybersecurity Seminar (free, if you are in the area). I’m kind of psyched for that one because it’s at the aquarium, and I’ve been begging Mike to take me for years.
Not that I’ve been to Atlanta in years.
Then some client gigs, and (hopefully) things will slow down a little until Black Hat. I’m updating our existing (now ‘basic’) cloud security class, and building the content for our Advanced Cloud Security and Applied SecDevOps class. It looks like it will be nearly all labs and whiteboarding, without too many lecture slides, which is how I prefer to learn.
This week’s stories are wide-ranging, and we are nearly at the end of our series highlighting continuous integration security testing tools. Please drop me a line if you think we should include commercial tools. We work with some of those companies, so I generally try to avoid specific product mentions. Just email.
You can subscribe to only the Friday Summary.
Top Posts for the Week
Tool of the Week
It’s time to finish off our series on integrating security testing tools into deployment pipelines with Mittn, which is maintained by F-Secure. Mittn is like Gauntlt and BDD-Security in that it wraps other security testing tools, allowing you to script automated tests into your CI server. Each of these tools defaults to a slightly different set of integrated security tools, and there’s no reason you can’t combine multiple tools in a build process.
Basically, when you define a series of tests in your build process, you tie one of these into your CI server as a plugin or use scripted execution. You pass in security tests using the template for your particular tool, and it runs your automated tests. You can even spin up a full virtual network environment to test just like production.
I am currently building this out myself, both for our training classes and our new securosis.com platform. For the most part it’s pretty straightforward… I have Jenkins pulling updates from Git, and am working on integrating Packer and Ansible to build new server images. Then I’ll mix in the security tests (probably using Gauntlt to start). It isn’t rocket science or magic, but it does take a little research and practice.
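As a rough illustration of the “scripted execution” option, here is a hedged Python sketch of the kind of gate script a CI job might run after the security tools finish: read the scanner’s findings and fail the build on anything at or above a severity threshold. The findings format, field names, and severity scheme are my assumptions for the sketch, not the actual output of Gauntlt, Mittn, or any particular scanner:

```python
# Sketch of a CI security gate. The CI server (Jenkins, in my setup) runs
# this after the security tests, and a non-zero exit fails the stage.
import json
import sys

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def gate(findings, threshold="high"):
    """Return the findings that should fail the build."""
    limit = SEVERITY_RANK[threshold]
    return [f for f in findings
            if SEVERITY_RANK.get(f.get("severity"), 0) >= limit]

def main(path, threshold="high"):
    # Findings are assumed to be a JSON list of {"severity": ..., "title": ...}
    with open(path) as fh:
        findings = json.load(fh)
    blockers = gate(findings, threshold)
    for f in blockers:
        print(f"BLOCKER [{f['severity']}] {f.get('title', 'unknown issue')}")
    sys.exit(1 if blockers else 0)
```

The nice part of a gate like this is that the threshold lives in code alongside the application, so loosening or tightening it goes through the same review process as everything else in the pipeline.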
Securosis Blog Posts this Week
Other Securosis News and Quotes
Another quiet week.
Training and Events
Posted at Friday 6th May 2016 6:00 am
As part of updating All Things Securosis, the time has come to migrate our mailing lists to a new provider (MailChimp, for the curious). The CAPTCHA at our old provider wasn’t working properly, so people couldn’t sign up. I’m not sure if that’s technically irony for a security company, but it was certainly unfortunate. So…
If you weren’t expecting this, for some reason our old provider had you listed as active! If so, we are really sorry; please click the unsubscribe link at the bottom of the email (yes, some of you are just reading this on the blog). We did our best to prune the list and only migrate active subscriptions (our lists were always self-subscribe to start), but the numbers look a little funny, and let’s just say there is a reason we switched providers. Really, we don’t want to spam you. We hate spam. If this shows up in your inbox unwanted, the unsubscribe link will work, and feel free to email or reply to us directly. I’m hoping only a few people are affected.
If you want to be added, we have two different lists – one for the Friday Summary (which is all cloud, security automation, and DevOps focused), and the Daily Digest of all emails sent the previous day. We only use these lists to send out email feeds from the blog, which is why I’m posting this on the site and not sending directly. We take our promises seriously and those lists are never shared/sold/whatever, and we don’t even send anything to them outside blog posts. Here are the signup forms:
Now if you received this in email, and sign up again, that’s very meta of you and some hipster is probably smugly proud.
Thanks for sticking with us, and hopefully we will have a shiny new website to go with our shiny new email system soon. But the problem with hiring designers that live in another state is flogging becomes logistically complex, and even the cookie bribes don’t work that well (especially since their office is, literally, right above a Ben and Jerry’s).
And again, apologies if you didn’t expect or want this in your inbox; we spent hours trying to pull only active subscribers and then clean everything up but I have to assume mistakes still happened.
Posted at Wednesday 4th May 2016 10:25 pm
In our wanderings we’ve noticed that when we pull our heads out of the bubble, not everyone necessarily understands what cloud is or where it’s going. Heck, many smart IT people are still framing it within the context of what they currently do. It’s only natural, especially when they get crappy advice from clueless consultants, but it certainly can lead you down some ugly paths. This week Mike, Adrian and Rich are also joined by Dave Lewis (who accidentally sat down next to Rich at a conference) to talk about how people see cloud, the gaps, and how to navigate the waters.
Watch or listen:
Posted at Tuesday 3rd May 2016 1:00 pm