Securosis

Research

Summary: June 10, 2016

Adrian here. A phone call about Activity Monitoring administrative actions on mainframes, followed by a call on security architectures for new applications in AWS. A call on SAP vulnerability scans, followed by a call on Runtime Application Self-Protection. A call on protecting relational databases against SQL injection, followed by a discussion of relevant values to key security event data for a big data analytics project. Consulting with a firm which releases code every 12 months, and discussing release management with a firm that is moving to two releases a day in a continuous deployment model. This is what my call logs look like. If you want to see how disruptive technology is changing security, you can just look at my calendar. On any given day I am working at both extremes in security. On one hand we have the old and well-worn security problems; familiar, comfortable, and boring. On the other hand we have new security problems, largely driven by cloud and mobile technologies, and the corresponding side-effects – such as hybrid architectures, distributed identity management, mobile device management, data security for uncontrolled environments, and DevOps. Answers are not rote, problems do not always have well-formed solutions, and crafting responses takes a lot of work. Worse, the answer I gave yesterday may be wrong tomorrow, if the pace of innovation invalidates my answer. This is our new reality. Some days it makes me dizzy, but I've embraced the new, if for no other reason than to avoid being run over by it. It's challenging as hell, but it's not boring. On to this week's summary: If you want to subscribe directly to the Friday Summary only list, just click here.
##Top Posts for the Week

* Azure Infrastructure Security Book Coming
* Fujitsu to Integrate Box into enterprise software
* Gene Kim on The Three Ways
* Big data increasingly a driver of cloud services
* Microsoft partners with Jenkins
* Oracle will sue the former employee who allegedly would not embrace cloud computing accounting methods

##Tool of the Week

I decided to take some time to learn about tools more common to clouds other than AWS. I was told Kubernetes was the GCP open source version of Docker, so I thought that would be a good place to start. After I spent some time playing with it, I realized what I was initially told was totally wrong! Kubernetes is called a "container manager", but it's really focused on setting up services. Docker focuses on addressing app dependencies and packaging; Kubernetes on app orchestration. And it runs anywhere you want – not just GCP and GCE, but in other clouds or on-premise. If you want to compare Kubernetes to something in the Docker universe, it's closest to Docker Swarm, which tackles some of the same management and scalability issues. Kubernetes has three basic parts: controllers that handle things like replication and pod behaviors; a simple naming system – essentially using key-value pairs – to identify pods; and a services directory for discovery, routing, and load balancing. A pod can be one or more Docker containers, or a standalone application. These three primitives make it pretty easy to stand up code, direct application requests, manage clusters of services, and provide basic load balancing. It's open source and works across different clouds, so your application should work the same on GCP, Azure, or AWS. It's not super easy to set up, but it's not a nightmare either. And it's incredibly flexible – once set up, you can easily create pods for different services, with entirely different characteristics. A word of caution: if you're heavily invested in Docker, you might instead prefer Swarm.
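Those three primitives are easy to picture in code. The following toy Python sketch is our own illustration – not the Kubernetes API or any real client library – showing how key-value labels let a "service" discover its pods and round-robin requests across them:

```python
# Toy illustration of Kubernetes-style label selection: pods carry
# key-value labels, and a service routes to whichever pods match its
# selector. (Our own sketch, not the actual Kubernetes API.)
import itertools

pods = [
    {"name": "web-1", "labels": {"app": "web", "tier": "frontend"}},
    {"name": "web-2", "labels": {"app": "web", "tier": "frontend"}},
    {"name": "db-1",  "labels": {"app": "db",  "tier": "backend"}},
]

def select_pods(selector, pods):
    """Return pods whose labels include every key-value pair in selector."""
    return [p for p in pods
            if all(p["labels"].get(k) == v for k, v in selector.items())]

# A "service" for the frontend: discovery plus trivial round-robin balancing.
frontend = select_pods({"tier": "frontend"}, pods)
balancer = itertools.cycle(p["name"] for p in frontend)
print(next(balancer), next(balancer), next(balancer))  # web-1 web-2 web-1
```

The real system layers replication controllers and health checks on top, but the discovery-by-label idea is the heart of it.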
Early versions of Kubernetes seemed to have Docker containers in mind, but the current version does not integrate with native Docker tools and APIs, so you have to duct tape some stuff together to get Docker-compliant containers. Swarm is compliant with Docker's APIs and works seamlessly. But don't be swayed by studies that compare container startup times as a main measure of performance; that is one of the least interesting metrics for comparing container management and orchestration tools. Operating performance, ease of use, and flexibility are all far more important. If you're not already a Docker shop, check out Kubernetes – its design is well thought out and purpose-built to tackle microservice deployment. I have not yet had a chance to use Google's Container Engine, but it is supposed to make setup easier, with a number of supporting services.

##Securosis Blog Posts this Week

* Evolving Encryption Key Management Best Practices: Use Cases
* Incite 6/7/2016: Nature
* Mr. Market Loves Ransomware
* Building a Vendor (IT) Risk Management Program (New Paper)
* Evolving Encryption Key Management Best Practices: Part 2

##Other Securosis News and Quotes

Mike did a webcast with Chris over at IANS.

##Training and Events

We are running two classes at Black Hat USA:

* Black Hat USA 2016 | Cloud Security Hands-On (CCSK-Plus)
* Black Hat USA 2016 | Advanced Cloud Security and Applied SecDevOps


Evolving Encryption Key Management Best Practices: Use Cases

This is the third in a three-part series on evolving encryption key management best practices. The first post is available here. This research is also posted at GitHub for public review and feedback. My thanks to Hewlett Packard Enterprise for licensing this research, in accordance with our strict Totally Transparent Research policy, which enables us to release our independent and objective research for free.

##Use Cases

Now that we've discussed best practices, it's time to cover common use cases. Well, mostly common – one of our goals for this research is to highlight emerging practices, so a couple of our use cases cover newer data-at-rest key management scenarios, while the rest are more traditional options.

###Traditional Data Center Storage

It feels a bit weird to use the word 'traditional' to describe a data center, but people give us strange looks when we call the most widely deployed storage technologies 'legacy'. We'd say "old school", but that sounds a bit too retro. Perhaps we should just say "big storage stuff that doesn't involve the cloud or other weirdness". We typically see three major types of data storage encrypted at rest in traditional data centers: SAN/NAS, backup tapes, and databases. We also occasionally see file servers encrypted, but they are in the minority. Each of these is handled slightly differently, but normally one of three 'meta-architectures' is used:

* **Silos:** Some storage tools include their own encryption capabilities, managed within the silo of the application/storage stack. For example, a backup tape system with built-in encryption, where the keys are managed by the tool within its own stack. In this case an external key manager isn't used, which can create a risk of application dependency and key loss unless the product is very well designed.
* **Centralized key management:** Rather than managing keys locally, a dedicated central key management tool is used. Many organizations start with silos, and later integrate them with central key management for advantages such as improved separation of duties, security, auditability, and portability. Increasing support for the KMIP and PKCS #11 standards enables major products to leverage remote key management capabilities, and exchange keys.
* **Distributed key management:** This is very common when multiple data centers either actively share information or are available for disaster recovery (hot standby). You could route everything through a single key manager, but that single point of failure would be a recipe for disaster. Enterprise-class key management products can synchronize keys between multiple key managers. Remote storage tools should connect to the local key manager to avoid WAN dependency and latency. The biggest issue with this design is typically ensuring the different locations synchronize quickly enough, which tends to be more of an issue for distributed applications balanced across locations than for hot standby sites, where data changes don't occur on both sides simultaneously. Another major concern is ensuring you can centrally manage the entire distributed deployment, rather than needing to log into each site separately.

Each of those meta-architectures can manage keys for all of the storage options we see in use, assuming the tools are compatible, even using different products. The encryption engine need not come from the same source as the key manager, so long as they are able to communicate. That's the essential requirement: the key manager and encryption engines need to speak the same language, over a network connection with acceptable performance. This often dictates the physical and logical location of the key manager, and may even require additional key manager deployments within a single data center. But there is never a single key manager. You need more than one for availability, whether in a cluster or using a hot standby.
As we mentioned under best practices, some tools support distributing only needed keys to each 'local' key manager, which can strike a good balance between performance and security.

###Applications

There are as many different ways to encrypt an application as there are developers in the world (just ask them). But again we see most organizations coalescing around a few popular options:

* **Custom:** Developers program their own encryption (often using common encryption libraries), and design and implement their own key management. These are rarely standards-based, and can become problematic if you later need to add key rotation, auditing, or other security or compliance features.
* **Custom with external key management:** The encryption itself is, again, programmed in-house, but instead of handling key management itself, the application communicates with a central key manager, usually via an API. Architecturally the key manager needs to be relatively close to the application server to reduce latency, depending on the particulars of how the application is programmed. In this scenario, security depends strongly on how well the application is programmed.
* **Key manager software agent or SDK:** This is the same architecture, but the application uses a software agent or pre-configured SDK provided with the key manager. This is a great option because it generally avoids common errors in building encryption systems, and should speed up integration, with more features and easier management – assuming everything works as advertised.
* **Key manager based encryption:** That's an awkward way of saying that instead of providing encryption keys to applications, each application provides unencrypted data to the key manager and gets encrypted data in return, and vice-versa.

We deliberately skipped file and database encryption, because they are variants of our "traditional data center storage" category, but we do see both integrated into different application architectures.
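To make the "custom with external key management" option a bit more concrete, here is a minimal Python sketch of the client side. Everything here is illustrative – the class, method names, and caching behavior are our own assumptions, not any vendor's SDK or the KMIP protocol itself – but it shows the pattern: keys live centrally, fetches are audited, caching keeps latency off the encrypt path, and rotation invalidates cached copies.

```python
# Sketch of the "custom with external key management" pattern.
# A real client would call the key manager over an authenticated
# network API (e.g. KMIP); key generation is simulated locally here
# so the sketch is self-contained.
import secrets
import time

class KeyManagerClient:
    def __init__(self, cache_ttl=300):
        self._keys = {}        # key_id -> key bytes ("server" side)
        self._cache = {}       # key_id -> (key, fetched_at) (client side)
        self.cache_ttl = cache_ttl
        self.audit_log = []    # central audit trail: who asked for what

    def create_key(self, key_id):
        self._keys[key_id] = secrets.token_bytes(32)  # 256-bit key
        self.audit_log.append(("create", key_id))

    def get_key(self, key_id, requester):
        # Cache locally to keep latency off the encrypt path,
        # but log every fetch from the central service.
        cached = self._cache.get(key_id)
        if cached and time.time() - cached[1] < self.cache_ttl:
            return cached[0]
        self.audit_log.append(("fetch", key_id, requester))
        key = self._keys[key_id]
        self._cache[key_id] = (key, time.time())
        return key

    def rotate_key(self, key_id):
        # Rotation replaces the key centrally and invalidates caches.
        self._keys[key_id] = secrets.token_bytes(32)
        self._cache.pop(key_id, None)
        self.audit_log.append(("rotate", key_id))

km = KeyManagerClient()
km.create_key("orders-db")
k1 = km.get_key("orders-db", requester="order-service")
k2 = km.get_key("orders-db", requester="order-service")  # served from cache
km.rotate_key("orders-db")
k3 = km.get_key("orders-db", requester="order-service")  # re-fetched
```

The same skeleton is what a vendor agent or SDK packages up for you, which is exactly why that option avoids so many home-grown mistakes.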
Based on our client work (in other words, a lot of anecdotes), application encryption seems to be the fastest growing option. It's also agnostic to your data center architecture, assuming the application has adequate access to the key manager. It doesn't really care whether the key manager is in the cloud, on-premise, or a hybrid.

###Hybrid Cloud

Speaking of hybrid cloud, after application encryption (usually in cloud deployments) this is where we see the most questions. There are two main use cases:

* **Extending existing key management to the cloud:** Many organizations already have a key manager they are happy with. As they move into the cloud they may either want to maintain consistency by using the same product,


Incite 6/7/2016: Nature

Like many of you, I spend a lot of time sitting on my butt banging away at my keyboard. I'm lucky that the nature of my work allows me to switch locations frequently, and I can choose to have a decent view of the world at any given time. Whether it's looking at a wide assortment of people in the various Starbucks I frequent, my home office overlooking the courtyard, or pretty much any place I can open my computer on my frequent business travels. Others get to spend all day in their comfy (or not so comfy) cubicles, and maybe stroll to the cafeteria once a day.

I have long thought that spending the day behind a desk isn't the most effective way to do things. Especially for security folks, who need to be building relationships with other groups in the organization and proselytizing the security mindset. But if you are reading this, your job likely involves a large dose of office work. Even if you are running from meeting to meeting, experiencing the best conference rooms, we spend our days inside breathing recycled air under the glare of fluorescent lights. Every time I have the opportunity to explore nature a bit, I remember how cool it is.

Over the long Memorial Day weekend, we took a short trip up to North Georgia for some short hikes, and checked out some cool waterfalls. The rustic hotel where we stayed didn't have cell service (thanks AT&T), but that turned out to be great. Except when Mom got concerned because she got a message that my number was out of service. But through the magic of messaging over WiFi, I was able to assure her everything was OK. I had to exercise my rusty map skills, because evidently the navigation app doesn't work when you have no cell service. Who knew? It was really cool to feel the stress of my day-to-day activities and responsibilities just fade away once we got into the mountains. We wondered where the water comes from to make the streams and waterfalls.
We took some time to speculate about how long it took the water to cut through the rocks, and we were astounded by the beauty of it all. We explored cute towns where things just run at a different pace. It really put a lot of stuff into context for me. I (like most of you) want it done yesterday, whatever we are talking about. Being back in nature for a while reminded me there is no rush. The waterfalls and rivers were there long before I got here. And they'll be there long after I'm gone. In the meantime I can certainly make a much greater effort to take some time during the day and get outside. Even though I live in a suburban area, I can find some green space. I can consciously remember that I'm just a small cog in a very large ecosystem. And I need to remember that the waterfall doesn't care whether I get through everything on my To Do list. It just flows, as should I.

–Mike

Photo credit: _"Panther Falls – Chattahoochee National Forest"_ – Mike Rothman May 28, 2016

——-

Security is changing. So is Securosis. Check out Rich's post on how [we are evolving our business](https://securosis.com/blog/security-is-changing.-so-is-securosis). We've published this year's _Securosis Guide to the RSA Conference_. It's our take on the key themes of this year's conference (which is really a proxy for the industry), as well as deep dives on cloud security, threat protection, and data security. And there is a ton of meme goodness… Check out the [blog post](https://securosis.com/blog/presenting-the-rsa-conference-guide-2016) or download [the guide directly (PDF)](https://securosis.com/assets/library/reports/SecurosisGuidetoRSAC-2016.pdf). The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. [You can check it out on YouTube.](http://youtu.be/nBua0KfbVx8) Take an hour. Your emails, alerts, and Twitter timeline will be there when you get back.

——-

##Securosis Firestarter

Have you checked out our video podcast?
Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.

* May 31 — [Where to Start?](https://securosis.com/blog/firestarter-where-to-start)
* May 2 — [What the hell is a cloud anyway?](https://securosis.com/blog/what-the-hell-is-a-cloud-anyway)
* Mar 16 — [The Rugged vs. SecDevOps Smackdown](https://securosis.com/blog/the-rugged-vs.-secdevops-smackdown)
* Feb 17 — [RSA Conference — The Good, Bad and Ugly](https://securosis.com/blog/firestarter-rsa-conference-the-good-bad-and-the-ugly)
* Dec 8 — [2015 Wrap Up and 2016 Non-Predictions](https://securosis.com/blog/2015-wrap-up-and-2016-non-predictions)
* Nov 16 — [The Blame Game](https://securosis.com/blog/the-blame-game)
* Nov 3 — [Get Your Marshmallows](https://securosis.com/blog/get-your-marshmallows)
* Oct 19 — [re:Invent Yourself (or else)](https://securosis.com/blog/reinvent-yourself-or-else)
* Aug 12 — [Karma](https://securosis.com/blog/karma)
* July 13 — [Living with the OPM Hack](https://securosis.com/blog/living-with-the-opm)
* May 26 — [We Don’t Know Sh–. You Don’t Know Sh–](https://securosis.com/blog/we-dont-know-sh-.-you-dont-know-sh)
* May 4 — [RSAC wrap-up. Same as it ever was.](https://securosis.com/blog/rsac-wrap-up.-same-as-it-ever-was)

——–

##Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our [Heavy Feed via RSS](http://securosis.com/feeds/blog-complete/), with our content in all its unabridged glory. And you can get [all our research papers](https://securosis.com/research/research-reports) too.
###Evolving Encryption Key Management Best Practices
* [Part 2](https://securosis.com/blog/evolving-encryption-key-management-best-practices-part-2)
* [Introduction](https://securosis.com/blog/evolving-encryption-key-management-best-practices-introduction)

###Incident Response in the Cloud Age
* [In Action](https://securosis.com/blog/incident-response-in-the-cloud-age-in-action)
* [Addressing the Skills Gap](https://securosis.com/blog/incident-response-in-the-cloud-age-addressing-the-skills-gap)
* [More Data, No Data, or Both?](https://securosis.com/blog/incident-response-in-the-cloud-age-more-data-no-data-or-both)
* [Shifting Foundations](https://securosis.com/blog/incident-response-in-the-cloud-age-shifting-foundations)

###Understanding and Selecting RASP
* [Integration](https://securosis.com/blog/understanding-and-selecting-rasp-integration)
* [Use Cases](https://securosis.com/blog/understanding-and-selecting-rasp-use-cases)
* [Technology Overview](https://securosis.com/blog/understanding-and-selecting-rasp-technology-overview)
* [Introduction](https://securosis.com/blog/understanding-and-selecting-rasp-new-series)

###Maximizing WAF Value
* [Management](https://securosis.com/blog/maximizing-waf-value-managing-your-waf)
* [Deployment](https://securosis.com/blog/maximizing-waf-value-deploying-the-waf)
* [Introduction](https://securosis.com/blog/maximizing-value-from-your-waf-new-series)

###Shadow Devices
* [Seeing into the Shadows](https://securosis.com/blog/shining-a-light-on-shadow-devices-seeing-into-the-shadows)
* [Attacks](https://securosis.com/blog/shining-a-light-on-shadow-devices-attacks)
* [The Exponentially Expanding Attack Surface](https://securosis.com/blog/shadow-devices-the-exponentially-expanding-attack-surface)

###Recently Published Papers
* [Building a Vendor (IT) Risk Management Program](https://securosis.com/research/papers/building-a-vendor-it-risk-management-program)
* [SIEM Kung Fu](https://securosis.com/research/papers/siem-kung-fu)
* [Securing Hadoop](https://securosis.com/research/papers/securing-hadoop-recommendations-for-hadoop-security)
* [Threat Detection Evolution](https://securosis.com/research/publication/threat-detection-evolution)
* [Building Security into DevOps](https://securosis.com/blog/building-security-into-devops-new-paper)
* [Pragmatic Security for Cloud and Hybrid Networks](https://securosis.com/research/publication/pragmatic-security-for-cloud-and-hybrid-networks)
* [EMV Migration and the Changing Payments Landscape](https://securosis.com/research/publication/emv-and-the-changing-payments-landscape)
* [Applied Threat Intelligence](https://securosis.com/research/publication/applied-threat-intelligence)
* [Endpoint Defense: Essential Practices](https://securosis.com/blog/new-paper-endpoint-defense-essential-practices)
* [Monitoring the Hybrid Cloud](https://securosis.com/blog/new-paper-monitoring-the-hybrid-cloud)
* [Best Practices for AWS Security](https://securosis.com/research/publication/security-best-practices-for-amazon-web-services)
* [The Future of Security](https://securosis.com/blog/new-paper-the-future-of-security-the-trends-and-technologies-transforming-s)

———–


Mr. Market Loves Ransomware

The old business rule is: when something works, do more of it. By that measure ransomware is clearly working. One indication is the number of new domains popping up which are associated with ransomware attacks. According to an Infoblox research report (and they provide DNS services, so they should know), there was a 35x increase in ransomware domains in Q1. You have also seen the reports of businesses getting popped when an unsuspecting employee falls prey to a ransomware attack; the ransomware is smart enough to find a file share and encrypt all those files too. And even when an organization pays, the fraudster is unlikely to just give them the key and go away. This is resulting in real losses to organizations – the FBI says organizations lost over $200 million in Q1 2016. Even if that number is inflated, it's a real business, so you will see a lot more of it. The attackers follow Mr. Market's lead, and clearly the 'market' loves ransomware right now. So what can you do, besides continuing to train employees not to click stuff? An article at NetworkWorld claims to have the answer for dealing with ransomware. They mention strategies for recovering faster via "regular and consistent backups along with tested and verified restores." This is pretty important – just be aware that you may be backing up encrypted files, so make sure you have backups from far enough back that you can recover the files from before the attack. This is obvious in retrospect, but backup/recovery is a good practice regardless of whether you are trying to deal with malware, ransomware, or hardware failure that puts data at risk. Their other suggested defense is to prevent the infection. The article's prescribed approach is application whitelisting (AWL). We are fans of AWL in specific use cases – here the ransomware wouldn't be allowed to run on devices, because it's not authorized. Of course the deployment issues with AWL, given how it can impact user experience, are well known.
Though we do find whitelisting appropriate for devices that don't change frequently or which hold particularly valuable information, so long as you can deal with the user resistance. They don't mention other endpoint protection solutions, such as isolation on endpoint devices. We have discussed the various advanced endpoint defense strategies, and will be updating that research over the next couple of months. Adding to the confusion, every endpoint defense vendor seems to be shipping a 'ransomware' solution… which is really just their old stuff, rebranded. So what's the bottom line? If you have an employee who falls prey to ransomware, you are going to lose data. The question is: how much? With advanced prevention technologies deployed, you may stop some of the attacks. With a solid backup strategy, you may minimize the amount of data you lose. But you won't escape unscathed.
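That "far enough back" logic is simple to sketch. Here is a minimal Python example, assuming you can estimate when the infection started; the function and data are our own illustration, not part of any backup product:

```python
# Picking a safe restore point: the newest backup taken *before* the
# estimated infection time, since later backups may already contain
# encrypted (i.e. ransomed) copies of the files.
from datetime import datetime

def safe_restore_point(backup_times, infection_time):
    """backup_times: iterable of datetimes for available backups.
    Returns the newest backup strictly before infection_time,
    or None if every backup may already be contaminated."""
    clean = [t for t in backup_times if t < infection_time]
    return max(clean) if clean else None

backups = [datetime(2016, 6, d) for d in (1, 3, 5, 7, 9)]
print(safe_restore_point(backups, datetime(2016, 6, 6)))  # 2016-06-05 00:00:00
```

The practical takeaway: your retention window has to be longer than your detection lag, or every restore point is suspect.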


Building a Vendor (IT) Risk Management Program [New Paper]

In Building a Vendor (IT) Risk Management Program, we explain why you can no longer ignore the risk presented by third-party vendors and other business partners, including managing an expanded attack surface and new regulations demanding effective management of vendor risk. We then offer ideas for how to build a structured and systematic program to assess vendor (IT) risk, and take action when necessary. We would like to thank BitSight Technologies for licensing the content in this paper. Our unique Totally Transparent Research model allows us to perform objective and useful research without requiring paywalls or other such nonsense, which make it hard for the people who need our research to get it. A day doesn't go by when we aren't thankful for all the companies who license our research. You can get the paper from the landing page in our research library.


Evolving Encryption Key Management Best Practices: Part 2

This is the second in a four-part series on evolving encryption key management best practices. The first post is available here. This research is also posted at GitHub for public review and feedback. My thanks to Hewlett Packard Enterprise for licensing this research, in accordance with our strict Totally Transparent Research policy, which enables us to release our independent and objective research for free.

##Best Practices

If there is one thread tying together all the current trends influencing data centers and how we build applications, it's distribution. We have greater demand for encryption in more locations in our application stacks – which now span physical environments, virtual environments, and increasing barriers even within our traditional environments. Some of the best practices we will highlight have long been familiar to anyone responsible for enterprise encryption. Separation of duties, key rotation, and meeting compliance requirements have been on the checklist for a long time. Others are familiar, but have new importance thanks to changes occurring in data centers. Providing key management as a service, and dispersing and integrating it into required architectures, aren't technically new, but they are in much greater demand than before. Then there are the practices which might not make the list, such as supporting APIs and distributed architectures (potentially spanning physical and virtual appliances). As you will see, the name of the game is consolidation for consistency and control, simultaneous with distribution to support diverse encryption needs, architectures, and project requirements. But before we jump into recommendations, keep our focus in mind. This research is for enterprise data centers, including virtualization and cloud computing. There are plenty of other encryption use cases out there which don't necessarily require everything we discuss, although you can likely still pick up a few good ideas.
###Build a key management service

Supporting multiple projects with different needs can easily result in a bunch of key management silos using different tools and technologies, which become difficult to support: one for application data, another for databases, another for backup tapes, another for SANs, and possibly even multiple deployments for the same functions, as individual teams pick and choose their own preferred technologies. This is especially true in the project-based agile world of the cloud, microservices, and containers. There's nothing inherently wrong with these silos, assuming they are all properly managed, but that is unfortunately rare. And overlapping technologies often increase costs. Overall we tend to recommend building centralized security services to support the organization, and this definitely applies to encryption. Let a smaller team of security and product pros manage what they are best at and support everyone else, rather than merely issuing policy requirements that slow down projects or drive them underground. For this to work the central service needs to be agile and responsive, ideally with internal Service Level Agreements to keep everyone accountable. Projects request encryption support; the team managing the central service determines the best way to integrate, and to meet security and compliance requirements; then they provide access and technical support to make it happen. This enables you to consolidate and better manage key management tools, while maintaining security and compliance requirements such as audit and separation of duties. Whatever tool(s) you select clearly need to support your various distributed requirements. The last thing you want to do is centralize but establish processes, tools, and requirements that interfere with projects meeting their own goals. And don't focus so exclusively on new projects and technologies that you forget about what's already in place.
Our advice isn't merely for projects based on microservices, containers, and the cloud – it applies equally to backup tapes and SAN encryption.

###Centralize but disperse, and support distributed needs

Once you establish a centralized service you need to support distributed access. There are two primary approaches, but we only recommend one for most organizations:

* **Allow access from anywhere.** In this model you position the key manager in a location accessible from wherever it might be needed. Typically organizations select this option when they want to maintain only a single key manager (or cluster). It was common in traditional data centers, but isn't well-suited for the kinds of situations we increasingly see today.
* **Distributed architecture.** In this model you maintain a core "root of trust" key manager (which can, again, be a cluster), but you also position distributed key managers which tie back to the central service. These can be a mix of physical and virtual appliances or servers. Typically they only hold the keys for the local application, device, etc. that needs them (especially when using virtual appliances or software on a shared service). Rather than connecting back to complete every key operation, the local key manager handles those, while synchronizing keys and configuration back to the central root of trust.

Why distribute key managers which still need a connection back home? Because they enable you to support greater local administrative control and meet local performance requirements. This architecture also keeps applications and services up and running in case of a network outage or other problem accessing the central service. This model provides an excellent balance between security and performance. For example you could support a virtual appliance in a cloud project, physical appliances in backup data centers, and backup keys used within your cloud provider with their built-in encryption service.
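The distributed model can be sketched in a few lines of Python. This is a conceptual illustration of the pattern only – the classes and methods are our own, not a real key manager's API: a local manager synchronizes just the keys its site needs from the central root of trust, and keeps serving them if the WAN link drops.

```python
# Sketch of a distributed key management pattern: a local key manager
# synchronizes keys from a central "root of trust" and keeps serving
# its local applications if the central service becomes unreachable.
import secrets

class CentralKeyManager:
    """The root of trust. Holds every key; may be temporarily offline."""
    def __init__(self):
        self.keys = {}
        self.online = True

    def create_key(self, key_id):
        self.keys[key_id] = secrets.token_bytes(32)

    def fetch(self, key_ids):
        if not self.online:
            raise ConnectionError("central key manager unreachable")
        return {k: self.keys[k] for k in key_ids if k in self.keys}

class LocalKeyManager:
    """Holds only the keys its local applications actually need."""
    def __init__(self, central, needed_key_ids):
        self.central = central
        self.needed = set(needed_key_ids)
        self.local_keys = {}

    def sync(self):
        # Periodic synchronization back to the root of trust.
        self.local_keys.update(self.central.fetch(self.needed))

    def get_key(self, key_id):
        # Served locally: no WAN round trip, and it works during outages.
        return self.local_keys[key_id]

central = CentralKeyManager()
central.create_key("backup-tapes")
central.create_key("san-volume-7")
site_b = LocalKeyManager(central, ["backup-tapes"])  # local keys only
site_b.sync()
central.online = False                  # simulate a WAN outage
key = site_b.get_key("backup-tapes")    # still available locally
```

Note that the local manager never receives "san-volume-7" at all: distributing only the needed keys limits the blast radius if a remote site is compromised.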
This way you can also support different technologies for distributed projects. The local key manager doesn’t necessarily need to be the exact same product as the central one, so long as they can communicate and both meet your security and compliance requirements. We have seen architectures where the central service is a cluster of Hardware Security Modules (appliances with key management features) supporting a distributed set of HSMs, virtual appliances, and even custom software. The biggest potential obstacle is providing safe, secure access back to the core. Architecturally you can usually manage this with some bastion systems to support key exchange, without opening the core to the Internet. There may still be use cases where you cannot tie everything together, but that should be your


Incident Response in the Cloud Age: In Action

When we do a process-centric research project, it works best to wrap up the series with a scenario that illuminates the concepts we've discussed throughout the series and makes things a bit more tangible. In this situation, imagine you work for a mid-sized retailer that uses a mixture of in-house technology and SaaS, and has recently moved a key warehousing system to an IaaS provider after rebuilding the application for cloud computing. You've got a modest-sized security team of 10, which is not enough, but a bit more than many of your peers have. Senior management understands why security is important (to a point) and gives you decent leeway, especially relative to the new IaaS application. In fact, you were consulted during the IaaS architecture phase and provided some guidance (with some help from your friends at Securosis) on building a Resilient Cloud Network Architecture and securing the cloud control plane. You also had the opportunity to integrate some orchestration and automation technology into the cloud technology stack.

##The Trigger

You have your team on pretty high alert, because a number of your competitors have recently been targeted by an organized crime ring that gained a foothold in their environments and proceeded to steal a ton of information about customers, pricing, and merchandising strategies. This isn't your first rodeo; you know that where there is smoke there is usually fire, so you decide to task one of your more talented security admins with a little proactive _hunting_ in your environment. Just to make sure there isn't anything going on. The admin starts to poke around by searching internal security data with some of the more recent samples of malware found in the attacks on the other retailers. The malware samples were provided by the retail industry's ISAC (information sharing and analysis center). The analyst got a hit on one of the samples, confirming what your gut told you.
You've got an active adversary on your network. So now you need to engage the incident response process.

## Job 1: Initial Triage

Now that you know there is a _situation_, you assemble the response team. There aren't many of you, and half the team needs to stay focused on operational tasks, because taking down systems wouldn't make you popular with senior management or the investors. You also don't want to jump the gun until you know what you're dealing with, so you inform the senior team of the situation but don't take any systems offline. Yet.

Since the adversary is active on the internal network, they most likely got in via phishing or another social engineering attack. The admin's searches turned up five devices with indications of the malware, so those devices are taken off the network immediately. Not shut down, but moved onto a separate network with Internet access, so as not to tip off the adversary to your discovery of their presence.

Then you check the network forensics tool, looking for indications that data has been leaking. There are a few suspicious file transfers, and luckily you integrated the firewall's egress filtering capability with the network forensics tool. So once the firewall flagged anomalous traffic headed to known-bad sites (via a threat intelligence integration on the firewall), you started capturing the network traffic originating from the devices which triggered the alert. Automatically. That automation stuff sure makes things easier than doing everything manually.

As part of your initial triage you have endpoint telemetry telling you there are issues, and network forensics data offering clues to what's leaking. That's enough to know you not only have an active adversary, but also have more than likely lost data. So you fire up the case management system, which will structure the investigation and store all its artifacts.
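To make the hunting step concrete, here is a minimal sketch of matching ISAC-shared indicators against endpoint telemetry. The hashes, device names, and data structures are all invented for illustration; a real hunt would query a SIEM or endpoint detection platform rather than in-memory dictionaries.

```python
# Hypothetical sketch: match ISAC-supplied malware file hashes against
# collected endpoint telemetry. All names, hashes, and data are
# illustrative, not real indicators.

# Indicators of compromise shared through the retail ISAC
IOC_HASHES = {
    "44d88612fea8a8f36de82e1278abb02f",
    "0cc175b9c0f1b6a831c399e269772661",
}

# Endpoint telemetry: device -> list of (file path, observed MD5)
ENDPOINT_INVENTORY = {
    "pos-terminal-07": [("C:/temp/updater.exe", "44d88612fea8a8f36de82e1278abb02f")],
    "hq-laptop-112":   [("/usr/bin/ls", "aabbccddeeff00112233445566778899")],
}

def hunt(inventory, iocs):
    """Return a map of device -> files whose hashes match known-bad IOCs."""
    hits = {}
    for device, files in inventory.items():
        matched = [path for path, digest in files if digest in iocs]
        if matched:
            hits[device] = matched
    return hits

print(hunt(ENDPOINT_INVENTORY, IOC_HASHES))
```

The same pattern scales up: sweep every device's file inventory against the shared indicator set, and any hit becomes the seed for the investigation.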
The team members are assigned their responsibilities and sent on their way, and you make the trek to the executive floor to keep senior management updated on the incident.

## Check the Cloud

The attack seems to have started on the internal network, but you don't want to take chances, so you need to make sure the new cloud-based application isn't at risk. A quick check of the cloud console shows strange activity on one of the instances: a device within the presentation layer of the cloud stack was flagged by the IaaS provider's monitoring system because of an unauthorized change on that instance. It looks like the time you spent setting up the configuration monitoring service was time well spent.

Because security was involved in the architecture of the cloud stack, you are in good shape. The application was built to be isolated, so even though the presentation layer appears to be compromised, the adversaries can't get to anything of value. And the cleanup has _already happened_. Once the IaaS monitoring system threw the alert, the instance in question was taken offline and put into a special security group accessible only to investigators, a forensic server was spun up, and some initial analysis was done. Another example of orchestration and automation facilitating the incident response process.

The presentation layer needs to handle large variances in traffic, so it was built with auto-scaling technology and immutable servers. Once the (potentially) compromised instance was removed from the group, another instance with a clean configuration was spun up to take over its workloads. It's not yet clear whether this attack is related to the other incident, so you pull down the information about the cloud attack to feed it into the case management system.
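The quarantine automation described above can be sketched roughly as follows. The function mimics the shape of boto3's EC2 client calls (`modify_instance_attribute` to swap security groups, `create_tags` to label the instance), but the instance and security group IDs are hypothetical, and a stub client stands in for a real AWS connection:

```python
# Hypothetical sketch of automated quarantine: on an unauthorized-change
# alert, move the instance into an investigators-only security group and
# tag it for the case file. The client follows the boto3 EC2 API shape;
# all IDs here are made up.

def quarantine_instance(ec2, instance_id, forensics_sg, case_id):
    # Replace all security groups with the locked-down forensics group,
    # cutting the instance off from the rest of the deployment.
    ec2.modify_instance_attribute(InstanceId=instance_id,
                                  Groups=[forensics_sg])
    # Tag the instance so artifacts land in the right case.
    ec2.create_tags(Resources=[instance_id],
                    Tags=[{"Key": "ir-case", "Value": case_id}])
    return {"instance": instance_id, "group": forensics_sg, "case": case_id}

class FakeEC2:
    """Stand-in for a boto3 EC2 client, for illustration only."""
    def __init__(self):
        self.calls = []
    def modify_instance_attribute(self, **kw):
        self.calls.append(("modify_instance_attribute", kw))
    def create_tags(self, **kw):
        self.calls.append(("create_tags", kw))

ec2 = FakeEC2()
result = quarantine_instance(ec2, "i-0abc123", "sg-forensics", "CASE-042")
print(result)
```

Wired to the monitoring system's alert, a function like this is what turns "the instance was taken offline and put into a special security group" from a manual runbook step into something that has already happened by the time a human looks at the alert.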
But even if it is related, this attack isn't presenting danger at this point, so you set it aside to focus on the internal attack and probable exfiltration.

## Building the Timeline

Now that you've done the initial triage, it's


Summary: June 3, 2016

Adrian here. Unlike my business partners, who have been logging thousands of air miles speaking at conferences and with clients around the country, I have been at home. And with the mildest spring in Phoenix's recorded history, it's been a blessing: we're 45 days past the point when we typically hit 100-degree days. Bike rides. Hiking. Running. That is, when I get a chance to sneak outdoors and enjoy it. With our pivot there is _even more_ writing and research going on than normal, if that's even possible. You will begin to see the results of this work within the next couple of weeks, and we are looking forward to putting a fresh face on the business. That launch will coincide with us posting lots more hands-on advice for cloud security and migrations. And as a heads-up, I'm going to be talking Big Data security over at SC Magazine on the 20th. I'll tweet out a link (follow @AdrianLane) next week if you're interested. If you want to subscribe directly to the Friday Summary only list, just [click here](http://eepurl.com/bQfTPH).

## Top Posts for the Week

* [Salesforce to Piggyback on Amazon's Growing Cloud](http://www.morningstar.com/news/dow-jones/TDJNDN_2016052511417/in-400-million-deal-salesforce-to-piggyback-on-amazons-growing-cloud.html)
* [Ex-VMware CEO now EVP of GCP](http://techcrunch.com/2016/05/30/diane-greene-wants-to-put-the-enterprise-front-and-center-of-google-cloud-strategy/)
* [Insights on Container Security with Azure Container Service (ACS)](https://blogs.msdn.microsoft.com/azuresecurity/2016/05/26/insights-on-container-security-with-azure-container-service-acs/)
* [Comparing IaaS providers](http://fortycloud.com/iaas-security-state-of-the-industry/)
* In 'not cloud' news, [Oracle accused of 'improper accounting' in attempt to pump up cloud sales](http://www.computerworld.com/article/3078156/cloud-computing/oracle-employee-says-she-was-fired-for-refusing-to-fiddle-with-cloud-accounts.html).
* [The Business Value of DevOps](http://devops.com/2016/06/02/devops-business-value/)

## Tool of the Week

"Server-less computing? What do you mean?" Rich and I were discussing cloud deployment options with one of the smartest engineering managers I know, and he was totally unaware of server-less cloud computing architectures. If he was unaware of this capability, odds are lots of other people are too. So this week, rather than a single tool, we'll cover a functional paradigm offered by multiple cloud service vendors. What is it? Stealing from Google's GCP page on the subject, because they capture the idea best: essentially a "lightweight, event-based, asynchronous solution that allows you to create small, single-purpose functions that respond to Cloud events without the need to manage a server or a runtime environment." What Google did not mention is that these functions tend to be very fast, and you can run multiple copies in parallel to scale up capacity. It's really the embodiment of micro-services. You can, in fact, construct an entire application from these functions. For example, take a stream of data and run it through a series of functions to process it. It could be audio or image file processing, real-time event data inspection, data transformation, data enrichment, data comparison, or any combination you can think of. The best part? There is _no server_. There is no OS to set up. No CPU or disk capacity to specify. No configuration files. No network ports to manage. It's simply a logical function running out there in the 'ether' of your public cloud. Google's version on GCP is called [Cloud Functions](https://cloud.google.com/functions/docs/). Amazon's version on AWS is called [Lambda functions](http://docs.aws.amazon.com/lambda/latest/dg/welcome.html). Microsoft's version on Azure is simply called [Functions](https://azure.microsoft.com/en-us/services/functions/).
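To make the idea concrete, here is a minimal AWS-Lambda-style handler. The event shape and the enrichment logic are invented for illustration; only the `handler(event, context)` signature follows Lambda's Python convention. There is no server to manage: the platform invokes the function once per event and runs as many copies in parallel as the event volume requires.

```python
# A minimal serverless-style function: enrich an incoming order event
# with a computed total. The event fields are hypothetical.

def handler(event, context=None):
    items = event.get("items", [])
    total = sum(i["price"] * i["qty"] for i in items)
    return {"order_id": event.get("order_id"), "total": total}

# Local invocation for testing; in production an event source such as
# S3, Kinesis, or an API gateway would trigger the function instead.
print(handler({"order_id": "A1", "items": [{"price": 2.5, "qty": 4}]}))
# -> {'order_id': 'A1', 'total': 10.0}
```

Chain several functions like this together, each triggered by the previous one's output, and you have the stream-processing pipeline described above with no OS, capacity, or ports to manage.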
Check the API documents, as they all work slightly differently, and some have specific storage requirements to act as endpoints, but the idea is the same. And pricing for these services is pretty low; with Lambda, for example, the first million requests are free, and it's 20 cents per million requests thereafter. This feature is one of the many reasons we tell companies to reconsider application architectures when moving to cloud services. We'll post some tidbits on security for these services in future blog posts. For now, we recommend you check them out!

## Securosis Blog Posts this Week

* [Incident Response in the Cloud Age: In Action](https://securosis.com/blog/incident-response-in-the-cloud-age-in-action)
* [Understanding and Selecting RASP: Integration](https://securosis.com/blog/understanding-and-selecting-rasp-integration)
* [Firestarter: Where to Start?](https://securosis.com/blog/firestarter-where-to-start)
* [Incident Response in the Cloud Age: Addressing the Skills Gap](https://securosis.com/blog/incident-response-in-the-cloud-age-addressing-the-skills-gap)

## Training and Events

* We are running two classes at Black Hat USA:
  * [Black Hat USA 2016 | Cloud Security Hands-On (CCSK-Plus)](https://www.blackhat.com/us-16/training/cloud-security-hands-on-ccsk-plus.html)
  * [Black Hat USA 2016 | Advanced Cloud Security and Applied SecDevOps](https://www.blackhat.com/us-16/training/advanced-cloud-security-and-applied-secdevops.html)


Firestarter: Where to start?

It's long past the day we need to convince you that cloud and DevOps are a thing. We all know it's happening, but one of the biggest questions we get is "Where do I start?" In this episode we scratch the surface of how to approach the problem when you don't get to join a hot unicorn startup and build everything from scratch with an infinite budget behind you. Watch or listen:


Understanding and Selecting RASP: Integration

This post offers examples of how to integrate RASP into a development pipeline. We'll cover both how RASP fits into the technology stack and how it fits the development processes used to deliver applications. We will close with a detailed discussion of how RASP differs from other security technologies, including its advantages and tradeoffs. As we mentioned in our introduction, our research into DevOps produced many questions about how RASP works and whether it is an effective security technology. The questions came from non-traditional buyers of security products: application developers and product managers. Their teams, by and large, were running Agile development processes. The majority were leveraging automation to provide Continuous Integration, essentially rebuilding and retesting the application repeatedly and automatically as new code was checked in. Some had gone as far as Continuous Deployment (CD) and DevOps.

To address this development-centric perspective, we offer the diagram below, which illustrates a modern Continuous Deployment / DevOps application build environment. Consider each arrow a script automating some portion of source code control, building, packaging, testing, or deployment of an application. Development teams are actively seeking security tools that fit this model: they need granular API access to functions, quick production of test results, and delivery of status back to supporting services.

## Application Integration

Installation: As we mentioned in the technology overview, RASP products differ in how they embed within applications. They all offer APIs to script configuration and runtime policies, but how and where they fit in differs slightly between products. Servlet filters, plugins, and library replacement are applied as the application stack is assembled; these approaches augment an application or application 'stack' to perform detection and blocking.
Virtualization and JVM replacement approaches augment runtime environments, modifying the subsystems that run your application to handle monitoring and detection. In all these cases, whether on-premise or as a cloud service, the process of installing RASP is pretty much identical to the build or deployment sequence you currently use.

Rules & Policies: We found the majority of RASP offerings include canned rules to detect or block most known attacks. Typically this blacklist of attack profiles maps closely to the OWASP Top Ten application vulnerability classes. Protection against common variants of standard attacks, such as SQL injection and session mismanagement, is included, and once these rules are installed they are immediately enforced. You can enable or disable individual rules as you see fit. Some vendors offer specific packages for critical attacks, mapped to specific CVEs such as Heartbleed. Bundles for specific threats, rather than generic attack classes, help security and risk teams demonstrate policy compliance, and make it easier to understand which threats have been addressed. But when shopping for RASP technologies you need to evaluate the provided rules carefully. There are many ways to attack a site with SQL injection, and many ways to detect and block such attacks, so you need to verify that the included rules cover most of the attack variants you are concerned with. You will also want to verify that you can augment or add rules as you see fit; rule management is a challenge for most security products, and RASP is no different.

Learning the application: Not all RASP technologies can learn how an application behaves or offer whitelisting of application behaviors, and those that do vary greatly in how they function. Some behave like their WAF cousins and need time to learn each application, whether by watching normal traffic over time or by generating their own traffic to 'crawl' each application in a non-production environment.
Some function similarly to white-box scanners, using application source code to learn.

Coverage capabilities: During our research we found uneven RASP coverage of common platforms. Some vendors started with Java or .NET, and are iterating to cover Python, Ruby, Node.js, and others, so your search for RASP technologies may be strongly influenced by available platform support. More and more, applications are built as collections of microservices across distributed architectures, with developers mixing and matching languages, choosing what works best for each scenario. If your application is built on Java you'll have no trouble finding RASP technology to meet your needs, but for mixed environments you will need to carefully evaluate each product's platform coverage.

## Development Process Integration

Software development teams leverage many different tools to promote security within their overarching application development and delivery processes. The graphic below illustrates the major phases teams go through; the callouts map the common types of security tests to specific phases within Agile, CI, and DevOps frameworks. Keep in mind that it is still early days for automated deployment and DevOps. Many security tools were built before rapid, automated deployment existed or was well known. Older products are typically too slow, some cannot focus their tests on new code, and others do not offer API support, so orchestration of security tools (basically what works where) is far from settled territory. The time each type of test takes to run, and the type of result it returns, drive where it fits best into the phases below.

RASP is designed to be bundled into applications, so it is part of the application delivery process. RASP offers two distinct approaches to application security: the first in the pre-release or pre-deployment phase, and the second in production. Either way, deployment looks very similar.
But usage can vary considerably depending on which is chosen.

Pre-release testing: This is exactly what it sounds like: RASP is used when the application is fully constructed and going through final tests prior to launch. Here RASP can be deployed in several ways. It can monitor only, using application tests and instrumented runtime behavior to learn how to protect the application. Alternatively RASP can monitor while security tests attempt to break the application, with RASP performing security analysis and transmitting its results. Development and testing teams can learn whether


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.