
Do We Have a Right to Security?

Don’t be distracted by the technical details. The model of phone, the method of encryption, the detailed description of the specific attack technique, and even its feasibility are all irrelevant. Don’t be distracted by the legal wrangling. By the timing, the courts, or the laws in question. Nor by politicians, proposed legislation, Snowden, or speeches at think tanks or universities. Don’t be distracted by who is involved. Apple, the FBI, dead terrorists, or common drug dealers. Everything, all of it, boils down to a single question: do we have a right to security?

This isn’t the government vs. some technology companies. It’s the government vs. your right to fundamental security in the digital age. Vendors like Apple have hit the point where some of the products they make, for us, are so secure that it is nearly impossible, if not impossible, to crack them. As a lifetime security professional, this is what my entire industry has been dreaming of since the dawn of computers. Secure commerce, secure communications, secure data storage. A foundation to finally start reducing all those data breaches, to stop China, Russia, and others from wheedling their way into our critical infrastructure. To make phones so secure they almost aren’t worth stealing, since even the parts aren’t worth much. To build the secure foundation for the digital age that we so lack, and so desperately need. So an entire hospital isn’t held hostage because one person clicked on the wrong link.

The FBI, DOJ, and others are debating whether secure products and services should be legal. They hide this in language around warrants and lawful access, and scream about terrorists and child pornographers. What they don’t say, what they never admit, is that it is impossible to build in back doors for law enforcement without creating security vulnerabilities. It simply can’t be done. If Apple, the government, or anyone else has master access to your device, to a service, or to communications, that is a security flaw. It is impossible for them to guarantee that criminals or hostile governments won’t also gain such access. This isn’t paranoia – it’s a demonstrable fact. No company or government is completely secure.

And this completely ignores the fact that if the US government makes security illegal here, it destroys any concept of security throughout the rest of the world, especially in repressive regimes. Say goodbye to any possibility of new democracies. Never mind the consequences here at home. Access to our phones and our communications these days isn’t like reading our mail or listening to our phone calls – it’s more like listening to the whispers to our partners at home. Like tracking how we express our love to our children, or fight the demons in our own minds.

The FBI wants this case to be about a single phone used by a single dead terrorist in San Bernardino, to distract us from asking the real question. It will not stop at this one case – that isn’t how law works. They are also teaming with legislators to make encrypted, secure devices and services illegal. That isn’t conspiracy theory – it is the stated position of the Director of the FBI. Eventually they want systems to access any device or form of communications, at scale, as they already have with our phone system. Keep in mind that there is no way to limit this to consumer technologies – it will have to apply to business systems as well, undermining corporate security.

So ignore all of that and ask yourself: do we have a right to security?
To secure devices, communications, and services? Devices secure from criminals, foreign governments, and yes, even our own? And by extension, do we have a right to privacy? Because privacy without security is impossible. Because that is what this fight is about, and there is no middle ground, no mystery answer hiding in a research project, no compromise.

I am a security expert. I have spent 25 years in public service and most definitely don’t consider myself a social activist. I am amused by conspiracy theories, but never take them seriously. But it would be unconscionable for me to remain silent when our fundamental rights are under assault by elements within our own government.


Building a Threat Intelligence Program: Gathering TI

[Note: We received some feedback on the series that prompted us to clarify what we meant by scale and context towards the end of the post. See? We do listen to feedback on the posts. – Mike] We started documenting how to build a Threat Intelligence program in our first post, so now it’s time to dig into the mechanics of thinking more strategically and systematically about how to benefit from the misfortune of others and make the best use of TI. It’s hard to use TI you don’t actually have yet, so the first step is to gather the TI you need.

Defining TI Requirements

There is a ton of external security data available. The threat intelligence market has exploded over the past year. Not only are dozens of emerging companies offering various kinds of security data, but many existing security vendors are trying to introduce TI services as well, to capitalize on the hype. We also see a number of new companies with offerings to help collect, aggregate, and analyze TI. But we aren’t interested in hype – which new products and services can improve your security posture? With no lack of options, how can you choose the most effective TI for you? As always, we suggest you start by defining your problem, and then identifying the offerings that would help you solve it most effectively. Start with your primary use case for threat intel. Basically, what is the catalyst to spend money? That’s the place to start. Our research indicates this catalyst is typically one of a handful of issues:

  • Attack prevention/detection: This is the primary use case for most TI investments. Basically you can’t keep pace with adversaries, so you need external security data to tell you what to look for (and possibly block). This budget tends to be associated with advanced attackers, so if there is concern about them within the executive suite, this is likely the best place to start.
  • Forensics: If you have a successful compromise, you will want TI to help narrow the focus of your investigation. This process is outlined in our Threat Intelligence + Incident Response research.
  • Hunting: Some organizations have teams tasked to find evidence of adversary activity within the environment, even if existing alerting/detection technologies are not finding anything. These skilled practitioners can use new malware samples from a TI service effectively, and can also use the latest information about adversaries to look for them before they act overtly (and trigger traditional detection).

Once you have identified primary and secondary use cases, you need to look at potential adversaries. Specific TI sources – both platform vendors and pure data providers – specialize in specific adversaries or target types. Take a similar approach with adversaries: understand who your primary attackers are likely to be, and find providers with expertise in tracking them. The last part of defining TI requirements is to decide how you will use the data. Will it trigger automated blocking on active controls, as described in Applied Threat Intelligence? Will data be pumped into your SIEM or other security monitors for alerting, as described in Threat Intelligence and Security Monitoring? Will TI only be used by advanced adversary hunters? You need to answer these questions to understand how to integrate TI into your monitors and controls (a small sketch of this plumbing appears at the end of this post). When thinking about threat intelligence programmatically, think not just about how you can use TI today, but also about what you want to do further down the line. Is automatic blocking based on TI realistic?
If so, that raises different considerations than just monitoring. This aspirational thinking can demand flexibility that gives you better options moving forward. You don’t want to be tied to a specific TI data source, and maybe not even to a specific aggregation platform. A TI program is about how to leverage data in your security program, not how to use today’s data services. That’s why we suggest focusing on your requirements first, and then finding optimal solutions.

Budgeting

After you define what you need from TI, how will you pay for it? We know, that’s a pesky detail, but as you set up a TI program it is important to figure out which executive sponsors will support it and whether that funding source is sustainable. When a breach happens, a ton of money gets spent on anything and everything to make it go away. There is no resistance to funding security projects, until there is – which tends to happen once the road rash heals a bit. So you need to line up support for using external data and ensure you have a funding source that sees the value of the investment now and in the future. Depending on your organization, security may have its own budget to spend on key technologies; in that case you can just build the cost into the security operations budget, because TI is sold on a subscription basis. If you need to associate specific spending with specific projects, you’ll need to find the right budget sources. We suggest you stay as close to advanced threat prevention/detection as you can, because that’s the easiest case to make for TI. How much money do you need? Of course that depends on the size of your organization. At this point many TI data services are priced at a flat annual rate, which is great for a huge company that can leverage the data. If you have a smaller team you’ll need to work with the vendor on lower pricing or different pricing models, or look at lower-cost alternatives. For TI platform expenditures, which we will discuss later in the series, you will probably be looking at a per-seat cost. As you build out your program it makes sense to talk to some TI providers to get preliminary quotes on what their services cost. Don’t get these folks engaged in a sales cycle before you are ready, but you need a feel for current pricing – that is something any potential executive sponsor needs to know. While we are discussing money, this is a good point to start thinking about how to quantify the
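As promised above, here is a minimal sketch of what pumping TI into monitoring and blocking workflows can look like. The feed URL, CSV layout, and confidence threshold are illustrative assumptions, not any particular vendor’s API – in practice a TI platform or SIEM connector handles this plumbing, but the separation of high-confidence blocking from lower-confidence alerting is the decision that matters:

    # Minimal sketch: pull a hypothetical TI feed and stage indicators for a
    # SIEM or blocklist. The feed URL and CSV columns are illustrative only;
    # real TI services and platforms each have their own formats and APIs.
    import csv
    import io
    import json
    import urllib.request

    FEED_URL = "https://ti.example.com/indicators.csv"  # hypothetical endpoint

    def fetch_indicators(url=FEED_URL):
        with urllib.request.urlopen(url) as resp:
            reader = csv.DictReader(io.TextIOWrapper(resp, encoding="utf-8"))
            # Assumed columns: indicator, type, confidence
            return list(reader)

    def stage_indicators(indicators, min_confidence=80):
        """High-confidence IPs go to active blocking; everything else to monitoring."""
        block, monitor = [], []
        for ind in indicators:
            if ind["type"] == "ip" and int(ind["confidence"]) >= min_confidence:
                block.append(ind["indicator"])
            else:
                monitor.append(ind["indicator"])
        return block, monitor

    if __name__ == "__main__":
        block, monitor = stage_indicators(fetch_indicators())
        print(json.dumps({"block": block, "monitor": monitor}, indent=2))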


Summary: Law Enforcement and the Cloud

While the big story this week was the FBI vs. Apple, I’d like to highlight something a little more relevant to our focus on the cloud. You probably know about the DOJ vs. Microsoft. This is a critically important case, in which the US government wants to assert access to data held by the foreign branch of a US company, putting it in conflict with local privacy laws. I highly recommend you take a look, and we will post updates here. Beyond that, I’m sick and shivering with a fever, so enough small talk – time to get to the links. Posting is slow for us right now because we are all cramming for RSA, but you are probably used to that. BTW – it’s hard to find good sources for cloud and DevOps news and tutorials. If you have links, please email them to info@securosis.com. If you want to subscribe directly to the Friday Summary only list, just click here. And don’t forget: The EIGHTH Annual Disaster Recovery Breakfast: Clouds Ahead.

Top Posts for the Week

  • Huge, HUGE vulnerability you need to start patching: Magnitude of glibc Vulnerability Coming to Light
  • Cloud Security Alliance hackathon offers $10,000 prize. This is for the Software Defined Perimeter project.
  • Another great CloudAcademy post. This is something we work on in every single client engagement. Down the road we will detail our process and recommendations. Centralized Log Management with AWS CloudWatch: Part 1 of 3
  • We’ve posted a bit on this ourselves, and I talk about it a lot in presentations, but this is a very cogent view of some of the security advantages of the cloud. Bill Shinn and I will be going more in-depth in our RSA presentation. How the Cloud Simplifies Security
  • Oops. VMware re-issues patch after vCenter fix fails to ‘completely’ fix bug
  • Designed for mobile apps, but also has cloud implications: Tidas: a new service for building password-less apps
  • Last week we talked about logging in our Tool of the Week. Here’s a slightly older AWS post on building everything cloud-native. Personally, I’m still torn on which pattern I like better. I think it will largely come down to costs, because you can also build alerts based on Kinesis events.

Tool of the Week

This is a new section highlighting a cloud, DevOps, or security tool we think you should take a look at. We still struggle to keep track of all the interesting tools that can help us, and if you have submissions please email them to info@securosis.com. One issue that comes up a lot in client engagements is the best “unit of deployment” to push applications into production. That’s a term I might have made up, but I’m an analyst, so we do that. Conceptually there are three main ways to push application code into production:

  • Update code on running infrastructure. Typically using configuration management tools (Chef/Puppet/Ansible/Salt), code-specific deployment tools like Capistrano, or a cloud-provider-specific tool like AWS CodeDeploy. The key is that a running server is updated.
  • Deploy custom images, and use them to replace running instances. This is the very definition of immutable, because you never log into or change a running server – you replace it. This relies heavily on auto scaling. It is a more secure option, but it can take time for new instances to deploy, depending on complexity and boot time.
  • Containers. Create a new container image and push that. It’s similar to custom images, but containers tend to launch much more quickly.

As you can guess, I prefer the second two options, because I like locking down my instances and disabling any changes.
That can really take security to the next level. Which brings us to our tool this week: Packer, by HashiCorp. Packer is one of the best tools to automate creation of those images. It integrates with nearly everything, works on multiple cloud and container platforms, and even includes its own lightweight engine to run deployment scripts. Packer is an essential tool in the DevOps/cloud quiver, and can really enhance security because it enables you to adopt immutable infrastructure (a minimal template sketch appears at the end of this post).

Securosis Blog Posts this Week

  • Firestarter: RSA Conference – the Good, Bad, and the Ugly.
  • Securing Hadoop: Technical Recommendations.
  • Securing Hadoop: Enterprise Security For NoSQL.

Other Securosis News and Quotes

I posted a piece at Macworld on the FBI vs. Apple that has gotten a lot of attention. It got linked all over the place and I did a bunch of interviews, but I won’t spam you with them. We are posting all our RSA Conference Guide posts over at the RSA Conference blog – here are the latest:

  • Securosis Guide: Training Security Jedi
  • Securosis Guide: The Beginning of the End(point) for the Empire
  • Securosis Guide: Escape from Cloud City

Training and Events

We are giving multiple presentations at the RSA Conference:

  • Rich and Mike are giving the Cloud Security Accountability Tour
  • Rich is co-presenting with Bill Shinn of AWS: Aspirin as a Service: Using the Cloud to Cure Security Headaches
  • David Mortman is presenting: Learning from Unicorns While Living with Legacy; Docker: Containing the Security Excitement; Docker: Containing the Security Excitement (Focus-On); and Leveraging Analytics for Data Protection Decisions
  • Rich is giving a presentation on Rugged DevOps at Scale at DevOps Connect the Monday of RSAC

We are running two classes at Black Hat USA:

  • Cloud Security Hands-On (CCSK-Plus)
  • Advanced Cloud Security and Applied SecDevOps
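Returning to the Tool of the Week: here is roughly what a minimal Packer template for baking an immutable AWS image looks like. Treat it as a sketch – the source AMI, package names, and hardening steps are placeholders, not a recommended build:

    {
      "builders": [{
        "type": "amazon-ebs",
        "region": "us-east-1",
        "source_ami": "ami-xxxxxxxx",
        "instance_type": "t2.micro",
        "ssh_username": "ec2-user",
        "ami_name": "app-immutable-{{timestamp}}"
      }],
      "provisioners": [{
        "type": "shell",
        "inline": [
          "sudo yum -y update",
          "sudo yum -y install my-app",
          "sudo chkconfig sshd off"
        ]
      }]
    }

Run packer build template.json in the deployment pipeline, point your Auto Scaling launch configuration at the new AMI, and replace instances rather than changing them. Disabling SSH in the image (the last provisioner line, a placeholder hardening step) is what makes it truly immutable.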


Firestarter: RSA Conference—the Good, Bad, and the Ugly

Every year we focus a lot on the RSA Conference. Love it or hate it, it is the biggest event in our industry. As we do every year, we break down some of the improvements and disappointments we expect to see. Plus, we spend a few minutes talking about some of the big changes coming here at Securosis. We cover a possibly-insulting keynote, the improvements in the sessions, and how we personally use the event to improve our knowledge. Watch or listen:


Securing Hadoop: Technical Recommendations

Before we wrap up this series on securing Hadoop databases, I am happy to announce that Vormetric has asked to license this content, and Hortonworks is evaluating a license as well. It’s community support that allows us to bring you this research free of charge. Also, I’ve received a couple of email and Twitter responses to the content; if you have more input to offer, now is the time to send it along to be evaluated with the rest of the feedback, as we will assemble the final paper in the coming week. And with that, on to the recommendations.

The following are our security recommendations to address security issues with Hadoop and NoSQL database clusters. The last time we made recommendations we joked that many security tools broke Hadoop scalability: your cluster was secure because it was likely no one would use it. Fast forward four years, and both commercial and open source technologies have advanced considerably – not only addressing the threats you’re worried about, but designed specifically for Hadoop. This means the possibility that a security tool will compromise cluster performance and scalability is low, and the integration hassles of old are mostly behind us. In fact, it’s because of the rapid technical advancements in the open source community that we have done an about-face on where to look for security capabilities. We are no longer focused just on third-party security tools, but largely on the open source community, which has helped close the major gaps in Hadoop security. That said, many of these capabilities are new, and like most new things, lack a degree of maturity. You still need to go through a tool selection process based upon your needs, and then do the integration and configuration work.

Requirements

As security in and around Hadoop is still relatively young, it is not a foregone conclusion that all security tools will work with a clustered NoSQL database. We still witness instances where vendors parade the same old products they offer for other back-office systems and relational databases. To ensure you are not duped by security vendors, you still need to do your homework: evaluate products to ensure they are architecturally and environmentally consistent with the cluster architecture – not in conflict with the essential characteristics of Hadoop. Any security control used for NoSQL must meet the following requirements:

  1. It must not compromise the basic functionality of the cluster.
  2. It should scale in the same manner as the cluster.
  3. It should address a security threat to NoSQL databases or data stored within the cluster.

Our Recommendations

In the end, our big data security recommendations boil down to a handful of standard tools which can be effective in setting a secure baseline for Hadoop environments:

Use Kerberos for node authentication: We believed – at the outset of this project – that we would no longer recommend Kerberos. Implementation and deployment challenges with Kerberos suggested customers would go in a different direction. We were 100% wrong. Our research showed that adoption has increased considerably over the last 24 months, specifically because the enterprise distributions of Hadoop have streamlined the integration of Kerberos, making it reasonably easy to deploy. Now, more than ever, Kerberos is being used as a cornerstone of cluster security. It remains effective for validating nodes and – for some – authenticating users. But other security controls piggy-back off Kerberos as well.
Kerberos is one of the most effective security controls at our disposal, it’s built into the Hadoop infrastructure, and enterprise bundles make it accessible – so we recommend you use it.

Use file layer encryption: Simply stated, this is how you will protect data. File encryption protects against two attacker techniques for circumventing application security controls: it protects data if malicious users or administrators gain access to data nodes and directly inspect files, and it renders stolen files or copied disk images unreadable. Oh, and if you need to address compliance or data governance requirements, data encryption is not optional. While it may be tempting to rely upon encrypted SAN/NAS storage devices, they don’t provide protection from credentialed user access, granular protection of files, or multi-key support. File layer encryption provides consistent protection across different platforms, regardless of OS/platform/storage type, with some products even protecting encryption operations in memory. Just as important, encryption meets our requirements for big data security – it is transparent to both Hadoop and calling applications, and scales out as the cluster grows. But you have a choice to make: use open source HDFS encryption, or a third-party commercial product. Open source products are freely available, and have open source key management support. But keep in mind that the HDFS encryption engine only protects data on HDFS, leaving other types of files exposed; commercial variants that work at the file system layer cover all files. Open source options also lack some of the external key management, trusted binaries, and full support that commercial products offer. Free is always nice, but for many of those we polled, complete coverage and support tilted the balance for enterprise customers. Regardless of which option you choose, this is a mandatory security control (a short command-line sketch of the open source option follows this post).

Use key management: File layer encryption is not effective if an attacker can access encryption keys. Many big data cluster administrators store keys on local disk drives because it’s quick and easy, but it’s also insecure, as keys can be collected by the platform administrator or an attacker. And we are seeing keytab files sitting around unprotected in file systems. Use a key management service to distribute keys and certificates, and manage different keys for each group, application, and user. This requires additional setup, and possibly commercial key management products to scale with your big data environment, but it’s critical. Most of the encryption controls we recommend depend on key/certificate security.

Use Apache Ranger: In the original version of this research we were most worried about the use of a dozen modules with Hadoop, all deployed with ad hoc configuration, hidden within the complexities of the cluster, each offering up a unique attack surface to potential attackers. Deployment validation
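As promised above, here is a minimal sketch of the open source option: setting up an HDFS transparent encryption zone (Hadoop 2.6+). It assumes a Hadoop KMS is already configured as the key provider; the key and path names are illustrative:

    # Create a key in the Hadoop KMS, then bind it to an empty HDFS directory
    # as an encryption zone. Files written there are encrypted transparently.
    hadoop key create finance-key
    hdfs dfs -mkdir -p /secure/finance
    hdfs crypto -createZone -keyName finance-key -path /secure/finance
    hdfs crypto -listZones    # verify the zone was created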


Securing Hadoop: Enterprise Security For NoSQL

Hadoop is now enterprise software. There, I said it. I know lots of readers in the IT space still look at Hadoop as an interloper, or worse, part of the rogue IT problem. But better than 50% of the enterprises we spoke with are running Hadoop somewhere within the organization. A small percentage are running Mongo, Cassandra, or Riak in parallel with Hadoop, for specific projects. Discussions of what ‘big data’ is, whether it is a viable technology, or even whether open source can be considered ‘enterprise software’ are long past. What began as proof-of-concept projects have matured into critical application services. And with that change, IT teams are now tasked with getting a handle on Hadoop security, to which they respond with questions like “How do I secure Hadoop?” and “How do I map existing data governance policies to NoSQL databases?”

Security vendors will tell you both attacks on corporate IT systems and data breaches are prevalent, so with gobs of data under management, Hadoop provides a tempting target for ‘hackers’. All of which is true, but as of today there really have not been major data breaches where Hadoop played a part in the story. As such, this sort of FUD carries little weight with IT operations. But make no mistake: security is a requirement! As sensitive information, customer data, medical histories, intellectual property, and just about every type of data used in enterprise computing is now commonly used in Hadoop clusters, the ‘C’ word (i.e., Compliance) has become part of their daily vocabulary. One of the big changes we’ve seen in the last couple of years is Hadoop becoming business-critical infrastructure, and another – directly caused by the first – is IT being tasked with bringing existing clusters in line with enterprise compliance requirements. This is somewhat challenging, as a fresh install of Hadoop suffers all the same weak points traditional IT systems have, so it takes work to get security set up and reports generated. For clusters that are already up and running, teams need to choose technologies and a deployment roadmap that do not upset ongoing operations. On top of that, there is the additional challenge that the in-house tools you use to secure things like SAP, or the SIEM infrastructure you use for compliance reporting, may not be suitable when it comes to NoSQL.

Building security into the cluster

The number of security solutions that are compatible with – if not outright built for – Hadoop is the biggest change since 2012. All of the major security pillars – authentication, authorization, encryption, key management, and configuration management – are covered, and the tools are viable. Most of the advancements have come from the firms that provide enterprise distributions of Hadoop. They have built, and in many cases contributed back to the open source community, security tools that accomplish the basics of cluster security. When you look at the threat-response models introduced in the previous two posts, every compensating security control is now available. Better still, they have done a lot of the integration legwork for services like Kerberos, taking a lot of the pain out of deployments. Here are some of the components and functions that were not available – or not viable – in 2012.

LDAP/AD Integration – Technically AD and LDAP integration were available in 2012, but these services have both advanced, and are easier to integrate than before.
In fact, this area has received the most attention, and integration is as simple as a setup wizard with some of the commercial platforms. The benefits are obvious, as firms can leverage existing access and authorization schemes, and defer user and role management to external sources.

Apache Ranger – Ranger is one of the more interesting technologies to become available, and it closes the biggest gap: module security policies and configuration management. It provides a tool for cluster administrators to set policies for different modules like Hive, Kafka, HBase, or YARN. What’s more, those policies are in the context of the module, so it sets policies for files and directories in HDFS, SQL policies in Hive, and so on. This helps with data governance and compliance, as administrators set how a cluster should be used, or how data is to be accessed, in ways that simple role-based access controls cannot (a rough policy sketch appears at the end of this post).

Apache Knox – You can think of Knox in its simplest form as a Hadoop firewall. More correctly, it is an API gateway. It handles HTTP and RESTful requests, enforcing authentication and usage policies on inbound requests, and blocking everything else. Knox can be used as a ‘virtual moat’ around a cluster, or used with network segmentation to further reduce the network attack surface.

Apache Atlas – Atlas is a proposed open source governance framework for Hadoop. It allows you to annotate files and tables, set relationships between data sets, and even import metadata from other sources. These features are helpful for reporting, data discovery, and controlling access. Atlas is new, and we expect it to mature significantly in coming years, but for now it offers some valuable tools for basic data governance and reporting.

Apache Ambari – Ambari is a facility for provisioning and managing Hadoop clusters. It helps admins set configurations and propagate changes to the entire cluster. During our interviews we only spoke to two firms using this capability, but we received positive feedback from both. Additionally, we spoke with a handful of companies who had written their own configuration and launch scripts, with pre-deployment validation checks, usually for cloud and virtual machine deployments. This latter approach was more time-consuming to create, but offered greater capabilities, with each function orchestrated within IT operational processes (e.g. continuous deployment, failure recovery, DevOps). For most, Ambari’s ability to get you up and running quickly and provide consistent cluster management is a big win and a suitable choice.

Monitoring – Hive, PIQL, Impala, Spark SQL, and similar modules offer SQL or pseudo-SQL syntax. This means that the activity monitoring, dynamic masking, redaction, and tokenization technologies originally developed for
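To illustrate the module-aware policies Ranger provides, here is a rough sketch of creating an HDFS path policy through the Ranger admin’s REST API. The endpoint shape follows Ranger’s public v2 API, but treat the host, credentials, service, user, and field values as illustrative assumptions and check your distribution’s documentation:

    # Sketch: define a read-only HDFS policy for one user via Ranger's REST API.
    # Host, credentials, service, and user names are placeholders.
    curl -u admin:password -H 'Content-Type: application/json' \
      -X POST 'http://ranger-admin.example.com:6080/service/public/v2/api/policy' \
      -d '{
        "service": "cluster1_hadoop",
        "name": "hr-data-read-only",
        "resources": { "path": { "values": ["/data/hr"], "isRecursive": true } },
        "policyItems": [{
          "users": ["hr_analyst"],
          "accesses": [{ "type": "read", "isAllowed": true }]
        }]
      }'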


The Summary is dead. Long live the Summary!

As part of our changes at Securosis this year, it’s time to say goodbye to the old Friday Summary, and hello to the new one. Adrian and I started the Summary way back before Mike joined the company, as our own version of his weekly Security Incite. Our objective was to review the highlights of the week, both our work and things we found on the Internet, typically with an introduction based on events in our personal lives. As we look at growing and changing our focus this year, it’s time for a different format. Mike’s Incite (usually released on Wednesdays) does a great job highlighting important security stories, or whatever we find interesting. The Summary has always overlapped a bit. We also developed a tendency to overstuff it with links. Moving forward we are switching gears, and the Summary will now focus on our main coverage areas: cloud, DevOps, and automation security. The new sections will be more tightly curated and prioritized, to better fit a weekly newsletter format for folks who don’t have time to keep up on everything. We plan to keep the Incite our source for general security industry analysis, with the revised Summary targeting our new focus areas. We are also changing our email list provider from Aweber to MailChimp due to an ongoing technical issue. As part of that switch we will soon offer more email subscription options, which we used to have. You can pick the daily digest of all our posts, the weekly Incite, and/or the weekly Summary. If you want to subscribe directly to the Friday Summary only, just click here. If you have any feedback, as always, please feel free to leave a comment or email us at info@securosis.com. And don’t forget: The EIGHTH Annual Disaster Recovery Breakfast: Clouds Ahead.

Top Posts for the Week

  • We missed it when it was released, but Google now has limited management plane logging support. It still isn’t up to CloudTrail, and it’s still in beta, but this is one of the most critical security capabilities enterprises need from a cloud provider. Rumor is Microsoft also has it in beta.
  • This is another good example of using AWS capabilities for security functionality. This is the sort of thing that is built into most WAFs (including cloud WAFs), but we like this post more for showing how you can automate and wire things together than for its particular use case. How to Configure Rate-Based Blacklisting with AWS WAF and AWS Lambda
  • A good non-security perspective on Continuous Delivery. We see a lot of organizations throw the term (along with DevOps) around without focusing on some of the foundational things you need to make it work. Are you ready for Continuous Delivery?
  • GitHub posted a good incident report. This can serve as a decent model for both security and non-security incidents: January 28th Incident Report
  • Node is really popular, but still gives us the security willies at times. This good piece lays out some of the issues: The battle for Node.js security has only begun
  • CloudFormation and other immutable infrastructure tools often have gaps, especially when new products are released. Here’s how to use Python to deal with them, using a security example: Customizing CloudFormation with Python
  • Props to Amazon for this one: AWS’ exhaustive terms of service covers zombie outbreaks

Tool of the Week

This is a new section highlighting a cloud, DevOps, or security tool we think you should take a look at. We still struggle to keep track of all the interesting tools that can help us; if you have submissions please email them to info@securosis.com.
We are still looking at how we want to handle logging as we rearchitect securosis.com. Our friend Matt J. recommended I look at the fluentd open source log collector. It looks like a good replacement for Logstash, which is pretty heavy and can be hard to configure. You can pump everything into fluentd on an instance, container, or auto-scaled cluster if you need it. It can perform analysis right there, plus you can send logs down the chain to things like ElasticSearch/Kibana, AWS Kinesis, or different kinds of storage. What I really like is how it normalizes data into JSON as much as possible, which is great because that’s how we are structuring all our Trinity application logs. Our plan is to use fluentd with some basic rules for securosis.com, pushing the logs into AWS-hosted ElasticSearch (to reduce management overhead), and then Kibana, to roll our own SIEM (a minimal config sketch appears at the end of this post). We see a bunch of clients following a similar approach. This also fits well into cloud logging architectures where you collect the logs locally and only send alerts back to the SOC. Especially with S3 support, that can really reduce overall costs.

Securosis Blog Posts this Week

  • Securing Hadoop: Operational Security Issues.

Other Securosis News and Quotes

  • Cloud Security: Software Defined. Event Driven. Awesome.

We are posting our RSA Conference Guide on the RSA Conference blog – here are the latest posts:

  • The Securosis Guide to the RSA Conference 2016: The FUD Awakens!
  • Securosis Guide: Threat Intelligence & Bothan Spies
  • Securosis Guide: R2DevOps
  • Securosis Guide: Escape from Cloud City

Training and Events

We are giving multiple presentations at the RSA Conference:

  • Rich and Mike are presenting Cloud Security Accountability Tour.
  • Rich is co-presenting with Bill Shinn of AWS: Aspirin as a Service: Using the Cloud to Cure Security Headaches.
  • David Mortman is presenting: Learning from Unicorns While Living with Legacy; Docker: Containing the Security Excitement; Docker: Containing the Security Excitement (Focus-On); and Leveraging Analytics for Data Protection Decisions
  • Rich is presenting on Rugged DevOps at Scale at DevOps Connect the Monday of RSAC

We are running two classes at Black Hat USA:

  • Cloud Security Hands-On (CCSK-Plus)
  • Advanced Cloud Security and Applied SecDevOps
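For the Tool of the Week above, here is a minimal fluentd configuration sketch of the pattern described: tail a log file and ship it to a hosted Elasticsearch endpoint. The paths, tag, and hostname are illustrative, and the elasticsearch output requires the separate fluent-plugin-elasticsearch plugin:

    # Sketch: tail nginx access logs and forward them to Elasticsearch/Kibana.
    # Paths and the endpoint are placeholders; the elasticsearch output type
    # comes from the fluent-plugin-elasticsearch plugin, installed separately.
    <source>
      @type tail
      path /var/log/nginx/access.log
      pos_file /var/log/td-agent/nginx-access.pos
      tag web.access
      <parse>
        @type nginx
      </parse>
    </source>

    <match web.**>
      @type elasticsearch
      host search-example.us-east-1.es.amazonaws.com
      port 443
      scheme https
      # daily logstash-style indices, which Kibana picks up easily
      logstash_format true
    </match>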


Securing Hadoop: Operational Security Issues

Beyond the architectural security issues endemic to Hadoop and NoSQL platforms discussed in the last post, IT teams expect some common security processes and supporting tools, familiar from other data management platforms. That includes “turning the dials” on configuration management, vulnerability assessment, and maintaining patch levels across a complex assembly of supporting modules. The day-to-day processes IT managers follow to ensure typical application platforms are properly configured have evolved over years, drawing on core platform capabilities, community contributions, and commercial third-party support to fill in gaps. Best practices, checklists, and validation tools verify that admin rights are sufficiently tight, and that nodes are patched against known – and perhaps even unknown – vulnerabilities. Hadoop security has come a long way in just a few years, but it still lacks maturity in day-to-day operational security offerings, and it is here that most firms continue to struggle. The following is an overview of the most common threats to data management systems, where operational controls offer preventative security measures to close off the most common attacks. Again, we will discuss the challenges, then map them to mitigation options.

Authentication and authorization: Identity and authentication are central to any security effort – without them we cannot determine who should get access to data. Fortunately the greatest gains in NoSQL security have been in identity and access management. This is largely thanks to providers of enterprise Hadoop distributions, who have performed much of the integration and setup work. We have evolved from simple in-database authentication and crude platform identity management to much better integrated LDAP, Active Directory, Kerberos, and X.509 based authentication options. Leveraging those capabilities, we can use established roles for authorization mapping, and sometimes extend to fine-grained authorization services with Apache Sentry, or custom authorization mapping controlled from within the calling application rather than the database.

Administrative data access: Most organizations have platform administrators and NoSQL database administrators, both with access to the cluster’s files. To provide separation of duties – to ensure administrators cannot view content – a facility is needed to segregate administrative roles and keep unwanted access to a minimum. Direct access to files or data is commonly addressed through a combination of role-based authorization, access control lists, file permissions, and segregation of administrative roles – such as separate administrative accounts, bearing different roles and credentials. This provides basic protection, but cannot protect archived or snapshotted content. Stronger security requires a combination of data encryption and key management services, with unique keys for each application or cluster. This prevents different tenants (applications) in a shared cluster from viewing each other’s data.

Configuration and patch management: With a cluster of servers, which may have hundreds of nodes, it is common to run different configurations and patch levels at one time. As nodes are added we see configuration skew. Keeping track of revisions is difficult.
Existing configuration management tools can cover the underlying platforms, and HDFS Federation will help with cluster management, but they both leave a lot to be desired – including issuing encryption keys, avoiding ad hoc configuration changes, ensuring file permissions are set correctly, and ensuring TLS is correctly configured. NoSQL systems do not yet have counterparts for the configuration management tools available for relational platforms, and even commercial Hadoop distributions offer scant advice on recommended configurations and pre-deployment checklists. But administrators still need to ensure configuration scripts, patches, and open source code revisions are consistent. So we see NoSQL databases deployed on virtual servers and cloud instances, with home-grown pre-deployment scripts. Alternatively a “golden master” node may embody extensive configuration and validation, propagated automatically to new nodes before they can be added into the cluster. Software Bundles: The application and Hadoop stacks are assembled from many different components. Underlying platforms and file systems also vary – with their own configuration settings, ownership rights, and patch levels. We see organizations increasingly using source code control systems to handle open source version management and application stack management. Container technologies also help developers bundle up consistent application deployments. Authentication of applications and nodes: If an attacker can add a new node they control to the cluster, they can exfiltrate data from the cluster. To authenticate nodes (rather than users) before they can join a cluster, most firms we spoke with either employ X.509 certificates or Kerberos. Both can authenticate users as well, but we draw this distinction to underscore the threat of rogue applications or nodes being added to the cluster. Deployment of these services brings risks as well. For example if a Kerberos keytab file can be accessed or duplicated – perhaps using credentials extracted from virtual image files or snapshots – a node’s identity can be forged. Certificate-based identity options implicitly complicate setup and deployment, but properly deployed they can provide strong authentication and stronger security. Audit and Logging: If you suspect someone has breached your cluster, can you detect it, or trace back to the root cause? You need an activity record, which is usually provided by event logging. A variety of add-on logging capabilities are available, both open source and commercial. Scribe and LogStash are open source tools which integrate into most big data environments, as do a number of commercial products. You can leverage the existing cluster to store logs, build an independent cluster, or even leverage other dedicated platforms like a SIEM or Splunk. That said, some logging options do not provide an auditor sufficient information to determine exactly what actions occurred. You will need to verify that your logs are capturing both the correct event types and user actions. A user ID and IP address are insufficient – you also need to know what queries were issued. Monitoring, filtering, and blocking: There are no built-in monitoring tools to detect misuse or block malicious queries. There isn’t even yet a consensus on what a malicious big data query looks like – aside from crappy MapReduce scripts written by bad programmers. We are just seeing the first viable releases of Hadoop activity monitoring tools. 
No longer the “after-market speed regulators” they once were, current tools are typically embedded into a
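One small, concrete example of the operational hygiene discussed above: the unprotected keytab files called out under node authentication. A quick check on each node, with illustrative paths and ownership, might look like this:

    # Sketch: basic keytab hygiene on a Hadoop node. Paths, owner, and service
    # principals are illustrative; match them to your distribution's layout.
    ls -l /etc/security/keytabs/                    # look for world-readable keytabs
    chown hdfs:hadoop /etc/security/keytabs/nn.service.keytab
    chmod 400 /etc/security/keytabs/nn.service.keytab
    klist -kt /etc/security/keytabs/nn.service.keytab   # list the principals inside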


Summary: Die Blah, Die!!

Rich here. I was a little burnt out when the start of this year rolled around. Not “security burnout” – just one of the regular downs that hit everyone in life from time to time. Some of it was due to our weird year with the company, a bunch of it was due to travel and impending deadlines, plus there was all the extra stress of trying to train for a marathon while injured (and working a ton). Oh yeah, and I have kids. Two of whom are in school. With homework. And I thought being a paramedic or infosec professional was stressful?!?

Even finishing the marathon (did I mention that enough?) didn’t pull me out of my funk. Even starting the planning for Securosis 2.0 only mildly engaged my enthusiasm. I wasn’t depressed by any means – my life is too awesome for that – but I think many of you know what I mean. Just a… temporary lack of motivation. But last week it all faded away. All it took was a break from airplanes, putting some new tech skills into practice, and rebuilding the entire company.

A break from work travel is kind of like the reverse of a vacation. The best vacations are a month long – a week to clear the head, two weeks to enjoy the vacation, a week to let the real world back in. A gap in work travel does the same thing, except instead of enjoying vacation you get to enjoy hitting deadlines. It’s kind of the same. Then I spent time on a pet technical project and built the code to show how event-driven security can work. I had to re-learn Python while learning two new Amazon services. It was a cool challenge, and rewarding to build something that worked like I hoped. At the same time I was picking up other new skills for my other RSA Conference demos.

The best part was starting to rebuild the company itself. We’re pretty serious about calling this our “Securosis 2.0 pivot”. The past couple weeks we have been planning the structure and products, building out initial collateral, and redesigning the website (don’t worry – with our design firm). I’ve been working with our contractors to build new infrastructure, evaluating new products and platforms, and firming up some partnerships. Not alone – Mike and Adrian are also hard at work – but I think my pieces are a lot more fun because I get the technical parts. It’s one thing to build a demo or write a technical blog post, but it’s totally different to be building your future.

And that was the final nail in the blahs’ coffin. A month home. Learning new technical skills to build new things. Rebuilding the company to redefine my future. It turns out all that is a pretty motivating combination, especially with some good beer and workouts in the mix, and another trip to see Star Wars (3D IMAX with the kids this time). Now the real challenge: seeing if it can survive the homeowner’s association meeting I need to attend tonight. If I can make it through that, I can survive anything.

Photo credit: Blah from pinterest

And now on to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian quoted in CSO Online: Credit card security has no silver bullet
  • Mort quoted on container security: Containers: Security Minefield – or Channel Goldmine?
  • Me on ridiculous travel security: Podcast 492: How to travel like an international superspy
  • A piece I wrote over at TidBITS on government, encryption, and back doors. Also relevant to the Securosis audience: Why Apple Defends Encryption.

Securosis Posts

  • Incite 2/3/2016: Courage.
  • Event-Driven AWS Security: A Practical Example.
  • Securing Hadoop: Architectural Security Issues.
  • Securing Hadoop: Architecture and Composition.
  • Securing Hadoop: Security Recommendations for NoSQL platforms [New Series].
  • The EIGHTH Annual Disaster Recovery Breakfast: Clouds Ahead.
  • Security is Changing. So is Securosis.
  • Incite 1/20/2016 – Ch-ch-ch-ch-changes.

Research Reports and Presentations

  • Threat Detection Evolution.
  • Pragmatic Security for Cloud and Hybrid Networks.
  • EMV Migration and the Changing Payments Landscape.
  • Network-based Threat Detection.
  • Applied Threat Intelligence.
  • Endpoint Defense: Essential Practices.
  • Cracking the Confusion: Encryption and Tokenization for Data Centers, Servers, and Applications.
  • Security and Privacy on the Encrypted Network.
  • Monitoring the Hybrid Cloud: Evolving to the CloudSOC.
  • Security Best Practices for Amazon Web Services.

Top News and Posts

  • Why lost phones keep pointing at this Atlanta couple’s house
  • This is a really important case: Security firm sued for filing “woefully inadequate” forensics report
  • Chromodo browser disables key web security. Note to security vendors: put your customers first, not marketing.
  • Severe and unpatched eBay vulnerability allows attackers to distribute malware. Not going to be patched, seriously?
  • Software Security Ideas Ahead of Their Time
  • New Technologies Give Government Ample Means to Track Suspects, Study Finds
  • Friendly Fire. This is a really great post on the role of red teams.
  • Congress to investigate US involvement in Juniper’s backdoor.

Blog Comment of the Week

This week’s best comment goes to Andy, in response to Event-Driven AWS Security: A Practical Example.

Cool post. We could consider the above as a solution to an out of band modification of a security group. If the creation and modification of all security groups is via Cloudformation scripts, a DevOps SDLC could be implemented to ensure only approved changes are pushed through in the first place. Another question is how does the above trigger know the modification is unwanted?! It’s a wee bugbear I have with AWS that there’s not currently a mechanism to reference rule functions or change controls.

My response: I actually have some techniques to handle out-of-band approvals, but it gets more advanced pretty quickly (the plan is to throw some of them into Trinity once we start letting anyone use it). One quick example… build a workflow that kicks off a notification for approval, then the approval modifies something in Dynamo or S3, then that is one of the conditionals to check. E.g. have your change management system save down a token in S3 in a different account, then the Lambda
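The reply above cuts off, but here is a minimal sketch of the approval-token pattern it describes: a Lambda function, triggered by a CloudWatch Events rule for security group changes, checks S3 for an approval token before reverting. The bucket name, token key scheme, and event parsing are all illustrative assumptions:

    # Sketch of the approval-token pattern: revert out-of-band security group
    # changes unless the change management system dropped a token in S3 first.
    # Bucket, key layout, and event parsing are illustrative assumptions.
    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")
    ec2 = boto3.client("ec2")

    APPROVAL_BUCKET = "change-approvals-example"  # hypothetical bucket, in a separate account

    def is_approved(group_id):
        """A token at approved/<group_id> means changes to this group were pre-approved.
        (Deliberately simplistic; a real scheme would scope tokens to a time window.)"""
        try:
            s3.head_object(Bucket=APPROVAL_BUCKET, Key="approved/%s" % group_id)
            return True
        except ClientError:
            return False

    def handler(event, context):
        # CloudTrail-format event for AuthorizeSecurityGroupIngress via CloudWatch Events
        params = event["detail"]["requestParameters"]
        group_id = params["groupId"]
        if is_approved(group_id):
            return "approved change, no action"
        # Unapproved out-of-band change: revoke the rules that were just added.
        # (Port handling is simplified; port-less protocols need extra care.)
        for perm in params.get("ipPermissions", {}).get("items", []):
            ec2.revoke_security_group_ingress(
                GroupId=group_id,
                IpPermissions=[{
                    "IpProtocol": perm["ipProtocol"],
                    "FromPort": perm.get("fromPort"),
                    "ToPort": perm.get("toPort"),
                    "IpRanges": [{"CidrIp": r["cidrIp"]}
                                 for r in perm.get("ipRanges", {}).get("items", [])],
                }],
            )
        return "reverted unapproved change"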


Incite 2/3/2016: Courage

A few weeks ago I spoke about dealing with the inevitable changes of life, and setting sail on the SS Uncertainty to whatever is next. It’s very easy to talk about changes and moving forward, but it’s actually pretty hard to do. When moving through a transformation, you not only have to accept the great unknown of the future, but you also need to grapple with what society expects you to do. We’ve all been programmed since a very early age to adhere to cultural norms or suffer the consequences. Those consequences may be minor, like having your friends and family think you’re an idiot. Or decisions could result in very major consequences, like being ostracized from your community, or even death in some areas of the world. In my culture in the US, it’s expected that a majority of people will meander through their lives with their 2.2 kids, their dog, and their white picket fence – which is great for some folks. But when you don’t fit into that very easy and simple box, moving forward along a less conventional path requires significant courage.

I recently went skiing for the first time in about 20 years. Being a ski n00b, I invested in two half-day lessons – it would have been inconvenient to ski right off the mountain. The first instructor was an interesting guy in his 60s, a US Air Force helicopter pilot who retired and has been teaching skiing for the past 25 years. His seemingly conventional path worked for him – he seemed very happy, especially with the artificial knee that allowed him to ski a bit more aggressively. But my instructor on the second day was very interesting. We got a chance to chat quite a bit on the lifts, and I learned that a few years ago he was studying to be a physician’s assistant. He started as an orderly in a hospital and climbed the ranks until it made sense for him to go to school and get a more formal education. So he took his tests, applied, and got into a few programs. Then he didn’t go. Something didn’t feel right. It wasn’t the amount of work – he’d been working since he was little. It wasn’t really fear – he knew he could do the job. It was that he didn’t have passion for a medical career. He was passionate about skiing. He’d been teaching since he was 16, and that’s what he loved to do. So he sold a bunch of his stuff, minimized his lifestyle, and has been teaching skiing for the past 7 years. He said initially his Mom was pretty hard on him about the decision. But as she (and the rest of his family) realized how happy and fulfilled he is, they became OK with his unconventional path. Now that is courage.

But he said something to me as we were about to unload from the lift for the last run of the day: “Mike, this isn’t work for me. I happen to get paid, but I just love teaching and skiing, so it doesn’t feel like a job.” It was inspiring, because we all have days when we know we aren’t doing what we’re passionate about. If there are too many of those days, it’s time to make changes. Changes require courage, especially if the path you want to follow doesn’t fit into the typical playbook. But it’s your life, not theirs. So climb aboard the SS Uncertainty (with me) and embark on a wild and strange adventure. We get a short amount of time on this Earth – make the most of it. I know I’m trying to do just that.

Editor’s note: despite Mike’s post on courage, he declined my invitation to go ski Devil’s Crotch when we are out in Colorado. Just saying. –rich

–Mike

Photo credit: “Courage” from bfick

It’s that time of year again!
The 8th annual Disaster Recovery Breakfast will once again happen at the RSA Conference: Thursday morning, March 3, from 8-11 at Jillian’s. Check out the invite, or just email us at rsvp (at) securosis.com to make sure we have an accurate count.

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour. Your emails, alerts, and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.

  • Dec 8 – 2015 Wrap Up and 2016 Non-Predictions
  • Nov 16 – The Blame Game
  • Nov 3 – Get Your Marshmallows
  • Oct 19 – re:Invent Yourself (or else)
  • Aug 12 – Karma
  • July 13 – Living with the OPM Hack
  • May 26 – We Don’t Know Sh–. You Don’t Know Sh–
  • May 4 – RSAC wrap-up. Same as it ever was.
  • March 31 – Using RSA
  • March 16 – Cyber Cash Cow
  • March 2 – Cyber vs. Terror (yeah, we went there)
  • February 16 – Cyber!!!
  • February 9 – It’s Not My Fault!
  • January 26 – 2015 Trends
  • January 15 – Toddler

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Securing Hadoop
  • Architectural Security Issues
  • Architecture and Composition
  • Security Recommendations for NoSQL platforms

SIEM Kung Fu
  • Fundamentals

Building a Threat Intelligence Program
  • Success and Sharing
  • Using TI
  • Gathering TI
  • Introduction

Recently Published Papers

  • Threat Detection Evolution
  • Building Security into DevOps
  • Pragmatic Security for Cloud and Hybrid Networks
  • EMV Migration and the Changing Payments Landscape
  • Applied Threat Intelligence
  • Endpoint Defense: Essential Practices
  • Cracking the Confusion: Encryption & Tokenization for Data Centers, Servers & Applications
  • Security and Privacy on the Encrypted Network
  • Monitoring the Hybrid Cloud
  • Best Practices for AWS Security
  • The Future of Security

Incite 4 U

Evolution visually: Wade Baker posted a really awesome


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.