Friday, February 05, 2016

Summary: Die Blah, Die!!

By Rich

Rich here.

I was a little burnt out when the start of this year rolled around. Not “security burnout” – just one of the regular downs that hit everyone in life from time to time. Some of it was due to our weird year with the company, a bunch of it was due to travel and impending deadlines, plus there was all the extra stress of trying to train for a marathon while injured (and working a ton).

Oh yeah, and I have kids. Two of whom are in school. With homework. And I thought being a paramedic or infosec professional was stressful?!?

Even finishing the marathon (did I mention that enough?) didn’t pull me out of my funk. Even starting the planning for Securosis 2.0 only mildly engaged my enthusiasm. I wasn’t depressed by any means – my life is too awesome for that – but I think many of you know what I mean. Just a… temporary lack of motivation.

But last week it all faded away. All it took was a break from airplanes, putting some new tech skills into practice, and rebuilding the entire company.

A break from work travel is kind of like the reverse of a vacation. The best vacations are a month long – a week to clear the head, two weeks to enjoy the vacation, a week to let the real world back in. A gap in work travel does the same thing, except instead of enjoying vacation you get to enjoy hitting deadlines. It’s kind of the same.

Then I spent time on a pet technical project and built the code to show how event-driven security can work. I had to re-learn Python while learning two new Amazon services. It was a cool challenge, and rewarding to build something that worked like I hoped. At the same time I was picking up other new skills for my other RSA Conference demos.

The best part was starting to rebuild the company itself. We’re pretty serious about calling this our “Securosis 2.0 pivot”. The past couple weeks we have been planning the structure and products, building out initial collateral, and redesigning the website (don’t worry – with our design firm). I’ve been working with our contractors to build new infrastructure, evaluating new products and platforms, and firming up some partnerships. Not alone – Mike and Adrian are also hard at work – but I think my pieces are a lot more fun because I get the technical parts.

It’s one thing to build a demo or write a technical blog post, but it’s totally different to be building your future. And that was the final nail in the blah’s coffin.

A month home. Learning new technical skills to build new things. Rebuilding the company to redefine my future. It turns out all that is a pretty motivating combination, especially with some good beer and workouts in the mix, and another trip to see Star Wars (3D IMAX with the kids this time).

Now the real challenge: seeing if it can survive the homeowner’s association meeting I need to attend tonight. If I can make it through that, I can survive anything.

Photo credit: Blah from pinterest

And now on to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Securosis Posts

Research Reports and Presentations

Top News and Posts

Blog Comment of the Week

This week’s best comment goes to Andy, in response to Event-Driven AWS Security: A Practical Example.

Cool post. We could consider the above as a solution to an out of band modification of a security group. If the creation and modification of all security groups is via Cloudformation scripts, a DevOps SDLC could be implemented to ensure only approved changes are pushed through in the first place. Another question is how does the above trigger know the modification is unwanted?! It’s a wee bugbear I have with AWS that there’s not currently a mechanism to reference rule functions or change controls.

My response:

I actually have some techniques to handle out of band approvals, but it gets more advanced pretty quickly (plan is to throw some of them into Trinity once we start letting anyone use it).

One quick example… build a workflow that kicks off a notification for approval, then the approval modifies something in Dynamo or S3, then that is one of the conditionals to check. E.g. have your change management system save down a token in S3 in a different account, then the Lambda function checks that.

You get to use cross-account access for separation of duties. Gets complicated quickly, which is why we figure we need a platform to manage it all.
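That workflow is easier to see in code. Here is a rough sketch of the approval check with the S3 wiring stubbed out – the bucket, key format, and function names are all made up for illustration, and this isn’t Trinity code:

```python
def is_change_approved(change_id, fetch_token):
    """Return True only if the change management system dropped an
    approval token for this change. fetch_token is any callable that
    returns the token as a dict, or None if no token exists -- in
    practice an S3 GetObject against a bucket in a separate AWS
    account, which gives you cross-account separation of duties."""
    token = fetch_token('approvals/%s.json' % change_id)
    return token is not None and token.get('status') == 'approved'

# Inside a Lambda function you would wire fetch_token to S3, e.g.:
#
#   s3 = boto3.client('s3')
#   def fetch_token(key):
#       try:
#           body = s3.get_object(Bucket='change-approvals', Key=key)['Body']
#           return json.loads(body.read())
#       except s3.exceptions.NoSuchKey:
#           return None
```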


Wednesday, February 03, 2016

Incite 2/3/2016: Courage

By Mike Rothman

A few weeks ago I spoke about dealing with the inevitable changes of life and setting sail on the SS Uncertainty to whatever is next. It’s very easy to talk about changes and moving forward, but it’s actually pretty hard to do. When moving through a transformation, you not only have to accept the great unknown of the future, but you also need to grapple with what society expects you to do. We’ve all been programmed since a very early age to adhere to cultural norms or suffer the consequences. Those consequences may be minor, like having your friends and family think you’re an idiot. Or decisions could result in very major consequences, like being ostracized from your community, or even death in some areas of the world.

In my culture in the US, it’s expected that a majority of people will meander through their lives with their 2.2 kids, their dog, and their white picket fence – which is great for some folks. But when you don’t fit into that very easy and simple box, moving forward along a less conventional path requires significant courage.


I recently went skiing for the first time in about 20 years. Being a ski n00b, I invested in two half-day lessons – it would have been inconvenient to ski right off the mountain. The first instructor was an interesting guy in his 60’s, a US Air Force helicopter pilot who retired and has been teaching skiing for the past 25 years. His seemingly conventional path worked for him – he seemed very happy, especially with the artificial knee that allowed him to ski a bit more aggressively. But my instructor on the second day was very interesting. We got a chance to chat quite a bit on the lifts, and I learned that a few years ago he was studying to be a physician’s assistant. He started as an orderly in a hospital and climbed the ranks until it made sense for him to go to school and get a more formal education. So he took his tests and applied and got into a few programs.

Then he didn’t go. Something didn’t feel right. It wasn’t the amount of work – he’d been working since he was little. It wasn’t really fear – he knew he could do the job. It was that he didn’t have passion for a medical career. He was passionate about skiing. He’d been teaching since he was 16, and that’s what he loved to do. So he sold a bunch of his stuff, minimized his lifestyle, and has been teaching skiing for the past 7 years. He said initially his Mom was pretty hard on him about the decision. But as she (and the rest of his family) realized how happy and fulfilled he is, they became OK with his unconventional path.

Now that is courage. But he said something to me as we were about to unload from the lift for the last run of the day. “Mike, this isn’t work for me. I happened to get paid, but I just love teaching and skiing, so it doesn’t feel like a job.” It was inspiring because we all have days when we know we aren’t doing what we’re passionate about. If there are too many of those days, it’s time to make changes.

Changes require courage, especially if the path you want to follow doesn’t fit into the typical playbook. But it’s your life, not theirs. So climb aboard the SS Uncertainty (with me) and embark on a wild and strange adventure. We get a short amount of time on this Earth – make the most of it. I know I’m trying to do just that.

Editors note: despite Mike’s post on courage, he declined my invitation to go ski Devil’s Crotch when we are out in Colorado. Just saying. -rich


Photo credit: “Courage” from bfick

It’s that time of year again! The 8th annual Disaster Recovery Breakfast will once again happen at the RSA Conference. Thursday morning, March 3 from 8 – 11 at Jillians. Check out the invite or just email us at rsvp (at) securosis.com to make sure we have an accurate count.

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour. Your emails, alerts, and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Securing Hadoop

SIEM Kung Fu

Building a Threat Intelligence Program

Recently Published Papers

The Future of Security

Incite 4 U

  1. Evolution visually: Wade Baker posted a really awesome piece tracking the number of sessions and titles at the RSA Conference over the past 25 years. The growth in sessions is astounding (25% CAGR), up to almost 500 in 2015. Even more interesting is how the titles have changed. It’s the RSA Conference, so it’s not surprising that crypto would be prominent the first 10 years. Over the last 5? Cloud and cyber. Not surprising, but still very interesting facts. RSAC is no longer just a trade show. It’s a whole thing, and I’m looking forward to seeing the next iteration in a few weeks. And come swing by the DRB Thursday morning and say hello. I’m pretty sure the title of the Disaster Recovery Breakfast won’t change. – MR

  2. Embrace and Extend: The SSL/TLS cert market is a multi-billion dollar market – with slow and steady growth in the sale of certificates for websites and devices over the last decade. For the most part, certificate services are undifferentiated. Mid-to-large enterprises often manage thousands of them, which expire on a regular basis, making subscription revenue a compelling story for the handful of firms that provide them. But last week’s announcement that Amazon AWS will provide free certificates must have sent shivers through the market, including the security providers who manage certs or monitor for expired certificates. AWS will include this in their basic service, as long as you run your site in AWS. I expect Microsoft Azure and Google’s cloud to follow suit in order to maintain feature/pricing parity. Certs may not be the best business to be in, longer-term. – AL

  3. Investing in the future: I don’t normally link to vendor blogs, but this post by Chuck Robbins, Cisco’s CEO, is pretty interesting. He echoes a bunch of things we’ve been talking about, including how the security industry is people-constrained, and we need to address that. He also mentions a bunch of security issues, so maybe security is finally highly visible at Cisco. Even better, Chuck announced a $10MM scholarship program to “educate, train and reskill the job force to be the security professionals needed to fill this vast talent shortage”. This is great to see. We need to continue to invest in humans, and maybe this will kick-start some other companies to invest similarly. – MR

  4. Geek Monkey: David Mortman pointed me to a recent post about Automated Failure Testing on Netflix’s Tech blog. A particularly difficult-to-find bug gave the team pause about how they tested protocols. Embracing both the “find failure faster” mentality and the core Simian Army ideal of reliability testing by injecting chaos, they are looking at intelligent ways to inject small faults within the code execution path. Leveraging a very interesting set of concepts from a tool called Molly (PDF), they inject different results into non-deterministic code paths. That sounds exceedingly geeky, I know, but in simpler terms they are essentially fuzz testing inside code, using intelligently selected values to see how protocols respond under stress. Expect a lot more of this approach in years to come, as we push more code security testing earlier in the process. – AL

—Mike Rothman

Monday, February 01, 2016

Event-Driven AWS Security: A Practical Example

By Rich

Would you like the ability to revert unapproved security group (firewall) changes in Amazon Web Services in 10 seconds, without external tools? That’s about 10-20 minutes faster than is typically possible with a SIEM or other external tools. If that got your attention, then read on…

If you follow me on Twitter, you might have noticed I went a bit nuts when Amazon Web Services announced their new CloudWatch events a couple weeks ago. I saw them as an incredibly powerful tool for event-driven security. I will post about the underlying concepts tomorrow, but right now I think it’s better to just show you how it works first. This entire thing took about 4 hours to put together, and it was my first time writing a Lambda function or using Python in 10 years.

This example configures an AWS account to automatically revert any Security Group (firewall) changes without human interaction, using nothing but native AWS capabilities. No security tools, no servers, nada. Just wiring together things already built into AWS. In my limited testing it’s effective in 10 seconds or less, and it’s only 100 lines of code – including comments. Yes, this post is much longer than the code to make it all work.

I will walk you through setting it up manually, but in production you would want to automate this configuration so you can manage it across multiple AWS accounts. That’s what we use Trinity for, and I’ll talk more about automating automation at the end of the post. Also, this is Amazon specific because no other providers yet expose the needed capabilities.

For background it might help to read the AWS CloudWatch events launch post. The short version is that you can instrument a large portion of AWS, and trigger actions based on a wide set of very granular events. Yes, this is an example of the kind of research we are focusing on as part of our cloud pivot.

This might look long, but if you follow my instructions you can set it all up in 10-15 minutes. Tops.

Prep Work: Turn on CloudTrail

If you use AWS you should have CloudTrail set up already; if not you need to activate it and feed the logs to CloudWatch using these instructions. This only takes a minute or two if you accept all the defaults.

Step 1: Configure IAM

To make life easier I put all my code up on the Securosis public GitHub repository. You don’t need to pull that code – you will copy and paste everything into your AWS console.

Your first step is to configure an IAM policy for your workflow; then create a role that Lambda can assume when running the code. Lambda is an AWS service that allows you to store and run code based on triggers. Lambda code runs in a container, but doesn’t require you to manage containers or servers for it. You load the code, and then it executes when triggered. You can build entirely serverless architectures with Lambda, which is useful if you want to eliminate most of your attack surface, but that’s a discussion for another day.

IAM in Amazon Web Services is how you manage who can do what in your account, including the capabilities of Amazon services themselves. It is ridiculously granular and powerful, and so the most critical security tool for protecting AWS accounts.

  • Log into the AWS console. Go to the Identity and Access Management (IAM) dashboard.
  • Click on Policies, then Create Policy.
  • Choose Create Your Own Policy.
  • Name it lambda_revert_security_group. Enter a description, then copy and paste my policy from GitHub. My policy allows the Lambda function to access CloudWatch logs, write to the log, view security group information, and revoke ingress or egress statements (but not create new ones). Damn, I love granular policies!

IAM Policy
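If you would rather script this step than click through the console, the same policy can be created with boto3. The statement below is only a representative sketch of the shape of a least-privilege policy – the real one is in the GitHub repo, so treat this exact action list as illustrative:

```python
import json

# Logs actions let the Lambda function write its own execution logs;
# the EC2 actions allow describing groups and revoking rules, but
# deliberately omit any Authorize* actions, so the function can only
# remove rules, never add them.
revert_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["logs:CreateLogGroup", "logs:CreateLogStream",
                       "logs:PutLogEvents"],
            "Resource": "arn:aws:logs:*:*:*"
        },
        {
            "Effect": "Allow",
            "Action": ["ec2:DescribeSecurityGroups",
                       "ec2:RevokeSecurityGroupIngress",
                       "ec2:RevokeSecurityGroupEgress"],
            "Resource": "*"
        }
    ]
}

policy_document = json.dumps(revert_policy, indent=2)
# In a deployment script you would then create it:
# iam = boto3.client('iam')
# iam.create_policy(PolicyName='lambda_revert_security_group',
#                   PolicyDocument=policy_document)
```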

  • Once the policy is set you need to Create New Role. This is the role which the Lambda function will assume when it runs.
    • Name it lambda_revert_security_group, assign it an AWS Lambda role type, then attach the lambda_revert_security_group policy you just created.

choose lambda role type

That’s it for the IAM changes. Next you need to set up the Lambda function and the CloudWatch event.

Step 2: Create the Lambda function

First make sure you know which AWS region you are working in. I prefer us-west-2 (Oregon) for lab work because it is up to date and tends to support new capabilities early. us-east-1 is the granddaddy of regions, but my lab account has so much cruft after 6+ years that things don’t always work right for me there.

  • Go to Lambda (under Compute on the main services page) and Create a Lambda function.
  • Don’t pick a blueprint – hit the Skip button to advance to the next page.
  • Name your function revertSecurityGroup. Enter a description, and pick Python for the runtime. Then paste my code into the main window. After that pick the lambda_revert_security_group IAM role the function will use. Then click Next, then Create function.

Configure the Lambda function

A few points on Lambda. You aren’t billed until the function triggers; then you are billed per request and runtime. Lambda is very good for quick tasks, but it does have a timeout (I think an hour these days), and the longer you run a function the less sense it makes compared to a dedicated server. I actually looked at migrating Trinity to Lambda because we could offload our workflows, but at that time it had a 5-minute timeout, and running hour-long workflows at scale would likely have killed us financially.

Now some notes on my code.

  • The main function handler includes a bunch of conditional statements you can use to only trigger reverting security group changes based on things like who requested the change, which security group was changed, whether the security group is in a specified VPC, and whether the security group has a particular tag. None of those lines will work for you, because they refer to specific identifiers in my account – you need to change them to work in your account.
  • By default, the function will revert any security group change in your account. You need to cut and paste the line “revert_security_group(event)” into a conditional block to run only on matching conditions.
  • The function only works for inbound rule changes. It is trivial to modify for egress rule changes, or to restrict both ingress and egress. The IAM policy we set will work for both – you just need to change the code.
  • This only works for EC2-VPC. EC2-Classic works differently, and my code cannot parse the EC2-Classic API.
  • The code pulls the event details, finds the changes (which could be multiple changes submitted at the same time), and reverses them.
  • There may be ways around this. I ran through it over the weekend and tested multiple ways of making an EC2-VPC security group change, and my reversion always worked, but there might be a way I don’t know about that would change the log format enough that my code wouldn’t work. I plan to update it to work with EC2-Classic, but neither I nor Securosis ever uses EC2-Classic, and we advise clients not to use it, so that is a low priority. If you find a hole, please drop me a line.
  • This works for internal (security group to security group) changes as well as external or internal IP address based rules.
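To make those notes concrete, here is a heavily condensed sketch of how such a handler can parse the event and reverse a change. This is not the code from my GitHub repo – it only handles CIDR-based ingress rules (the real code also handles group-to-group rules), and the structure simply mirrors the CloudTrail request parameters embedded in the CloudWatch event:

```python
def extract_revoke_request(event):
    """Pull the security group ID and the just-added rules out of the
    CloudWatch event, converting CloudTrail's camelCase request format
    into the shape boto3's revoke call expects."""
    params = event['detail']['requestParameters']
    permissions = []
    for item in params['ipPermissions']['items']:
        permission = {'IpProtocol': item['ipProtocol']}
        if 'fromPort' in item:  # absent when the rule covers all traffic
            permission['FromPort'] = item['fromPort']
            permission['ToPort'] = item['toPort']
        cidrs = item.get('ipRanges', {}).get('items', [])
        if cidrs:
            permission['IpRanges'] = [{'CidrIp': c['cidrIp']} for c in cidrs]
        permissions.append(permission)
    return params['groupId'], permissions


def lambda_handler(event, context):
    import boto3  # bundled in the AWS Lambda Python runtime
    group_id, permissions = extract_revoke_request(event)
    # Conditional checks (requester, VPC, tags) would gate this call.
    ec2 = boto3.client('ec2')
    ec2.revoke_security_group_ingress(GroupId=group_id,
                                      IpPermissions=permissions)
```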

Step 3: Configure the CloudWatch Event trigger

CloudWatch is Amazon’s built-in monitoring service. You cannot turn it off because it is the tool AWS uses to monitor and manage the performance of your instances and services. CloudWatch Logs is a newer feature you can use to store various log streams, including CloudTrail, the service that records all API calls in your account (including internal AWS calls).

  • Go to CloudWatch, then Events, then Create rule.
  • In the Event selector > Select event source, pick AWS API call. This only works with CloudTrail turned on.
  • Pick EC2 as the Service name. Then click Specific operation(s), then AuthorizeSecurityGroupIngress. You can also add egress if you want.
  • For Targets, pick Add target then Lambda function, and then select the one you just created. If you have a notification function you could add it here to get a text message or email whenever it runs, or send an alert to your SIEM.
  • Then name it. It’s active by default.

Create the event rule
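If you automate this setup rather than clicking through the console, those selections correspond to an event rule pattern like the one below (the rule name is illustrative):

```python
import json

# Matches CloudTrail-recorded EC2 API calls that add security group
# rules; egress is included here, matching the optional step above.
event_pattern = {
    "source": ["aws.ec2"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["ec2.amazonaws.com"],
        "eventName": ["AuthorizeSecurityGroupIngress",
                      "AuthorizeSecurityGroupEgress"]
    }
}

pattern_json = json.dumps(event_pattern)
# A deployment script would then create the rule and wire in the target:
# events = boto3.client('events')
# events.put_rule(Name='revertSecurityGroupRule', State='ENABLED',
#                 EventPattern=pattern_json)
# events.put_targets(Rule='revertSecurityGroupRule',
#                    Targets=[{'Id': '1', 'Arn': lambda_function_arn}])
```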

Now test it. Go into the console, make a security group change, wait about 10 seconds, and refresh the console. Your changes should be gone. You can also check the CloudWatch log to see what happened, the details of the API call, and how the function executed.

Automating for Scale

This might only take 10-15 minutes once you have the code and know the process, but imagine configuring all this on hundreds or thousands of accounts at a time – which is typical for a mid-size or large organization with many projects.

To scale this up you need to create a new account deployment package. That’s what we use Trinity for (okay, that’s what I’m currently coding into Trinity, for our internal use right now). The idea is that when you provision an account you hook into it and blast out all the configurations, settings, Lambda functions, etc. using automation code.

In last year’s Black Hat training we demonstrated that, with demo code to configure alerts on IAM changes via CloudTrail and CloudWatch. We plan to go into more detail in our new Advanced Cloud Security and Applied DevOps class this summer.

It isn’t really all that complicated. Once you spend time on your cloud platform of choice and learn some basic coding via the APIs, the rest is pretty easy. It’s just basic check-a-setting, make-a-change stuff – no complex math or crazy decision trees needed (for the most part).

This is seriously exciting stuff – we security professionals can now directly manage, monitor, and manipulate our infrastructure using the exact same tools as development and operations. The infrastructure itself can identify and fix configuration and other issues – including security issues – faster than a person or (most) external tools.

Try it out. It’s easy to get started, and with minimal work you can make my sample code work for a whole host of different situations beyond basic firewalling.


Securing Hadoop: Architectural Security Issues

By Adrian Lane

Now that we have sketched out the elements of a Hadoop cluster, and what one looks like, let’s talk about threats to these databases. We want to consider both the database infrastructure itself and the data under management. Given the complexity of a Hadoop cluster, the task is closer to securing an entire data center than a typical relational database. All the features that provide flexibility, scalability, performance, and openness create specific security challenges. The following are some specific threats to clustered databases.

  • Data access & ownership: Role-based access is central to most database security schemes, and NoSQL is no different. Relational and quasi-relational platforms include roles, groups, schemas, label security, and various other facilities for limiting user access to subsets of available data. Most big data environments now offer integration with identity stores, along with role-based facilities to divide up data access between groups of users. That said, authentication and authorization require cooperation between the application designer and the IT team managing the cluster. Leveraging existing Active Directory or LDAP services helps tremendously with defining user identities, and pre-defined roles may be available for limiting access to sensitive data.
  • Data at rest protection: The standard for protecting data at rest is encryption, which protects against attempts to access data outside established application interfaces. With Hadoop systems we worry about people stealing archives or directly reading files from disk. Encrypted files are protected against access by users without encryption keys. Replication effectively replaces backups for big data, but beware a rogue administrator or cloud service manager creating their own backups. Encryption limits how data can be copied from the cluster. Unlike in 2012, when the lack of suitable encryption was a serious issue, Apache now offers HDFS encryption as an option. This is a major advance, but remember that it only encrypts HDFS, and you’ll need to fill the gaps with key management and key storage. Several commercial Hadoop vendors offer transparent encryption, and third parties have advanced the state of the art, with transparent encryption options for both HDFS and non-HDFS on-disk formats, especially coupled with parallel progress in key management.
  • Inter-node communication: Hadoop and the vast majority of distributions (Cassandra, MongoDB, Couchbase, etc.) don’t communicate securely by default – they use unencrypted RPC over TCP/IP. TLS and SSL are bundled in big data distributions, but not typically used between applications and databases – and almost never for inter-node communication. This leaves data in transit, and application queries, accessible for inspection and tampering.
  • Client interaction: Clients interact with resource managers and nodes. While gateway services can be created to load data, clients communicate directly with both resource managers and individual data nodes. Compromised clients can send malicious data or links to either service. This facilitates efficient communication but makes it difficult to protect nodes from clients, clients from nodes, and even name servers from nodes. Worse, the distribution of self-organizing nodes is a poor fit for security tools such as gateways, firewalls, and monitors. Many security tools are designed to require a choke-point or span port, which may not be available in a peer-to-peer mesh cluster.
  • Distributed nodes: One of the reasons big data makes sense is an old truism: “moving computation is cheaper than moving data”. Data is processed wherever resources are available, enabling massively parallel computation. Unfortunately this produces complicated environments with lots of attack surface. With so many moving parts, it is difficult to verify consistency or security across a highly distributed cluster of (possibly heterogeneous) platforms. Patching, configuration management, node identity, and data at rest protection – and consistent deployment of each – are all issues.

Threat-response models

One or more security countermeasures are available to mitigate each threat identified above. The following diagram shows which specific options you have at your disposal to help you choose a ‘preventative’ security measure.

Arch Threat-Response

We don’t have room to go into much detail on the tradeoffs of each response – each area really deserves its own paper. But we do want to mention a couple areas where we have seen the most change since our original research four years ago.

If your goal is to protect session privacy – either between clients and data nodes, or for inter-node communication – Transport Layer Security (TLS) is your first choice. This was unheard of in 2012, but since then about 25% of the companies we spoke with have implemented SSL or TLS for inter-node communication – not just between applications and name servers. Transport encryption protects all communications from access or modification by attackers. Some firms instead use network segmentation and firewalls to ensure that attackers cannot access network traffic. This approach is less robust but much easier to implement. Some clusters were deployed to third-party cloud services, where virtualized network services make sniffing nearly impossible; these companies typically chose not to encrypt internal cluster communications.

Enforcing data usage is one of the areas where we have seen the most progress, thanks to database links into existing Active Directory and LDAP identity stores. This seems obvious now but was a rarity in 2012, when data architects were focused on scalability and getting basic analytics up and running. Fortunately support for linking identity stores to Hadoop clusters has advanced considerably, making it much easier to leverage existing roles and management infrastructure. But we also have other tools at our disposal. We don’t see it often, but a handful of organizations encrypt sensitive data elements at the application layer, so information is stored as encrypted elements. This way the application manages decryption and key management functions, and can offer additional controls over who can see which information. This is very secure, but must be designed in from the beginning and coded into the application. Retrofitting application-layer encryption into an existing application and database stack is highly challenging at best, which is why we also see wide usage of masking and redaction technologies – from both enterprise Hadoop vendors and third-party security vendors. These technologies offer fine control over which data is displayed to which users, and can be easily built into existing clusters to enforce security and compliance.
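To illustrate one simple flavor of masking, a deterministic token can replace a sensitive element with a keyed hash, so joins and aggregations still work without exposing the raw value. This stdlib-only sketch is purely illustrative – commercial masking products offer far more (format preservation, policy control), and the field names and key handling here are made up:

```python
import hashlib
import hmac

def mask_field(value, key):
    """Deterministically tokenize a sensitive value: identical inputs
    always produce identical tokens, so analytics can still group and
    join on the column, but the raw value never lands in the cluster."""
    return hmac.new(key, value.encode('utf-8'), hashlib.sha256).hexdigest()[:16]

masking_key = b'keep-this-in-a-key-manager'  # never hard-code keys in production
record = {'name': 'Alice', 'ssn': '078-05-1120'}
masked = dict(record, ssn=mask_field(record['ssn'], masking_key))
```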

If you need deeper technical analysis, we have published much more information on technologies above – specifically Understanding Database Encryption which covers both NoSQL clusters and relational stores, Understanding Data Masking, and Understanding and Selecting a Key Management Solution.

Our goal here is to ensure you are aware of the risks, and to point out that you have choices to address each specific threat. Each option offers different advantages and costs; the costs will drive our recommendations later.

Up next: a look at how and where to embed security into day-to-day operations.

—Adrian Lane

Friday, January 29, 2016

Securing Hadoop: Architecture and Composition

By Adrian Lane

Our goal for this post is to succinctly outline what Hadoop (and most NoSQL) clusters look like, how they are assembled, and how they are used. This provides a better understanding of the security challenges, and what sorts of protection need to be leveraged to secure them. Developers and data scientists continue to stretch system performance and scalability, using customized combinations of open source and commercial products, so there is really no such thing as a ‘standard’ Hadoop deployment. With these considerations in mind, it is time to map out the threats to a cluster.

NoSQL databases enable companies to collect, manage, and analyze incredibly large data sets. Thousands of firms are working on big data projects, from small startups to large enterprises. Since our original paper in 2012 the rate of adoption has only increased; platforms such as Hadoop, Cassandra, Mongo, and Riak are now commonplace, with some firms supporting multiple installations. In just a couple years they went from “rogue IT” to “core systems”. Most firms recognized the value of “big data”, acknowledged these platforms are essential, and tasked IT teams with bringing them “under IT governance”. Most firms today are taking their first steps to retrofit security and governance controls onto Hadoop.

Let’s dig into how all the pieces fit together:

Architecture and Data Flow

Hadoop has been wildly successful because it scales well, can be configured to handle a wide variety of use cases, and is very inexpensive compared to relational and data warehouse alternatives. Which is all another way of saying it’s cheap, fast, and flexible. To show why and how it scales, let’s take a look at a Hadoop cluster architecture:

Hadoop Architecture

There are several things to note here. The architecture promotes scaling and performance. It provides parallel processing, and additional nodes provide ‘horizontal’ scalability. This architecture is also inherently multi-tenant, supporting multiple applications across one or more file groups. But there are a lot of moving parts; each node communicates with its peers to ensure that data is properly replicated, nodes are on-line and functional, storage is optimized, and application requests are being processed. We’ll dig into specific threats to Hadoop clusters later in this series.

Hadoop Stack

To appreciate Hadoop’s flexibility, you need to understand that a cluster can be fully customized. It is useful to think of the Hadoop framework as a ‘stack’, much like a LAMP stack, but far less standardized. While Pig and Hive are commonly used, the ability to mix and match components makes deployments much more diverse. For example, Sqoop moves bulk data in and out of the cluster, while YARN schedules jobs and manages cluster resources. You can select different big data environments to support columnar, graph, document, XML, or multidimensional data. And over the last couple years MapReduce has largely given way to SQL query engines – with Spark, Drill, Impala, and Hive all accommodating increasing use of SQL-style queries. This modularity offers great flexibility to assemble and tailor clusters to behave and perform exactly as desired. But it also makes security more difficult – each option brings its own security options and issues.
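The shift from MapReduce to SQL-style engines is easier to see with a toy example. This Python sketch (illustrative only – real jobs would run on Hive, Spark, or Impala, not in-process) computes the same aggregation both ways, using an in-memory SQLite table to stand in for a SQL query engine:

```python
import sqlite3
from collections import Counter

records = ["error", "login", "error", "error", "login"]

# MapReduce style: map each record to (key, 1), then reduce by key.
mapped = [(r, 1) for r in records]
reduced = Counter()
for key, count in mapped:
    reduced[key] += count
print(dict(reduced))  # {'error': 3, 'login': 2}

# SQL style: the same aggregation as a declarative GROUP BY.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE events (name TEXT)")
db.executemany("INSERT INTO events VALUES (?)", [(r,) for r in records])
rows = dict(db.execute("SELECT name, COUNT(*) FROM events GROUP BY name ORDER BY name"))
print(rows)  # {'error': 3, 'login': 2}
```

Same answer either way – the SQL form is just easier to write, which is a big part of why SQL-style engines took over.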


The beauty part is that you can set up a cluster to satisfy your usability, scalability, and performance goals. You can tailor it to specific types of data, or add modules to facilitate analysis of certain data sets. But that flexibility brings complexity. Each module runs a specific version of code, has its own configuration, and may require independent authentication to work in the cluster. Many pieces must work in tandem here to process data, so each requires its own security review.

Some of you reading this are already familiar with the architecture and component stack of a Hadoop cluster, and may be asking, “Why are we going through these basics?” To understand threats and appropriate responses, you first need to understand how all the pieces of the cluster work together. Each component interface is a trust relationship, and each relationship is a target. Each component offers an attacker a specific set of potential exploits, and defenders have a corresponding set of options for attack detection and prevention. Understanding architecture and cluster composition is the first step toward putting together your security strategy.

Our next post will present several strategies used to secure big data. Each model offers different benefits, and each requires different supplementary security tools. After selecting a strategy, you can assemble a set of security controls to meet your objectives.

—Adrian Lane

Monday, January 25, 2016

Securing Hadoop: Security Recommendations for NoSQL platforms [New Series]

By Adrian Lane

It’s been three and a half years since we published our research paper on Securing Big Data. It has been one of the most popular papers we’ve ever written, and it’s no wonder: NoSQL adoption was even faster than we expected. We see hundreds of new projects popping up, leveraging the scale, analytics, and low cost of these platforms. It’s not hyperbole to claim they have revolutionized the database market over the last 5 years, and community support behind these platforms – especially Hadoop – is staggering.

At the time we wrote that paper, security in Hadoop – much less the other platforms – was something of a barren wasteland. The platforms did not include basic controls for data protection, most third-party tools could not scale along with NoSQL and thus were of little use to developers, and leaders of NoSQL firms directed resources to improving performance and scalability, not security. Heck, in 2012 the version of Hadoop I evaluated did not even require an administrative password!

But when it comes to NoSQL security, and Hadoop specifically, things have changed dramatically. As we advise clients on how to implement security controls, there are many new options to consider. And while there remain some gaps in monitoring and assessment capabilities, Hadoop has (mostly) reached security parity with the relational platforms of old. We can’t call it a barren wasteland any longer, and to accurately advise people on approaches and tools, we can no longer simply refer them back to the original paper.

So we are kicking off a new research series to refresh this paper. Most of the content will be new, and this time we will do things a little differently than last time. First, we will provide less background on what makes NoSQL different from relational databases, as most people in IT are now comfortable with the architectural and functional distinctions between the two. Second, while most of our recommendations still apply to NoSQL platforms in general, this research will focus on Hadoop, because we get the majority of questions on Hadoop security despite the dozens of alternatives. Finally, as there are many more aspects to cover, we will weave preventative and detective controls into a more operational (i.e., day-to-day management) model for both data and database infrastructure.

Here is how we are laying out the series:

Hadoop Architecture and Assembly — The goal of this post is to succinctly outline what Hadoop and similar NoSQL clusters look like, how they are assembled, and how they are used. In this light we get a better idea of the security challenges and what sorts of protections are needed. As developers and data scientists stretch system performance and scalability with custom combinations of open source and commercial products, there really is no such thing as a standard Hadoop deployment. With these considerations in mind, we will map out threats to the cluster.

Use Cases & Security Architectures — This post will discuss the strategic considerations for deploying security for big data. Depending upon which model you choose, you change where certain types of threats are addressed, and consequently which tools you will rely upon to provide security. Stated another way: the security model you choose dictates which security technologies you need to prevent and detect threats. Organizations take several approaches to securing Hadoop and other NoSQL clusters, ranging from securing the network around the cluster, through identity management and per-node security controls, to a data-centric approach to security. We’ll go over the major trends we see today, and discuss the advantages and pitfalls of each approach.

Building Security Into the Cluster — Here is where we discuss how all the pieces fit together. There are many security controls available, and each addresses a specific threat vector an attacker may employ. We’ll cover the security controls you want to build into your cluster from the start: identity, authorization, transport layer security, application security, and data encryption. These are the base controls that define how the cluster may be used, from a security standpoint.
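To make “built-in” concrete: several of these baseline controls map directly to stock Hadoop configuration properties. The fragment below is an illustration rather than a drop-in config – exact names, defaults, and required supporting setup (Kerberos KDC, keytabs, certificates) vary by version and distribution:

```xml
<!-- core-site.xml: require Kerberos authentication, and enable
     service-level authorization and encrypted RPC -->
<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
</property>
<property>
  <name>hadoop.security.authorization</name>
  <value>true</value>
</property>
<property>
  <name>hadoop.rpc.protection</name>
  <value>privacy</value>
</property>

<!-- hdfs-site.xml: encrypt block data in transit between nodes -->
<property>
  <name>dfs.encrypt.data.transfer</name>
  <value>true</value>
</property>
```

The point is that identity, authorization, and transport security are cluster-wide settings you want decided before data lands in the cluster, not bolted on afterward.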

Operational Security — Here we will focus on day-to-day security operations: configuration management, patching, logging, monitoring, node validation, and watching for anomalous user behavior. We’ll even discuss integrating a DevOps approach into cluster administration to improve speed and consistency.
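As a sketch of the node-validation idea (all node names and settings here are hypothetical), a few lines of Python are enough to flag nodes that drift from an approved security baseline:

```python
# Approved security baseline for every node in the cluster.
baseline = {
    "kerberos_enabled": True,
    "rpc_protection": "privacy",
    "hadoop_version": "2.7.1",
}

# Hypothetical per-node settings, as gathered by whatever configuration
# management tooling you already run (Chef, Puppet, Ansible, etc.).
cluster = {
    "node-1": {"kerberos_enabled": True,  "rpc_protection": "privacy",        "hadoop_version": "2.7.1"},
    "node-2": {"kerberos_enabled": True,  "rpc_protection": "authentication", "hadoop_version": "2.7.1"},
    "node-3": {"kerberos_enabled": False, "rpc_protection": "privacy",        "hadoop_version": "2.6.0"},
}

def find_drift(baseline, cluster):
    """Return {node: {setting: (expected, actual)}} for nodes off baseline."""
    drift = {}
    for node, settings in cluster.items():
        bad = {key: (want, settings.get(key))
               for key, want in baseline.items() if settings.get(key) != want}
        if bad:
            drift[node] = bad
    return drift

for node, problems in sorted(find_drift(baseline, cluster).items()):
    print(node, problems)
```

Run on a schedule, a check like this turns “are all the nodes still configured the way we decided?” from an annual audit question into a daily answer – which is the whole point of operational security.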

Commercial Hadoop and NoSQL variants — Hadoop is the dominant flavor of ‘big data’ in use today. In this section we will discuss what the commercial Hadoop platform vendors are doing to promote security for their customers, with a blend of open source, home-grown, and third-party security products. There is no reason to roll your own security out of necessity: commercial variants often add their own products or provide security bundles. Each offers unique capabilities, and each has a vision of what its customers should focus on, so we will cover some of the current offerings. We will also offer advice on applying security to non-Hadoop platforms. While Hadoop is the most commonly used platform, there are specialized flavors of NoSQL that are eminently appropriate for certain business challenges and in wide use. Some even use HDFS or other Hadoop components, which allows the same security controls to be applied across different clusters. We will close out this section by discussing where the security controls covered earlier can be deployed in non-Hadoop environments.

As with our original paper, this is not intended to be an exhaustive look at all potential security options, but to help the IT and development teams who run these clusters get basic security controls in place.

Up next, Hadoop Architecture and Assembly.

—Adrian Lane

The EIGHTH Annual Disaster Recovery Breakfast: Clouds Ahead

By Mike Rothman

DRB 2016

Once again Securosis and friends are hosting our RSA Conference Disaster Recovery Breakfast. It’s really hard to believe this is the eighth year for this event. Regardless of San Francisco’s February weather, we expect to be seeing clouds all week. But we’re happy to help you cut through the fog to grab some grub, drinks, and bacon.

Kidding aside, we are grateful that so many of our friends, clients, and colleagues enjoy a couple hours away from the show that is now the RSAC. By Thursday we’re all disasters, and it’s very nice to have a place to kick back, have some conversations at a normal decibel level, and grab a nice breakfast. Did we mention there will be bacon?

With the continued support of Kulesa Faul, we’re honored to bring in two new supporters this year. If you don’t know our friends at CHEN PR and LaunchTech, you’ll have a great opportunity to say hello and thank them for helping support your habits.

As always the breakfast will be Thursday morning from 8-11 at Jillian’s in the Metreon. It’s an open door – come and leave as you want. We will have food, beverages, and assorted non-prescription recovery items to ease your day. Yes, the bar will be open – Mike has acquired a taste for Bailey’s in his coffee.

Please remember what the DR Breakfast is all about. No marketing, no spin, no t-shirts, and no flashing sunglasses – it’s just a quiet place to relax and have muddled conversations with folks you know, or maybe even go out on a limb and meet someone new. After three nights of RSA Conference shenanigans, we are confident you will enjoy the DRB as much as we do.

See you there.

To help us estimate numbers, please RSVP to rsvp (at) securosis (dot) com.

—Mike Rothman

Security is Changing. So is Securosis.

By Rich

Last week Rich sent around Cockroaches Versus Unicorns: The Golden Age Of Cybersecurity Startups, by Mahendra Ramsinghani over at TechCrunch, for us to read. It isn’t an article every security professional needs to read, but it is certainly mandatory reading for anyone who makes buying decisions, tracks the security market, or is on the investment or startup side.

It also nearly perfectly describes what we are going through as a company.

His premise is that ‘unicorns’ are rare in the security industry. There are very few billion-dollar market cap companies, relative to the overall size of the market. But security companies are better suited to survive downturns and other challenging times. We are basically ‘cockroaches’, which persist through every tech Armageddon, often due to our ability to fall back on services.

Many security startups are not unicorns; rather, they are cockroaches – they rarely die, and in tough times, they can switch into a frugal/consulting mode. Like cockroaches, they can survive long nuclear winters. Security companies can be capital-efficient, and typically consume ~$40 million to reach break-even. This gives them a survival edge – but VCs are looking for a “growth edge.”

The security market also appears much smaller than it should be considering the market dynamics, although it is very possible that is changing thanks to the hostile world out there. The article also postulates that the entire environment is shifting, with carriers and managed services providers jumping into acquisitions while large established players struggle.

Yet most of the startups VCs see are just more of the same, fail to differentiate, and rely far too much on really poor FUD-based sales dynamics.

With increasing hacks, the CISO’s life has just become a lot messier. One CISO told me, “Between my HVAC vendor and my board of directors, I am stretched. And everyday I get a hundred LinkedIn requests from vendors. Their FUD approach to security sales is exhausting.”


“I have seen at least 40 FireEye killers in the past 12 months,” one Palo Alto-based VC told me. Clearly he was exhausted. Some sub-sectors are overheated and investors are treading cautiously.

We certainly see the same thing. How many threat intel and security analytics startups does the industry need? We get a few briefing requests a week from new companies doing exactly the same things. And all our CISO friends hate vendor sales techniques. These senior security folks get upwards of 500 emails and 100 phone calls a week from salespeople trying to get meetings. All this security crap looks the same.

This combination inevitably leads to a contraction of seed capital, and that is where our story starts.


Most of you have noticed that over the past few years our research has skewed strongly toward cloud security, automation, and DevOps. This started with our initial partnership with the Cloud Security Alliance to build out the CCSK training class around 6 years ago. Rich had to create all the hands-on labs, which pulled him down the rabbit hole of Amazon Web Services, OpenStack, Azure, and all the supporting tools.

As analysts we like to think it’s our job to have a good sense of what’s coming down the road. We made a bet on the cloud and it paid off, transitioning from a hobby to generate beer money to a major source of ongoing revenue. It also opened us up to a wider client base, especially among end-user organizations.

Three years ago Rich realized that in all his cloud security engagements, and all the classes we taught, we heard the same problems over and over. The biggest unsolved problem seemed to be cloud security automation. The next year was spent writing some proof-of-concept code merely to support conference presentations because there were no vendor examples, but at every talk attendees kept asking for “more… faster”.

This demand became too great to ignore, and nearly 2 years ago we decided to start building our own platform. And we did … we built our own cloud security platform. Don’t worry, we don’t have anything to sell you – this is where Ramsinghani’s article comes in.


Our initial plan was to self-fund development (Securosis is an awesome business) until we had a solid demo/prototype. Then we assumed it would be easy to get seed cash from some of our successful friends and build a new company in parallel with Securosis to focus on the product. We didn’t want to just start a software company and jettison Securosis, because our research is an essential driver of differentiation, and we wanted to build the company without going the traditional VC route.

We also have some practical limitations on how we can do things. We are older, have families to support, and have deep roots where we live that preclude relocation. The analogy we use is that we can’t go back to eating ramen for dinner every night in a coding flophouse. The demo killed when we showed it to people, we are really smart, and people like us. Our future was bright.

Then we got hit with the reality clue bat. Everything was looking awesome last year at RSA when we started showing people and talking to investors. By summer all our options fell apart. We didn’t fit the usual model. We weren’t going to move to the Bay Area. We couldn’t take pay cuts to ‘normal’ founder levels and still support our families. And to be honest, we still didn’t want to go the normal VC route. We just weren’t going to play that game, given the road rash both Mike and Adrian have from earlier in their careers.

Just like the article said, we couldn’t find seed funding. At least not the way we wanted to build the company. We even had a near-miss on an acquisition, but we couldn’t line everything up to hit everyone’s goals and expectations.

Yet while this all went on, the Securosis business you see every day continued to boom. We increased revenue despite all the distractions and opportunity cost of running a second company. Our services and research continued to drive toward the cloud and automation, exactly as Ramsinghani described. Even the product platform continued to come together well, despite our super limited resources.

Securosis 2.0

We weren’t going to talk about any of this yet, but that article struck too close to home. It described exactly what we have been seeing on the analyst side, and also experiencing as we tried to build a separate company.

First of all we aren’t discarding our core business or customers, but we are most definitely changing direction. Our biggest area of growth has been our cloud security workshops, training, and project/architecture assessments. We barely even talk about them, but they sell like crazy. We’ve spent 6+ years working hands-on in the cloud, and it’s paying off. We spent 3 years focusing on automation and DevOps, and that is also now part of almost all our engagements.

So that’s going to be our new focus. Cloud security and supporting automation, and DevOps tools and techniques. But there are only 24 hours in a day, so we are backing off some of our other research to focus.

We don’t know exactly what this will look like or how quickly we will be able to shift our focus, but we should have our first pass of the new workshops ready to reveal pretty soon, plus another major partnership. We are also looking at options for local events and a new membership program, and have already started new kinds of research. We aren’t changing our spots. A lot of our research will remain free; some will probably be tied into one of our other projects. Nothing changes for existing customers. We will also rebrand to reflect the new focus. But we will keep the Securosis name in some form – we’re attached to it.

We’ll use our automation and orchestration platform, Trinity, as a research tool to test our hare-brained ideas about how cloud security and automation should happen. As we continue to build out its capabilities (we need them for some of our projects), we hope Trinity will interest our research clients in some capacity. It’s not the first time a security services shop has built a product to help them deliver better services cheaper and faster. We call that operation “Securosis Labs” for now.

We have been the security research shop most vociferous and aggressive about how the cloud is going to change everything we know about the technology business and securing it. We’re putting our money where our mouth is, because this is so clearly where the world is headed that we would be idiots not to jump on it.

Now is the time. It’s time to grow the company beyond what 3 guys in coffee shops can deliver. It’s time to put into practice everything we have learned about the new world order. It’s time to lead organizations through what will be a turbulent ride into the clouds. It’s time for Securosis 2.0. We’re very fired up, and we ask you to stay tuned as we figure out and announce what this will look like over the next few months.


Wednesday, January 20, 2016

Incite 1/20/2016 — Ch-ch-ch-ch-changes

By Mike Rothman

I have always gotten great meaning from music. I can point back to times in my life when certain songs totally resonate. Like when I was a geeky teen and Rush’s Signals spoke to me. I saw myself as the awkward kid in Subdivisions who had a hard time fitting in. Then I went through my Pink Floyd stage in college, where “The Wall” dredged up many emotions from a challenging childhood and the resulting distance I kept from people. Then Guns ‘n Roses spoke to me when I was partying and raging, and to this day I remain shocked I escaped largely unscathed (though my liver may not agree).

But I never really understood David Bowie. I certainly appreciated his music. And his theatrical nature was entertaining, but his music never spoke to me. In fact I’m listening to his final album (Blackstar) right now and I don’t get it. When Bowie passed away last week, I did what most people my age did. I busted out the Ziggy Stardust album (OK, I searched for it on Apple Music and played it) and once again gained a great appreciation for Bowie the musician.

Bowie Changes

Then I queued up one of the dozens of Bowie Greatest Hits albums. I really enjoyed reconnecting with Space Oddity, Rebel Rebel, and even some of the songs from “Let’s Dance”, if only for nostalgia’s sake. Then Changes came on. I started paying attention to the lyrics.

Ch-ch-ch-ch-changes (Turn and face the strange)
Ch-ch-changes
Don’t want to be a richer man
Ch-ch-ch-ch-changes (Turn and face the strange)
Ch-ch-changes
Just gonna have to be a different man
Time may change me
But I can’t trace time
– David Bowie, “Changes”

I felt the wave of meaning wash over me. Changes resonates for me at this moment in time. I mean really resonates. I’ve alluded that I have been going through many changes in my life the past few years. A few years ago I reached a crossroads. I remembered there are people who stay on shore, and others who set sail without any idea what lies ahead. Being an explorer, I jumped aboard the SS Uncertain, and embarked upon the next phase of my life.

Yet I leave shore today a different man than 20 years ago. As the song says, time has changed me. I have more experience, but I’m less jaded. I’m far more aware of my emotions, and much less judgmental about the choices others make. I have things I want to achieve, but no attachment to achieving them. I choose to see the beauty in the world, and search for opportunities to connect with people of varied backgrounds and interests, rather than hiding behind self-imposed walls. I am happy, but not satisfied, because there is always another place to explore, more experiences to have, and additional opportunities for growth and connection.

Bowie is right. I can’t trace time and I can’t change what has already happened. I’ve made mistakes, but I have few regrets. I have learned from it all, and I take those lessons with me as I move forward. I do find it interesting that as I complete my personal transformation, it’s time to evolve Securosis. You’ll learn more about that next week, but it underscores the same concept. Ch-ch-ch-ch-changes. Nothing stays the same. Not me. Not you. Nothing. You can turn and face the strange, or you can pine for days gone by from your chair on the shore.

You know how I choose.


Photo credit: “Chchchange” from Cole Henley

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour. Your emails, alerts, and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

SIEM Kung Fu

Building a Threat Intelligence Program

Network Security Gateway Evolution

Recently Published Papers

Incite 4 U

  1. Everyone is an insider: Since advanced threat detection is still very shiny, it’s not a surprise that attention has swung back to the insider threat. It seems that every 4-5 years people remember that insiders have privileged access and can steal things if they so desire. About the same time, some new technology appears that promises to identify those malicious employees and save your bacon. Then it turns out finding insiders is hard, and everyone focuses on the latest shiny attack vector. Of course, the reality is that regardless of whether an attack starts outside or inside your network, at some point the adversary gains a presence in your environment. At that point they are an insider, whether they are on your payroll or not. This NetworkWorld Insider (no pun intended, and the article requires registration) does a decent job of giving you some things to look for when trying to find insider attacks. But to be clear, these are good indicators of any kind of attack, not just insider activity. Looking for DNS traffic anomalies, data flows around key assets, and tracking endpoint activity are good tips. And things you should already be doing… – MR

  2. Scarecrow has a brain: On first review, Gary McGraw’s recent post on 7 Myths of Software Security best practices set off my analyst BS detector. Gary is about as knowledgeable as anyone in the application security space, but the ‘Myths’ struck me as straw man arguments; these are not the questions customers are asking. But when you dig in, you realize the ‘Myths’ accurately reflect how companies act. All too often IT departments fail to comprehend security requirements, and software developers taking their first missteps in security fall into these traps. They focus on one aspect of a software security program – maybe a pen test – not understanding that security needs to touch every facet of development. Application security is not a bolt-on ‘thing’, but a systemic commitment to delivering secure software as a whole. If you’re starting a software security program, this is recommended reading. – AL

  3. What’s next, the Triceratops Attack? Yes, I’m poking fun at steganography, but pretty much every sophisticated attack (and a lot of unsophisticated ones) uses it to hide malicious code in seemingly innocuous files. So you might as well learn a bit about it, right? This pretty good overview by Nick Lewis on SearchSecurity (registration required here too, ugh) describes how steganography has become commonplace. With an infinite number of places to hide malicious code, we always come back to the need to monitor devices and activity for signs of attack. Sure, you should try to prevent attacks. But, as we’ve been saying for years, it’s also critical to increase investment in detection, because attackers are getting better at hiding attacks in plain sight. – MR

  4. Winning: Jeremiah Grossman has a good, succinct account of the ad-blocking wars, capturing the back and forth between ad-tech and personal blocker technologies. He also nails a problem people outside security are not fully aware of: that “the ad tech industry behaves quite similarly to the malware industry, with both the techniques and delivery” and – just like malware – advertisers want to pwn your browser. I guess you could make a case that most endpoint security packages are rootkits, but I digress. I do disagree with his conclusion that “ad tech” will win, though. Many of us are fine with skipping content that requires registration, siphons off and sells our personal data, or demands payment for crap. With so many voices on the Internet you can usually find the same (or better) content elsewhere. Trackers and scripts are just another indication that a site does not have your best interests at heart. So yes, you can win… if you choose to. – AL

  5. Increasing the security of your (Mac): As a long-time security person, I kind of forget the basics. Sure, I write about fundamentals from time to time on the blog, but what about the simple stuff we do by habit? That’s the stuff our friends and family need to do and see. Some understand because they have been around folks like us for years. Others depend on you to configure and protect their devices. Being the family IT person is OK, but it can get tiring. So you can thank Costin Raiu for documenting some good consumer hygiene tactics on the Kaspersky blog. Yes, this is obvious stuff, but probably only to you. Yes, it’s allegedly Mac-focused, but the tactics apply to Windows PCs as well. And we can debate how useful so-called security solutions are, but that’s nitpicking. You can’t stop every attack (duh!), but you (and the people you care about) don’t need to be low-hanging fruit for attackers either. – MR
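Circling back to the DNS tip in the first item: a toy Python sketch (hypothetical hosts and counts, with a deliberately crude threshold) shows the shape of a volume-based DNS anomaly check. Real detection would layer many signals, but the principle is the same:

```python
from statistics import mean, stdev

# Hypothetical per-host DNS query counts for the last hour.
queries = {
    "10.0.0.5": 120,
    "10.0.0.6": 95,
    "10.0.0.7": 110,
    "10.0.0.8": 105,
    "10.0.0.9": 2400,  # possible beaconing or DNS tunneling
}

def dns_outliers(counts, z_threshold=1.5):
    """Flag hosts whose query volume sits well above the population mean.
    A single z-score check is deliberately crude - real detection layers
    many signals (entropy, NXDOMAIN rates, destination reputation)."""
    values = list(counts.values())
    if len(values) < 2:
        return []
    mu, sigma = mean(values), stdev(values)
    return [host for host, c in counts.items()
            if sigma and (c - mu) / sigma > z_threshold]

print(dns_outliers(queries))  # ['10.0.0.9']
```

Even something this simple surfaces the host worth a second look – which is the point of the item above: you should already be watching this traffic.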

—Mike Rothman

Friday, January 15, 2016

Summary: Impossible

By Rich

Rich here.

When I hurt my knee running right before Thanksgiving everyone glanced at my brace and felt absolutely compelled to tell me how much “getting old sucks”. Hell, even my doctor commiserated as he discussed his recent soccer injury.

The only problem is I first hurt my knee around junior high, and in many ways it’s been better since I hit my 40s than at any other time I can remember.

As a kid my mom didn’t want me playing football because of my knees (I tried soccer for a year in 10th grade, hurt it worse, then swapped to football to finish up high school). I wore a soft brace for most of my martial arts career. I’ve been in physical therapy so many times over the past three decades that I could write a book on the changing treatment modalities of chondromalacia patellae. I had surgery once, but it didn’t help.

As a lifetime competitive athlete, running has always been part of my training, but distance running was always a problem. For a long time I thought a 10K race was my physical limit. Training for more than that really stressed the knee. Then I swapped triathlon for martial arts, and realized the knee did much better when it wasn’t smashing into things nearly every day.

marathon medal

Around that time my girlfriend (now wife) signed us up for a half-marathon (13.1 miles). I nearly died, but I made it. Over the subsequent decade I’ve run more of them and shaved 45 minutes off my PR. The older I get, the better my times for anything over a couple miles, and the longer distances I can run. But there’s one goal that seemed impossible – the full marathon. 26.2 miles of knee pounding awesomesauce. Twice as far as the longest race I ever ran.

My first attempt, last year, didn’t go so well. Deep into my training program I developed plantar fasciitis, which is a fancy way of saying “my foot was f-ed up”. So I pushed my plans back to a later race, rehabilitated my foot… and got stomach flu the week before the last race of the year, before Phoenix weather went “face of the sun” hot. A seriously disheartening setback after 6 months of training. I made up for it with beer. Easier on the foot.

A few months later an email popped up in my inbox letting me know registration for the Walt Disney World Marathon opened the next day. My wife and I looked at it, looked at each other, and signed up before the realistic parts of our brains could stop us. Besides, the race was only a month after we would be there with the kids, so we felt justified leaving them at home for the long weekend.

I built up a better base and then started a 15-week custom program. Halfway through, on a relatively modest 8-mile run in new shoes, I injured my Achilles tendon and had to swap to the bike for a couple weeks. Near the peak of my program, on a short 2-mile run and stretch day, I angled my knee just the wrong way, and proceeded to enjoy the pleasure of reliving my childhood pain.

Three weeks later the knee wasn’t better, but I could at least run again. But now I was training in full-on panic mode, trying to make up for missing some of the most important weeks of my program. My goal time went out the window, and I geared down into a survival mindset. Yes, by the time I lined up at the race start I had missed 5 of 15 weeks of my training program. Even my wife missed a few weeks thanks to strep throat (which I also caught). To add insult to injury, it was nearly 70F with 100% humidity. In December. At 5:35am.

You know what happened next? We ran a friggin’ marathon. Yes, at times things hurt. I got one nasty blister I patched up at an aid station. My headphones crapped out. I stopped at every single water station thanks to the humidity, and probably should have worn a bathing suit instead of running shorts. But overall it wasn’t bad. Heck, I enjoyed most of the race. I didn’t really start hurting until mile 17, and my pace didn’t fully crack until mile 22. Disney puts on a hell of a race, with distracting entertainment along the entire course.

Thanks to the humidity it was the slowest Disney marathon in the 23-year history of the event. Even then, my time wasn’t embarrassing, and I finished in the top 20% or so (at a time that isn’t even close to getting into Boston or New York). I didn’t feel terrible. My wife also finished up in the front third of the pack, and we spent the afternoon walking around Disney World (slowly). We felt really good the next day, other than my darn knee. The one that held up for all 26.2 miles. The one that will be better in a week or two.

I checked off a bucket list item and completed something I thought was impossible. Something I told myself my entire life I couldn’t do.

There is nothing more satisfying than proving yourself wrong. Except, perhaps, doing it again.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Securosis Posts

Favorite Outside Posts

Research Reports and Presentations

Top News and Posts


Wednesday, January 13, 2016

Incite 1/13/2016: Permitted

By Mike Rothman

I’m not sure how it happened, but XX1 turned 15 in November and got her driver’s permit. Wait, what?!?! That little girl can now drive. Like, legally? WTF? Clearly it is now January, and I am still in shock that 15 years has passed by in the blink of an eye.

Now it’s on me to teach her to drive. She’ll take a driver’s ed course in February, so that will help and give her some practical experience with someone who actually drives with teenagers for a living. Is that on the list of worst jobs? Second only to elephant cage cleaner at the zoo, driving with inexperienced drivers seems like my version of hell on earth.

Then I remembered back to when I learned to drive. My Dad had a ‘72 Bug for me that he drove around. He picked me up and drove me to the local town pool parking lot. He taught me how to balance the clutch (yes, it was a stick shift) and start, stop, drive in a straight line, and turn. I recall him being extraordinarily patient as I smoked the clutch and stalled out 10 times. But after a while I got the hang of it.

drivers permit

Then he said, “OK Mike. Drive home.” WHAT? I was kind of in shock. It was maybe 3 miles to my house, but it was 3 miles of real road. Road with other drivers on it. I almost crapped my pants, but we got home in one piece. Dad would let me drive most places after that, even on the highway and on bridges. He remained incredibly patient, even when I stalled 10 times on a slight incline with about 50 cars behind me sitting on their horns. Yup, crapped my pants that time too. I remember that like it was yesterday, but it was 31 years ago. Damn.

So before winter break I took XX1 out to the parking lot of the library. She got into the driver’s seat and I almost crapped my pants. You getting the recurring theme here? She had no idea what she was doing. I have an automatic transmission, so she didn’t have to worry about the clutch, but turning the car is a learned skill, and stopping without giving me whiplash was challenging for a little while. She did get the hang of it, but seeing her discomfort behind the wheel convinced me that my plan of having her drive home (like my Dad did to me) wouldn’t be a great idea. Neither for her self-esteem nor my blood pressure.

She’ll get the hang of it, and I have to remember that she’s different than me and I’m a different teacher than my Dad. We’ll get her driving at her pace. After she takes the driver’s ed class I’ll have her start driving when she’s with me. Before we know it, she’ll have 25-30 hours behind the wheel.

But I’m not taking any chances. I plan on sending her to an advanced driving school. My cousin sent me a link to this great program in NC called B.R.A.K.E.S, which provides a 4-hour defensive driving workshop specifically for teens. I’m also going to take her to a Skip Barber racing class or something similar, so she can learn how to really handle the car. Sure it’s expensive, but she’s important cargo, commanding a two-ton vehicle, so I want to make sure she’s prepared.

But I have to understand this is a metaphor for the rest of her life. As parents we can prepare her to the best of our ability. Then we need to let her loose to have her own experiences and learn her lessons. She can count on our support through the inevitable ups and downs. My little girl is growing up.


Photo credit: “International Driving Permit” from Tony Webster

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour. Your emails, alerts, and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

SIEM Kung Fu

Building a Threat Intelligence Program

Network Security Gateway Evolution

Recently Published Papers

Incite 4 U

  1. Security as a business problem: The more things change, the more they stay the same. NetworkWorld’s Overcoming stubborn execs for security sake took me back to 2006, right before I wrote the Pragmatic CSO. Senior management doesn’t get it? Yup. Mid-managers want to circumvent the rules? Yup. On and on it goes, and we run on the hamster wheel for a decade, ending up right back in the same place. Welcome to the rest of your security career. The fact is that as high-profile as security has become to senior management and the Audit Committee, what’s a lot more important to them is making the numbers and hitting their objectives. So how can you get them to understand? You can’t. Not fully anyway. But you can make sure you discuss security in business terms, and that will at least provide some common ground for discussion. The article does a good job of discussing those tactics. – MR

  2. Shoot the messenger: Every year some legitimate tool – security or otherwise – gets labeled as a security threat. It’s not just nmap or Metasploit – even Google’s web crawlers can detect certain vulnerabilities and catalog the results (and do), and are therefore called a “hacker tool”, especially after con talks that explain how to use Google to hack. This time the Shodan web crawler was called a threat, as a recent advisory from Check Point noted what appeared to be Shodan scans prior to data breaches. The advisory itself is a good thing, but advice to block Shodan scans to deter hacking made the Twitterverse erupt in controversy. Thankfully social media has set everyone straight and the issue is resolved, right? Honestly, there is nothing wrong with blocking external Shodan scans while you address the vulnerabilities, but those pesky skeptics in the security community know blocking will be the ‘solution’ – not merely a starting point. Exactly like last time. – AL

  3. 4 tips for IR? Obviously there are more steps in an incident response. So this quick post by the CrowdStrike folks was interesting, and I think they did a decent job making a few critical points. First, you have to start with a damage assessment and an understanding of whether the adversary is still active in your environment. Next try to corral the devices in question, and data at risk, in some segmented and monitored environment, being careful to keep systems up to avoid either alerting the adversary or destroying evidence. Then call in the Forensicators. Given the shortage of those folks, and the level of demand, that is a non-trivial effort. But unless you are a Fortune-class enterprise with a group of incident responders you’ll need to work with an external firm. Then you need to notify affected stakeholders, and return systems to a healthy state. Obviously there are dozens of activities behind each of those tips, but they are good things to keep in mind. – MR

  4. Down in front: When Firefox stopped connecting via HTTPS to many web sites, some of you might have been frustrated enough to switch to a new browser. Firefox’s latest version stopped accepting SHA-1 signed certificates because the algorithm has been deprecated. But if your company uses DLP or a web security product that performs a ‘man-in-the-middle’ intercept to inspect content, odds are it still issues SHA-1 signed certificates. That makes Firefox barf, so you can’t connect. Too bad, so sad. You can use another browser if you choose, but as your requests are already being filtered (thanks, web proxy!), you can configure Firefox to accept those SHA-1 certificates without concern for degraded privacy or security. But you should ask your security vendor to up their game. – AL

  5. Can you change your mindset? This isn’t security related, but interesting enough to mention. There has been a ton of research on growth vs. set mindsets. Psychology Today has a quick article covering the research highlights. People with set mindsets are good with the status quo, and don’t think intelligence changes. Those with growth mindsets believe they can grow intelligence as they push out of their comfort zones and try new things. If you tend toward ‘set’, can you ‘grow’? Or are these fixed aspects of your personality that aren’t easy to change? The article makes it sound like you just decide to grow. Is it that easy? Maybe it should be, but I have my doubts about whether folks can fundamentally change their mindsets. – MR

—Mike Rothman

Tuesday, January 12, 2016

SIEM Kung Fu: Fundamentals [New Series]

By Mike Rothman

Another SIEM blog series? Really? Why are we still talking about SIEM? Isn’t that old technology? Hasn’t it been subsumed by new and shiny security analytics products and services? Be honest – those thoughts crossed your mind, especially because we have published a lot of SIEM related research over the past few years. We previously worked through the basics of the technology and how to choose the right SIEM for your needs. A bit over a year ago we looked into how to monitor hybrid cloud environments.

The fact is SIEM has become somewhat of a dirty word, but that’s ridiculous. Security monitoring needs to be a core, fundamental aspect of every security program. SIEM – in various flavors, using different technologies and deployment architectures – is how you do security monitoring. So it’s not about getting rid of the technology – it’s more about how to get the most out of your existing investment, and ensuring you can handle the advanced threats facing organizations today.

But we understand how SIEM got its bad name. Early versions of the technology were hard to use, and required significant integration just to get up and running. You needed to know what attacks you were looking for, and unfortunately most adversaries don’t send attack playbooks ahead of time. Operating an early SIEM required a ninja DBA, and even then queries could take hours (or days for full reports) to complete. Adding a new use case with additional searches and correlations required an act of Congress and a truckload of consultants. It’s not surprising organizations lost their patience with SIEM. So the technology was relegated to generating compliance reports and some very simple alerts, while other tools were used to do ‘real’ security monitoring.

But as with most other areas of security technology, SIEM has evolved. Security monitoring platforms now support a bunch of additional data types, including network packets. The architectures have evolved to scale more efficiently and have integrated fancy new ‘Big Data’ analytics engines to improve detection accuracy, even for attacks you haven’t seen before. Threat intelligence is integrated into the SIEM directly, so you can look for attacks seen at other organizations before they are launched at you.

So our new SIEM Kung Fu series will streamline our research to focus on what you need to know to get the most out of your SIEM, and solve the problems you face today by increasing your capabilities (the promised Kung Fu). But first let’s revisit the key use cases for SIEM and what is typically available out of the box with SIEM tools.

kung fu


The original use case for SIEM was security alert reduction. IDS and firewall devices were pumping out too many alerts, and you needed a way to figure out which of them required attention. That worked for a little while, but then adversaries got a lot better and learned to evade many of the simple correlations available with first-generation SIEM. Getting actionable alerts from your SIEM is the most important use case for the technology.

Many different techniques are used to detect these attacks. You can hunt for anomalies that kinda-sorta look like they could be an attack or you can do very sophisticated analytics on a wide variety of data sources to detect known attack patterns. What you cannot do any more is depend on simple file-based detection, because modern attacks are far more complicated. You need to analyze inbound network traffic (to find reconnaissance), device activity (for signs of compromise), and outbound network traffic (for command and control / botnet communications) as well. And that’s a simplified view of how a multi-faceted attack works. Sophisticated attacks require sophisticated analysis to detect and verify.

Out of the box a SIEM offers a number of different patterns to detect attacks. These run the gamut from simple privilege escalation to more sophisticated botnet activity and lateral movement. Of course these built-in detections are generic and need to be tuned to your specific environment, but they can give you a head start on finding malicious activity. This provides the quick win which has historically eluded many SIEM projects, and builds momentum for continued investment in SIEM technology.
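The shape of those out-of-the-box patterns is easy to sketch. This toy Python example (the event schema, host names, addresses, and the specific rule are all invented for illustration – no SIEM product works exactly this way) correlates a privilege escalation on a host with a subsequent outbound connection to a destination that host has never talked to before:

```python
from collections import defaultdict

def correlate(events, known_destinations):
    """Flag hosts that escalate privileges and then connect somewhere new.

    events: time-ordered list of dicts with 'type', 'host', and (for
    connections) 'dest'. known_destinations: per-host baseline of
    previously seen destinations.
    """
    escalated = set()   # hosts with a privilege escalation event
    alerts = []
    seen = defaultdict(set, {h: set(d) for h, d in known_destinations.items()})
    for ev in events:
        if ev["type"] == "priv_escalation":
            escalated.add(ev["host"])
        elif ev["type"] == "outbound_conn":
            if ev["host"] in escalated and ev["dest"] not in seen[ev["host"]]:
                alerts.append((ev["host"], ev["dest"]))
            seen[ev["host"]].add(ev["dest"])  # update the baseline
    return alerts

events = [
    {"type": "priv_escalation", "host": "web01"},
    {"type": "outbound_conn", "host": "web01", "dest": "203.0.113.9"},
    {"type": "outbound_conn", "host": "db01", "dest": "203.0.113.9"},
]
# web01 alerts (escalation followed by a novel destination); db01 does not.
print(correlate(events, {"web01": ["198.51.100.1"]}))
```

Even in this toy form you can see why tuning matters: the alert quality depends entirely on how good the per-environment baseline is.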

SIEM technology has advanced to the point where it can find many attacks without a lot of integration and customization. But to detect advanced and targeted attacks by sophisticated adversaries, a tool can only get you so far. You need to evolve how you use security monitoring tools. You cannot just put a shiny new tool in place and expect advanced adversaries to go away. That will be our area of focus for the later posts in this series.


Once you have determined an attack is under way – or more accurately, once you have detected one of the many attacks happening in your environment – you need to investigate the attack and figure out the extent of the damage. We have documented the incident response process, especially within the context of integrating threat intelligence, and SIEM is a critical tool to aggregate data and provide a platform for search and investigation.

Out of the box a SIEM will enable responders to search through aggregated security data. Some tools offer visualizations to help users see anomalous activity, and figure out where certain events happened in the timeline. But you will still need a talented responder to really dig into an attack and figure out what’s happening. No tool can take an incident response from cradle to grave. So the SIEM is not going to be the only tool your incident responders use. But in terms of efficiently figuring out what’s been compromised, the extent of the damage, and an initial damage assessment, the SIEM should be a keystone of your process. Especially given the ability of a SIEM to capture and analyze network packets, providing more granularity and the ability to build a timeline of what really happened during the attack.


Finally, the SIEM remains instrumental for generating compliance reports, which are still a necessary evil to substantiate the controls you have in place. This distinctly unsexy requirement seems old hat, but you don’t want to go back to the days of preparing for your assessments by wading through reams of log printouts and assembling data in Excel, do you? So SIEM tools ship with dozens of reports to show the controls in place and map them to compliance requirements, so you don’t need to do this manually.

Another reason the compliance use case is still important is the skills gap every security team struggles with. If you have valuable and scarce security talent generating reports to make an auditor go away, they aren’t verifying and triaging alerts, tuning detections to find new attacks, or investigating incidents. So automating as much of the compliance process as possible remains an important SIEM use case.

As we have mentioned in earlier SIEM research, a lot of these basic use cases can (and should) be implemented during a PoC process. That way you can have the vendor’s sales engineers help kickstart your efforts and get you up and running with the out-of-box capabilities. But a sophisticated attacker targeting your organization will not be detected by basic SIEM correlation. Through the rest of this series we will dig into more complicated use cases, including advanced threat detection and user behavior analysis, which require pushing the boundaries of what SIEM does and how you use it.

—Mike Rothman

Wednesday, January 06, 2016

Incite 1/6/2016 — Recharging

By Mike Rothman

The last time I took 2 weeks off was probably 20 years ago. As I write that down, it makes me sad. I’ve been running pretty hard for a long time. Even when I had some forced vacations (okay, when I got fired), I took maybe a couple days off before I started focusing on the next thing. Whether it was a new business or a job, I got consumed by what was next almost immediately. I didn’t give myself any time to recharge and heal from the road rash that accumulated from one crappy job after another.

Even when things are great, like the past 6 years working with Rich and Adrian, I didn’t take a block of time off. I was engaged and focused and I couldn’t wait to jump into the next thing. So I would. I spent day after day during the winter holidays as the only person banging away at their laptop at the coffee shop while everyone else was enjoying catching up with friends over Peppermint Mocha lattes.


I rationalized that I could be more productive because my phone wasn’t ringing off the hook and I wasn’t getting my normal flow of email. There wasn’t much news being announced and my buddies weren’t blogging at all. So I could just bang away at the projects I didn’t have time for during the year. Turns out that was nonsense. I was largely unproductive during winter break. I read a lot, spent time thinking, and it was fine. But it didn’t give me a chance to recharge because there was no separation.

The truth is I didn’t know how to relax. Maybe I was worried I wouldn’t be able to start back up again if I took that much time away. It turns out the projects that didn’t get done during the year didn’t get done over break because I didn’t want to do them. So they predictably dragged on through winter break and then into the next year.

That changed this year. I’m just back from two weeks pretty much off the grid. I took a week away with my kids. We went to Florida and checked out a Falcons game in Jacksonville, the Kennedy Space Center in Cape Canaveral, and Universal Studios in Orlando. We were able to work in some family time in South Florida for Xmas before heading back to Atlanta. I stayed on top of email, but only to respond to the most urgent requests. All two of them. I didn’t bring my laptop, so if I couldn’t take care of it on my iPad, it wasn’t getting done.

Then I took a week of adult R&R on the beach in Belize. I’m too cheap to pay for international cellular roaming, so my connectivity was restricted to when I could connect to crappy WiFi service. It was hard to check email or hang out in our Slack room during a snorkeling trip or an excursion down the Monkey River. So I didn’t. And the world didn’t end. The projects that dragged through the year didn’t get done. But they weren’t going to get done anyway and it was a hell of a lot more fun to be in Belize than a crappy coffee shop pretending to work.

I came back from the time off recharged and ready to dive into 2016. We’ve got a lot of strategic decisions to make as the technology business evolves towards cloud-everything and we have to adapt with it. I don’t spend a lot of time looking backwards and refuse to judge myself for not unplugging for all those years. But I’ll tell you, there will be more than one period of time where I’ll be totally unplugged in 2016. And I’ll be a hell of a lot more focused and productive when I return.


Photo credit: “Recharging Danbo Power” from Takashi Hososhima



Heavy Research


Building a Threat Intelligence Program

Network Security Gateway Evolution

Recently Published Papers

Incite 4 U

  1. Cloud vs. on-prem. Idiotic discussions continue: Do me a favor and don’t read this article trying to get to the bottom of whether the public cloud or on-prem is more secure. It’s an idiotic comparison because it depends on way too many factors to make a crass generalization. Period. You can architect a public cloud environment that is more secure than an environment built on-prem. But for a different use case you could make a case for the converse. It’s not about the environment an application and technology stack is built and run in, it’s about how it’s architected and how it takes advantage of the native capabilities of each option. We believe (and are making pretty significant corporate bets) that a public-cloud environment can be more secure than something built on-prem. But it depends, and we can’t wait until everyone is doing their innovative work in the cloud, so the discussion can shift to how to make the public cloud as secure as possible, instead of whether it’s more secure than something else. – MR

  2. In front of our eyes: Volkswagen was discovered to have modified its diesel vehicles’ engine management software to reduce emissions temporarily during the emissions testing process. Think about it for a minute: millions of vehicles were tested each year, by trained techs with tools and software designed to audit vehicle emissions, and yet software designed to circumvent the audits went undetected for years. While that story has nothing to do with security per se, the ‘attack’ used to bypass the test (and therefore the certification process), and the third-party discovery, is a story we see played out over and over with IT breaches. When you have a sophisticated and motivated adversary, they will be aware of (and work around) your defenses and assessment techniques. A single static test with an unquestioned binary response does not cut it. Think about that the next time you are looking to catch fraud or look for compromised systems in a complicated environment. – AL

  3. The invisible malware: With all of the innovation happening around malware detection, it’s getting easier to detect attacks, right? Yeah, not so much. Turns out it’s getting harder. As Dark Reading described, the newly discovered Latentbot uses so much obfuscation it’s largely invisible to current-generation detection tools. It’s a good thing China isn’t hacking so much (according to FireEye’s last earnings call anyway) because that gives researchers plenty of time to find cool botnets. And it’s interesting to learn how this new malware injects code multiple times, never stays installed for too long, and exploits devices at multiple levels to ensure persistent access and control over them. Yeah, it is clear you can’t stop attacks like this, so focusing on detecting lateral movement and exfiltration is your best option for finding pwned devices. – MR

  4. Banking on irrelevance: SSL and (to a lesser extent) TLS 1.0 have a handful of known vulnerabilities and weaknesses, depending on how they are deployed. The PCI Council previously required firms to update before the end of 2015, but recently the Council pushed its mandatory migration date from SSL to TLS out to June 2018. Because, well, the big retailers pulling the PCI-DSS strings couldn’t get there in time. Attackers have bags full of tricks for attacking these older protocols and accessing the network sessions they were designed to protect. It’s not clear how the Council decided pushing back the date two and a half years made any sense, but since they don’t mandate end-to-end encryption and pass card data in clear text, you are probably thinking “What is the point?” And from a PCI assessment perspective, if Apple Pay, Samsung Pay, and the like continue to gain acceptance, in three years payment tokens will likely make most of current PCI compliance irrelevant. But sometimes compliance drives needed change, and migrating to TLS 1.2 will be beneficial to data security. At some point, if it ever happens. – AL

  5. The few, the proud, the cyber: It’s good to see the military continuing to invest in cyber capabilities. The Army National Guard is standing up new cyber units to help do surveillance and recon on the nation’s adversaries. Ho hum, right? Actually it’s interesting because the National Guard may be able to get access to security professionals otherwise gainfully employed by commercial entities. It’s a big sacrifice to do security for military pay, when commercial organizations have totally different pay scales. But being able to help out (via the National Guard) could be a good alternative for patriotic folks who want to keep their commercial jobs. – MR
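On item 4 above, the client-side half of that TLS migration is a one-line change in most modern stacks. A minimal Python sketch (standard library only; the minimum_version attribute assumes Python 3.7+ built against a recent OpenSSL) that refuses SSLv3, TLS 1.0, and TLS 1.1 outright:

```python
import ssl

# Build a client context that will not negotiate anything older than
# TLS 1.2 -- the floor the PCI migration deadline points toward.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# Confirm the floor is in place before wrapping any sockets with it.
print(ctx.minimum_version >= ssl.TLSVersion.TLSv1_2)  # True
```

The hard part, as the item notes, was never the client code – it is every legacy terminal and middlebox on the other end of the connection.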

—Mike Rothman

Wednesday, December 16, 2015

Incite 12/15/2015: Looking Forward

By Mike Rothman

In last week’s Incite I looked backwards at 2015. As we close out this year (this will be the last Incite in 2015), let me take a look forward at what’s in store for 2016.

Basically I don’t have any clue.

I could lie to you and say I’ve got it all figured out, but I don’t. I fly by the seat of my pants pretty much every day of my life. And any time I think I have things figured out, I get a reminder (usually pretty harsh) that I don’t know squat. One thing I’m comfortable predicting is that things will be changing. Because they always do. Some years the change is very significant, like in 2015. Other years less so. But all the same, change is constant in my world.

looking forward

We’re going to do some different things at Securosis next year. We are very pleased with how we have focused our research toward cloud security, and plan to double down on that in 2016. We’ll roll out some new offerings, though I’m not exactly sure when or what they’ll be. We have a ton of ideas, and now we have to figure out which of them make the most sense, because we have more ideas than time or resources. Rich, Adrian, and I will get together in January and make those decisions – and it will involve beer.

Personally, I’ll continue my path of growth because well, growth. That includes trying new things, traveling to new places, and making new friends. I’m not going to set any goals beyond waking up every morning, maintaining my physical health, and continuing my meditation and spiritual practices. My kids are at an age where they need my presence and guidance, even though they will likely not listen, because teenagers know everything. Which basically means I’ll also need to be there to pick them up when they screw things up (and they will), and try to not say I told you so too many times.

I’ll also tell my story of transformation through the year. I’m not ready to do that yet, but I will because it’s an interesting story and I think it will resonate with some of you. It also ensures that I will remember as time marches on. I spent some time earlier in the year reading through old Incites and it was a great reminder of my journey.

Overall I’m very excited about 2016 and continuing to live with a view toward potential and not limitations. I’m focused on making sure those I love know they are special every single day. I’m committed to being happy where I am, grateful for how I got here, and excited for what is to come. I’ll ring in the New Year in a tropical paradise, and play the rest by ear.

All of us at Securosis are grateful for your support, and we wish you a healthy and happy 2016.


Photo credit: “looking forward to” from Elizabeth M



Heavy Research


Building a Threat Intelligence Program

Network Security Gateway Evolution

Recently Published Papers

Incite 4 U

  1. Good deed for the holidays: You too can help make software security better! OWASP, the Open Web Application Security Project, is developing a new set of secure coding guidelines for software developers. This document will be a great aid to developers who want to get up to speed on secure coding. It offers a succinct set of code examples – in most of the widely used programming languages – which address the top ten security coding flaws. And what developer doesn’t love easy to understand code examples? But wait, there’s more! This effort is truly open, so you get to participate in building the guidelines: the document I referenced is open for public comments and direct editing! So if you think the document is missing something, or there are better examples to be offered, or you think something is wrong, you can improve it. Do a good deed for the holidays and contribute. – AL

  2. Happy Holidays. Let’s make some crap up… It’s the holiday season. So obviously we will be subjected to everyone’s predictions of what’s in store for 2016. As you can tell from our last FireStarter of the year, we don’t buy into predictions. But the IDC folks don’t have any issue making things up. Their cousins at NetworkWorld (both have the same corporate parent IDG) have some bait posted about an upcoming IDC predictions webcast, and one of their predictions is that by 2020 data breaches will affect 25% of the world’s population. What does that even mean? How could you tell if it’s right? And who cares anyway? How will that prediction do anything to change what you are doing on a daily basis? Right, it won’t, because odds are you have already been affected by a data breach. So this is the worst kind of prediction. It can’t be proven or disproven, and it’s not relevant to your daily activity. Bravo IDC. I hope the others are a little better, but I won’t know, because I have better stuff to do than listen to nonsense. – MR

  3. Black Friday, Cyber Monday, and Liability Tuesday: As I have been out and about a lot this month, showing relatives around Arizona, my credit cards have gotten a lot of use. Restaurants, gift shops, museums, pet stores, big box retail, national parks, and even a place called “The Hippie Emporium” (don’t ask). And you know what I have seen? Other than Target, not a single merchant had adopted EMV. EMV-ready PoS devices are in place, but the EMV functionality is not operational. Got that? All that hype about merchant liability and almost zero adoption. A couple weeks back Branden Williams asked (paraphrasing) whether sucky and slow EMV chip readers will cause people to stay home and shop at Amazon or other online retailers. To which I respond ‘No’: they are not in wide enough use to have a detrimental effect. Amazon is getting a ton of new traffic this year, and I hear so are Etsy and even the ecommerce sites of traditional brick-and-mortar stores. It’s not because of EMV readers – it’s just getting easier to shop online, and more people are comfortable with it. But it does mean we are going to see the effects of the liability shift soon – ‘tis the season for credit card scams and fraud, and we will see some merchants get hammered. – AL

  4. Step by step malvertising: I enjoy blow-by-blow descriptions of recent attacks, so thanks to the Malwarebytes folks, who posted a detailed analysis of a recent malvertising campaign targeting Xfinity. What’s interesting is how this attack combines malvertising, an exploit kit, phishing (to collect personal data), and then a tech support scam. Now that’s leverage. Of course there are clues it’s a scam, including a different domain for the first linked site. Malwarebytes also posted a set of indicators so you can be ready for this kind of attack if your employees or family tend to click. – MR

—Mike Rothman

Tuesday, December 15, 2015

Building a TI Program: Success and Sharing

By Mike Rothman

To wrap up our series on Building a Threat Intelligence Program (Introduction; Gathering TI; Using TI), we need to jump back to the beginning for a bit. How do you define success of the program? More importantly, how can you kickstart the program with a fairly high-profile success to show the value of integrating external data into your defenses, and improve your security posture? That involves getting a quick win and then publicizing it.

Quick Win

The lowest-hanging fruit in threat intel is using it to find an adversary already in your environment who you didn’t know about. Of course it would be better if you didn’t have an active adversary within your defenses, but that is frankly an unlikely scenario. The reality is that some devices in your environment are already compromised – it’s a question of whether you know about them.

You are already doing security monitoring (you can thank compliance for that), so it’s just a matter of searching your existing repository of security data for indicators from your threat feeds. Any log aggregation or SIEM platform will perform a search like this. Of course it’s a manual process, and that’s fine for right now – you’re just looking for a quick win.
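That manual search can be as simple as sweeping an exported log file for your feed's indicators. Here is a minimal sketch of the idea; the file names and the one-indicator-per-line feed format are assumptions for illustration, and in practice you would use your SIEM or log aggregation platform's own query language instead:

```python
# Sketch: sweep an exported log file for threat-feed indicators.
# A hit means an indicator (IP, domain, hash) from the feed appears
# somewhere in your existing security data -- your quick win candidate.

def load_indicators(path):
    """Read one indicator per line, skipping blanks and # comments."""
    with open(path) as f:
        return {line.strip() for line in f
                if line.strip() and not line.startswith("#")}

def sweep_logs(log_path, indicators):
    """Return (line_number, indicator, log_line) for every match."""
    hits = []
    with open(log_path) as f:
        for num, line in enumerate(f, 1):
            for ioc in indicators:
                if ioc in line:
                    hits.append((num, ioc, line.rstrip()))
    return hits
```

A real implementation would normalize indicator types (don't substring-match a file hash against a URL field), but even this naive version is enough for a one-time hunt.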

Once you complete the search one of two things happens. Perhaps you found an active adversary you didn’t know about. You can drop the proverbial mic at this point – you have proven the value of external threat intel clearly. But before you spend a lot of time congratulating yourself, you have an incident response to get moving. Obviously you’ll document it, and be able to tell a compelling story of how TI was instrumental in identifying the attack earlier than you would have discovered it otherwise.

If you don’t find a smoking gun you’ll need to be a little more creative. We suggest loading up a list of known bad IP addresses into your egress firewall and looking for the inevitable traffic to those sites, which may indicate C&C nodes or other malicious activity. The value isn’t as pronounced as finding an active adversary, but it illustrates your new ability to find malicious traffic sooner using a TI feed.
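The egress check above boils down to matching outbound flows against the known-bad list. A toy sketch, assuming flow records are simple (source, destination, port) tuples pulled from your firewall or netflow collector:

```python
# Sketch: flag egress flows whose destination is on the known-bad IP
# list from the TI feed -- possible C&C or other malicious activity.

def flag_suspect_flows(flows, bad_ips):
    """flows: iterable of (src_ip, dst_ip, dst_port) tuples.
    bad_ips: set of known-bad destination addresses."""
    return [flow for flow in flows if flow[1] in bad_ips]
```

In production you would load the list into the egress firewall itself, as described above; this sketch just shows the matching logic for a retrospective look at flow logs.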

Keep in mind that the Quick Win is just that. It shows short-term value for an investment in threat intel. This can (and should) take place within any proof of concept you run with TI vendors during procurement. If you aren’t getting immediate value, either you are using the wrong data source and/or tool, or you already had a strong security posture and will likely get better short-term value from another project.

Sustained Success

We didn’t call this series “Getting a Quick Win with TI”, so we need to expand our aperture a bit and focus on turning the quick win into sustainable success. Of course you accomplish this by approaching TI from a process-centric perspective rather than as a one-time exercise. There are three main aspects of building out the program from the success of a quick win:

  1. Operationalizing TI: We covered this in depth in our last post on Using TI. We suggest starting by integrating the TI into your security monitoring environment. Once that is operational you can add additional use cases, such as integrating into your perimeter gateways and egress filters for proactive blocking, as well as leveraging the data within your incident response process.
  2. Evaluating TI Sources: This is a key aspect of optimizing your program. You cannot just assume the data source(s) you selected now will provide the same impact over time. Things change, including adversaries and TI providers. You are under constant scrutiny for how your security program is performing, so your TI vendors (actually all your vendors) will be under similar scrutiny. You should be able to close the loop – from TI, to alerts, to blocked or identified attacks – by instrumenting your security environment to capture this data. Some commercial TI platforms offer this information directly, but alternately you could build it into your SIEM or other controls.
  3. Selling the Value: Senior executives, including your CIO, have a lot of things to deal with every day. You cannot count on them remembering much beyond the latest fire to appear in the inbox today. So you need to systematically produce reports that show the value of TI. This should be straightforward, using your instrumentation for evaluating TI sources. This is another topic to cover in your periodic meetings with senior management – especially when the renewal is coming up and you need to keep the funding.
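The loop-closing in the second point, which also feeds the reports in the third, can be sketched as a simple tally of alerts back to the feed that supplied the matching indicator. The alert and mapping structures here are assumptions for illustration; a real deployment would pull this from the SIEM or TI platform:

```python
# Sketch: attribute alerts to the TI feed that supplied the matching
# indicator, giving a per-source value metric for reports and renewals.
from collections import Counter

def ti_source_value(alerts, indicator_sources):
    """alerts: list of dicts with an 'indicator' key.
    indicator_sources: dict mapping indicator -> feed name.
    Returns a Counter of alerts attributable to each feed."""
    tally = Counter()
    for alert in alerts:
        feed = indicator_sources.get(alert.get("indicator"))
        if feed:
            tally[feed] += 1
    return tally
```

A feed that consistently tallies zero is a candidate for replacement at renewal time.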

Executing on a successful security program requires significant planning and consistent execution. You cannot afford to focus only on the latest attack or incident (although you also need to do some of that), but must also think and act strategically; here a programmatic approach offers huge dividends. If you really want to magnify your impact, you’ll need to move beyond tactical day-to-day security battles, and implement a program for both TI and security activities in general.


The success of threat intelligence hinges upon organizations sharing information about adversaries and tactics, so everyone can benefit from each other’s experience surviving attacks. For years this information sharing seemed like an unnatural act to enterprises. A number of threat intelligence vendors emerged to fill the gap, gathering data from a variety of open and proprietary sources. But we see a gradual growth in the willingness of organizations to share information with others of similar size or within the same industry. Of course threat information can be sensitive, so sharing with care and diligence is a critical aspect of a threat intelligence program.

The first decision point for sharing is to define the constituency to share information with. This can be a variety of organizations, including:

  1. ISAC: Many of the larger industries are standing up their own Information Sharing and Analysis Centers (ISAC), either as part of an industry association or funded by the larger companies in the industry. These ISACs are objective and exist to provide a safe place to collect and share industry threat information, and also offer value-added data analysis. If there is an ISAC for your industry, we recommend you participate.
  2. Commercial vendors: We increasingly see vendors of threat detection products and services asking customers to share information about what they see, which makes their service more accurate and appealing. This is usually opt-in (though you should ask if it is not specifically mentioned) and we see very little risk in sharing data with vendors. Not that we enjoy the idea of a vendor monetizing your data without compensation, but sharing helps make their product or service better, and you benefit from others doing the same.
  3. Trading partners: If your industry doesn’t have a formal ISAC, or you cannot afford to participate, you will likely need some kind of semi-formal means of sharing information. This can be challenging due to both the technology platform requirements (threat information must be shared securely) and the legal agreements required to establish a sharing partnership (lawyers are fun!). That doesn’t mean you won’t do it, but understand that it’s not easy and requires a set of 1:1 agreements with each of your trading partners.
  4. Informal contacts: (water cooler TI) Many security practitioners share information informally with friends and colleagues. If you are plugged into your local community, you probably send a note to a buddy when you find something interesting and vice versa. It’s a bit like hanging out at the water cooler and sharing indicators with your pals. As long as there isn’t anything proprietary or possibly damaging to your organization, sharing with these contacts can provide excellent value on both sides. But this requires a lot more manual processing because you don’t get a machine-readable feed. Unless your pals talk STIX and TAXII, and yes, that was a TI joke.

Sharing Securely

Once you figure out who you will share threat intelligence with, you need to figure out how you’ll do it securely. Each of the various types of TI can be useful when shared, so there are plenty of data types in play. You will likely want some kind of platform you can use to aggregate threat intel and provide secure access. Perhaps it will be handled through a secure web service that ensures only authorized partners have access. Or you might be able to host a subset of your threat intelligence where a trading partner can access it directly from your TI platform.

Either way there are a couple key aspects to this kind of sharing, and below are a few questions to answer before embarking on a sharing initiative. Yes, they do read like Security 101.

  1. Authentication: Who will access the system? How will you manage entitlements? Is this something you need to use your existing identity and access management systems to provide? Will you require multi-factor authentication? What about machine to machine sharing, via APIs and/or standard protocols? What is the process to deprovision a trading partner and remove access?
  2. Authorization: What types of data/TI sources can each partner access? How will you manage entitlements, including new partners and changes in authorization?
  3. Data protection: How is your data anonymized and/or protected? Is it encrypted so unauthenticated or unauthorized users cannot access it?
  4. Logging activity: How will you keep track of which partner looked at what information? You want to be able to track which partners contributed content to make sure you have some balance – especially in an informal situation.
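The four questions above can be boiled down to a single gatekeeper in front of any sharing service: authenticate the partner (handled upstream here), check entitlements, and log every attempt. A minimal sketch, in which the partner names, feed names, and audit-log structure are all assumptions for illustration:

```python
# Sketch: entitlement check plus audit logging for a TI sharing service.
# Authentication is assumed to have happened upstream (e.g. at the API
# gateway); this gatekeeper handles authorization and activity logging.
import time

AUDIT_LOG = []  # in practice: durable, tamper-evident storage

def fetch_indicators(partner, feed, entitlements, feeds):
    """Serve a feed only to entitled partners, logging every attempt."""
    allowed = feed in entitlements.get(partner, set())
    AUDIT_LOG.append({"ts": time.time(), "partner": partner,
                      "feed": feed, "allowed": allowed})
    if not allowed:
        raise PermissionError(f"{partner!r} is not entitled to {feed!r}")
    return feeds[feed]
```

Deprovisioning a partner is then just deleting their entitlements entry, and the audit log answers both the "who looked at what" and the "who contributed" balance questions.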

As you can see, sharing information securely between trading partners can be complicated, so make sure you ask the right questions before starting to share information. As with the rest of developing a TI program, it is critical to develop feedback loops and a mechanism for evaluating the value of the sharing partnerships. You should have objective criteria for deciding whether sharing threat intelligence makes sense for your organization over time, regardless of whether you are paying for it.

And with that we wrap up our Building a Threat Intelligence Program blog series. If you have comments or believe we missed something, please let us know in the comments below or via social media (@securosis on Twitter).

—Mike Rothman