Event-Driven AWS Security: A Practical Example

Would you like the ability to revert unapproved security group (firewall) changes in Amazon Web Services in 10 seconds, without external tools? That’s about 10-20 minutes faster than is typically possible with a SIEM or other external tools. If that got your attention, read on…

If you follow me on Twitter, you might have noticed I went a bit nuts when Amazon Web Services announced their new CloudWatch events a couple weeks ago. I saw them as an incredibly powerful tool for event-driven security. I will post about the underlying concepts tomorrow, but right now I think it’s better to just show you how it works. This entire thing took about 4 hours to put together, and it was my first time writing a Lambda function, and my first time using Python in 10 years.

This example configures an AWS account to automatically revert any Security Group (firewall) changes without human interaction, using nothing but native AWS capabilities. No security tools, no servers, nada. Just wiring together things already built into AWS. In my limited testing it’s effective in 10 seconds or less, and it’s only 100 lines of code – including comments. Yes, this post is much longer than the code that makes it all work.

I will walk you through setting it up manually, but in production you would want to automate this configuration so you can manage it across multiple AWS accounts. That’s what we use Trinity for, and I’ll talk more about automating automation at the end of the post. Also, this is Amazon-specific because no other providers yet expose the needed capabilities. For background it might help to read the AWS CloudWatch events launch post. The short version is that you can instrument a large portion of AWS, and trigger actions based on a wide set of very granular events. Yes, this is an example of the kind of research we are focusing on as part of our cloud pivot. This might look long, but if you follow my instructions you can set it all up in 10-15 minutes. Tops.

Prep Work: Turn on CloudTrail

If you use AWS you should have CloudTrail set up already; if not you need to activate it and feed the logs to CloudWatch using these instructions. This only takes a minute or two if you accept all the defaults.

Step 1: Configure IAM

To make life easier I put all my code up on the Securosis public GitHub repository. You don’t need to pull that code – you will copy and paste everything into your AWS console. Your first step is to configure an IAM policy for your workflow, then create a role that Lambda can assume when running the code.

Lambda is an AWS service that allows you to store and run code based on triggers. Lambda code runs in a container, but doesn’t require you to manage containers or servers for it. You load the code, and then it executes when triggered. You can build entirely serverless architectures with Lambda, which is useful if you want to eliminate most of your attack surface, but that’s a discussion for another day. IAM in Amazon Web Services is how you manage who can do what in your account, including the capabilities of Amazon services themselves. It is ridiculously granular and powerful, which makes it the most critical security tool for protecting AWS accounts.

Log into the AWS console. Go to the Identity and Access Management (IAM) dashboard. Click on Policies, then Create Policy. Choose Create Your Own Policy. Name it lambda_revert_security_group. Enter a description, then copy and paste my policy from GitHub. My policy allows the Lambda function to access CloudWatch logs, write to the log, view security group information, and revoke ingress or egress statements (but not create new ones). Damn, I love granular policies!

Once the policy is set you need to Create New Role. This is the role the Lambda function will assume when it runs. Name it lambda_revert_security_group, assign it an AWS Lambda role type, then attach the lambda_revert_security_group policy you just created. That’s it for the IAM changes. Next you need to set up the Lambda function and the CloudWatch event.

Step 2: Create the Lambda function

First make sure you know which AWS region you are working in. I prefer us-west-2 (Oregon) for lab work because it is up to date and tends to support new capabilities early. us-east-1 is the granddaddy of regions, but my lab account has so much cruft after 6+ years that things don’t always work right for me there. Go to Lambda (under Compute on the main services page) and Create a Lambda function. Don’t pick a blueprint – hit the Skip button to advance to the next page. Name your function revertSecurityGroup. Enter a description, and pick Python for the runtime. Then paste my code into the main window. After that pick the lambda_revert_security_group IAM role the function will use. Then click Next, then Create function.

A few points on Lambda. You aren’t billed until the function triggers; then you are billed per request and runtime. Lambda is very good for quick tasks, but it does have a timeout (I think an hour these days), and the longer you run a function the less sense it makes compared to a dedicated server. I actually looked at migrating Trinity to Lambda because we could offload our workflows, but at that time it had a 5-minute timeout, and running hour-long workflows at scale would likely have killed us financially.

Now some notes on my code. The main function handler includes a bunch of conditional statements you can use to only trigger reverting security group changes based on things like who requested the change, which security group was changed, whether the security group is in a specified VPC, and whether the security group has a particular tag. None of those lines will work for you, because they refer to specific identifiers in my account – you need to change them to work in your account. By default, the function will revert any security group change in your account. You need to cut and paste the line “revert_security_group(event)” into a conditional block to run only on matching conditions. The function only works for inbound rule changes. It
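To give a sense of the handler’s shape, here is a minimal, hypothetical Python sketch of the same idea – not the actual code from the GitHub repository. It assumes the CloudWatch event carries the CloudTrail record for an AuthorizeSecurityGroupIngress call (the detail, requestParameters, and ipPermissions fields follow the CloudTrail record format), and the commented-out user check is a placeholder you would adapt to your own account.

    # Hypothetical sketch of a revert handler -- not the code referenced above.
    import boto3


    def lambda_handler(event, context):
        detail = event.get('detail', {})

        # Only act on ingress authorizations recorded by CloudTrail.
        if detail.get('eventName') != 'AuthorizeSecurityGroupIngress':
            return 'Ignored: not a security group ingress change'

        # Example conditional: skip changes made by an approved user (placeholder name).
        # if detail.get('userIdentity', {}).get('userName') == 'approved-automation-user':
        #     return 'Ignored: approved user'

        params = detail.get('requestParameters', {})
        group_id = params.get('groupId')

        # Rebuild the permissions that were just added, in the structure
        # revoke_security_group_ingress() expects.
        revoke = []
        for perm in params.get('ipPermissions', {}).get('items', []):
            entry = {
                'IpProtocol': perm.get('ipProtocol'),
                'IpRanges': [{'CidrIp': r.get('cidrIp')}
                             for r in perm.get('ipRanges', {}).get('items', [])],
            }
            if 'fromPort' in perm:
                entry['FromPort'] = perm['fromPort']
            if 'toPort' in perm:
                entry['ToPort'] = perm['toPort']
            revoke.append(entry)

        if group_id and revoke:
            ec2 = boto3.client('ec2')
            ec2.revoke_security_group_ingress(GroupId=group_id, IpPermissions=revoke)
            return 'Reverted ingress change on {}'.format(group_id)

        return 'Nothing to revert'

The real code adds the conditional checks described above (user, group, VPC, tag) around the revert call; this sketch only shows the event-in, revoke-out flow.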

Securing Hadoop: Architectural Security Issues

Now that we have sketched out the elements of a Hadoop cluster, and what one looks like, let’s talk about threats to these databases. We want to consider both the database infrastructure itself and the data under management. Given the complexity of a Hadoop cluster, the task is closer to securing an entire data center than a typical relational database. All the features that provide flexibility, scalability, performance, and openness create specific security challenges. The following are some specific threats to clustered databases.

Data access & ownership: Role-based access is central to most database security schemes, and NoSQL is no different. Relational and quasi-relational platforms include roles, groups, schemas, label security, and various other facilities for limiting user access to subsets of available data. Most big data environments now offer integration with identity stores, along with role-based facilities to divide up data access between groups of users. That said, authentication and authorization require cooperation between the application designer and the IT team managing the cluster. Leveraging existing Active Directory or LDAP services helps tremendously with defining user identities, and pre-defined roles may be available for limiting access to sensitive data.

Data at rest protection: The standard for protecting data at rest is encryption, which protects against attempts to access data outside established application interfaces. With Hadoop systems we worry about people stealing archives or directly reading files from disk. Encrypted files are protected against access by users without encryption keys. Replication effectively replaces backups for big data, but beware a rogue administrator or cloud service manager creating their own backups. Encryption limits how data can be copied from the cluster. Unlike in 2012, when the lack of suitable encryption was a serious issue, Apache now offers HDFS encryption as an option. This is a major advance, but remember that you can only encrypt HDFS, and you’ll need to fill the gaps with key management and key storage. Several commercial Hadoop vendors offer transparent encryption, and third parties have advanced the state of the art, with transparent encryption options for both HDFS and non-HDFS on-disk formats, especially coupled with parallel progress in key management.

Inter-node communication: Hadoop and the vast majority of NoSQL platforms (Cassandra, MongoDB, Couchbase, etc.) don’t communicate securely by default – they use unencrypted RPC over TCP/IP. TLS and SSL are bundled in big data distributions, but not typically used between applications and databases – and almost never for inter-node communication. This leaves data in transit, and application queries, accessible for inspection and tampering.

Client interaction: Clients interact with resource managers and nodes. While gateway services can be created to load data, clients communicate directly with both resource managers and individual data nodes. Compromised clients can send malicious data or links to either service. This facilitates efficient communication but makes it difficult to protect nodes from clients, clients from nodes, and even name servers from nodes. Worse, the distribution of self-organizing nodes is a poor fit for security tools such as gateways, firewalls, and monitors. Many security tools are designed to require a choke-point or span port, which may not be available in a peer-to-peer mesh cluster.
Distributed nodes: One of the reasons big data makes sense is an old truism: “moving computation is cheaper than moving data”. Data is processed wherever resources are available, enabling massively parallel computation. Unfortunately this produces complicated environments with lots of attack surface. With so many moving parts, it is difficult to verify consistency or security across a highly distributed cluster of (possibly heterogeneous) platforms. Patching, configuration management, node identity, and data at rest protection – and consistent deployment of each – are all issues.

Threat-response models

One or more security countermeasures are available to mitigate each threat identified above. The following diagram shows which specific options you have at your disposal to help you choose a ‘preventative’ security measure. We don’t have room to go into much detail on the tradeoffs of each response – each area really deserves its own paper. But we do want to mention a couple areas where we have seen the most change since our original research four years ago.

If your goal is to protect session privacy – either between clients and data nodes, or for inter-node communication – Transport Layer Security (TLS) is your first choice. This was unheard of in 2012, but since then about 25% of the companies we spoke with have implemented SSL or TLS for inter-node communication – not just between applications and name servers. Transport encryption protects all communications from access or modification by attackers. Some firms instead use network segmentation and firewalls to ensure that attackers cannot access network traffic. This approach is less robust but much easier to implement. Some clusters were deployed to third-party cloud services, where virtualized network services make sniffing nearly impossible; these companies typically chose not to encrypt internal cluster communications.

Enforcing data usage is one of the areas where we have seen the most progress, thanks to database links into existing Active Directory and LDAP identity stores. This seems obvious now but was a rarity in 2012, when data architects were focused on scalability and getting basic analytics up and running. Fortunately support for linking identity stores to Hadoop clusters has advanced considerably, making it much easier to leverage existing roles and management infrastructure.

But we also have other tools at our disposal. We don’t see it often, but a handful of organizations encrypt sensitive data elements at the application layer, so information is stored as encrypted elements. This way the application manages decryption and key management functions, and can offer additional controls over who can see which information. This is very secure, but must be designed in during application design and coded into the application from the beginning. Retrofitting application-layer encryption into an existing application and database stack is highly challenging at best, which is why we also see wide usage of masking and redaction technologies – from both enterprise Hadoop vendors and third-party security vendors. These technologies offer fine control over which data is displayed to which users, and can be easily built into existing clusters to
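To make the application-layer option concrete, here is a minimal Python sketch of the idea, using the third-party cryptography library. The field names and the locally generated key are illustrative assumptions – in practice the key would live in a key management service, not in the application.

    # Illustrative only: encrypt a sensitive field before it is written to the
    # cluster, so the data node only ever stores ciphertext.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # placeholder; fetch from a key manager in production
    cipher = Fernet(key)

    record = {'customer_id': '12345', 'ssn': '078-05-1120', 'purchase_total': '199.99'}

    # Encrypt only the sensitive element; analytics can still run against the rest.
    record['ssn'] = cipher.encrypt(record['ssn'].encode()).decode()

    # ... write `record` to HDFS, HBase, Hive, etc. ...

    # Only applications holding the key can recover the original value.
    original_ssn = cipher.decrypt(record['ssn'].encode()).decode()

The tradeoff noted above applies: this works cleanly when designed into a new application, but retrofitting it means touching every reader and writer of the protected field.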

Securing Hadoop: Architecture and Composition

Our goal for this post is to succinctly outline what Hadoop (and most NoSQL) clusters look like, how they are assembled, and how they are used. This provides a better understanding of the security challenges, and of what sort of protections need to be leveraged to secure them. Developers and data scientists continue to stretch system performance and scalability, using customized combinations of open source and commercial products, so there is really no such thing as a ‘standard’ Hadoop deployment. With these considerations in mind, it is time to map out threats to the cluster.

NoSQL databases enable companies to collect, manage, and analyze incredibly large data sets. Thousands of firms are working on big data projects, from small startups to large enterprises. Since our original paper in 2012 the rate of adoption has only increased; platforms such as Hadoop, Cassandra, Mongo, and Riak are now commonplace, with some firms supporting multiple installations. In just a couple years they went from “rogue IT” to “core systems”. Most firms recognized the value of “big data”, acknowledged these platforms are essential, and tasked IT teams with bringing them “under IT governance”. Most firms today are taking their first steps to retrofit security and governance controls onto Hadoop. Let’s dig into how all the pieces fit together:

Architecture and Data Flow

Hadoop has been wildly successful because it scales well, can be configured to handle a wide variety of use cases, and is very inexpensive compared to relational and data warehouse alternatives. Which is all another way of saying it’s cheap, fast, and flexible. To show why and how it scales, let’s take a look at a Hadoop cluster architecture:

There are several things to note here. The architecture promotes scaling and performance. It provides parallel processing, and additional nodes provide ‘horizontal’ scalability. This architecture is also inherently multi-tenant, supporting multiple applications across one or more file groups. But there are a lot of moving parts; each node communicates with its peers to ensure that data is properly replicated, nodes are on-line and functional, storage is optimized, and application requests are being processed. We’ll dig into specific threats to Hadoop clusters later in this series.

Hadoop Stack

To appreciate Hadoop’s flexibility, you need to understand that a cluster can be fully customized. It is useful to think about the Hadoop framework as a ‘stack’, much like a LAMP stack, but much less standardized. While Pig and Hive are commonly used, the ability to mix and match components makes deployments much more diverse. For example, Sqoop and Yarn are alternative data access services. You can select different big data environments to support columnar, graph, document, XML, or multidimensional data. And over the last couple years MapReduce has largely given way to SQL query engines – with Spark, Drill, Impala, and Hive all accommodating increasing use of SQL-style queries. This modularity offers great flexibility to assemble and tailor clusters to behave and perform exactly as desired. But it also makes security more difficult – each option brings its own security options and issues. The beauty part is that you can set up a cluster to satisfy your usability, scalability, and performance goals. You can tailor it to specific types of data, or add modules to facilitate analysis of certain data sets. But that flexibility brings complexity.
Each module runs a specific version of code, has its own configuration, and may require independent authentication to work in the cluster. Many pieces must work in tandem to process data, so each requires its own security review.

Some of you reading this are already familiar with the architecture and component stack of a Hadoop cluster. You may be asking, “Why are we going through these basics?” To understand threats and appropriate responses, you need to first understand how all the pieces of the cluster work together. Each component interface is a trust relationship, and each relationship is a target. Each component offers attackers a specific set of potential exploits, and defenders have a corresponding set of options for attack detection and prevention. Understanding architecture and cluster composition is the first step to putting together your security strategy.

Our next post will present several strategies used to secure big data. Each model has its own benefits and requires supplementary security tools. After selecting a strategy, you can put together a collection of security controls to meet your objectives.
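As a small aside on the stack discussion above – the shift from MapReduce jobs toward SQL-style engines is easy to picture with a few lines of PySpark. This is a generic illustration, not tied to any particular cluster; the HDFS path, table name, and query are made up.

    # Generic PySpark example of SQL-style access to cluster data.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("sql-on-hadoop-example").getOrCreate()

    # Register a dataset (here, a Parquet file on HDFS) as a temporary view...
    events = spark.read.parquet("hdfs:///data/events")   # hypothetical path
    events.createOrReplaceTempView("events")

    # ...then query it with plain SQL instead of writing a MapReduce job.
    top_users = spark.sql(
        "SELECT user, COUNT(*) AS actions FROM events "
        "GROUP BY user ORDER BY actions DESC LIMIT 10"
    )
    top_users.show()

    spark.stop()

Each layer involved – the query engine, the data access service, HDFS itself – is a separate component with its own configuration and authentication, which is exactly the security review point made above.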

Securing Hadoop: Security Recommendations for NoSQL platforms [New Series]

It’s been three and a half years since we published our research paper on Securing Big Data, and it has been one of the more popular papers we’ve ever written. And it’s no wonder, as NoSQL adoption was faster than we expected; we see hundreds of new projects popping up, leveraging the scale, analytics, and low cost of these platforms. It’s not hyperbole to claim NoSQL has revolutionized the database market over the last 5 years, and community support behind these platforms – and especially Hadoop – is staggering.

At the time we wrote the last paper, security in Hadoop – much less the other platforms – was something of a barren wasteland. The platforms did not include basic controls for data protection, most third-party tools could not scale along with NoSQL and thus were of little use to developers, and leaders of NoSQL firms directed resources to improving performance and scalability, not security. Heck, in 2012 the version of Hadoop I evaluated did not even require an administrative password!

But when it comes to NoSQL security, and Hadoop specifically, things have changed dramatically. As we advise clients on how to implement security controls, there are many new options to consider. And while there remain some gaps in monitoring and assessment capabilities, Hadoop has (mostly) reached security parity with the relational platforms of old. We can’t call it a barren wasteland any longer, so to accurately advise people on approaches and tools to leverage, we can no longer refer them back to that original paper. So we are kicking off a new research series to refresh it. Most of the content will be new, and this time we will do things a little differently than last time.

First, we are going to provide less background on what makes NoSQL different from relational databases, as most people in IT are now comfortable with the architectural and functional distinctions between the two. Second, most of our recommendations will still apply to NoSQL platforms in general, but this research will be more focused on Hadoop, as we get a majority of questions on Hadoop security despite the dozens of alternatives. Finally, as there are many more aspects to talk about, we’ll weave preventative and detective controls into a more operational (i.e., day-to-day management) model for both data and database infrastructure.

Here is how we are laying out the series:

Hadoop Architecture and Assembly — The goal of this post is to succinctly outline what Hadoop and similar styles of NoSQL clusters look like, how they are assembled, and how they are used. In this light we get a better idea of the security challenges and what sort of protections need to be leveraged. As developers and data scientists stretch systems for performance and scalability, using custom assemblages of open source and commercial products, there really is no such thing as a standard Hadoop deployment. So with these considerations in mind we will map out threats to the cluster.

Use Cases & Security Architectures — This post will discuss the strategic considerations for deploying security for big data. Depending upon which model you choose, you change where certain types of threats are addressed, and consequently what tools you will rely upon to provide security. Or stated another way, the security model you choose will dictate what security technologies you need to prevent and detect threats. There are several approaches that organizations take to secure Hadoop and other NoSQL clusters.
These range from securing the network around the cluster, to identity management, to maintaining security controls on each node within the cluster, to taking a data-centric approach to security. We’ll go over the major trends we see today, and discuss the advantages and pitfalls of each approach.

Building Security Into the Cluster — Here is where we discuss how all of the pieces fit together. There are many security controls available, and each addresses a specific threat vector an attacker may employ. We’ll focus on the security controls you want to build into your cluster from the start: identity, authorization, transport layer security, application security, and data encryption. This will focus on the base security controls that allow you to define how the cluster should be used from a security standpoint.

Operational Security — Here we will focus on day-to-day security controls: monitoring ongoing security, watching user behavior, and running security operations. This covers aspects like configuration management, patching, logging, monitoring, and node validation. We’ll even discuss integrating a DevOps approach to cluster administration to improve speed and consistency.

Commercial Hadoop and NoSQL variants — Hadoop is the dominant flavor of ‘big data’ in use today. In this section we will discuss what the commercial Hadoop platform vendors are doing to promote security for their customers, with a blend of open source, home-grown, and third-party security product support. There is no reason to roll your own security out of necessity, as commercial variants often add on their own products or provide bundles for you. Each offers unique capabilities and each has a vision of what their customers should focus on, so we will cover some of the current offerings. We will also offer some advice on the application of security to non-Hadoop platforms. While Hadoop is the most commonly used platform, there are specialized flavors of NoSQL that are eminently appropriate for certain business challenges and are in wide use. Some even use HDFS or other Hadoop components, which allows the same security controls to be used across different clusters. We will close out this section by describing where the security controls we have already covered can be applied in non-Hadoop environments.

As with our original paper, this is not intended to be an exhaustive look at all potential security options, but to help the IT and development teams who run these clusters get basic security controls in place. Up next, Hadoop Architecture and Assembly.

The EIGHTH Annual Disaster Recovery Breakfast: Clouds Ahead

Once again Securosis and friends are hosting our RSA Conference Disaster Recovery Breakfast. It’s really hard to believe this is the eighth year for this event. Regardless of San Francisco’s February weather, we expect to be seeing clouds all week. But we’re happy to help you cut through the fog to grab some grub, drinks, and bacon.

Kidding aside, we are grateful that so many of our friends, clients, and colleagues enjoy a couple hours away from the show that is now the RSAC. By Thursday we’re all disasters, and it’s very nice to have a place to kick back, have some conversations at a normal decibel level, and grab a nice breakfast. Did we mention there will be bacon? With the continued support of Kulesa Faul, we’re honored to bring in two new supporters this year. If you don’t know our friends at CHEN PR and LaunchTech, you’ll have a great opportunity to say hello and thank them for helping support your habits.

As always the breakfast will be Thursday morning from 8-11 at Jillian’s in the Metreon. It’s an open door – come and leave as you want. We will have food, beverages, and assorted non-prescription recovery items to ease your day. Yes, the bar will be open – Mike has acquired a taste for Bailey’s in his coffee. Please remember what the DR Breakfast is all about. No marketing, no spin, no t-shirts, and no flashing sunglasses – it’s just a quiet place to relax and have muddled conversations with folks you know, or maybe even go out on a limb and meet someone new. After three nights of RSA Conference shenanigans, we are confident you will enjoy the DRB as much as we do. See you there.

To help us estimate numbers, please RSVP to rsvp (at) securosis (dot) com.

Security is Changing. So is Securosis.

Last week Rich sent around Cockroaches Versus Unicorns: The Golden Age Of Cybersecurity Startups, by Mahendra Ramsinghani over at TechCrunch, for us to read. It isn’t an article every security professional needs to read, but it is certainly mandatory reading for anyone who makes buying decisions, tracks the security market, or is on the investment or startup side. It also nearly perfectly describes what we are going through as a company.

His premise is that ‘unicorns’ are rare in the security industry. There are very few billion-dollar market cap companies, relative to the overall size of the market. But security companies are better suited to survive downturns and other challenging times. We are basically ‘cockroaches’, which persist through every tech Armageddon, often due to our ability to fall back on services.

Many security startups are not unicorns; rather, they are cockroaches – they rarely die, and in tough times they can switch into a frugal/consulting mode. Like cockroaches, they can survive long nuclear winters. Security companies can be capital-efficient, and typically consume ~$40 million to reach break-even. This gives them a survival edge – but VCs are looking for a “growth edge.”

The security market also appears much smaller than it should be considering the market dynamics, although it is very possible that is changing thanks to the hostile world out there. The article also postulates that the entire environment is shifting, with carriers and managed services providers jumping into acquisitions while large established players struggle. Yet most of the startups VCs see are just more of the same, fail to differentiate, and rely far too much on really poor FUD-based sales dynamics.

With increasing hacks, the CISO’s life has just become a lot messier. One CISO told me, “Between my HVAC vendor and my board of directors, I am stretched. And every day I get a hundred LinkedIn requests from vendors. Their FUD approach to security sales is exhausting.” And “I have seen at least 40 FireEye killers in the past 12 months,” one Palo Alto-based VC told me. Clearly he was exhausted. Some sub-sectors are overheated and investors are treading cautiously.

We certainly see the same thing. How many threat intel and security analytics startups does the industry need? We get a few briefing requests a week from yet another new company doing exactly the same things. And all our CISO friends hate vendor sales techniques. These senior security folks get upwards of 500 emails and 100 phone calls a week from sales people trying to get meetings. All this security crap looks the same. This combination inevitably leads to a contraction of seed capital, and that is where our story starts.

DisruptOPS

Most of you have noticed that over the past few years our research has skewed strongly toward cloud security, automation, and DevOps. This started with our initial partnership with the Cloud Security Alliance to build out the CCSK training class around 6 years ago. Rich had to create all the hands-on labs, which augered him down the rabbit hole of Amazon Web Services, OpenStack, Azure, and all the supporting tools. As analysts we like to think it’s our job to have a good sense of what’s coming down the road. We made a bet on the cloud and it paid off, transitioning from a hobby that generated beer money into a major source of ongoing revenue. It also opened us up to a wider client base, especially among end-user organizations.
Three years ago Rich realized that in all his cloud security engagements, and all the classes we taught, we heard the same problems over and over. The biggest unsolved problem seemed to be cloud security automation. The next year was spent writing some proof-of-concept code merely to support conference presentations, because there were no vendor examples, but at every talk attendees kept asking for “more… faster”. This demand became too great to ignore, and nearly 2 years ago we decided to start building our own platform. And we did… we built our own cloud security platform. Don’t worry, we don’t have anything to sell you – this is where Ramsinghani’s article comes in.

Our initial plan was to self-fund development (Securosis is an awesome business) until we had a solid demo/prototype. Then we assumed it would be easy to get seed cash from some of our successful friends and build a new company in parallel with Securosis to focus on the product. We didn’t just want to start up a software company and jettison Securosis, because our research is an essential driver to maintain differentiation, and we wanted to build the company without going the traditional VC route. We also have some practical limitations on how we can do things. We are older, have families to support, and have deep roots where we live that preclude relocation. The analogy we use is that we can’t go back to eating ramen for dinner every night in a coding flophouse.

The demo killed when we showed it to people, we are really smart, and people like us. Our future was bright. Then we got hit with the reality clue bat. Everything was looking awesome last year at RSA when we started showing people and talking to investors. By summer all our options fell apart. We didn’t fit the usual model. We weren’t going to move to the Bay Area. We couldn’t take pay cuts to ‘normal’ founder levels and still support our families. And to be honest, we still didn’t want to go the normal VC route. We just weren’t going to play that game, given the road rash both Mike and Adrian have from earlier in their careers. Just like the article said, we couldn’t find seed funding. At least not the way we wanted to build the company. We even had a near-miss on an acquisition, but we couldn’t line everything up to hit everyone’s goals and expectations. Yet while this all went on, the Securosis business you see every day continued to boom. We

Incite 1/20/2016 — Ch-ch-ch-ch-changes

I have always gotten great meaning from music. I can point back to times in my life when certain songs totally resonate. Like when I was a geeky teen and Rush’s Signals spoke to me. I saw myself as the awkward kid in Subdivisions who had a hard time fitting in. Then I went through my Pink Floyd stage in college, where “The Wall” dredged up many emotions from a challenging childhood and the resulting distance I kept from people. Then Guns ‘n Roses spoke to me when I was partying and raging, and to this day I remain shocked I escaped largely unscathed (though my liver may not agree).

But I never really understood David Bowie. I certainly appreciated his music. And his theatrical nature was entertaining, but his music never spoke to me. In fact I’m listening to his final album (Blackstar) right now and I don’t get it. When Bowie passed away last week, I did what most people my age did. I busted out the Ziggy Stardust album (OK, I searched for it on Apple Music and played it) and once again gained a great appreciation for Bowie the musician.

Bowie Changes

Then I queued up one of the dozens of Bowie Greatest Hits albums. I really enjoyed reconnecting with Space Oddity, Rebel Rebel, and even some of the songs from “Let’s Dance”, if only for nostalgia’s sake. Then Changes came on. I started paying attention to the lyrics.

Ch-ch-ch-ch-changes (Turn and face the strange) Ch-ch-changes
Don’t want to be a richer man
Ch-ch-ch-ch-changes (Turn and face the strange) Ch-ch-changes
Just gonna have to be a different man
Time may change me
But I can’t trace time
– David Bowie, “Changes”

I felt the wave of meaning wash over me. Changes resonates for me at this moment in time. I mean really resonates. I’ve alluded that I have been going through many changes in my life the past few years. A few years ago I reached a crossroads. I remembered there are people who stay on shore, and others who set sail without any idea what lies ahead. Being an explorer, I jumped aboard the SS Uncertain, and embarked upon the next phase of my life.

Yet I leave shore today a different man than 20 years ago. As the song says, time has changed me. I have more experience, but I’m less jaded. I’m far more aware of my emotions, and much less judgmental about the choices others make. I have things I want to achieve, but no attachment to achieving them. I choose to see the beauty in the world, and search for opportunities to connect with people of varied backgrounds and interests, rather than hiding behind self-imposed walls. I am happy, but not satisfied, because there is always another place to explore, more experiences to have, and additional opportunities for growth and connection.

Bowie is right. I can’t trace time, and I can’t change what has already happened. I’ve made mistakes, but I have few regrets. I have learned from it all, and I take those lessons with me as I move forward. I do find it interesting that as I complete my personal transformation, it’s time to evolve Securosis. You’ll learn more about that next week, but it underscores the same concept. Ch-ch-ch-ch-changes. Nothing stays the same. Not me. Not you. Nothing. You can turn and face the strange, or you can rue days gone by from your chair on the shore. You know how I choose.

–Mike

Photo credit: “Chchchange” from Cole Henley

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour. Your emails, alerts, and Twitter timeline will be there when you get back.
Securosis Firestarter

Have you checked out our video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.

  • Dec 8 – 2015 Wrap Up and 2016 Non-Predictions
  • Nov 16 – The Blame Game
  • Nov 3 – Get Your Marshmallows
  • Oct 19 – re:Invent Yourself (or else)
  • Aug 12 – Karma
  • July 13 – Living with the OPM Hack
  • May 26 – We Don’t Know Sh–. You Don’t Know Sh–
  • May 4 – RSAC wrap-up. Same as it ever was.
  • March 31 – Using RSA
  • March 16 – Cyber Cash Cow
  • March 2 – Cyber vs. Terror (yeah, we went there)
  • February 16 – Cyber!!!
  • February 9 – It’s Not My Fault!
  • January 26 – 2015 Trends
  • January 15 – Toddler

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

  • SIEM Kung Fu: Fundamentals
  • Building a Threat Intelligence Program: Success and Sharing; Using TI; Gathering TI; Introduction
  • Network Security Gateway Evolution: Introduction

Recently Published Papers

  • Threat Detection Evolution
  • Building Security into DevOps
  • Pragmatic Security for Cloud and Hybrid Networks
  • EMV Migration and the Changing Payments Landscape
  • Applied Threat Intelligence
  • Endpoint Defense: Essential Practices
  • Cracking the Confusion: Encryption & Tokenization for Data Centers, Servers & Applications
  • Security and Privacy on the Encrypted Network
  • Monitoring the Hybrid Cloud
  • Best Practices for AWS Security
  • The Future of Security

Incite 4 U

Everyone is an insider: Since advanced threat detection is still very shiny, it’s not a surprise that attention has swung back to the insider threat. It seems that every 4-5 years people remember that insiders have privileged access and can steal things if they so desire. About the same time, some new technology appears that promises to identify those malicious employees and save your bacon. Then it turns out finding the insiders is hard and everyone focuses on the latest shiny attack

Summary: Impossible

Rich here.

When I hurt my knee running right before Thanksgiving, everyone glanced at my brace and felt absolutely compelled to tell me how much “getting old sucks”. Hell, even my doctor commiserated as he discussed his recent soccer injury. The only problem is I first hurt my knee around junior high, and in many ways it’s been better since I hit my 40s than any other time I can remember. As a kid my mom didn’t want me playing football because of my knees (I tried soccer for a year in 10th grade, hurt it worse, then swapped to football to finish up high school). I wore a soft brace for most of my martial arts career. I’ve been in physical therapy so many times over the past three decades that I could write a book on the changing treatment modalities of chondromalacia patellae. I had surgery once, but it didn’t help.

As a lifetime competitive athlete, running has always been part of my training, but distance running was always a problem. For a long time I thought a 10K race was my physical limit. Training for more than that really stressed the knee. Then I swapped martial arts for triathlon, and realized the knee did much better when it wasn’t smashing into things nearly every day. Around that time my girlfriend (now wife) signed us up for a half-marathon (13.1 miles). I nearly died, but I made it. Over the subsequent decade I’ve run more of them and shaved 45 minutes off my PR. The older I get, the better my times for anything over a couple miles, and the longer the distances I can run. But there’s one goal that seemed impossible – the full marathon. 26.2 miles of knee-pounding awesomesauce. Twice as far as the longest race I ever ran.

My first attempt, last year, didn’t go so well. Deep into my training program I developed plantar fasciitis, which is a fancy way of saying “my foot was f-ed up”. So I pushed my plans back to a later race, rehabilitated my foot… and got stomach flu the week before the last race of the year before Phoenix weather went “face of the sun” hot. A seriously disheartening setback after 6 months of training. I made up for it with beer. Easier on the foot.

A few months later an email popped up in my inbox letting me know registration for the Walt Disney World Marathon opened the next day. My wife and I looked at it, looked at each other, and signed up before the realistic parts of our brains could stop us. Besides, the race was only a month after we would be there with the kids, so we felt justified leaving them at home for the long weekend. I built up a better base and then started a 15-week custom program. Halfway through, on a relatively modest 8-mile run in new shoes, I injured my Achilles tendon and had to swap to the bike for a couple weeks. Near the peak of my program, on a short 2-mile run and stretch day, I angled my knee just the wrong way, and proceeded to enjoy the pleasure of reliving my childhood pain. Three weeks later the knee wasn’t better, but I could at least run again. But now I was training in full-on panic mode, trying to make up for missing some of the most important weeks of my program. My goal time went out the window, and I geared down into a survival mindset.

Yes, by the time I lined up at the race start I had missed 5 of 15 weeks of my training program. Even my wife missed a few weeks thanks to strep throat (which I also caught). To add insult to injury, it was nearly 70F with 100% humidity. In December. At 5:35am. You know what happened next? We ran a friggin’ marathon.

Yes, at times things hurt. I got one nasty blister I patched up at an aid station. My headphones crapped out. I stopped at every single water station thanks to the humidity, and probably should have worn a bathing suit instead of running shorts. But overall it wasn’t bad. Heck, I enjoyed most of the race. I didn’t really start hurting until mile 17, and my pace didn’t fully crack until mile 22. Disney puts on a hell of a race, with distracting entertainment along the entire course. Thanks to the humidity it was the slowest Disney marathon in the 23-year history of the event. Even then, my time wasn’t embarrassing, and I finished in the top 20% or so (at a time that isn’t even close to getting into Boston or New York). I didn’t feel terrible. My wife also finished up in the front third of the pack, and we spent the afternoon walking around Disney World (slowly). We felt really good the next day, other than my darn knee. The one that held up for all 26.2 miles. The one that will be better in a week or two.

I checked off a bucket list item and completed something I thought was impossible. Something I told myself my entire life I couldn’t do. There is nothing more satisfying than proving yourself wrong. Except, perhaps, doing it again.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • It isn’t security related, but Rich participated in Apple in 2015: The Six Colors report card.

Securosis Posts

  • Incite 1/13/2016: Permitted.
  • SIEM Kung Fu: Fundamentals [New Series].
  • Incite 1/6/2016 – Recharging.
  • Incite 12/15/2015: Looking Forward.
  • Building a TI Program: Success and Sharing.
  • Threat Detection Evolution [New Paper].
  • Building Security Into DevOps [New Paper].

Favorite Outside Posts

  • Rich: How Hackers Took Down a Power Grid. A well-balanced article that points to the Ukraine as another canary in a coal mine.
  • Mike: Dave Barry’s 2015 Year in Review: Dave Barry has a pretty good gig. Write one column a year, and it better be funny. Good thing it always is, and the 2015 edition

Incite 1/13/2016: Permitted

I’m not sure how it happened, but XX1 turned 15 in November and got her driver’s permit. Wait, what?!?! That little girl can now drive. Like, legally? WTF? Clearly it is now January, and I am still in shock that 15 years has passed by in the blink of an eye. Now it’s on me to teach her to drive. She’ll take a driver’s ed course in February, so that will help and give her some practical experience with someone who actually drives with teenagers for a living. Is that on the list of worst jobs? Second to elephant cage cleaner at the zoo, driving with inexperienced drivers seems like my version of hell on earth.

Then I remembered back to when I learned to drive. My Dad had a ‘72 Bug for me that he drove around. He picked me up and drove me to the local town pool parking lot. He taught me how to balance the clutch (yes, it was a stick shift) and start, stop, drive in a straight line, and turn. I recall him being extraordinarily patient as I smoked the clutch and stalled out 10 times. But after a while I got the hang of it. Then he said, “OK Mike. Drive home.” WHAT? I was kind of in shock. It was maybe 3 miles to my house, but it was 3 miles of real road. Road with other drivers on it. I almost crapped my pants, but we got home in one piece. Dad would let me drive most places after that, even on the highway and on bridges. He remained incredibly patient, even when I stalled 10 times on a slight incline with about 50 cars behind me sitting on their horns. Yup, crapped my pants that time too. I remember that like it was yesterday, but it was 31 years ago. Damn.

So before winter break I took XX1 out to the parking lot of the library. She got into the driver’s seat and I almost crapped my pants. You getting the recurring theme here? She had no idea what she was doing. I have an automatic transmission, so she didn’t have to worry about the clutch, but turning the car is a learned skill, and stopping without giving me whiplash was challenging for a little while. She did get the hang of it, but seeing her discomfort behind the wheel convinced me that my plan of having her drive home (like my Dad did to me) wouldn’t be a great idea. Neither for her self-esteem nor my blood pressure. She’ll get the hang of it, and I have to remember that she’s different than me and I’m a different teacher than my Dad. We’ll get her driving at her pace. After she takes the driver’s ed class I’ll have her start driving when she’s with me. Before we know it, she’ll have 25-30 hours behind the wheel.

But I’m not taking any chances. I plan on sending her to an advanced driving school. My cousin sent me a link to this great program in NC called B.R.A.K.E.S, which provides a 4-hour defensive driving workshop specifically for teens. I’m also going to take her to a Skip Barber racing class or something similar, so she can learn how to really handle the car. Sure it’s expensive, but she’s important cargo, commanding a two-ton vehicle, so I want to make sure she’s prepared.

But I have to understand this is a metaphor for the rest of her life. As parents we can prepare her to the best of our ability. Then we need to let her loose to have her own experiences and learn her lessons. She can count on our support through the inevitable ups and downs. My little girl is growing up.

–Mike

Photo credit: “International Driving Permit” from Tony Webster

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour.
Your emails, alerts, and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.

  • Dec 8 – 2015 Wrap Up and 2016 Non-Predictions
  • Nov 16 – The Blame Game
  • Nov 3 – Get Your Marshmallows
  • Oct 19 – re:Invent Yourself (or else)
  • Aug 12 – Karma
  • July 13 – Living with the OPM Hack
  • May 26 – We Don’t Know Sh–. You Don’t Know Sh–
  • May 4 – RSAC wrap-up. Same as it ever was.
  • March 31 – Using RSA
  • March 16 – Cyber Cash Cow
  • March 2 – Cyber vs. Terror (yeah, we went there)
  • February 16 – Cyber!!!
  • February 9 – It’s Not My Fault!
  • January 26 – 2015 Trends
  • January 15 – Toddler

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

  • SIEM Kung Fu: Fundamentals
  • Building a Threat Intelligence Program: Success and Sharing; Using TI; Gathering TI; Introduction
  • Network Security Gateway Evolution: Introduction

Recently Published Papers

  • Threat Detection Evolution
  • Building Security into DevOps
  • Pragmatic Security for Cloud and Hybrid Networks
  • EMV Migration and the Changing Payments Landscape
  • Applied Threat Intelligence
  • Endpoint Defense: Essential Practices
  • Cracking the Confusion: Encryption & Tokenization for Data Centers, Servers & Applications
  • Security and Privacy on the Encrypted Network
  • Monitoring the Hybrid Cloud
  • Best Practices for AWS Security
  • The Future of Security

Incite 4 U

Security as a business problem: The more things change, the more they stay the same. NetworkWorld’s Overcoming stubborn execs for security sake took me back to 2006, right before I wrote the Pragmatic CSO. Senior management doesn’t get it? Yup. Mid-managers want to circumvent the rules? Yup. On and on it goes, and we run on the hamster wheel for

Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

“Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.”

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.