Friday, February 19, 2016

Summary: Law Enforcement and the Cloud

By Rich

While the big story this week was the FBI vs. Apple, I’d like to highlight something a little more relevant to our focus on the cloud. You probably know about the DOJ vs. Microsoft. This is a critically important case where the US government wants to assert access to data held by the foreign branch of a US company, putting the company in conflict with local privacy laws. I highly recommend you take a look, and we will post updates here.

Beyond that, I’m sick and shivering with a fever, so enough small talk and time to get to the links. Posting is slow for us right now because we are all cramming for RSA, but you are probably used to that.

BTW – it’s hard to find good sources for cloud and DevOps news and tutorials. If you have links, please email them to info@securosis.com.

If you want to subscribe directly to the Friday Summary only list, just click here.

And don’t forget:

Top Posts for the Week

Tool of the Week

This is a new section highlighting a cloud, DevOps, or security tool we think you should take a look at. We still struggle to keep track of all the interesting tools that can help us, and if you have submissions please email them to info@securosis.com.

One issue that comes up a lot in client engagements is the best “unit of deployment” to push applications into production. That’s a term I might have made up, but I’m an analyst, so we do that. Conceptually there are three main ways to push application code into production:

  1. Update code on running infrastructure. This typically uses configuration management tools (Chef/Puppet/Ansible/Salt), code-specific deployment tools like Capistrano, or a cloud-provider-specific tool like AWS CodeDeploy. The key is that a running server is updated.
  2. Deploy custom images, and use them to replace running instances. This is the very definition of immutable infrastructure: you never log into or change a running server; you replace it. This relies heavily on auto scaling. It is a more secure option, but it can take time for new instances to deploy, depending on complexity and boot time.
  3. Containers. Create a new container image and push that. It’s similar to custom images, but containers tend to launch much more quickly.

As you can guess, I prefer the latter two options, because I like locking down my instances and disabling any changes. That can really take security to the next level. Which brings us to our tool this week: Packer by HashiCorp. Packer is one of the best tools for automating creation of those images. It integrates with nearly everything, works on multiple cloud and container platforms, and even includes its own lightweight engine to run deployment scripts.

Packer is an essential tool in the DevOps / cloud quiver, and can really enhance security because it enables you to adopt immutable infrastructure.
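
For the image-based model, here is a rough sketch (using boto3, with made-up names and IDs) of the deployment step that might follow a Packer build: find the newest baked AMI, create a fresh launch configuration for it, and point the Auto Scaling group at it so replacement instances come up on the new image.

```python
# Illustrative sketch only -- AMI name prefix, instance type, security group,
# and Auto Scaling group name are hypothetical.
import boto3

ec2 = boto3.client('ec2', region_name='us-west-2')
autoscaling = boto3.client('autoscaling', region_name='us-west-2')

# Find the most recent AMI Packer baked for this application
images = ec2.describe_images(
    Owners=['self'],
    Filters=[{'Name': 'name', 'Values': ['webapp-*']}]
)['Images']
latest = sorted(images, key=lambda i: i['CreationDate'])[-1]

# Create a fresh launch configuration pointing at the new image
lc_name = 'webapp-' + latest['ImageId']
autoscaling.create_launch_configuration(
    LaunchConfigurationName=lc_name,
    ImageId=latest['ImageId'],
    InstanceType='t2.micro',
    SecurityGroups=['sg-12345678']
)

# Point the Auto Scaling group at it; new instances launch from the new image
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName='webapp-asg',
    LaunchConfigurationName=lc_name
)
```

In practice you would also cycle the existing instances (or let a scheduled scale-out/scale-in do it) so the whole group ends up on the new image rather than patching anything in place.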

Securosis Blog Posts this Week

Other Securosis News and Quotes

We are posting all our RSA Conference Guide posts over at the RSA Conference blog – here are the latest:

Training and Events

—Rich

Wednesday, February 17, 2016

Firestarter: RSA Conference—the Good, Bad, and the Ugly

By Rich

Every year we focus a lot on the RSA Conference. Love it or hate it, it is the biggest event in our industry. As we do every year, we break down some of the improvements and disappointments we expect to see. Plus, we spend a few minutes talking about some of the big changes coming here at Securosis. We cover a possibly-insulting keynote, the improvements in the sessions, and how we personally use the event to improve our knowledge.

Watch or listen:


—Rich

Tuesday, February 16, 2016

Securing Hadoop: Technical Recommendations

By Adrian Lane

Before we wrap up this series on securing Hadoop databases, I am happy to announce that Vormetric has asked to license this content, and Hortonworks is evaluating a license as well. It’s community support that allows us to bring you this research free of charge. I’ve also received a couple of email and Twitter responses to the content; if you have more input to offer, now is the time to send it along, so it can be evaluated with the rest of the feedback as we assemble the final paper in the coming week. And with that, on to the recommendations.

The following are our security recommendations to address security issues with Hadoop and NoSQL database clusters. The last time we made recommendations, we joked that many security tools broke Hadoop scalability; your cluster was secure because it was likely no one would use it. Fast forward four years, and both commercial and open source technologies have advanced considerably – not only addressing the threats you’re worried about, but doing so with tools designed specifically for Hadoop. This means the possibility that a security tool will compromise cluster performance and scalability is low, and the integration hassles of old are mostly behind us.

In fact, it’s because of the rapid technical advancements in the open source community that we have done an about-face on where to look for security capabilities. We are no longer focused just on 3rd party security tools, but largely on the open source community, which has helped close the major gaps in Hadoop security. That said, many of these capabilities are new, and like most new things, lack a degree of maturity. You still need to go through a tool selection process based upon your needs, and then do the integration and configuration work.

Requirements

As security in and around Hadoop is still relatively young, it is not a foregone conclusion that all security tools will work with a clustered NoSQL database. We still witness instances where vendors parade the same old products they offer for other back-office systems and relational databases. To ensure you are not duped by security vendors, you still need to do your homework: evaluate products to ensure they are architecturally and environmentally consistent with the cluster architecture — not in conflict with the essential characteristics of Hadoop.

Any security control used for NoSQL must meet the following requirements: 1. It must not compromise the basic functionality of the cluster. 2. It should scale in the same manner as the cluster. 3. It should address a security threat to NoSQL databases or data stored within the cluster.

Our Recommendations

In the end, our big data security recommendations boil down to a handful of standard tools which can be effective in setting a secure baseline for Hadoop environments:

  1. Use Kerberos for node authentication: We believed – at the outset of this project – that we would no longer recommend Kerberos. Implementation and deployment challenges with Kerberos suggested customers would go in a different direction. We were 100% wrong. Our research showed that adoption has increased considerably over the last 24 months, specifically because the enterprise distributions of Hadoop have streamlined the integration of Kerberos, making it reasonably easy to deploy. Now, more than ever, Kerberos is being used as a cornerstone of cluster security. It remains effective for validating nodes and – for some – authenticating users. But other security controls piggy-back off Kerberos as well. Kerberos is one of the most effective security controls at our disposal; it’s built into the Hadoop infrastructure, and enterprise bundles make it accessible, so we recommend you use it.
  2. Use file layer encryption: Simply stated, this is how you will protect data. File encryption protects against two attacker techniques for circumventing application security controls: it protects data if malicious users or administrators gain access to data nodes and directly inspect files, and it renders stolen files or copied disk images unreadable. Oh, and if you need to address compliance or data governance requirements, data encryption is not optional. While it may be tempting to rely upon encrypted SAN/NAS storage devices, they don’t provide protection from credentialed user access, granular protection of files, or multi-key support. File layer encryption provides consistent protection across different platforms, regardless of OS/platform/storage type, with some products even protecting encryption operations in memory. Just as important, encryption meets our requirements for big data security — it is transparent to both Hadoop and calling applications, and scales out as the cluster grows. But you have a choice to make: use open source HDFS encryption, or a third party commercial product. The open source option is freely available and includes open source key management support. But keep in mind that the HDFS encryption engine only protects data on HDFS, leaving other types of files exposed; commercial variants that work at the file system layer cover all files. The open source options also lack some of the external key management, trusted binaries, and full support that commercial products provide. Free is always nice, but for many of those we polled, complete coverage and support tilted the balance for enterprise customers. Regardless of which option you choose, this is a mandatory security control.
  3. Use key management: File layer encryption is not effective if an attacker can access the encryption keys. Many big data cluster administrators store keys on local disk drives because it’s quick and easy, but it’s also insecure, as keys can be collected by the platform administrator or an attacker. And we still see keytab files sitting around unprotected in file systems. Use a key management service to distribute keys and certificates, and manage different keys for each group, application, and user. This requires additional setup, and possibly commercial key management products, to scale with your big data environment, but it’s critical. Most of the encryption controls we recommend depend on key/certificate security.
  4. Use Apache Ranger: In the original version of this research we were most worried about the use of a dozen modules with Hadoop, all deployed with ad hoc configuration, hidden within the complexities of the cluster, each offering up a unique attack surface to potential attackers. Deployment validation remains at the top of our list of concerns, but Apache Ranger provides a consistent management plane for setting configurations and usage policies to protect data within the cluster (a rough sketch of scripting Ranger policies via its REST API follows this list). You’ll still need to address issues of patching the Hadoop stack, application configuration, managing trusted machine images, and platform discrepancies. Some of those interviewed used automation scripts and source code control, others leveraged more traditional patch management systems to keep track of revisions, while still others have a management nightmare on their hands. We also recommend use of automation tools, such as Chef and Puppet, to orchestrate pre-deployment tasks such as configuration, assembling from trusted images, patching, issuing keys, and even running tools like vulnerability scanners prior to deployment. Building the scripts and setting up these services takes time up front, but pays for itself in reduced management time and effort later, and ensures that each node comes online with baseline security in place.
  5. Use logging and monitoring: To perform forensic analysis, diagnose failures, or investigate unusual behavior, you need a record of activity. You can leverage built-in functions of Hadoop to create event logs, and even leverage the cluster itself to store events. Tools like LogStash, Log4J, and Kafka help with streaming, management, and searching. And there are plug-ins available to stream standardized syslog feeds to supporting SIEM platforms or even Splunk. We also recommend the use of more context-aware monitoring tools; in 2012 none of the activity monitoring tools worked with big data platforms, but now they do. These capabilities usually plug into supporting modules like Hive and collect the query, its parameters, and specifics about the user/application that issued it. This approach goes beyond basic logging, as it can detect misuse and even alter the results a user sees. These tools can also feed events into native logs, SIEM, or even Database Activity Monitoring tools.
  6. Use secure communication: Implement secure communication between nodes, and between nodes and applications. This requires an SSL/TLS implementation that actually protects all network communications rather than just a subset. This imposes a small performance penalty on transfer of large data sets around the cluster, but the burden is shared across all nodes. The real issue for most is setup, certificate issuance and configuration.
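
Since a couple of these recommendations lean on Apache Ranger, here is a rough sketch of pushing a policy through Ranger’s public REST API from Python. The endpoint path is Ranger’s v2 public API, but the host, credentials, service name, and policy fields below are illustrative and may vary across Ranger versions and plugins, so treat this as a starting point rather than a drop-in script.

```python
# Illustrative only -- host, credentials, service name, and policy fields are hypothetical.
import json
import requests

RANGER = 'http://ranger-admin.example.com:6080'
AUTH = ('admin', 'changeme')  # use real credential management in practice

policy = {
    'service': 'hadoopdev_hive',          # Ranger service instance for Hive
    'name': 'finance_read_only',
    'isEnabled': True,
    'resources': {
        'database': {'values': ['finance']},
        'table':    {'values': ['*']},
        'column':   {'values': ['*']},
    },
    'policyItems': [{
        'users': ['analyst'],
        'accesses': [{'type': 'select', 'isAllowed': True}],
    }],
}

resp = requests.post(
    RANGER + '/service/public/v2/api/policy',
    auth=AUTH,
    headers={'Content-Type': 'application/json'},
    data=json.dumps(policy),
)
resp.raise_for_status()
print('Created policy id', resp.json().get('id'))
```

Scripting policies this way (and keeping the scripts in source control) also helps with the deployment validation concern above, since policy definitions become reviewable artifacts rather than console clicks.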

The use of encryption, authentication, and platform management tools will greatly improve the security of Hadoop clusters, and close off all of the easy paths attackers use to steal information or compromise functions. For some challenges, such as authentication, Hadoop provides excellent integration with Active Directory and LDAP services. For authorization, there are modules and services that support fine-grained control over data access, thankfully moving beyond simple role-based access controls and making application developers’ jobs far easier. The Hadoop community has largely embraced security, and done so far faster than we imagined possible in 2012.

On security implementation strategies: when we speak with Hadoop architects and IT managers, we still hear that their most popular security model is to hide the entire cluster with network segmentation, and hope attackers can’t get past the application. But that’s not such a bad thing, as almost everyone we spoke with has continuously evolved other areas of security within their cluster. And much like Hadoop itself, administrators and cluster architects are getting far more sophisticated with security. Most of those we spoke with have road-mapped all of these recommended controls, and are taking that next step to fulfill compliance obligations. Consider the recommendations above a minimum set of preventative security measures. These are easy to recommend — they are simple, cost-effective, and scalable, and they address real security deficiencies with big data clusters. Nothing suggested here harms performance, scalability, or functionality. Yes, they are more work to set up, but relatively simple to manage and maintain.

We hope you find this paper useful. If you have any questions or want to discuss specifics of your situation, feel free to send us a note at info@securosis.com.

—Adrian Lane

Monday, February 15, 2016

Securing Hadoop: Enterprise Security For NoSQL

By Adrian Lane

Hadoop is now enterprise software.

There, I said it. I know lots of readers in the IT space still look at Hadoop as an interloper, or worse, part of the rogue IT problem. But better than 50% of the enterprises we spoke with are running Hadoop somewhere within the organization. A small percentage are running Mongo, Cassandra, or Riak in parallel with Hadoop, for specific projects. Discussions about what ‘big data’ is, whether it is a viable technology, or even whether open source can be considered ‘enterprise software’ are long past. What began as proof-of-concept projects have matured into critical application services. And with that change, IT teams are now tasked with getting a handle on Hadoop security, to which they respond with questions like “How do I secure Hadoop?” and “How do I map existing data governance policies to NoSQL databases?”

Security vendors will tell you both attacks on corporate IT systems and data breaches are prevalent, so with gobs of data under management, Hadoop provides a tempting target for ‘hackers’. All of which is true, but as of today there really have not been major data breaches where Hadoop played a part in the story. As such, this sort of ‘FUD’ carries little weight with IT operations. But make no mistake, security is a requirement! As sensitive information, customer data, medical histories, intellectual property, and just about every other type of data used in enterprise computing is now commonly found in Hadoop clusters, the ‘C’ word (i.e.: Compliance) has become part of IT’s daily vocabulary. One of the big changes we’ve seen in the last couple of years is Hadoop becoming business-critical infrastructure; another – directly caused by the first – is that IT is being tasked with bringing existing clusters in line with enterprise compliance requirements.

This is somewhat challenging, as a fresh install of Hadoop suffers all the same weak points traditional IT systems have, so it takes work to get security set up and the reports created. For clusters that are already up and running, you need to choose technologies and a deployment roadmap that do not upset ongoing operations. On top of that, there is the additional challenge that the in-house tools you use to secure things like SAP, or the SIEM infrastructure you use for compliance reporting, may not be suitable for NoSQL.

Building security into the cluster

The number of security solutions compatible with – if not outright built for – Hadoop is the biggest change since 2012. All of the major security pillars – authentication, authorization, encryption, key management, and configuration management – are covered, and the tools are viable. Most of the advancements have come from the firms that provide enterprise distributions of Hadoop. They have built, and in many cases contributed back to the open source community, security tools that accomplish the basics of cluster security. When you look at the threat-response models introduced in the previous two posts, every compensating security control is now available. Better still, they have done a lot of the integration legwork for services like Kerberos, taking much of the pain out of deployments.

Here are some of the components and functions that were not available – or not viable – in 2012.

  • LDAP/AD Integration – Technically AD and LDAP integration were available in 2012, but these services have both been advanced, and are easier to integrate than before. In fact, this area has received the most attention, and integration is as simple as a setup wizard with some of the commercial platforms. The benefits are obvious, as firms can leverage existing access and authorization schemes, and defer user and role management to external sources.
  • Apache Ranger – Ranger is one of the more interesting technologies to become available, and it closes the biggest gap: module security policies and configuration management. It provides a tool for cluster administrators to set policies for different modules like Hive, Kafka, HBase, or Yarn. What’s more, those policies are in context to the module, so it sets policies for files and directories in HDFS, SQL policies in Hive, and so on. This helps with data governance and compliance, as administrators set how a cluster should be used, or how data is to be accessed, in ways that simple role-based access controls cannot.
  • Apache Knox – You can think of Knox, in its simplest form, as a Hadoop firewall. More correctly, it is an API gateway. It handles HTTP and RESTful requests, enforcing authentication and usage policies on inbound requests, and blocking everything else. Knox can be used as a virtual ‘moat’ around a cluster, or used with network segmentation to further reduce the network attack surface (a small example of calling WebHDFS through Knox follows this list).
  • Apache Atlas – Atlas is a proposed open source governance framework for Hadoop. It allows you to annotate files and tables, set relationships between data sets, and even import metadata from other sources. These features are helpful for reporting, data discovery, and controlling access. Atlas is new and we expect it to mature significantly in coming years, but for now it offers some valuable tools for basic data governance and reporting.
  • Apache Ambari – Ambari is a facility for provisioning and managing Hadoop clusters. It helps admins set configurations and propagate changes to the entire cluster. During our interviews we only spoke to two firms using this capability, but we received positive feedback from both. Additionally we spoke with a handful of companies who had written their own configuration and launch scripts, with pre-deployment validation checks, usually for cloud and virtual machine deployments. This latter approach was more time-consuming to create, but offered greater capabilities, with each function orchestrated within IT operational processes (e.g.: continuous deployment, failure recovery, DevOps). For most, Ambari’s ability to get you up and running quickly and provide consistent cluster management is a big win and a suitable choice.
  • Monitoring – Hive, PIQL, Impala, Spark SQL, and similar modules offer SQL or pseudo-SQL syntax. This means the activity monitoring, dynamic masking, redaction, and tokenization technologies originally developed for relational platforms can be leveraged by Hadoop. The result is that we can both alert and block on misuse, or provide fine-grained authorization (i.e.: beyond role-based access) by altering queries or query result sets based on user metadata. And, as these technologies examine queries, they offer an application-centric view of events that is not always captured in log files.
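
To illustrate the gateway pattern Knox provides, here is a small sketch of a WebHDFS directory listing routed through a Knox endpoint rather than hitting the cluster directly. The host, topology name (‘default’), credentials, and certificate path are made up, and the URL layout can vary with your Knox configuration.

```python
# Illustrative only: host, topology, credentials, and CA path are hypothetical.
import requests

KNOX = 'https://knox.example.com:8443/gateway/default'

# List /tmp via WebHDFS, routed (and authenticated) through the Knox gateway
resp = requests.get(
    KNOX + '/webhdfs/v1/tmp',
    params={'op': 'LISTSTATUS'},
    auth=('analyst', 'changeme'),
    verify='/etc/ssl/certs/knox-ca.pem',   # trust the gateway's certificate
)
resp.raise_for_status()
for entry in resp.json()['FileStatuses']['FileStatus']:
    print(entry['pathSuffix'], entry['type'])
```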

Your first step in addressing these compliance concerns is mapping your existing governance requirements to the Hadoop cluster, then deciding on suitable technologies to meet data and IT security requirements. Next you will need to deploy technologies that provide the security and reporting functions, and set up the policies to enforce usage or detect misuse. Since 2012, many technologies have become available which can address common threats without killing scalability and performance, so there is no need to reinvent the wheel. But you will need to assemble these technologies into a complete program, so there is work to be done. Let’s sketch out some over-arching strategies, then provide a basic roadmap to get there.

Security Models

Walled Garden


The most common approach today is a ‘walled garden’ security model. You can think of this as the ‘moat’ model used for mainframe security for many years: place the entire cluster onto its own network, and tightly control logical access through firewalls or API gateways, plus access controls for user and application authentication. In practice, with this model there is virtually no security within the NoSQL cluster itself; data and database security depend upon the outer ‘protective shell’ of the network and applications that surround the database. The advantage is simplicity; any firm can implement this model with existing tools and skills, without performance or functional degradation to the database. On the downside, security is fragile; once the firewall or application fails, the system is exposed. This model also does not provide a means to prevent credentialed users from misusing the system or viewing/modifying data stored in the cluster. For organizations not particularly worried about security, this is a simple, cost-effective approach.

Cluster Security

Unlike relational databases, which function like a black box, Hadoop exposes its skeleton to the network. Inter-node communication, replication, and other cluster functions occur between many machines, through different types of services. Securing a Hadoop cluster is more akin to securing an entire data center than a traditional database. That said, for the best possible protection, building security into cluster operations is critical. This approach leverages security tools built into – or third-party products integrated into – the NoSQL cluster. Security in this case is systemic, built to be part of the base cluster function.

Built-in Security

Tools may include SSL/TLS for secure communication, Kerberos for node authentication, transparent encryption for data-at-rest security, and identity and authorization (e.g.: groups, roles) management, just to name a few. This approach is more difficult, as there are a lot more moving parts and areas where some skill is required. The setup of several security functions targeted at specific risks to the database infrastructure takes time. And, as third-party security tools are often required, it is typically more expensive. However, it does secure a cluster from attackers, rogue admins, and the occasional witless application programmer. It is the most effective, and most comprehensive, approach to Hadoop security.

Data Centric Security

Big data systems typically share data from dozens of sources. As firms do not always know where their data is going, or what security controls are in place when it is stored, they’ve taken steps to protect the data regardless of where it is used. This model is called data-centric security because the controls are part of the data, or in some cases, the presentation layer of a database.


The three basic tools that support data-centric security are tokenization, masking, and data element encryption. You can think of tokenization just like a subway or arcade token: it has no cash value but can be used to ride the train or play a game. In this case a data token is provided in lieu of sensitive data – it’s commonly used in credit card processing systems to substitute for credit card numbers. The token has no intrinsic value other than as a reference to the original value in some other (e.g.: more secure) database. Masking is another very common tool used to protect data elements while retaining the aggregate value of a data set. For example, firms may substitute an individual’s social security number with a random number, replace their name with a random name from a phone book, or swap a date value for a random date within some range. In this way the original – sensitive – data value is removed entirely from query results, but the value of the data set is preserved for analysis. Finally, data elements can be encrypted and passed without fear of compromise; only legitimate users with the right encryption keys can view the original values.
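
As a toy illustration of the difference between tokenization and masking, here is a short Python sketch. The token vault is just an in-memory dict; a real implementation would keep the mapping in a separate, hardened datastore.

```python
import random
import uuid

# Toy token vault -- in practice this mapping lives in a separate, hardened datastore
token_vault = {}

def tokenize(ssn):
    """Replace a sensitive value with a random token; keep the mapping elsewhere."""
    token = str(uuid.uuid4())
    token_vault[token] = ssn
    return token

def mask_ssn(_ssn):
    """Replace an SSN with a random but format-preserving value."""
    return '{:03d}-{:02d}-{:04d}'.format(
        random.randint(1, 899), random.randint(1, 99), random.randint(1, 9999))

record = {'name': 'Jane Doe', 'ssn': '078-05-1120'}
analytics_copy = {'name': record['name'], 'ssn': mask_ssn(record['ssn'])}
payment_copy   = {'name': record['name'], 'ssn': tokenize(record['ssn'])}

print(analytics_copy)  # usable for analysis, original value gone
print(payment_copy)    # token only; the real value stays in the vault
```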

The data-centric security model provides a great deal of security when the systems that process data cannot be fully trusted. And many enterprises, given the immaturity of the technology, do not fully trust big data clusters to protect information. But a data-centric security model requires careful planning and tool selection, as it’s more about information lifecycle management. You define the controls over what data can be moved, and what protection must be applied before it is moved. Short of deleting sensitive data, this is the best model when you must populate a big data cluster for analysis work but cannot guarantee security.

These models are very helpful for conceptualizing how you want to approach cluster security. And they are really helpful when trying to get a handle on resource allocation: which approach is your IT team comfortable managing, and which tools do you have the funds to acquire? That said, the reality is that firms no longer wholly adhere to any single model; they use a combination of two. Some firms we interviewed used application gateways to validate requests, along with IAM and transparent encryption to provide administrative segregation of duties on the back end. In another case, the highly multi-tenant nature of the cluster meant they relied heavily on TLS for session privacy, and implemented dynamic controls (e.g.: masking, tokenization, and redaction) for fine-grained control over data.

In our next post we will close this series with a set of succinct technical recommendations that apply to all use cases.

—Adrian Lane

Friday, February 12, 2016

The Summary is dead. Long live the Summary!

By Rich

As part of our changes at Securosis this year, it’s time to say goodbye to the old Friday Summary, and hello to the new one. Adrian and I started the Summary way back before Mike joined the company, as our own version of his weekly Security Incite. Our objective was to review the highlights of the week, both our work and things we found on the Internet, typically with an introduction based on events in our personal lives.

As we look at growing and changing our focus this year, it’s time for a different format. Mike’s Incite (usually released on Wednesdays) does a great job highlighting important security stories, or whatever we find interesting. The Summary has always overlapped a bit. We also developed a tendency to overstuff it with links.

Moving forward we are switching gears, and the Summary will now focus on our main coverage areas: cloud, DevOps, and automation security. The new sections will be more tightly curated and prioritized, to better fit a weekly newsletter format for folks who don’t have time to keep up on everything.

We plan to keep the Incite our source for general security industry analysis, with the revised Summary targeting our new focus areas. We are also changing our email list provider from Aweber to MailChimp due to an ongoing technical issue. As part of that switch we will soon offer more email subscription options, which we used to have. You can pick the daily digest of all our posts, the weekly Incite, and/or the weekly Summary. If you want to subscribe directly to the Friday Summary only, just click here.

If you have any feedback, as always please feel free to leave a comment or email us at info@securosis.com.

And don’t forget:

Top Posts for the Week

Tool of the Week

This is a new section highlighting a cloud, DevOps, or security tool we think you should take a look at. We still struggle to keep track of all the interesting tools that can help us; if you have submissions please email them to info@securosis.com.

We are still looking at how we want to handle logging as we rearchitect securosis.com. Our friend Matt J. recommended I look at the fluentd open source log collector. It looks like a good replacement for Logstash, which is pretty heavy and can be hard to configure. You can pump everything into fluentd in an instance, container, or auto-scaled cluster if you need it. It can perform analysis right there, plus you can send events down the chain to things like ElasticSearch/Kibana, AWS Kinesis, or different kinds of storage.

What I really like is how it normalizes data into JSON as much as possible, which is great because that’s how we are structuring all our Trinity application logs.

Our plan is to use fluentd with some basic rules for securosis.com, pushing the logs into AWS hosted ElasticSearch (to reduce management overhead), and then Kibana to roll our own SIEM. We see a bunch of clients following a similar approach. This also fits well into cloud logging architectures where you collect the logs locally and only send alerts back to the SOC. Especially with S3 support, that can really reduce overall costs.
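
On the application side of that pipeline, here is a minimal sketch using the fluent-logger Python package to emit structured, JSON-friendly events to a local fluentd agent. The tag and field names are just examples.

```python
# Assumes a local fluentd agent listening on the default forward port (24224)
# and the fluent-logger package (pip install fluent-logger). Tags and fields are examples.
from fluent import sender

logger = sender.FluentSender('securosis.web', host='localhost', port=24224)

# Each event is a dict, which fluentd forwards downstream as structured JSON
logger.emit('access', {
    'user': 'rich',
    'path': '/blog/summary',
    'status': 200,
    'latency_ms': 42,
})

logger.close()
```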

Securosis Blog Posts this Week

Other Securosis News and Quotes

We are posting our RSA Conference Guide on the RSA Conference blog – here are the latest posts:

Training and Events

—Rich

Wednesday, February 10, 2016

Securing Hadoop: Operational Security Issues

By Adrian Lane

Beyond the architectural security issues endemic to Hadoop and NoSQL platforms discussed in the last post, IT teams expect some common security processes and supporting tools familiar from other data management platforms. That includes “turning the dials” on configuration management, vulnerability assessment, and maintaining patch levels across a complex assembly of supporting modules. The day-to-day processes IT managers follow to ensure typical application platforms are properly configured have evolved over years – through core platform capabilities, community contributions, and commercial third-party support to fill in gaps. Best practices, checklists, and validation tools exist to verify things like whether admin rights are sufficiently tight, and whether nodes are patched against known and perhaps even unknown vulnerabilities. Hadoop security has come a long way in just a few years, but it still lacks maturity in day-to-day operational security offerings, and it is here that we find most firms continue to struggle.

The following is an overview of the most common threats to data management systems, where operational controls offer preventative security measures to close off the most common attacks. Again, we will discuss the challenges, then map them to mitigation options.

  • Authentication and authorization: Identity and authentication are central to any security effort – without them we cannot determine who should get access to data. Fortunately the greatest gains in NoSQL security have been in identity and access management. This is largely thanks to providers of enterprise Hadoop distributions, who have performed much of the integration and setup work. We have evolved from simple in-database authentication and crude platform identity management to much better integrated LDAP, Active Directory, Kerberos, and X.509 based authentication options. Leveraging those capabilities we can use established roles for authorization mapping, and sometimes extend to fine-grained authorization services with Apache Sentry, or custom authorization mapping controlled from within the calling application or the database.
  • Administrative data access: Most organizations have platform administrators and NoSQL database administrators, both with access to the cluster’s files. To provide separation of duties – to ensure administrators cannot view content – a facility is needed to segregate administrative roles and keep unwanted access to a minimum. Direct access to files or data is commonly addressed through a combination of role based-authorization, access control lists, file permissions, and segregation of administrative roles – such as with separate administrative accounts, bearing different roles and credentials. This provides basic protection, but cannot protect archived or snapshotted content. Stronger security requires a combination of data encryption and key management services, with unique keys for each application or cluster. This prevents different tenants (applications) in a shared cluster from viewing each other’s data.
  • Configuration and Patch Management: With a cluster of servers, which may have hundreds of nodes, it is common to run different configurations and patch levels at one time. As nodes are added we see configuration skew. Keeping track of revisions is difficult. Existing configuration management tools can cover the underlying platforms, and HDFS Federation will help with cluster management, but they both leave a lot to be desired – including issuing encryption keys, avoiding ad hoc configuration changes, ensuring file permissions are set correctly, and ensuring TLS is correctly configured. NoSQL systems do not yet have counterparts for the configuration management tools available for relational platforms, and even commercial Hadoop distributions offer scant advice on recommended configurations and pre-deployment checklists. But administrators still need to ensure configuration scripts, patches, and open source code revisions are consistent. So we see NoSQL databases deployed on virtual servers and cloud instances, with home-grown pre-deployment scripts. Alternatively a “golden master” node may embody extensive configuration and validation, propagated automatically to new nodes before they can be added into the cluster.
  • Software Bundles: The application and Hadoop stacks are assembled from many different components. Underlying platforms and file systems also vary – with their own configuration settings, ownership rights, and patch levels. We see organizations increasingly using source code control systems to handle open source version management and application stack management. Container technologies also help developers bundle up consistent application deployments.
  • Authentication of applications and nodes: If an attacker can add a node they control to the cluster, they can exfiltrate data. To authenticate nodes (rather than users) before they can join a cluster, most firms we spoke with employ either X.509 certificates or Kerberos. Both can authenticate users as well, but we draw this distinction to underscore the threat of rogue applications or nodes being added to the cluster. Deployment of these services brings risks as well. For example, if a Kerberos keytab file can be accessed or duplicated – perhaps using credentials extracted from virtual image files or snapshots – a node’s identity can be forged. Certificate-based identity options inherently complicate setup and deployment, but properly deployed they can provide strong authentication and stronger security.
  • Audit and Logging: If you suspect someone has breached your cluster, can you detect it, or trace the breach back to its root cause? You need an activity record, which is usually provided by event logging. A variety of add-on logging capabilities are available, both open source and commercial. Scribe and LogStash are open source tools which integrate into most big data environments, as do a number of commercial products. You can leverage the existing cluster to store logs, build an independent cluster, or even leverage other dedicated platforms like a SIEM or Splunk. That said, some logging options do not provide an auditor sufficient information to determine exactly what actions occurred. You will need to verify that your logs capture both the correct event types and user actions. A user ID and IP address are insufficient – you also need to know what queries were issued (a minimal example of such an audit record follows this list).
  • Monitoring, filtering, and blocking: There are no built-in monitoring tools to detect misuse or block malicious queries. There isn’t even yet a consensus on what a malicious big data query looks like – aside from crappy MapReduce scripts written by bad programmers. We are just seeing the first viable releases of Hadoop activity monitoring tools. No longer the “after-market speed regulators” they once were, current tools are typically embedded into a service like Hive or Spark to capture queries. Usage of SQL queries has blossomed in the last couple of years, so we can now leverage database activity monitoring technologies to flag, or even block, misuse. These tools are still very new, but the approach has proven effective on relational platforms, and implementations should improve with time.
  • API security: Big data cluster APIs need to be protected from code and command injection, buffer overflow attacks, and all the other usual web service attacks. This responsibility typically rests on the applications using the cluster, but not always. Common security controls include integration with directory services, mapping OAuth tokens to API services, filtering requests, input validation, and managing policies across nodes. Some people leverage API gateways and whitelist allowable application requests. Again, a handful of off-the-shelf solutions can help address API security, but most options are based on a gateway funneling all users and requests through a single interface (choke point). Fortunately modern DevOps techniques for application stack patching and pre-deployment validation are proving effective at addressing application and cluster security issues. There are a great many API security issues, but a full consideration is beyond our scope for this paper.
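
To make the audit and logging point concrete, here is a tiny sketch of the kind of structured audit record worth capturing. The field names are examples; the point is that the query text and user context travel with the event, not just a user ID and IP address.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger('hadoop.audit')

def audit_event(user, source_ip, module, query, allowed):
    """Emit one structured audit record as JSON (example fields only)."""
    record = {
        'timestamp': datetime.now(timezone.utc).isoformat(),
        'user': user,
        'source_ip': source_ip,
        'module': module,        # e.g. Hive, HBase, Spark SQL
        'query': query,          # the actual statement, not just "a query ran"
        'allowed': allowed,
    }
    audit_log.info(json.dumps(record))

audit_event('analyst1', '10.0.4.17', 'hive',
            "SELECT ssn FROM customers WHERE region='EU'", allowed=False)
```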

Threat-response models

There are one or more security countermeasures to mitigate each of the threats identified above. This diagram shows some options at your disposal.

Threat response model

Our next post will discuss strategic security models and how some of these security technologies are deployed. You will see how some deployments aim for simplicity and ease of management, rather than attempting to achieve the highest security they can. For example, some firms use Kerberos to uniquely identify nodes on the cluster and leverage Kerberos credentials to prove identity. Others issue a TLS certificate to each node before adding it to the cluster – this provides bidirectional identification between nodes and session encryption, but not really node authentication. Kerberos offers much stronger identity security and enforcement, with a greater setup and management cost.

—Adrian Lane

Friday, February 05, 2016

Summary: Die Blah, Die!!

By Rich

Rich here.

I was a little burnt out when the start of this year rolled around. Not “security burnout” – just one of the regular downs that hit everyone in life from time to time. Some of it was due to our weird year with the company, a bunch of it was due to travel and impending deadlines, plus there was all the extra stress of trying to train for a marathon while injured (and working a ton).

Oh yeah, and I have kids. Two of whom are in school. With homework. And I thought being a paramedic or infosec professional was stressful?!?

Even finishing the marathon (did I mention that enough?) didn’t pull me out of my funk. Even starting the planning for Securosis 2.0 only mildly engaged my enthusiasm. I wasn’t depressed by any means – my life is too awesome for that – but I think many of you know what I mean. Just a… temporary lack of motivation.

But last week it all faded away. All it took was a break from airplanes, putting some new tech skills into practice, and rebuilding the entire company.

A break from work travel is kind of like the reverse of a vacation. The best vacations are a month long – a week to clear the head, two weeks to enjoy the vacation, a week to let the real world back in. A gap in work travel does the same thing, except instead of enjoying vacation you get to enjoy hitting deadlines. It’s kind of the same.

Then I spent time on a pet technical project and built the code to show how event-driven security can work. I had to re-learn Python while learning two new Amazon services. It was a cool challenge, and rewarding to build something that worked like I hoped. At the same time I was picking up other new skills for my other RSA Conference demos.

The best part was starting to rebuild the company itself. We’re pretty serious about calling this our “Securosis 2.0 pivot”. The past couple weeks we have been planning the structure and products, building out initial collateral, and redesigning the website (don’t worry – with our design firm). I’ve been working with our contractors to build new infrastructure, evaluating new products and platforms, and firming up some partnerships. Not alone – Mike and Adrian are also hard at work – but I think my pieces are a lot more fun because I get the technical parts.

It’s one thing to build a demo or write a technical blog post, but it’s totally different to be building your future. And that was the final nail in the blah’s coffin.

A month home. Learning new technical skills to build new things. Rebuilding the company to redefine my future. It turns out all that is a pretty motivating combination, especially with some good beer and workouts in the mix, and another trip to see Star Wars (3D IMAX with the kids this time).

Now the real challenge: seeing if it can survive the homeowner’s association meeting I need to attend tonight. If I can make it through that, I can survive anything.

Photo credit: Blah from pinterest

And now on to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Securosis Posts

Research Reports and Presentations

Top News and Posts

Blog Comment of the Week

This week’s best comment goes to Andy, in response to Event-Driven AWS Security: A Practical Example.

Cool post. We could consider the above as a solution to an out of band modification of a security group. If the creation and modification of all security groups is via Cloudformation scripts, a DevOps SDLC could be implemented to ensure only approved changes are pushed through in the first place. Another question is how does the above trigger know the modification is unwanted?! It’s a wee bugbear I have with AWS that there’s not currently a mechanism to reference rule functions or change controls.

My response:

I actually have some techniques to handle out of band approvals, but it gets more advanced pretty quickly (plan is to throw some of them into Trinity once we start letting anyone use it).

One quick example… build a workflow that kicks off a notification for approval, then the approval modifies something in Dynamo or S3, then that is one of the conditionals to check. E.g. have your change management system save down a token in S3 in a different account, then the Lambda function checks that.

You get to use cross-account access for separation of duties. Gets complicated quickly, which is why we figure we need a platform to manage it all.
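
As a rough illustration of that idea (bucket and key names are made up, and the cross-account policy work is omitted), the Lambda function could check for an approval object written by the change management system before deciding whether to revert:

```python
# Illustrative only: bucket, key naming, and the cross-account setup are hypothetical.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client('s3')

def change_is_approved(change_id):
    """Return True if the change management system dropped an approval token in S3."""
    try:
        s3.head_object(Bucket='change-approvals-other-account',
                       Key='approved/{}'.format(change_id))
        return True
    except ClientError:
        return False

# Inside the Lambda handler: only revert when no approval token exists
# if not change_is_approved(event['id']):
#     revert_security_group(event)
```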

—Rich

Wednesday, February 03, 2016

Incite 2/3/2016: Courage

By Mike Rothman

A few weeks ago I spoke about dealing with the inevitable changes of life and setting sail on the SS Uncertainty to whatever is next. It’s very easy to talk about changes and moving forward, but it’s actually pretty hard to do. When moving through a transformation, you not only have to accept the great unknown of the future, but you also need to grapple with what society expects you to do. We’ve all been programmed since a very early age to adhere to cultural norms or suffer the consequences. Those consequences may be minor, like having your friends and family think you’re an idiot. Or decisions could result in very major consequences, like being ostracized from your community, or even death in some areas of the world.

In my culture in the US, it’s expected that a majority of people will meander through their lives with their 2.2 kids, their dog, and their white picket fence – which is great for some folks. But when you don’t fit into that very easy and simple box, moving forward along a less conventional path requires significant courage.

Courage

I recently went skiing for the first time in about 20 years. Being a ski n00b, I invested in two half-day lessons – it would have been inconvenient to ski right off the mountain. The first instructor was an interesting guy in his 60’s, a US Air Force helicopter pilot who retired and has been teaching skiing for the past 25 years. His seemingly conventional path worked for him – he seemed very happy, especially with the artificial knee that allowed him to ski a bit more aggressively. But my instructor on the second day was very interesting. We got a chance to chat quite a bit on the lifts, and I learned that a few years ago he was studying to be a physician’s assistant. He started as an orderly in a hospital and climbed the ranks until it made sense for him to go to school and get a more formal education. So he took his tests and applied and got into a few programs.

Then he didn’t go. Something didn’t feel right. It wasn’t the amount of work – he’d been working since he was little. It wasn’t really fear – he knew he could do the job. It was that he didn’t have passion for a medical career. He was passionate about skiing. He’d been teaching since he was 16, and that’s what he loved to do. So he sold a bunch of his stuff, minimized his lifestyle, and has been teaching skiing for the past 7 years. He said initially his Mom was pretty hard on him about the decision. But as she (and the rest of his family) realized how happy and fulfilled he is, they became OK with his unconventional path.

Now that is courage. But he said something to me as we were about to unload from the lift for the last run of the day. “Mike, this isn’t work for me. I happened to get paid, but I just love teaching and skiing, so it doesn’t feel like a job.” It was inspiring because we all have days when we know we aren’t doing what we’re passionate about. If there are too many of those days, it’s time to make changes.

Changes require courage, especially if the path you want to follow doesn’t fit into the typical playbook. But it’s your life, not theirs. So climb aboard the SS Uncertainty (with me) and embark on a wild and strange adventure. We get a short amount of time on this Earth – make the most of it. I know I’m trying to do just that.

Editors note: despite Mike’s post on courage, he declined my invitation to go ski Devil’s Crotch when we are out in Colorado. Just saying. -rich

–Mike

Photo credit: “Courage” from bfick


It’s that time of year again! The 8th annual Disaster Recovery Breakfast will once again happen at the RSA Conference. Thursday morning, March 3 from 8 – 11 at Jillians. Check out the invite or just email us at rsvp (at) securosis.com to make sure we have an accurate count.

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour. Your emails, alerts, and Twitter timeline will be there when you get back.


Securosis Firestarter

Have you checked out our video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.


Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Securing Hadoop

SIEM Kung Fu

Building a Threat Intelligence Program

Recently Published Papers

* The Future of Security

Incite 4 U

  1. Evolution visually: Wade Baker posted a really awesome piece tracking the number of sessions and titles at the RSA Conference over the past 25 years. The growth in sessions is astounding (25% CAGR), up to almost 500 in 2015. Even more interesting is how the titles have changed. It’s the RSA Conference, so it’s not surprising that crypto would be prominent the first 10 years. Over the last 5? Cloud and cyber. Not surprising, but still very interesting facts. RSAC is no longer just a trade show. It’s a whole thing, and I’m looking forward to seeing the next iteration in a few weeks. And come swing by the DRB Thursday morning and say hello. I’m pretty sure the title of the Disaster Recovery Breakfast won’t change. – MR

  2. Embrace and Extend: The SSL/TLS cert market is a multi-billion dollar market – with slow and steady growth in the sale of certificates for websites and devices over the last decade. For the most part, certificate services are undifferentiated. Mid-to-large enterprises often manage thousands of them, which expire on a regular basis, making subscription revenue a compelling story for the handful of firms that provide them. But last week’s announcement that Amazon AWS will provide free certificates must have sent shivers through the market, including the security providers who manage certs or monitor for expired certificates. AWS will include this in their basic service, as long as you run your site in AWS. I expect Microsoft Azure and Google’s cloud to follow suit in order to maintain feature/pricing parity. Certs may not be the best business to be in, longer-term. – AL

  3. Investing in the future: I don’t normally link to vendor blogs, but this post by Chuck Robbins, Cisco’s CEO, is pretty interesting. He echoes a bunch of things we’ve been talking about, including how the security industry is people-constrained, and we need to address that. He also mentions a bunch of security issues, so maybe security is highly visible at Cisco. Even better, Chuck announced a $10MM scholarship program to “educate, train and reskill the job force to be the security professionals needed to fill this vast talent shortage”. This is great to see. We need to continue to invest in humans, and maybe this will kick start some other companies to invest similarly. – MR

  4. Geek Monkey: David Mortman pointed me to a recent post about Automated Failure Testing on Netflix’s Tech blog. A particularly difficult-to-find bug gave the team pause about how they tested protocols. Embracing both the “find failure faster” mentality and the core Simian Army ideal of reliability testing through injecting chaos, they are looking at intelligent ways to inject small faults within the code execution path. Leveraging a very interesting set of concepts from a tool called Molly (PDF), they inject different results into non-deterministic code paths. That sounds exceedingly geeky, I know, but in simpler terms they are essentially fuzz testing inside code, using intelligently selected values to see how protocols respond under stress. Expect a lot more of this approach in years to come, as we push more code security testing earlier in the process. – AL

—Mike Rothman

Monday, February 01, 2016

Event-Driven AWS Security: A Practical Example

By Rich

Would you like the ability to revert unapproved security group (firewall) changes in Amazon Web Services in 10 seconds, without external tools? That’s about 10-20 minutes faster than is typically possible with a SIEM or other external tools. If that got your attention, then read on…

If you follow me on Twitter, you might have noticed I went a bit nuts when Amazon Web Services announced their new CloudWatch events a couple weeks ago. I saw them as an incredibly powerful tool for event-driven security. I will post about the underlying concepts tomorrow, but right now I think it’s better to just show you how it works. This entire thing took about 4 hours to put together, and it was my first time writing a Lambda function or using Python in 10 years.

This example configures an AWS account to automatically revert any Security Group (firewall) changes without human interaction, using nothing but native AWS capabilities. No security tools, no servers, nada. Just wiring together things already built into AWS. In my limited testing it’s effective in 10 seconds or less, and it’s only 100 lines of code – including comments. Yes, this post is much longer than the code to make it all work.

I will walk you through setting it up manually, but in production you would want to automate this configuration so you can manage it across multiple AWS accounts. That’s what we use Trinity for, and I’ll talk more about automating automation at the end of the post. Also, this is Amazon specific because no other providers yet expose the needed capabilities.

For background it might help to read the AWS CloudWatch events launch post. The short version is that you can instrument a large portion of AWS, and trigger actions based on a wide set of very granular events. Yes, this is an example of the kind of research we are focusing on as part of our cloud pivot.

This might look long, but if you follow my instructions you can set it all up in 10-15 minutes. Tops.

Prep Work: Turn on CloudTrail

If you use AWS you should have CloudTrail set up already; if not you need to activate it and feed the logs to CloudWatch using these instructions. This only takes a minute or two if you accept all the defaults.

Step 1: Configure IAM

To make life easier I put all my code up on the Securosis public GitHub repository. You don’t need to pull that code – you will copy and paste everything into your AWS console.

Your first step is to configure an IAM policy for your workflow; then create a role that Lambda can assume when running the code. Lambda is an AWS service that allows you to store and run code based on triggers. Lambda code runs in a container, but doesn’t require you to manage containers or servers for it. You load the code, and then it executes when triggered. You can build entirely serverless architectures with Lambda, which is useful if you want to eliminate most of your attack surface, but that’s a discussion for another day.

IAM in Amazon Web Services is how you manage who can do what in your account, including the capabilities of Amazon services themselves. It is ridiculously granular and powerful, and so the most critical security tool for protecting AWS accounts.

  • Log into the AWS console. Go to the Identity and Access Management (IAM) dashboard.
  • Click on Policies, then Create Policy.
  • Choose Create Your Own Policy.
  • Name it lambda_revert_security_group. Enter a description, then copy and paste my policy from GitHub. My policy allows the Lambda function to access CloudWatch logs, write to the log, view security group information, and revoke ingress or egress statements (but not create new ones). Damn, I love granular policies!

IAM Policy

  • Once the policy is set you need to Create New Role. This is the role which the Lambda function will assume when it runs.
    • Name it lambda_revert_security_group, assign it an AWS Lambda role type, then attach the lambda_revert_security_group policy you just created.

choose lambda role type

That’s it for the IAM changes. Next you need to set up the Lambda function and the CloudWatch event.

Step 2: Create the Lambda function

First make sure you know which AWS region you are working in. I prefer us-west-2 (Oregon) for lab work because it is up to date and tends to support new capabilities early. us-east-1 is the granddaddy of regions, but my lab account has so much cruft after 6+ years that things don’t always work right for me there.

  • Go to Lambda (under Compute on the main services page) and Create a Lambda function.
  • Don’t pick a blueprint – hit the Skip button to advance to the next page.
  • Name your function revertSecurityGroup. Enter a description, and pick Python for the runtime. Then paste my code into the main window. After that pick the lambda_revert_security_group IAM role the function will use. Then click Next, then Create function.

Configure the Lambda function

A few points on Lambda. You aren't billed until the function triggers; then you are billed per request and runtime. Lambda is very good for quick tasks, but it does have a maximum execution timeout (measured in minutes, not hours), and the longer you run a function the less sense it makes compared to a dedicated server. I actually looked at migrating Trinity to Lambda because we could offload our workflows, but at that time it had a 5-minute timeout, and running hour-long workflows at scale would likely have killed us financially.

Now some notes on my code.

  • The main function handler includes a bunch of conditional statements you can use to trigger reverting security group changes only under certain conditions, such as who requested the change, which security group was changed, whether the security group is in a specified VPC, and whether the security group has a particular tag. None of those lines will work for you, because they refer to specific identifiers in my account – you need to change them to work in your account. (A simplified sketch of the overall flow appears after these notes.)
  • By default, the function will revert any security group change in your account. You need to cut and paste the line “revert_security_group(event)” into a conditional block to run only on matching conditions.
  • The function only works for inbound rule changes. It is trivial to modify for egress rule changes, or to restrict both ingress and egress. The IAM policy we set will work for both – you just need to change the code.
  • This only works for EC2-VPC. EC2-Classic works differently, and my code cannot parse the EC2-Classic API.
  • The code pulls the event details, finds the changes (which could be multiple changes submitted at the same time), and reverses them.
  • There may be ways around this. I ran through it over the weekend and tested multiple ways of making an EC2-VPC security group change, and my reversion always worked, but there might be a way I don't know about that would change the log format enough that my code wouldn't work. I plan to update it to work with EC2-Classic, but neither I nor Securosis ever uses EC2-Classic, and we advise clients not to use it, so that is a low priority. If you find a hole, please drop me a line.
  • This works for internal (security group to security group) changes as well as external or internal IP-address-based rules.
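Here is that simplified sketch of the flow. It is not the code in the repo (it skips error handling, security-group-to-security-group rules, and some request formats), but it shows the shape: parse the CloudTrail record CloudWatch Events hands you, then revoke exactly what was added. The role ARN in the conditional is a made-up placeholder.

    # Simplified illustration only; the real function lives in the Securosis GitHub repo.
    # Assumes the trigger is an AuthorizeSecurityGroupIngress call recorded by CloudTrail
    # and delivered by CloudWatch Events.
    import boto3

    ec2 = boto3.client('ec2')

    def revert_security_group(event):
        params = event['detail']['requestParameters']
        group_id = params['groupId']
        # CloudTrail nests the added rules under ipPermissions/items.
        for perm in params['ipPermissions']['items']:
            for ip_range in perm.get('ipRanges', {}).get('items', []):
                kwargs = {
                    'GroupId': group_id,
                    'IpProtocol': perm['ipProtocol'],
                    'CidrIp': ip_range['cidrIp'],
                }
                if 'fromPort' in perm:
                    kwargs['FromPort'] = perm['fromPort']
                    kwargs['ToPort'] = perm['toPort']
                # Remove exactly the rule that was just added.
                ec2.revoke_security_group_ingress(**kwargs)

    def lambda_handler(event, context):
        # Example conditional: ignore changes made by an approved automation role.
        # Swap in identifiers from your own account; this ARN is a placeholder.
        caller = event['detail']['userIdentity'].get('arn', '')
        if caller != 'arn:aws:iam::123456789012:role/approved-deploy-role':
            revert_security_group(event)
        return 'done'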

Step 3: Configure the CloudWatch Event trigger

CloudWatch is Amazon’s built-in logging service. You cannot turn it off because it is the tool AWS uses to monitor and manage the performance of your instances and services. CloudWatch Logs is a newer feature you can use to store various log streams, including CloudTrail, the service that records all API calls in your account (including internal AWS calls).

  • Go to CloudWatch, then Events, then Create rule.
  • In the Event selector > Select event source, pick AWS API call. This only works with CloudTrail turned on.
  • Pick EC2 as the Service name. Then click Specific operation(s), then AuthorizeSecurityGroupIngress. You can also add egress if you want.
  • For Targets, pick Add target then Lambda function, and then select the one you just created. If you have a notification function you could add it here to get a text message or email whenever it runs, or send an alert to your SIEM.
  • Then name it. It’s active by default.
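Under the hood, the rule you just clicked together is stored as an event pattern. If you ever script this instead of using the console, the pattern looks roughly like the following; add AuthorizeSecurityGroupEgress to the eventName list for egress coverage.

    {
      "source": ["aws.ec2"],
      "detail-type": ["AWS API Call via CloudTrail"],
      "detail": {
        "eventSource": ["ec2.amazonaws.com"],
        "eventName": ["AuthorizeSecurityGroupIngress"]
      }
    }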

Create the event rule

Now test it. Go into the console, make a security group change, wait about 10 seconds, and refresh the console. Your changes should be gone. You can also check the CloudWatch log to see what happened, the details of the API call, and how the function executed.
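If you prefer to test from code rather than the console, something like this exercises the whole loop. The security group ID is a placeholder; use a non-production group you own.

    # Quick end-to-end test: add a junk rule, wait, and confirm it was reverted.
    import time
    import boto3

    ec2 = boto3.client('ec2')
    group_id = 'sg-0123456789abcdef0'  # placeholder

    ec2.authorize_security_group_ingress(
        GroupId=group_id, IpProtocol='tcp',
        FromPort=3389, ToPort=3389, CidrIp='0.0.0.0/0'
    )

    time.sleep(15)  # give CloudWatch Events and Lambda a moment to fire

    groups = ec2.describe_security_groups(GroupIds=[group_id])
    print(groups['SecurityGroups'][0]['IpPermissions'])  # the junk rule should be gone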

Automating for Scale

This might only take 10-15 minutes once you have the code and know the process, but imagine configuring all this on hundreds or thousands of accounts at a time – which is typical for a mid-size or large organization with many projects.

To scale this up you need to create a new account deployment package. That’s what we use Trinity for (okay, that’s what I’m currently coding into Trinity, for our internal use right now). The idea is that when you provision an account you hook into it and blast out all the configurations, settings, Lambda functions, etc. using automation code.

In last year’s Black Hat training we demonstrated that, with demo code to configure alerts on IAM changes via CloudTrail and CloudWatch. We plan to go into more detail in our new Advanced Cloud Security and Applied DevOps class this summer.

It isn’t really all that complicated. Once you spend time on your cloud platform of choice and learn some basic coding via the APIs, the rest is pretty easy. It’s just basic check-a-setting, make-a-change stuff – no complex math or crazy decision trees needed (for the most part).

This is seriously exciting stuff – we security professionals can now directly manage, monitor, and manipulate our infrastructure using the exact same tools as development and operations. The infrastructure itself can identify and fix configuration and other issues – including security issues – faster than a person or (most) external tools.

Try it out. It’s easy to get started, and with minimal work you can make my sample code work for a whole host of different situations beyond basic firewalling.

—Rich

Securing Hadoop: Architectural Security Issues

By Adrian Lane

Now that we have sketched out the elements of a Hadoop cluster and what one looks like, let's talk about threats to these databases. We want to consider both the database infrastructure itself and the data under management. Given the complexity of a Hadoop cluster, the task is closer to securing an entire data center than a typical relational database. All the features that provide flexibility, scalability, performance, and openness create specific security challenges. The following are some specific threats to clustered databases.

  • Data access & ownership: Role-based access is central to most database security schemes, and NoSQL is no different. Relational and quasi-relational platforms include roles, groups, schemas, label security, and various other facilities for limiting user access to subsets of available data. Most big data environments now offer integration with identity stores, along with role-based facilities to divide up data access between groups of users. That said, authentication and authorization require cooperation between the application designer and the IT team managing the cluster. Leveraging existing Active Directory or LDAP services helps tremendously with defining user identities, and pre-defined roles may be available for limiting access to sensitive data.
  • Data at rest protection: The standard for protecting data at rest is encryption, which protects against attempts to access data outside established application interfaces. With Hadoop systems we worry about people stealing archives or directly reading files from disk. Encrypted files are protected against access by users without encryption keys. Replication effectively replaces backups for big data, but beware a rogue administrator or cloud service manager creating their own backups. Encryption limits how data can be copied from the cluster. Unlike in 2012, when the lack of suitable encryption was a serious issue, Apache now offers HDFS encryption as an option. This is a major advance, but remember that it only covers HDFS, and you will need to fill the gaps with key management and key storage. Several commercial Hadoop vendors offer transparent encryption, and third parties have advanced the state of the art with transparent encryption options for both HDFS and non-HDFS on-disk formats, coupled with parallel progress in key management. (A short example of creating an HDFS encryption zone follows this list.)
  • Inter-node communication: Hadoop and the vast majority of NoSQL platforms (Cassandra, MongoDB, Couchbase, etc.) don't communicate securely by default – they use unencrypted RPC over TCP/IP. TLS and SSL are bundled in big data distributions, but not typically used between applications and databases – and almost never for inter-node communication. This leaves data in transit, and application queries, accessible for inspection and tampering.
  • Client interaction: Clients interact with resource managers and nodes. While gateway services can be created to load data, clients communicate directly with both resource managers and individual data nodes. Compromised clients can send malicious data or links to either service. This facilitates efficient communication but makes it difficult to protect nodes from clients, clients from nodes, and even name servers from nodes. Worse, the distribution of self-organizing nodes is a poor fit for security tools such as gateways, firewalls, and monitors. Many security tools are designed to require a choke-point or span port, which may not be available in a peer-to-peer mesh cluster.
  • Distributed nodes: One of the reasons big data makes sense is an old truism: “moving computation is cheaper than moving data”. Data is processed wherever resources are available, enabling massively parallel computation. Unfortunately this produces complicated environments with lots of attack surface. With so many moving parts, it is difficult to verify consistency or security across a highly distributed cluster of (possibly heterogeneous) platforms. Patching, configuration management, node identity, and data at rest protection – and consistent deployment of each – are all issues.
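As an example of how far the data-at-rest story has come, creating an HDFS encryption zone is now just a few commands, assuming the Hadoop KMS (or another key provider) is already configured. Key and path names here are illustrative:

    hadoop key create sensitive-data-key          # create a key in the KMS
    hdfs dfs -mkdir /data/sensitive               # directory to protect
    hdfs crypto -createZone -keyName sensitive-data-key -path /data/sensitive
    hdfs crypto -listZones                        # verify the zone exists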

Threat-response models

One or more security countermeasures are available to mitigate each threat identified above. The following diagram shows which specific options you have at your disposal to help you choose a ‘preventative’ security measure.

Arch Threat-Response

We don’t have room to go into much detail on the tradeoffs of each response – each area really deserves its own paper. But we do want to mention a couple areas where we have seen the most change since our original research four years ago.

If your goal is to protect session privacy – either between clients and data nodes, or for inter-node communication – Transport Layer Security (TLS) is your first choice. This was unheard of in 2012, but since then about 25% of the companies we spoke with have implemented SSL or TLS for inter-node communication – not just between applications and name servers. Transport encryption protects all communications from access or modification by attackers. Some firms instead use network segmentation and firewalls to ensure that attackers cannot access network traffic. This approach is less robust but much easier to implement. Some clusters were deployed to third-party cloud services, where virtualized network services make sniffing nearly impossible; these companies typically chose not to encrypt internal cluster communications.
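For Hadoop itself, the inter-node piece is now largely a configuration exercise, typically alongside Kerberos. A minimal sketch of the relevant properties (exact behavior varies a bit by distribution and version) looks something like this:

    <!-- core-site.xml: protect Hadoop RPC with integrity checks and encryption -->
    <property>
      <name>hadoop.rpc.protection</name>
      <value>privacy</value>
    </property>

    <!-- hdfs-site.xml: encrypt the block data transfer protocol -->
    <property>
      <name>dfs.encrypt.data.transfer</name>
      <value>true</value>
    </property>

    <!-- hdfs-site.xml: force HTTPS for web UIs and WebHDFS -->
    <property>
      <name>dfs.http.policy</name>
      <value>HTTPS_ONLY</value>
    </property>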

Enforcing data usage is one of the areas where we have seen the most progress, thanks to database links into existing Active Directory and LDAP identity stores. This seems obvious now but was a rarity in 2012, when data architects were focused on scalability and getting basic analytics up and running. Fortunately support for linking identity stores to Hadoop clusters has advanced considerably, making it much easier to leverage existing roles and management infrastructure. But we also have other tools at our disposal. We don't see it often, but a handful of organizations encrypt sensitive data elements at the application layer, so information is stored as encrypted elements. This way the application manages decryption and key management functions, and can offer additional controls over who can see which information. This is very secure, but it must be designed in from the beginning and coded into the application itself. Retrofitting application-layer encryption into an existing application and database stack is highly challenging at best, which is why we also see wide usage of masking and redaction technologies – from both enterprise Hadoop vendors and third-party security vendors. These technologies offer fine control over which data is displayed to which users, and can be easily built into existing clusters to enforce security and compliance.

If you need deeper technical analysis, we have published much more information on the technologies above – specifically Understanding Database Encryption (which covers both NoSQL clusters and relational stores), Understanding Data Masking, and Understanding and Selecting a Key Management Solution.

Our goal here is to ensure you are aware of the risks, and to point out that you have choices to address each specific threat. Each option offers different advantages and costs; the costs will drive our recommendations later.

Up next: a look at how and where to embed security into day-to-day operations.

—Adrian Lane

Friday, January 29, 2016

Securing Hadoop: Architecture and Composition

By Adrian Lane

Our goal for this post is to succinctly outline what Hadoop (and most NoSQL) clusters look like, how they are assembled, and how they are used. This provides better understanding of the security challenges, and what sort of protections need to be leveraged to secure them. Developers and data scientists continue to stretch system performance and scalability, using customized combinations of open source and commercial products, so there is really no such thing as a ‘standard’ Hadoop deployment. With these considerations in mind, it is time to map out threats to the cluster.

NoSQL databases enable companies to collect, manage, and analyze incredibly large data sets. Thousands of firms are working on big data projects, from small startups to large enterprises. Since our original paper in 2012 the rate of adoption has only increased; platforms such as Hadoop, Cassandra, Mongo, and Riak are now commonplace, with some firms supporting multiple installations. In just a couple of years they went from "rogue IT" to "core systems". Most firms recognized the value of "big data", acknowledged these platforms are essential, and tasked IT teams with bringing them "under IT governance". Most firms today are taking their first steps to retrofit security and governance controls onto Hadoop.

Let’s dig into how all the pieces fit together:

Architecture and Data Flow

Hadoop has been wildly successful because it scales well, can be configured to handle a wide variety of use cases, and is very inexpensive compared to relational and data warehouse alternatives. Which is all another way of saying it’s cheap, fast, and flexible. To show why and how it scales, let’s take a look at a Hadoop cluster architecture:

Hadoop Architecture

There are several things to note here. The architecture promotes scaling and performance. It provides parallel processing, and additional nodes provide ‘horizontal’ scalability. This architecture is also inherently multi-tenant, supporting multiple applications across one or more file groups. But there are a lot of moving parts; each node communicates with its peers to ensure that data is properly replicated, nodes are on-line and functional, storage is optimized, and application requests are being processed. We’ll dig into specific threats to Hadoop clusters later in this series.

Hadoop Stack

To appreciate Hadoop’s flexibility, you need to understand that a cluster can be fully customized. It is useful to think about the Hadoop framework as a ‘stack’, much like a LAMP stack, but much less standardized. While Pig and Hive are commonly used, the ability to mix and match components makes deployments much more diverse. For example, Sqoop and Yarn are alternative data access services. You can select different big data environments to support columnar, graph, document, XML, or multidimensional data. And over the last couple years MapReduce has largely given way to SQL query engines – with Spark, Drill, Impala, and Hive all accommodating increasing use of SQL-style queries. This modularity offers great flexibility to assemble and tailor clusters to behave and perform exactly as desired. But it also makes security more difficult – each option brings its own security options and issues.

Hadoop Stack

The beauty part is that you can set up a cluster to satisfy your usability, scalability, and performance goals. You can tailor it to specific types of data, or add modules to facilitate analysis of certain data sets. But that flexibility brings complexity. Each module runs a specific version of code, has its own configuration, and may require independent authentication to work in the cluster. Many pieces must work in tandem here to process data, so each requires its own security review.

Some of you reading this are already familiar with the architecture and component stack of a Hadoop cluster. You may be asking, "Why are we going through these basics?". To understand threats and appropriate responses, you need to first understand how all the pieces of the cluster work together. Each component interface is a trust relationship, and each relationship is a target. Each component offers attackers a specific set of potential exploits, and defenders have a corresponding set of options for attack detection and prevention. Understanding architecture and cluster composition is the first step to putting together your security strategy.

Our next post will present several strategies used to secure big data. Each model offers different benefits and requires different supplementary security tools. After selecting a strategy, you can put together a collection of security controls to meet your objectives.

—Adrian Lane

Monday, January 25, 2016

Securing Hadoop: Security Recommendations for NoSQL platforms [New Series]

By Adrian Lane

It’s been three and a half years since we published our research paper on Securing Big Data. That research paper has been one of the more popular papers we’ve ever written. And it’s no wonder as NoSQL adoption was faster than we expected; we see hundreds of new projects popping up, leveraging the scale, analytics and low cost of these platforms. It’s not hyperbole to claim it has revolutionized the database market over the last 5 years, and community support behind these platforms – and especially Hadoop – is staggering.

At the time we wrote that paper, security in Hadoop – much less the other platforms – was something of a barren wasteland. These platforms did not include basic controls for data protection, most third-party tools could not scale along with NoSQL and thus were of little use to developers, and leaders of NoSQL firms directed resources to improving performance and scalability, not security. Heck, in 2012 the version of Hadoop I evaluated did not even require an administrative password!

But when it comes to NoSQL security, and Hadoop specifically, things have changed dramatically. As we advise clients on how to implement security controls, there are many new options to consider. And while there remain some gaps in monitoring and assessment capabilities, Hadoop has (mostly) reached security parity with the relational platforms of old. We can't call it a barren wasteland any longer, so to accurately advise people on approaches and tools to leverage, we can no longer refer them back to that original paper.

So we are kicking off a new research series to refresh this paper. Most of the content will be new, and this time we will do things a little differently than last time. First, we are going to provide less background on what makes NoSQL different from relational databases, as most people in IT are now comfortable with the architectural and functional distinctions between the two. Second, most of our recommendations will still apply to NoSQL platforms in general, but this research will be more focused on Hadoop, as we get a majority of questions on Hadoop security despite dozens of alternatives. Finally, as there are many more aspects to talk about, we'll weave preventative and detective controls into a more operational (i.e., day-to-day management) model for both data and database infrastructure.

Here is how we are laying out the series:

Hadoop Architecture and Assembly — The goal of this post is to succinctly outline what Hadoop and similar styles of NoSQL clusters look like, how they are assembled, and how they are used. In this light we get a better idea of the security challenges and what sort of protections need to be leveraged. Because developers and data scientists stretch system performance and scalability with custom assemblies of open source and commercial products, there really is no such thing as a standard Hadoop deployment. With these considerations in mind we will map out threats to the cluster.

Use Cases & Security Architectures — This post will discuss the strategic considerations for deploying security for big data. Depending upon which model you choose, you change where certain types of threats are addressed, and consequently which tools you will rely upon to provide security. Stated another way, the security model you choose dictates which security technologies you need to prevent and detect threats. Organizations take several approaches to securing Hadoop and other NoSQL clusters: securing the network around the cluster, identity management, maintaining security controls on each node within the cluster, or even taking a data-centric approach to security. We'll go over the major trends we see today, and discuss the advantages and pitfalls of each approach.

Building Security Into the Cluster — Here is where we discuss how all of the pieces fit together. There are many security controls available, and each addresses a specific threat vector an attacker may employ. We'll focus on the security controls you want to build into your cluster from the start: identity, authorization, transport layer security, application security, and data encryption. These are the base controls that define how the cluster should be used from a security standpoint.

Operational Security — Here we will focus on day-to-day security: monitoring the cluster, watching user behavior, and running ongoing security operations. This includes configuration management, patching, logging, monitoring, and node validation. We'll even discuss integrating a DevOps approach to cluster administration to improve speed and consistency.

Commercial Hadoop and NoSQL variants — Hadoop is the dominant flavor of 'big data' in use today. In this section we will discuss what the commercial Hadoop platform vendors are doing to promote security for their customers, with a blend of open source, home-grown, and third-party security product support. There is no reason to roll your own security out of necessity, as commercial variants often add their own products or provide bundles for you. Each offers unique capabilities, and each has a vision of what their customers should focus on, so we will cover some of the current offerings. We will also offer some advice on applying security to non-Hadoop platforms. While Hadoop is the most commonly used platform, there are specialized flavors of NoSQL that are eminently appropriate for certain business challenges and are in wide use. Some even use HDFS or other Hadoop components, which allows the same security controls to be applied across different clusters. We will close out this section by discussing where the security controls covered earlier can be deployed in non-Hadoop environments.

As with our original paper, this is not intended to be an exhaustive look at all potential security options, but to help the IT and development teams who run these clusters get basic security controls in place.

Up next, Hadoop Architecture and Assembly.

—Adrian Lane

The EIGHTH Annual Disaster Recovery Breakfast: Clouds Ahead

By Mike Rothman

DRB 2016

Once again Securosis and friends are hosting our RSA Conference Disaster Recovery Breakfast. It’s really hard to believe this is the eighth year for this event. Regardless of San Francisco’s February weather, we expect to be seeing clouds all week. But we’re happy to help you cut through the fog to grab some grub, drinks, and bacon.

Kidding aside, we are grateful that so many of our friends, clients, and colleagues enjoy a couple hours away from the show that is now the RSAC. By Thursday we’re all disasters, and it’s very nice to have a place to kick back, have some conversations at a normal decibel level, and grab a nice breakfast. Did we mention there will be bacon?

With the continued support of Kulesa Faul, we’re honored to bring two new supporters in this year. If you don’t know our friends at CHEN PR and LaunchTech, you’ll have a great opportunity to say hello and thank them for helping support your habits.

As always the breakfast will be Thursday morning from 8-11 at Jillian’s in the Metreon. It’s an open door – come and leave as you want. We will have food, beverages, and assorted non-prescription recovery items to ease your day. Yes, the bar will be open – Mike has acquired a taste for Bailey’s in his coffee.

Please remember what the DR Breakfast is all about. No marketing, no spin, no t-shirts, and no flashing sunglasses – it’s just a quiet place to relax and have muddled conversations with folks you know, or maybe even go out on a limb and meet someone new. After three nights of RSA Conference shenanigans, we are confident you will enjoy the DRB as much as we do.

See you there.

To help us estimate numbers, please RSVP to rsvp (at) securosis (dot) com.

—Mike Rothman

Security is Changing. So is Securosis.

By Rich

Last week Rich sent around Cockroaches Versus Unicorns: The Golden Age Of Cybersecurity Startups, by Mahendra Ramsinghani over at TechCrunch, for us to read. It isn’t an article every security professional needs to read, but it is certainly mandatory reading for anyone who makes buying decisions, tracks the security market, or is on the investment or startup side.

It also nearly perfectly describes what we are going through as a company.

His premise is that ‘unicorns’ are rare in the security industry. There are very few billion-dollar market cap companies, relative to the overall size of the market. But security companies are better suited to survive downturns and other challenging times. We are basically ‘cockroaches’, which persist through every tech Armageddon, often due to our ability to fall back on services.

Many security startups are not unicorns; rather, they are cockroaches – they rarely die, and  in tough times, they can switch into a frugal/consulting mode. Like cockroaches, they can survive long nuclear winters. Security companies can be capital-efficient, and typically consume ~$40 million to reach break-even. This gives them a survival edge – but VCs are looking for a “growth edge.”

The security market also appears much smaller than it should be considering the market dynamics, although it is very possible that is changing thanks to the hostile world out there. The article also postulates that the entire environment is shifting, with carriers and managed services providers jumping into acquisitions while large established players struggle.

Yet most of the startups VCs see are just more of the same, fail to differentiate, and rely far too much on really poor FUD-based sales dynamics.

With increasing hacks, the CISO’s life has just become a lot messier. One CISO told me, “Between my HVAC vendor and my board of directors, I am stretched. And everyday I get a hundred LinkedIn requests from vendors. Their FUD approach to security sales is exhausting.”

And

“I have seen at least 40 FireEye killers in the past 12 months,” one Palo Alto-based VC told me. Clearly he was exhausted. Some sub-sectors are overheated and investors are treading cautiously.

We certainly see the same thing. How many threat intel and security analytics startups does the industry need? We get a few briefing requests a week, each from another new company doing exactly the same things. And all our CISO friends hate vendor sales techniques. These senior security folks get upwards of 500 emails and 100 phone calls a week from sales people trying to get meetings. All this security crap looks the same.

This combination inevitably leads to a contraction of seed capital, and that is where our story starts.

DisruptOPS

Most of you have noticed that over the past few years our research has skewed strongly toward cloud security, automation, and DevOps. This started with our initial partnership with the Cloud Security Alliance to build out the CCSK training class around 6 years ago. Rich had to create all the hands-on labs, which augered him down the rabbit hole of Amazon Web Services, OpenStack, Azure, and all the supporting tools.

As analysts we like to think it's our job to have a good sense of what's coming down the road. We made a bet on the cloud and it paid off, transitioning our cloud work from a hobby that generated beer money into a major source of ongoing revenue. It also opened us up to a wider client base, especially among end-user organizations.

Three years ago Rich realized that in all his cloud security engagements, and all the classes we taught, we heard the same problems over and over. The biggest unsolved problem seemed to be cloud security automation. The next year was spent writing some proof-of-concept code merely to support conference presentations because there were no vendor examples, but at every talk attendees kept asking for “more… faster”.

This demand became too great to ignore, and nearly 2 years ago we decided to start building our own platform. And we did … we built our own cloud security platform. Don’t worry, we don’t have anything to sell you – this is where Ramsinghani’s article comes in.


Our initial plan was to self-fund development (Securosis is an awesome business) until we had a solid demo/prototype. Then we assumed it would be easy to get seed cash from some of our successful friends and build a new company in parallel with Securosis to focus on the product. We didn't just want to start up a software company and jettison Securosis because our research is an essential driver to maintain differentiation, and we wanted to build the company without going the traditional VC route.

We also have some practical limitations on how we can do things. We are older, have families to support, and have deep roots where we live that preclude relocation. The analogy we use is that we can’t go back to eating ramen for dinner every night in a coding flophouse. The demo killed when we showed it to people, we are really smart, and people like us. Our future was bright.

Then we got hit with the reality clue bat. Everything was looking awesome last year at RSA when we started showing people and talking to investors. By summer all our options fell apart. We didn’t fit the usual model. We weren’t going to move to the Bay Area. We couldn’t take pay cuts to ‘normal’ founder levels and still support our families. And to be honest, we still didn’t want to go the normal VC route. We just weren’t going to play that game, given the road rash both Mike and Adrian have from earlier in their careers.

Just like the article said, we couldn’t find seed funding. At least not the way we wanted to build the company. We even had a near-miss on an acquisition, but we couldn’t line everything up to hit everyone’s goals and expectations.

Yet while this all went on, the Securosis business you see every day continued to boom. We increased revenue despite all the distractions and opportunity cost of running a second company. Our services and research continued to drive toward the cloud and automation, exactly as Ramsinghani described. Even the product platform continued to come together well, despite our super limited resources.

Securosis 2.0

We weren’t going to talk about any of this yet, but that article struck too close to home. It described exactly what we have been seeing on the analyst side, and also experiencing as we tried to build a separate company.

First of all we aren’t discarding our core business or customers, but we are most definitely changing direction. Our biggest area of growth has been our cloud security workshops, training, and project/architecture assessments. We barely even talk about them, but they sell like crazy. We’ve spent 6+ years working hands-on in the cloud, and it’s paying off. We spent 3 years focusing on automation and DevOps, and that is also now part of almost all our engagements.

So that’s going to be our new focus. Cloud security and supporting automation, and DevOps tools and techniques. But there are only 24 hours in a day, so we are backing off some of our other research to focus.

We don’t know exactly what this will look like or how quickly we will be able to shift our focus, but we should have our first pass of the new workshops ready to reveal pretty soon, plus another major partnership. We are also looking at options for local events and a new membership program, and have already started new kinds of research. We aren’t changing our spots. A lot of our research will remain free; some will probably be tied into one of our other projects. Nothing changes for existing customers. We will also rebrand to reflect the new focus. But we will keep the Securosis name in some form – we’re attached to it.

We’ll use our automation and orchestration platform, Trinity, as a research tool to test our hare-brained ideas about how cloud security and automation should happen. As we continue to build out its capabilities (we need them for some of our projects), we hope Trinity will interest our research clients in some capacity. It’s not the first time a security services shop has built a product to help them deliver better services cheaper and faster. We call that operation “Securosis Labs” for now.

We have been the security research shop that has been most vociferous and aggressive about how the cloud is going to change everything we know about the technology business and securing it. We're putting our money where our mouth is: this is so clearly where the world is headed that we would be idiots not to jump on it.

Now is the time. It’s time to grow the company beyond what 3 guys in coffee shops can deliver. It’s time to put into practice everything we have learned about the new world order. It’s time to lead organizations through what will be a turbulent ride into the clouds. It’s time for Securosis 2.0. We’re very fired up, and we ask you to stay tuned as we figure out and announce what this will look like over the next few months.

—Rich

Wednesday, January 20, 2016

Incite 1/20/2016 — Ch-ch-ch-ch-changes

By Mike Rothman

I have always gotten great meaning from music. I can point back to times in my life when certain songs totally resonate. Like when I was a geeky teen and Rush’s Signals spoke to me. I saw myself as the awkward kid in Subdivisions who had a hard time fitting in. Then I went through my Pink Floyd stage in college, where “The Wall” dredged up many emotions from a challenging childhood and the resulting distance I kept from people. Then Guns ‘n Roses spoke to me when I was partying and raging, and to this day I remain shocked I escaped largely unscathed (though my liver may not agree).

But I never really understood David Bowie. I certainly appreciated his music. And his theatrical nature was entertaining, but his music never spoke to me. In fact I’m listening to his final album (Blackstar) right now and I don’t get it. When Bowie passed away last week, I did what most people my age did. I busted out the Ziggy Stardust album (OK, I searched for it on Apple Music and played it) and once again gained a great appreciation for Bowie the musician.

Bowie Changes

Then I queued up one of the dozens of Bowie Greatest Hits albums. I really enjoyed reconnecting with Space Oddity, Rebel Rebel, and even some of the songs from “Let’s Dance”, if only for nostalgia’s sake. Then Changes came on. I started paying attention to the lyrics.

Ch-ch-ch-ch-changes (Turn and face the strange)
Ch-ch-changes
Don't want to be a richer man
Ch-ch-ch-ch-changes (Turn and face the strange)
Ch-ch-changes
Just gonna have to be a different man
Time may change me
But I can't trace time

– David Bowie, "Changes"

I felt the wave of meaning wash over me. Changes resonates for me at this moment in time. I mean really resonates. I’ve alluded that I have been going through many changes in my life the past few years. A few years ago I reached a crossroads. I remembered there are people who stay on shore, and others who set sail without any idea what lies ahead. Being an explorer, I jumped aboard the SS Uncertain, and embarked upon the next phase of my life.

Yet I leave shore today a different man than 20 years ago. As the song says, time has changed me. I have more experience, but I’m less jaded. I’m far more aware of my emotions, and much less judgmental about the choices others make. I have things I want to achieve, but no attachment to achieving them. I choose to see the beauty in the world, and search for opportunities to connect with people of varied backgrounds and interests, rather than hiding behind self-imposed walls. I am happy, but not satisfied, because there is always another place to explore, more experiences to have, and additional opportunities for growth and connection.

Bowie is right. I can't trace time and I can't change what has already happened. I've made mistakes, but I have few regrets. I have learned from it all, and I take those lessons with me as I move forward. I do find it interesting that as I complete my personal transformation, it's time to evolve Securosis. You'll learn more about that next week, but it underscores the same concept. Ch-ch-ch-ch-changes. Nothing stays the same. Not me. Not you. Nothing. You can turn and face the strange, or you can rue days gone by from your chair on the shore.

You know how I choose.

–Mike

Photo credit: “Chchchange” from Cole Henley


The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour. Your emails, alerts, and Twitter timeline will be there when you get back.


Securosis Firestarter

Have you checked out our video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.


Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

SIEM Kung Fu

Building a Threat Intelligence Program

Network Security Gateway Evolution

Recently Published Papers


Incite 4 U

  1. Everyone is an insider: Since advanced threat detection is still very shiny, it's not a surprise that attention has swung back to the insider threat. It seems that every 4-5 years people remember that insiders have privileged access and can steal things if they so desire. About the same time, some new technology appears that promises to identify those malicious employees and save your bacon. Then it turns out finding the insiders is hard and everyone focuses on the latest shiny attack vector. Of course, the reality is that regardless of whether the attack starts externally or internally to your network, at some point the adversary will gain presence in your environment. Therefore they are insiders, regardless of whether they are on your payroll or not. This NetworkWorld Insider (no pun intended and the article requires registration) does a decent job of giving you some stuff to look for when trying to find insider attacks. But to be clear, these are good indicators of any kind of attack, not just insider attacks. Looking for DNS traffic anomalies, data flows around key assets, and tracking endpoint activity are good tips. And things you should already be doing… – MR

  2. Scarecrow has a brain: On first review, Gary McGraw’s recent post on 7 Myths of Software Security best practices set off my analyst BS detector. Gary is about as knowledgeable as anyone in the application security space, but the ‘Myths’ struck me as straw man arguments; these are not the questions customers are asking. But when you dig in, you realize that the ‘Myths’ accurately reflect how companies act. All too often IT departments fail to comprehend security requirements, and software developers taking their first steps in security fall into these traps. They focus on one aspect of a software security program – maybe a pen test – not understanding that security needs to touch every facet of development. Application security is not a bolt-on ‘thing’, but a systemic commitment to delivering secure software as a whole. If you’re starting a software security program, this is recommended reading. – AL

  3. What’s next, the Triceratops Attack? Yes, I’m poking fun at steganography, but pretty much every sophisticated attack (and a lot of unsophisticated ones) entails hiding malicious code in seemingly innocuous files through this technique. So you might as well learn a bit about it, right? This pretty good overview by Nick Lewis on SearchSecurity (registration required here too, ugh) describes how steganography has become commonplace. With an infinite number of places to hide malicious code, we always come back to the need to monitor devices and activity to find signs of attack. Sure, you should try to prevent attacks. But, as we’ve been saying for years, it’s also critical to increase investment in detection, because attackers are getting better at hiding attacks in plain sight. – MR

  4. Winning: Jeremiah Grossman has a good succinct account of the ad-blocking wars, capturing the back and forth between ad-tech and personal blocker technologies. He also nails the problem people outside security are not fully aware of, that “the ad tech industry behaves quite similarly to the malware industry, with both the techniques and delivery” and – just like malware – advertisers want to pwn your browser. I guess you could make a case that most endpoint security packages are rootkits, but I digress. I do disagree with his conclusion that “ad tech” will win, though. Many of us are fine with not getting content that requires registration, having our personal data siphoned off and sold, or paying for crap. With so many voices on the Internet you can usually find the same (or better) content elsewhere. Trackers and scripts are just another indication that a site does not have your best interests at heart. So yes, you can win… if you choose to. – AL

  5. Increasing the security of your (Mac): As a long-time security person, I kind of forget the basics. Sure I write about fundamentals from time to time on the blog, but what about the simple stuff we do by habit? That’s the stuff that our friends and family need to do and see. Some understand because they have been around folks like us for years. Others depend on you to configure and protect their devices. Being the family IT person is OK, but it can get tiring. So you can thank Costin Raiu for documenting some good consumer hygiene tactics on the Kaspersky blog. Yes, this is obvious stuff, but probably only to you. Yes, it’s allegedly Mac-focused. But the tactics apply to Windows PCs as well. And we can debate how useful so-called security solutions are. Yet that’s nitpicking. You can’t stop every attack (duh!), but you (and the people you care about) don’t need to be low-hanging fruit for attackers either. – MR

—Mike Rothman