Friday, June 19, 2015

Summary: I Am Now a Security Risk

By Rich

Rich here,

Yep, it looks very likely my personal data is now in the hands of China, or someone pretending to be China, or someone who wants it to look like China. While I can’t go into details, as many of you know I’ve done things with the federal government related to my rescue work. It isn’t secret or anything, but I never feel comfortable talking specifics because it’s part-time and I’m not authorized to represent any agency.

I haven’t been directly notified, but I have to assume that any of my records OPM had, someone… else… has. To be honest, based on what details have come out, I’d be surprised if it wasn’t multiple someone elses – this level of nation-state espionage certainly isn’t limited to any one country.

Now, on the upside, if I lose my SSN, I have it backed up overseas. Heck, I’m really bad at keeping copies of all my forms, which I seem to have to resubmit every few years, so hopefully whoever took them will set up a help desk I can call to request copies. I’d pay not to have to redo that stuff all over.

Like many of you, my data has been breached multiple times. The worst so far was the student health service at the University of Colorado, because I know my SSN and student medical records were in that one (mostly sprained ankles and a bad knee, if you were wondering – nothing exciting). That one didn’t seem to go anywhere, but the OPM breach is more serious. There is a lot more info than my SSN in there, including things like my mother’s maiden name.

This will hang over my head for the rest of my life. Long beyond the 18 months of credit monitoring I may or may not receive. I’m not worried about a foreign nation mucking with my credit, but they may well have enough to compromise my credentials for a host of services. Not by phishing me, but by walking up the long chain of identity and interconnected services until they can line up the one they want.

I am now officially a security risk for any organization I work with. Even mine.

And now on to the Summary…

We are deep into the summer, with large amounts of personal and professional travel, so this week’s will be a little short – and you probably already noticed we’ve been a bit inconsistent. Hey, we have lives, ya know!

Webcasts, Podcasts, Outside Writing, and Conferences

Securosis Posts

Research Reports and Presentations

Top News and Posts

—Rich

Threat Detection Evolution: Data Collection

By Mike Rothman

The first post in this series set the stage for the evolution of threat detection. Now that we’ve made the case for why detection must evolve, let’s work through the mechanics of what that actually means. It comes down to two functions: security data collection, and analytics of the collected data. First we’ll go through what data is helpful and where it should come from.

Threat detection requires two main types of security data. The first is internal data, security data collected from your devices and other assets within your control. It’s the stuff the PCI-DSS has been telling you to collect for years. Second is external data, more commonly known as threat intelligence. But here’s the rub: there is no useful intelligence in external threat data without context for how that data relates to your organization. But let’s not put the cart before the horse. We need to understand what security data we have before worrying about external data.

Internal Data

You’ve likely heard a lot about continuous monitoring because it is such a shiny and attractive term to security types. The problem we described in Vulnerability Management Evolution is that ‘continuous’ can have a bunch of different definitions, depending on who you are talking to. We have a rather constructionist view (meaning, look at the dictionary) and figure the term means “without cessation.” But in many cases, monitoring assets continually doesn’t really add much value over regular and reliable daily monitoring.

So we prefer consistent monitoring of internal resources. That may mean truly continuous for high-profile assets at great risk of compromise. Or possibly every week for devices/servers that don’t change much and don’t access high-value data. But the key here is to be consistent about when you collect data from resources, and to ensure the data is reliable.

There are many data sources you might collect from for detection, including:

  • Logs: The good news is that pretty much all your technology assets generate logs in some way, shape, or form. Whether it’s a security or network device, a server, an endpoint, or even mobile. Odds are you can’t manage to collect data from everything, so you’ll need to choose which devices to monitor, but pretty much all devices generate log data.
  • Vulnerability Data: When trying to detect a potential issue, knowing which devices are vulnerable to what can be important for narrowing down your search. If you know a certain attack targets a certain vulnerability, and you only have a handful of devices that haven’t been patched to address the vulnerability, you know where to look.
  • Configuration Data: Configuration data yields similar information to vulnerability data for providing context to understand whether a device could be exploited by a specific attack.
  • File Integrity: File integrity monitoring provides important information for figuring out which key files have changed. If a system file has been tampered with outside of an authorized change, it may indicate nefarious activity and should be checked out.
  • Network Flows: Network flow data can identify patterns of typical (normal) network activity, which enables you to look for patterns which aren’t exactly normal and could represent reconnaissance, lateral movement, or even exfiltration.
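
To make the flow-baselining idea concrete, here is a minimal sketch in Python. The field names (`src`, `bytes`) and the fixed-multiple threshold are invented for illustration; real flow records come from NetFlow/IPFIX collectors and real analytics use far more sophisticated statistics.

```python
from collections import defaultdict

def baseline_flows(history):
    """Compute average outbound bytes per source host from historical
    flow records, each a dict like {"src": "10.0.0.5", "bytes": 1200}."""
    totals = defaultdict(list)
    for flow in history:
        totals[flow["src"]].append(flow["bytes"])
    return {src: sum(vals) / len(vals) for src, vals in totals.items()}

def flag_anomalies(new_flows, baseline, factor=10):
    """Flag flows whose volume exceeds the host's baseline by a large
    factor, a crude signal for possible exfiltration. Hosts with no
    baseline at all are flagged too, since they have never been seen."""
    flagged = []
    for flow in new_flows:
        expected = baseline.get(flow["src"])
        if expected is None or flow["bytes"] > factor * expected:
            flagged.append(flow)
    return flagged
```

The point isn’t the math; it’s that flow data only becomes useful once you have a consistent baseline to compare against.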

Once you decide what data to collect, you have to figure out from where and how much. This involves selecting logical collection points and deciding where to aggregate the data, which depends on the architecture of your technology stack. Many organizations opt for a centralized aggregation point to facilitate end-to-end analysis, but that is contingent on the size of the organization. Large enterprises may not be able to handle the scale of collecting everything in one place, and should consider some kind of hierarchical collection/aggregation strategy where data is stored and analyzed locally, and then a subset of the data is sent upstream for central analysis.
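
The hierarchical strategy can be sketched as a toy model (names and the severity-based forwarding rule are ours for illustration; real deployments use log shippers and tiered SIEM instances):

```python
class LocalCollector:
    """Keeps full-fidelity data locally; forwards only a subset upstream."""
    def __init__(self, central, severity_threshold=7):
        self.events = []                # local store: everything
        self.central = central          # central aggregation point (a list here)
        self.threshold = severity_threshold

    def ingest(self, event):
        self.events.append(event)       # full fidelity kept locally
        if event["severity"] >= self.threshold:
            self.central.append(event)  # only the interesting subset goes upstream
```

The design choice is the same one described above: analyze locally where the volume lives, and send upstream only what central analysis actually needs.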

Finally, we need to mention the role of the cloud in collection and aggregation, because almost everything is being offered either in the cloud or as a Service nowadays. The reality is that cloud-based aggregation and analysis depend on a few things. The first is the amount of data. Moving logs or flow records is not a big deal because they are pretty small and highly compressible. Moving network packets is a much larger endeavor, and hard to shift to a cloud-based service. The other key determinant is data sensitivity – some organizations are not comfortable with their key security data outside their control in someone else’s data center/service. That’s an organizational and cultural issue, but we’ve seen a much greater level of comfort with cloud-based log aggregation over the past year, and expect it to become far more commonplace inside a 2-year planning horizon.

The other key aspect of internal data collection is integration and normalization of the data. Different data sources have different data formats, which creates a need to normalize data to compare datasets. That involves compromise in terms of granularity of common data formats, and can favor an integrated approach where all data sources are already integrated into a common security data store. Then you (as the practitioner) don’t really need to worry about making all those compromises – instead you can bet that your vendor or service provider has already done the work.
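
As a rough illustration of the compromise normalization involves, the sketch below (Python, with invented field names; real sources emit formats like syslog or CEF) maps events from two hypothetical sources into one common schema. Note what happens to granularity: anything outside the common fields is simply dropped.

```python
def normalize_event(raw, source):
    """Map a raw event dict into a common schema of (ts, src_ip, action).
    Source-specific fields outside the schema are lost in translation."""
    if source == "firewall":
        return {"ts": raw["time"], "src_ip": raw["src"], "action": raw["act"]}
    elif source == "endpoint":
        return {"ts": raw["timestamp"], "src_ip": raw["host_ip"],
                "action": raw["event_type"]}
    else:
        raise ValueError(f"no mapping defined for source: {source}")
```

This lost granularity is exactly why an integrated security data store that retains the original records alongside the normalized view can be preferable to normalize-and-discard.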

Also consider the availability of resources for dealing with these disparate data sources. The key issue, mentioned in the last post, remains the skills shortage; so starting a data aggregation/collection effort that depends on skilled resources to manage normalization and integration of data may not be the best idea. This doesn’t really have much to do with the size of the organization – it’s really about the sophistication of staff – security data integration is an advanced function that can be beyond even large organizations with less mature security efforts.

Ultimately your goal is visibility into your entire technology infrastructure. An end-to-end view of what’s happening in your environment, wherever your data is, gives you a basis for evolving your detection capabilities.

External Data

We have published a lot of research on threat intel to date, most recently a series on Applied Threat Intelligence, which summarized the three main use cases we see for external data.

There are plenty of sources of external data nowadays. The main types are:

  • Commercial integrated: It seems every security vendor has a research group providing some type of intelligence. This data is usually very tightly integrated into the product or service you buy from the vendor. There may be a separate charge for intelligence, beyond the base cost of the product or service.
  • Commercial standalone: Standalone threat intel is an emerging security market. These vendors typically offer an aggregation platform to collect external data and integrate it into your controls and monitoring systems. Some also gather industry-specific data because attacks tend to cluster in specific industries.
  • ISAC: Information Sharing and Analysis Centers are industry-specific organizations that aggregate data from an industry and share it among members. The best known ISAC is for the financial industry, although many other industry associations are spinning up their own ISACs as well.
  • OSINT: Finally there is open source intel, publicly available sources for things like malware samples and IP reputation, which can be queried and/or have intel integrated directly into user systems.

How does this external data play into an evolved threat detection capability? As mentioned above, external data without context isn’t very helpful. You don’t know which of the alerts or notifications apply to your environment, so you just create a lot of extra work to figure it out. And the idea is not to create more work.

How can you provide that context? Use the external threat data to look for specific instances of an attack. As we described in Threat Intelligence and Security Monitoring, you can use indicators from other attacks to pinpoint that activity in your network, even if you’ve never seen the attack before. Historically you were restricted to only alerting on conditions/correlations you knew about, so this is a big deal.

To use a more tangible example, let’s refer back to the concept of retrospection. Say you didn’t know about a heretofore unknown attack (like Duqu 2.0), and then received a set of indicators from a threat intelligence provider. You could then look for those indicators within your network. Even if you don’t find that specific attack immediately, you could set your monitoring system (typically a SIEM or an IPS) to watch for those indicators. Basically you can jump time, looking for attacks that haven’t happened to you – yet.
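
The retrospection idea fits in a few lines. Assuming (our illustration, not any specific SIEM’s API) that indicators arrive as a set of observables (IPs, hashes, domains) and each stored event carries the observables seen in it, the historical search looks like:

```python
def retrospective_search(events, indicators):
    """Return (event id, matched indicators) for every stored event that
    contains at least one observable from a newly received intel feed."""
    hits = []
    for event in events:
        matched = indicators & set(event.get("observables", ()))
        if matched:
            hits.append((event["id"], matched))
    return hits
```

The same indicator set would also be loaded into the monitoring system as a watch rule, covering both the past (stored events) and the future (new traffic).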

Default to the Process

As usual, it all comes back to process. We mapped that out in TI+SM and Threat Intelligence and Incident Response. You need a process to procure, collect, and utilize threat intelligence, from whatever sources it comes.

Then use external data as triggers or catalysts to mine your internal data using advanced analytics, to see if you find those indicators in your network. We’ll dig into that part of the process next.

—Mike Rothman

Thursday, June 11, 2015

My 2015 Personal Security Guiding Principles and the New Rand Report

By Rich

In 2009, I published My Personal Security Guiding Principles. They hold up well, but my thinking has evolved over six years. Some due to personal maturing, and a lot due to massive changes in our industry.

It’s time for an update. The motivation today comes thanks to Juniper and Rand. I want to start with my update, so I will cover the report afterwards.

Here is my 2015 version:

  1. Don’t expect human behavior to change. Ever.
  2. Simple doesn’t scale.
  3. Only economics really changes security.
  4. You cannot eliminate all vulnerabilities.
  5. You are breached. Right now.

In 2009 they were:

  1. Don’t expect human behavior to change. Ever.
  2. You cannot survive with defense alone.
  3. Not all threats are equal, and all checklists are wrong.
  4. You cannot eliminate all vulnerabilities.
  5. You will be breached.

The big changes are dropping numbers 2 and 3. I think they still hold true, and they would now come in at 6 and 7 if I weren’t trying to keep to 5 total. The other big change is #5, which was “You will be breached” and is now “You are breached.”

Why the changes? I have always felt economics is what really matters in inciting security change, and we have more real-world examples showing that it’s actually possible. Take a look at Apple’s iOS security, Amazon Web Services, Google, and Microsoft (especially Windows). In each case we see economic drivers creating very secure platforms and services, and keeping them there.

Want to fix security in your organization? Make business units and developers pay the costs of breaches – don’t pay for them out of central budget. Or at least share some liability.

As for simple… I’m beyond tired of hearing how “If company X just did Y basic security thing, they wouldn’t get breached that particular way this particular time.” Nothing is simple at scale; not even the most basic security controls. You want secure? Lock things down and compartmentalize to the nth degree, and treat each segment like its own little criminal cell. It’s expensive, but it keeps groups of small things manageable. For a while.

Lastly, let’s face it, you are breached. Assume the bad guys are already behind your defenses and then get to work. Like one client I have, who treats their entire employee network as hostile, and makes them all VPN in with MFA to connect to anything.

Motivated by Rand

The impetus for finally writing this up is a Rand report sponsored by Juniper. I still haven’t gotten through the entire thing, but it reads like a legitimate critical analysis of our entire industry and profession from the outside, not the usual introspection or vendor-driven nonsense FUD.

Some choice quotes from the summary:

  • Customers look to extant tools for solutions even though they do not necessarily know what they need and are certain no magic wand exists.
  • When given more money for cybersecurity, a majority of CISOs choose human-centric solutions.
  • CISOs want information on the motives and methods of specific attackers, but there is no consensus on how such information could be used.
  • Current cyberinsurance offerings are often seen as more hassle than benefit, useful in only specific scenarios, and providing little return.
  • The concept of active defense has multiple meanings, no standard definition, and evokes little enthusiasm.
  • A cyberattack’s effect on reputation (rather than more-direct costs) is the biggest cause of concern for CISOs. The actual intellectual property or data that might be affected matters less than the fact that any intellectual property or data are at risk.
  • In general, loss estimation processes are not particularly comprehensive.
  • The ability to understand and articulate an organization’s risk arising from network penetrations in a standard and consistent manner does not exist and will not exist for a long time.

Most metrics? Crap. Loss metrics? Crap. Risk-based approaches? All talk. Tools? No one knows if they work. Cyberinsurance? Scam.

Overall conclusion? A marginally functional shitshow.

Those are my words. I’ve used them a lot over the years, but this report lays it out cleanly and clearly. It isn’t that we are doing everything wrong – far from it – but we are stuck in an endless cycle of blocking and tackling, and nothing will really change until we take a step back.

Personally I am quite hopeful. We have seen significant progress over the past decade, and I feel like we are at an inflection point for change and improvement.

—Rich

Incite 6/10/2015: Twenty Five

By Mike Rothman

This past weekend I was at my college reunion. It’s been twenty five years since I graduated. TWENTY FIVE. It’s kind of stunning when you think about it. I joked after the last reunion in 2010 that the seniors then were in diapers when I was graduating. The parents of a lot of this year’s seniors hadn’t even met. Even scarier, I’m old enough to be their parent. It turns out a couple friends who I graduated with actually have kids in college now. Yeah, that’s disturbing.

It was great to be on campus. Life is busy, so I only see some of my college friends every five years. But it seems like no time has passed. We catch up about life and things, show some pictures of our kids, and fall right back into the friendships we’ve maintained for almost thirty years. Facebook helps people feel like they are still in touch, but we aren’t. Facebook isn’t real life – it’s what you want to show the world. Fact is, everything changes, and most of that you don’t see. Some folks have been through hard times. Others are prospering.

[Photo: Dunbar’s, Ithaca NY]

Even the campus has evolved significantly over the past five years. The off-campus area is significantly different. Some of the buildings, restaurants, and bars have the same names, but they aren’t the same. One of our favorite bars, Rulloff’s, shut down a few years back. It was recently reopened and looked pretty much the same. But it wasn’t. They didn’t have Bloody Marys on Thursday afternoon. The old Rulloff’s would have had gallons of Bloody Mary mix ready for reunion, because that’s what many of us drank back in the day. The new regime had no idea. Everything changes.

Thankfully a bar called Dunbar’s was alive and well. They had a drink called the Combat, which was the root cause of many a crazy night during college. It was great to go into D-bars and have it be pretty much the same as we remembered. It was a dump then, and it’s a dump now. We’re trying to get one of our fraternity brothers to buy it, just to make sure it remains a dump. And to keep the Combats flowing.

It was also interesting to view my college experience from my new perspective. Not to overdramatize, but I am a significantly different person than I was at the last reunion. I view the world differently. I have no expectations for my interactions with people, and am far more accepting of everyone and appreciative of their path. Every conversation is an opportunity to learn, which I need. I guess the older I get, the more I realize I don’t know anything.

That made my weekend experience all the more gratifying. The stuff that used to annoy me about some of my college friends was no longer a problem. I realized it has always been my issue, not theirs. Some folks could tell something was different when talking to me, and that provided an opportunity to engage at a different level. Others couldn’t, and that was fine by me; it was fun to hear about their lives.

In 5 years more stuff will have changed. XX1 will be in college herself. All of us will undergo more life changes. Some will grow, others won’t. There will be new buildings and new restaurants. And I’ll still have an awesome time hanging out in the dorms until the wee hours drinking cocktails and enjoying time with some of my oldest friends. And drinking Combats, because that’s what we do.

–Mike

Photo credit: “D-bars” taken by Mike in Ithaca NY


The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. Take an hour and check it out on YouTube. Your emails, alerts, and Twitter timeline will be there when you get back.


Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.


Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Threat Detection Evolution

Network-based Threat Detection

Applied Threat Intelligence

Network Security Gateway Evolution

Recently Published Papers


Incite 4 U

  1. Vulnerabilities are not intrusions: Richard Bejtlich is a busy guy. As CSO of FireEye, his day job keeps him pretty busy, as do all his external responsibilities glad-handing big customers. So when he writes something on his personal blog you know he’s pissed off. And he’s really pissed that parties within the US federal government don’t seem to understand the difference between vulnerabilities and intrusions. In the wake of the big breach at the Office of Personnel Management (yeah, the Fed HR department), people are saying that the issue was the lack of implementation of CDM (Continuous Diagnostics and Mitigation). But that just tells you what’s vulnerable, and we all know that’s not a defense against advanced adversaries. Even the lagging Einstein system would have had limited success, but at least it’s focusing on the right stuff: who is in your network. Richard has been one of the most fervent evangelists for hunting for adversaries, and his guidance is pretty straightforward: “find the intruders in the network, remove them, and then conduct counter-intrusion campaigns to stop them from accomplishing their mission when they inevitably return.” Easier said than done, of course. But you will never get there if your answer is a vulnerability management program. – MR

  2. De-Googled: The Internet is a means for people to easily find information, but many large firms use the Internet to investigate you, and leverage it to monitor pretty much everything users do online. Every search, every email, every purchase, every blog comment, all the time – from here to eternity. I know a lot of privacy advocates who read the blog. Heck, I talk to many of them at security conferences, and read their comments on the stuff we post. If that’s you, a recent post from ExpressVPN on How to delete everything Google knows about you should be at the top of your reading list. It walks you through a process to collect and then delete your past Google history. I can’t vouch for the accuracy of the steps – frankly I am too busy to try it out – but it’s novel that Google provided the means, and someone has documented the obfuscated steps to delete your history. Bravo! Of course if you continue to use the embedded Google search bar, or Google+, or Gmail, or any of the other stuff Google offers, you will still be tracked. – AL

  3. What point are you trying to make? There have always been disagreements over the true cost of a lost data record. Ponemon has been publishing numbers in the hundreds of dollars per record for years (this year’s number was $350), and Verizon Business recently published a $0.58 number in the 2015 DBIR. So CSO asks: is it $350 or $0.58? The answer is neither. There is no standard cost. There is only what it costs you, and how much you want to bury in that number to create FUD internally. Ponemon includes pretty much everything (indirect costs) and then some. Verizon includes pretty much nothing and bases their numbers on insurance claims, which can be supported by objective data. Security vendors love Ponemon’s numbers. Realists think Verizon’s are closer. Again, what are you trying to achieve? If it’s to scare the crap out of the boardroom, Ponemon is your friend. If it’s to figure out what you’ll get from your cyber-insurance policy, you need the DBIR. As we have always said, you can make numbers dance and tell whatever story you want them to. Choose wisely. – MR

  4. Barn door left open: Apache ZooKeeper is a configuration management and synchronization tool commonly used in Hadoop clusters. It’s a handy tool to help you manage dynamic databases, but it moves critical data between nodes, so the privacy and integrity of its data are critical to safe and secure operations. Evan Gilman of PagerDuty posted a detailed write-up of a ZooKeeper session encryption bug found in an Intel extension to Linux kernel modules and Xen hypervisors, which essentially disables checksums. In a nutshell, the Intel support for AES within the aesni-intel encryption module, which is used for VPNs and SSL traffic, will – under certain circumstances – disable checksums on the TCP headers. That’s no bueno. The bug should be simple to fix, but at this time there is no patch from Intel. Thanks to the guys at PagerDuty for taking the time to find and document this bug for the rest of us! – AL

  5. Cyber all the VC things…: Mary Meeker survived the Internet bubble as the Internet’s highest profile stock analyst, and then moved west to work with VC big shots Kleiner Perkins. She still writes the annual Internet Trends report and this year security has a pretty prominent place. Wait, what? So, in case you were wondering whether security is high-profile enough, it is. We should have been more careful about what we wished for. She devoted two pages to security in the report. Of course her thoughts are simplistic (Mobile devices are used to harvest data and insiders cause breaches. Duh.) and possibly even wrong. (Claiming MDM is critical for preventing breaches. Uh, no.) But she pinpoints the key issue: the lack of security skills. She is right on the money with that one. Overall, we should be pleased with the visibility security is getting. And it’s not going to stop any time soon. – MR

—Mike Rothman

Wednesday, June 10, 2015

Threat Detection Evolution: Why Evolve? [New Series]

By Mike Rothman

As we discussed recently in Network-based Threat Detection, prevention isn’t good enough any more. Every day we see additional proof that adversaries cannot be reliably stopped. So we have started to see the long-awaited movement of focus and funding from prevention, to detection and investigation. That said, for years security practitioners have been trying to make sense of security data to shorten the window between compromise and detection – largely unsuccessfully.

Not to worry – we haven’t become the latest security Chicken Little, warning everyone that the sky is falling. Mostly because it fell a long time ago, and we have been working to pick up the pieces ever since. It can be exhausting to chase alert after alert, never really knowing which are false positives and which indicate real active adversaries in your environment. Something has to change – it is time to advance the practice of detection, to provide better and more actionable alerts. This requires thinking more broadly about detection, and starting to integrate the various different security monitoring systems in use today.

So it’s time to bring our recent research on detection and threat intelligence together within the context of Threat Detection Evolution. As always, we are thankful that some forward-looking organizations see value in licensing our content to educate their customers. AlienVault plans to license the resulting paper at the conclusion of the series, and we will build the content using our Totally Transparent Research methodology.

(Mostly) Useless Data

There is no lack of security data. All your devices stream data all the time. Network devices, security devices, servers, and endpoints all generate a ton of log data. Then you collect vulnerability data, configuration data, and possibly network flows or even network packets. You look for specific attacks with tools like intrusion detection devices and SIEM, which generate lots of alerts.

You probably have all this security data in a variety of places, with separate policies to generate alerts implemented within each monitoring tool. It’s hard enough to stay on top of a handful of consoles generating alerts, but when you get upwards of a dozen or more, getting a consistent view of your environment isn’t really feasible.

It’s not that all this data is useless. But it’s not really useful either. There is value in having the data, but you can’t really unlock its value without performing some level of integration, normalization, and analytics on the data. We have heard it said that finding attackers is like finding a needle in a stack of needles. It’s not a question of whether there is a needle there – you need to figure out which needle is the one poking you.

This amount of traffic and activity generates so much data that it is trivial for adversaries to hide in plain sight, obfuscating their malicious behavior in a morass of legitimate activity. You cannot really figure out what’s important until it’s too late. And it’s not getting easier – cloud computing and mobility promise to disrupt the traditional order of how technology is delivered and information is consumed by employees, customers, and business partners, so there will be more data and more activity to further complicate threat detection.

Minding the Store…

In the majority of our discussions with practitioners, sooner or later we get around to the challenge of finding skilled resources to implement the security program. It’s not a funding thing – companies are willing to invest, given the high profile of threats. The challenge is resource availability, and unfortunately there is no easy fix. The security industry is facing a large enough skills gap that there is no obvious answer.

Why is it so hard to find security practitioners? What are the constraints on training more people to do security? It’s actually pretty counterintuitive, because security isn’t a typical job. It’s hard for a n00b to come in and be productive in their first couple of years. Even those with formal (read: academic) training in security disciplines need a couple years of operational experience before they start to become productive. And a particular mindset is required to handle a job where true success is a myth. It’s not a matter of whether an organization will be breached – it’s when, and that is hard for most people to deal with day after day.

Additionally, if your organization is not a Global 1000 company or major consulting firm, finding qualified staff is even harder, because you have many of the same problems as a large enterprise, but far less budget and far fewer skilled people to solve them with.

Clearly what we are doing is insufficient to address the issue moving forward. So we need to look at the problem differently. It’s not a challenge that can be fixed by throwing people at it, because there aren’t enough people. It’s not a challenge that can be fixed by throwing products at it either, because organizations both large and small have been trying for years with poor results. Our industry needs to evolve its tactics to focus on doing the most important things more efficiently.

Efficiency and Integration

When you don’t have enough staff you need to make your existing staff far more efficient. That typically involves two different tactics:

  1. Minimize False Positives and Negatives: The thing that burns more time than anything else is chasing alerts into ratholes, only to find out they are false positives. So making sure alerts represent real risk is the best efficiency increase you can manage. Obviously you also want to minimize false negatives, because when you miss an attack you will spend a ton of time cleaning it up. Overall you need to focus on minimizing errors to get better utilization out of your limited staff.
  2. Automate: The other aspect of increasing efficiency is automation of non-strategic functions where possible. There isn’t a lot of value in having staff make individual IPS rule changes by hand when reliable threat intel or vulnerability data can drive them. Once you can trust your automation, you can have your folks focus on tasks that aren’t suited to automation, like triaging possible attacks.
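To make the automation point concrete, here is a minimal sketch of how a threat intel feed might drive blocklist updates while keeping a human in the loop for lower-confidence indicators. The feed format, confidence scale, and threshold are all assumptions for illustration:

```python
# Hypothetical sketch: auto-apply high-confidence threat intel indicators
# to a blocklist, and queue everything else for analyst review. The feed
# format and the 0-100 confidence scale are invented for this example.

def triage_indicators(indicators, block_threshold=90):
    """Split intel indicators into auto-block and analyst-review queues."""
    auto_block, review = [], []
    for ind in indicators:
        # Only automate blocking when the source's confidence score is
        # high enough to act without a human in the loop.
        if ind["confidence"] >= block_threshold:
            auto_block.append(ind["ip"])
        else:
            review.append(ind["ip"])
    return auto_block, review

feed = [
    {"ip": "203.0.113.7",  "confidence": 95},  # known C2 server
    {"ip": "198.51.100.4", "confidence": 60},  # weak reputation signal
]
blocked, queued = triage_indicators(feed)
print(blocked)  # ['203.0.113.7']
print(queued)   # ['198.51.100.4']
```

The point isn’t the ten lines of code – it’s that the boring, repeatable decision gets automated while ambiguous indicators still land in front of a person.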

The other way to make better use of your staff is integration. The security business has grown incrementally to address specific problems. For example, when first connecting to the Internet you needed a firewall to provide access control for inbound connections. Soon enough your network was being attacked, so you deployed an IPS to address those attacks. Then you wanted to control employee web use, so you installed a web filter. Then you needed to see which devices were vulnerable, so you bought a vulnerability scanner, and so on and so forth.

This security sprawl continues in earnest today, with new advanced technologies to be deployed on the network, on endpoints, within your data center, and in the cloud. Of course you can’t turn off the old controls, so a smaller organization may need to manage 7-10 different security products and services. Larger organizations can have dozens. Obviously an integrated solution provides leverage by not having all those policies separated out, and providing a streamlined user experience for faster response.

The Goal: Risk-based Prioritization

To delve a bit into the land of motherhood and apple pie, organizations have long been trying to allocate scarce resources based on potential impact to the organization. Yes, the mythical unicorn of security: prioritized alerts with context on what is actually at risk within your organization. There is no generic answer. What presents risk to one organization might not present risk to another. What’s important to one organization clearly differs from others. Your threat detection approach needs to reflect these differences.

An evolved view of threat detection isn’t just about finding attacks. It’s about finding the attacks that present the biggest risk to your organization, and enabling an efficient and effective response. This involves integrating a bunch of existing security data sources (both internal and external) and monitors, then performing contextual analysis on that data to prioritize based on importance to your organization.
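As a rough illustration of that contextual analysis, this sketch weights alert severity by a hypothetical asset-value inventory. The inventory, the hostnames, and the multiplicative weighting scheme are invented for illustration, not a prescribed model:

```python
# Illustrative sketch only: prioritize alerts by combining detection
# severity with asset context. The asset inventory and weighting scheme
# are assumptions, not a recommended scoring model.

ASSET_VALUE = {           # hypothetical internal context source
    "db-finance-01": 10,  # crown-jewel database
    "kiosk-lobby-02": 2,  # low-value shared device
}

def prioritize(alerts):
    """Rank alerts by severity weighted by what is actually at risk."""
    def risk(alert):
        return alert["severity"] * ASSET_VALUE.get(alert["host"], 1)
    return sorted(alerts, key=risk, reverse=True)

alerts = [
    {"host": "kiosk-lobby-02", "severity": 9},
    {"host": "db-finance-01",  "severity": 5},
]
# The lower-severity alert on the critical asset outranks the noisy kiosk.
print([a["host"] for a in prioritize(alerts)])
# ['db-finance-01', 'kiosk-lobby-02']
```

Even this toy version shows why context matters: without the asset inventory, the kiosk alert would have been worked first.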

So how can you do that? We’re glad you asked – that is our subject for this series. First touching on data collection, and then the analytics necessary to detect threats accurately and efficiently. We will wrap up with a Quick Win scenario, which we use to describe tactics you can use right now to kick-start your efforts and build toward evolved threat detection.

—Mike Rothman

Contribute to the Cloud Security Alliance Guidance: Community Drives, Securosis Writes

By Rich

This week we start one of the cooler projects in the history of Securosis. The Cloud Security Alliance contracted Securosis to write the next version of the CSA Guidance.

(Okay, the full title is “Guidance for Critical Areas of Focus in Cloud Computing”). The Guidance is a foundational document at the CSA, used by a ton of organizations to define security programs when they start jumping into the world of cloud. It’s currently on version 3, which is long in the tooth, so we are starting version 4.

One of the problems with the previous version is that it was compiled from materials developed by over a dozen working groups. The editors did their best, but there are overlaps, gaps, and readability issues. To address those the CSA hired us to come in and write the new version. But a cornerstone of the CSA is community involvement, so we have come up with a hybrid approach for the next version. During each major stage we will combine our Totally Transparent Research process with community involvement. Here are the details:

  • Right now the CSA is collecting feedback on the existing Guidance. The landing page is here, and it directs you to a Google document of the current version where anyone can make suggestions. This is the only phase of the project in Google Docs, because we only have a Word version of the existing Guidance.
  • We (Securosis) will take the public feedback and outline each domain for the new version. These will be posted for feedback on GitHub (exact project address TBD).
  • After we get input on the outlines we will write first drafts, also on GitHub. Then the CSA will collect another round of feedback and suggestions.
  • Based on those, we will write a “near final” version and put that out for final review.

GitHub not only allows us to collect input, but also to keep the entire writing and editing process public.

In terms of writing, most of the Securosis team is involved. We have also contracted two highly experienced professional tech writers and editors to maintain voice and consistency. Pure community projects are often hard to manage, keep on schedule, and keep consistent… so we hope this open, transparent approach, backed by professional analysts and writers with cloud security experience, will help keep things on track, while still fully engaging the community.

We won’t be blogging this content, but we will post notes here as we move between major phases of the project. For now, take a look at the current version and let the CSA know about what major changes you would like to see.

—Rich

Tuesday, June 09, 2015

Network Security Gateway Evolution [New Series]

By Mike Rothman

(Note: We’re restarting this series over the next week, so we are reposting the intro to get things moving again. – Mike )

When is a firewall not a firewall? I am not being cute – that is a serious question. The devices that masquerade as firewalls today provide much more than just an access control on the edge of your network(s). Some of our influential analyst friends dubbed the category next generation firewall (NGFW), but that criminally undersells the capabilities of these devices.

The “killer app” for NGFW remains enforcement of security policies by application (and even functions within applications), rather than merely by ports and protocols. This technology has matured since we last covered the enterprise firewall space in Understanding and Selecting an Enterprise Firewall. Virtually all firewall devices being deployed now (except very low-end gear) have the ability to enforce application-level policies in some way. But, as with most new technologies, having new functionality doesn’t mean the capabilities are being used well. Taking full advantage of application-aware policies requires a different way of thinking about network security, which will take time for the market to adapt to.

At the same time many network security vendors continue to integrate their previously separate FW and IPS devices into common architectures/platforms. They have also combined network-based malware detection and some light identity and content filtering/protection features. If this sounds like UTM, that shouldn’t be surprising – the product categories (UTM and NGFW) provide very similar functionality, just handled differently under the hood.

Given this long-awaited consolidation, we see rapid evolution in the network security market. Besides additional capabilities integrated into NGFW devices, we also see larger chassis-based models, smaller branch office devices, and even virtualized and cloud-based configurations to extend these capabilities to every point in the network. Improved threat intelligence integration is also available to block current threats.

Now is a good time to revisit our research from a couple years ago. The drivers for selection and procurement have changed since our last look at the field. But, as mentioned above, these devices are much more than firewalls. So we use the horribly pedestrian Network Security Gateway moniker to describe what network security devices look like moving forward. We are pleased to launch the Network Security Gateway Evolution series, describing how to most effectively use the devices for the big 3 network security functions: access control (FW), threat prevention (IPS), and malware detection.

Given the forward-looking nature of our research, we will dig into a few additional use cases we are seeing – including data center segmentation, branch office protection, and protecting those pesky private/public cloud environments.

As always, we develop our research using our Totally Transparent Research methodology, ensuring no hidden influence on the research.

The Path to NG

Before we jump into how the NSG is evolving, we need to pay our respects to where it has been. The initial use case for NGFW was sitting next to an older port/protocol firewall and providing visibility into which applications are being used, and by whom. Those reports showing, in gory detail, all the nonsense employees get up to on the corporate network (much of it using corporate devices) tend to be pretty enlightening for the network security team and executives at the end of a product test.

Once your organization saw the light with real network activity, you couldn’t unsee it. So you needed to take action, enforcing policies on those applications. This meant leveraging capabilities such as blocking email access via a webmail interface, detecting and stopping file uploads to Dropbox, and detecting/preventing Facebook photo uploads. It all sounds a bit trivial nowadays, but a few years ago organizations had real trouble enforcing these kinds of policies on web traffic.
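As a toy illustration of how application-aware policies differ from port/protocol rules, consider a policy table keyed by application and function rather than port. The application names, functions, and matching order here are invented; real NGFWs do the hard part of identifying the application in the first place:

```python
# Toy model of application-aware policy enforcement, to contrast with
# port/protocol rules. App names, functions, and the wildcard matching
# order are all invented for illustration.

POLICY = {
    ("dropbox", "file-upload"):   "block",
    ("facebook", "photo-upload"): "block",
    ("webmail", "*"):             "block",
    ("*", "*"):                   "allow",  # default: permit
}

def decide(app, function):
    # Check most-specific rule first, then app-wide, then the default.
    for key in ((app, function), (app, "*"), ("*", "*")):
        if key in POLICY:
            return POLICY[key]

print(decide("dropbox", "file-upload"))  # block
print(decide("salesforce", "report"))    # allow
```

Note the rules say nothing about ports – Dropbox, Facebook, and webmail all ride the same 80/443, which is exactly why port-based rules couldn’t express these policies.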

Once the devices were enforcing policy-based control over application traffic, and then matured to offer feature parity with existing devices in areas like VPN and NAT, we started to see significant migration. Some of the existing network security vendors couldn’t keep up with these NGFW competitive threats, so we have seen a dramatic shift in enterprise market share over the past few years, creating a catalyst for multi-billion-dollar M&A.

The next step has been the move from NGFW to NSG through adding non-FW capabilities such as threat prevention. Yes, that means not only enforcement of positive policies (access control), but also detecting attacks the way a network intrusion prevention system (IPS) does. The first versions of these integrated devices could not compare to a ‘real’ (standalone) IPS, but as time marches on we expect NSGs to reach feature parity for threat prevention. Likewise, these gateways increasingly integrate detection of malware in files as they enter the network, to provide additional value.

Finally, some companies couldn’t replace their existing firewalls (typically for budget or political reasons), but had more flexibility to replace their web filters. Given the ability of NSGs to enforce policies on web applications, block bad URLs, and even detect malware, standalone web filters took a hit. As with IPS, NSGs do not yet provide full feature parity with standalone web filters. But many companies don’t need the differentiating features of a dedicated web filter – making an NSG a good fit.

The Need for Speed

We have shown how NSGs have integrated, and will continue to integrate, more and more functionality. Enforcing all these policies at wire speed requires increasing compute power. And it’s not like networks are slowing down. So first-generation NGFWs hit scaling constraints pretty quickly. Vendors continue to invest in bigger iron, including more capable chassis and better distributed policy management, to satisfy scalability requirements.

As networks continue to get faster, will the devices be able to keep pace, retaining all their capabilities on a single device? And do you even need to run all your stuff on the same device? Not necessarily. This raises an architectural question we will consider later in the series. Just because you can run all these capabilities on the same device, doesn’t mean you should…

Alternatively you can run an NSG in “firewall” mode, just enforcing basic access control policies. Or you can deploy another NSG in “threat prevention” mode, looking for attacks. Does that sound like your existing architecture? Of course – and there is value in separating functions, depending on the scale of the environment. More important is the ability to manage all these policies from a single console, and to change a box’s capabilities through software, without needing a forklift.

Graceful Migration

We will also cover how you can actually migrate to this evolved network security platform. Budgets aren’t unlimited, so unless your existing network security vendor isn’t keeping pace (there are a few of those), your hand may not be forced into immediate migration. That gives you time to figure out the best timing to introduce these new capabilities. We will wrap up this series with a process for figuring out how and when to introduce these new capabilities, deployment architectures, and how to select your key vendor.

The next post will dig into the firewall features of NSGs, how they continue to evolve, and why that matters to you.

—Mike Rothman

Tuesday, May 26, 2015

We Don’t Know Sh—. You Don’t Know Sh—.

By Rich

Once again we have a major security story slumming in the headlines. This time it’s Hackers on a Plane, but without all that Samuel L goodness. But what’s the real story? It’s time to face the fact that the only people who know are the ones who aren’t talking, and everything you hear is most certainly wrong.

Watch or listen:


—Rich

Friday, May 22, 2015

Summary: Ginger

By Rich

Rich here.

As a redhead (what little is left) I have spent a large portion of my life answering questions about red hair. Sometimes it’s about pain tolerance/wound healing (yes, there are genetic differences), but most commonly I get asked if the attitude is genetic or environmental.

You know, the short temper/bad attitude.

Well, here’s a little insight for those of you who lack the double recessive genes.

Yesterday I was out with my 4-year-old daughter. The one with the super red, super curly hair. You ever see Pixar’s Brave? Yeah, they would need bigger computers to model my daughter’s hair, and a movie projector with double the normal color gamut.

In a 2-hour shopping trip, at least 4 people commented on it (quite loudly and directly), and many more stared. I was warned by no fewer than two probable grandmothers that I should “watch out for that one… you’ll have your hands full”. There was one “oh my god, what wonderful hair!” and another “how do you like your hair”.

At REI and Costco.

This happens everywhere we go, all the time. My son also has red hair, and we get nearly the same thing, but without the curls it’s not quite as bad. I also have an older daughter without red hair. She gets the “oh, your hair is nice too… please don’t grow up to be a serial killer because random strangers like your sister more”. At least that’s what I hear.

Strangers even come up and start combing their hands through her hair. Strangers. In public. Usually older women. Without asking.

I went through a lot of this myself growing up, but it’s only as an adult, with red-haired kids, that I see how bad it is. I thought I was a bit of an a-hole because, as a boy, I had more than my fair share of fights due to teasing over the hair. Trust me, I’ve heard it all. Yeah, fireball, very funny you —-wad, never heard that one before. I suppose I blocked out how adults react when I tried to buy a camping flashlight with my dad.

Maybe there is a genetic component, but I don’t think scientists could possibly come up with an ethical, deterministic study to figure it out. And if my oldest, non-red daughter ever shivs you in a Costco, now you’ll know why.

We have been so busy the past few weeks that this week’s Summary is a bit truncated. Travel has really impacted our publishing, sorry.

Securosis Posts

Favorite Outside Posts

  • Mike: Advanced Threat Detection: Not so SIEMple: Aside from the pithy title, Arbor’s blog post does a good job highlighting differences between the kind of analysis SIEM offers and the function of security analytics…
  • Rich: Cloudefigo. This is pretty cool: it’s a cloud security automation project based on some of my previous work. One of the people behind it, Moshe, is one of our better Cloud Security Alliance CCSK instructors.

Research Reports and Presentations

Top News and Posts

—Rich

Wednesday, May 20, 2015

Incite 5/20/2015: Slow down [to speed up]

By Mike Rothman

When things get very busy it’s hard to stay focused. There is so much flying at you, and so many things stacking up. Sometimes you just do the easy things because they are easy. You send the email, you put together the proposal, you provide feedback on the document. It can be done in 15 minutes, so you do it. Leaving the bigger stuff for later. At least I do.

Then later becomes the evening, and the big stuff is still lagging. I pop open the laptop and try to dig into the big stuff, but that’s very hard to do at the end of the day. For me, at least. In the meantime a bunch more stuff showed up in the inbox. A couple more things need to get done. Some easy, some hard. So you run faster, get up earlier, rearrange the list, get something done. Wash, rinse, repeat. Sure, things get done. But I need to ask whether it’s the right stuff. Not always.

Slow down. You're going too fast!

I know this is a solved problem. For others. They’ll tell me about their awesome Kanban workflow to control unplanned work. How they use a Pomodoro timer to make sure they give themselves enough time to get something done. Someone inevitably busts out some GTD goodness or possibly some Seven Habits wisdom. Sigh. Here’s the thing. I have a system. It works. When I use it.

The lack of a system isn’t my problem. It’s that I’m running too fast. I need to slow down. When I slow down things come into focus. Sure, more stuff may pile up. But not all that stuff will need to get done. The emails will still be there. The proposal will get written, when I have a slot open to actually do the work. And when I say slow down, that doesn’t mean work less. It means give myself time to mentally explore and wander. With nowhere to be. With nothing to achieve.

I do that through meditation, which I haven’t done consistently over the last few months. I prioritized my physical practices (running and yoga) for the past few months, at the expense of my mental practice. I figured if I just follow my breath when running I can address both my mental and physical practice at the same time. Efficiency, right? Nope. Running and yoga are great. But I get something different from meditation.

I’m most effective when I have time to think. To explore. To indulge my need to go down paths that may not seem obvious at first. I do that when meditating. I see the thought and sometimes I follow it down a rathole. I don’t know where it will go or what I’ll learn. I follow it anyway. Sometimes I just let the thought pass and return my awareness to the breath. But one thing is for sure – my life flows a lot easier when I’m meditating every day. Which is all that matters.

So forgive me if I don’t respond to your email within the hour. I’ll forgive myself for letting things pile up on my to do list. The emails and tasks will be there when I’m done meditating. It turns out I will be able to work through lists much more efficiently once I give myself space to slow down. Strangely enough, that allows me to speed up.

–Mike

Photo credit: “Slow Down” originally uploaded by Tristan Schmurr


The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour and check it out. Your emails, alerts and Twitter timeline will be there when you get back.


Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.


Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Network-based Threat Detection

Applied Threat Intelligence

Network Security Gateway Evolution

Recently Published Papers


Incite 4 U

  1. Don’t believe everything you read: The good news about Securosis’ business is that we don’t have to chase news. Sure, if there is something timely and we have room on our calendar, we’ll comment on current events. But if you look at our blog lately it’s clear we’re pretty busy. So we didn’t get around to commenting on this plane hacking stuff. But if we wait around long enough, one of our friends will say pretty much what I’m thinking. So thanks to Wendy who summed up the situation nicely. And that reminds me of something I have to tell my kids almost every day. Don’t believe everything you read on the Internet. You aren’t getting the full story. Media outlets, bloggers, and other folks with websites have agendas and biases. Consider what you read with a skeptical eye and confirm/validate to ensure you have the full story. Or fall in line with the rest of the lemmings who believe what they read, and react emotionally to what usually amounts to a pile of rubbish. – MR

  2. Super-Fish-er: Dennis Fisher over at ThreatPost wrote a great article highlighting ad injector networks and how attackers are hijacking SSL connections to collect ad revenue for bogus ‘clicks’ from bogus sites. It’s a sobering look at how your computer can be leveraged – with a couple simple alterations – to behave just like another person. So much of browsers’ behavior is hidden from users precisely to hide the avalanche of ads and tracking that it’s fairly easy for attackers to hide within that environment. We will see lots more hacking of ad networks while this remains so profitable. – AL

  3. The monster in the closet: I really like Scott Roberts’ discussion of Imposter Syndrome – basically the fear that you will be found out as a fraud. He looks at it from the perspective of DFIR. We all struggle with it. Our brains, in a misplaced attempt to protect us, make us feel unworthy. It turns out that feeling can shut you down, or motivate you to continue growing and learning. Scott’s recommendations include being aware of the feelings and searching out experts who can help you learn and grow. Every time I question my skills I remember that I do different things differently than most everyone else. I’m not trying to be anyone else so I can’t really be an imposter. And if someone doesn’t appreciate what I do or how I do it, that’s fine by me. You can’t make everyone happy all the time, and that includes your internal imposter. Acknowledge it, and then let it go. – MR

  4. Financial aid: In news that surprised no one, the University of California, Los Angeles (UCLA) announced 800k records were accessed by hackers – far bigger than the 2009 UC Berkeley breach. Some of you with mad crazy math skilz may be saying, “Hey, wait, even at 50k students a year, that’s 16 years of student data!” but the stolen records included application data, including all that financial aid related stuff students provide universities. It’s normally at this point where we ask, “What the frack are you doing keeping all those records?!” and recommend deletion or crypto-shredding to dispose of data, but in this case that does not matter as much – the attackers gained access in 2005. Yeah, ten years, so we’ll just say your odds of detecting a compromise without monitoring are pretty much zero. – AL

  5. Maturity is a thing… A while back (I’m a bit behind in my reading) Brian Krebs posted about security maturity. He presented a couple models to describe how a security program changes based on the maturity of the function. We use this concept a lot because it makes sense, especially to those stepping into a very unsophisticated security program who need to advance it quickly. First you have to acknowledge where you are today – honestly. Deceiving yourself is not going to help. But even more importantly, you need to figure out where you want to be. What is your goal? And then you can figure out how much that will cost. Not every organization needs a world-class security program. Ultimately this is a convenient metaphor to manage expectations because it forces everyone to think about the end goal, and we all know how critical that is. – MR

—Mike Rothman

Friday, May 15, 2015

Summary: DevOpsinator

By Rich

It seems we messed up, and last week’s Summary never made it out of draft. So I doubled up and apologize for the spam, but since I already put in all the time, here you go…

Rich here,

As you can tell we are deep in the post-RSA Conference/pre-Summer marsh. I always think I’ll get a little time off, but it never really works out. All of us here at Securosis have been traveling a ton and are swamped with projects. Although some of them are home-related, as we batten down the hatches for the impending summer heat wave here in Phoenix.

Two things really struck me recently as I looked at the portfolio of projects in front of me. First, that large enterprises continue to adopt public cloud computing faster than even my optimistic expectations. Second, they are adopting DevOps almost as quickly.

In both cases adoption is primarily project-based for companies that have been around a while. That makes excellent sense once you spend time with the technologies and processes, because retrofitting existing systems often requires a complete redesign to get the full benefit. You can do it, but preferably as a planned transition.

It looks like even big, slow movers see the potential benefits of agility, resiliency, and economics to be gained by these moves. In my book it all comes down to competitiveness: you simply can’t compete without cloud and DevOps anymore. Not for long.

Nearly all my work these days is focused on them, and they are keeping me busier than any other coverage area in my career (which might say something about my career that I don’t want to think about). Most of it is either end-user focused, or working with vendors and service providers on internal stuff – not the normal analyst product and marketing advice.

I am finding that while it’s intimidating on the surface, there really are only so many ways to skin a cat. I see consistent design patterns emerging among those seeing successes, and a big chunk of what I spend time on these days is adapting them for others who are wandering through the same wilderness. The patterns change and evolve, but once you get them down it’s like that first time you make it from top to bottom on your snowboard. You’re over the learning curve, and get to start having fun.

Although it sure helps if you actually like snowboarding. Or just snow. I meet plenty of people in tech who are just in it for the paycheck, and don’t actually like technology. That’s like being a chef who only drinks Soylent at home. Odds are they won’t get the Michelin Star any time soon. And they probably need to medicate themselves to sleep.

But if you love technology? Oh, man – there’s never been a better time to roll up our sleeves, have some fun, and make a little cash in the process. On that note, I need to go reset some demos, evaluate a client’s new cloud security controls, and finish off a proposal to help someone else integrate security testing into their DevOps process. There are, most definitely, worse ways to spend my day.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Favorite Securosis Posts

  • Mike Rothman: Network-based Threat Detection: Prioritizing with Context: Prioritization is still the bane of most security folks’ existence. We’re making slow but steady progress.
  • Rich: Incite 5/6/2015: Just Be. I keep picking on Mike because I’m the one from Hippieville (Boulder), but figuring out what grounds you is insanely important, and the only way to really enjoy life. For me it’s moving meditation (crashing my bike or getting my face smashed by a friend). Mike is on a much healthier path.

Other Securosis Posts

Favorite Outside Posts

  • Mike Rothman: Google moves its corporate applications to the Internet: This is big. Not the first time we’re seeing it, but the first at this scale. Editor’s note: one of my recent cloud clients has done the same thing. They assume the LAN is completely hostile.
  • Rich: CrowdStrike’s VENOM vulnerability writeup. It’s pretty clear and at the right tech level for most people (unless you are a vulnerability researcher working on a PoC). Although I am really tired of everyone naming vulnerabilities – eventually we’ll need to ask George Lucas’ kids to make up names for us.

Research Reports and Presentations

Top News and Posts

—Rich

Wednesday, May 13, 2015

Network-based Threat Detection: Operationalizing Detection

By Mike Rothman

As we wrap up our Network-based Threat Detection series, we have already covered why prevention isn’t good enough and how to find indications that an attack is happening, based on what you see on the network. Our last post worked through adding context to collected data to allow some measure of prioritization for alerts. To finish things off we will discuss additional context and making alerts operationally useful.

Leveraging Threat Intelligence for Detection

This analysis is still restricted to your organization. You are gathering data from your networks and adding context from your enterprise systems. Which is great but not enough. Factoring data from other organizations into your analysis can help you refine it and prioritize your activities more effectively. Yes, we are talking about using threat intelligence in your detection process.

For prevention, threat intel can be useful to decide which external sites should be blocked on your egress filters, based on reputation and possibly adversary analysis. This approach helps ensure devices on your network don’t communicate with known malware sites, bot networks, phishing sites, watering hole servers, or other places on the Internet you want nothing to do with. Recent conversations with practitioners indicate much greater willingness to block traffic – so long as they have confidence in the alerts.

But this series isn’t called Network-based Threat Prevention, so how does threat intelligence help with detection? TI provides a view of network traffic patterns used in attacks on other organizations. Learning about these patterns enables you to look for them (Domain Generation Algorithms, for example) within your own environment. You might also see indicators of internal reconnaissance or lateral movement typically used by certain adversaries, and use them to identify attacks in process. Watching for bulk file transfers, for example, or types of file encryption known to be used by particular crime networks, could yield insight into exfiltration activities.
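One of the simpler pattern-based checks described above can be sketched with a Shannon entropy heuristic for DGA-style domains. This is deliberately crude – legitimate CDN and hash-based hostnames will trip it, and the length and entropy thresholds are illustrative assumptions, not tuned values:

```python
# Crude DGA heuristic: machine-generated domain labels tend to be long
# and high-entropy. Thresholds here are illustrative assumptions; a real
# detector would combine this with whitelists and other features.

import math
from collections import Counter

def shannon_entropy(s):
    """Bits of entropy per character in the string."""
    counts = Counter(s)
    return -sum((n / len(s)) * math.log2(n / len(s)) for n in counts.values())

def looks_like_dga(domain, threshold=3.5):
    label = domain.split(".")[0]  # score only the leftmost label
    return len(label) > 10 and shannon_entropy(label) >= threshold

print(looks_like_dga("securosis.com"))        # False
print(looks_like_dga("xk2q9vmz1rt84bw.net"))  # True
```

On its own this would generate plenty of false positives, which is exactly why the contextual filtering discussed in the last post still applies before anyone gets paged.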

Just as the burden of proof is far lower in civil litigation than in criminal litigation, the bar for useful accuracy is far lower in detection than in prevention. When you are blocking network traffic for prevention, you had better be right. Users get cranky when you block legitimate network sessions, so you will be conservative about what you block. That means you will inevitably miss something – the dreaded false negative, a legitimate attack. But firing an alert provides more leeway, so you can be a bit less rigid.

That said, you still want to be close – false positives are still very expensive. This is where the approach mapped out in our last post comes into play. If you see something that looks like an attack based on external threat intel, you apply the same contextual filters to validate and prioritize.

Retrospection

What happens when you don’t know an attack is actually an attack when the traffic enters your network? This happens every time a truly new attack vector emerges. Obviously you don’t know about it, so your network controls will miss it and your security monitors won’t know what to look for. No one has seen it yet, so it doesn’t show up in threat intel feeds. So you miss, but that’s life. Everyone misses new attacks. The question is: how long do you miss it?

One of the most powerful concepts in threat intelligence is the ability to use newly discovered indicators and retrospectively look through security data to see if an attack has already hit you. When you get a new threat intel indicator you can search your network telemetry (using your fancy analytics engine) to see if you’ve seen it before. This isn’t optimal because you already missed. But it’s much better than waiting for an attacker to take the next step in the attack chain. In the security game nothing is perfect. But leveraging the hard-won experience of other organizations makes your own detection faster and more accurate.
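Retrospection is conceptually simple: replay stored telemetry against indicators you learned after the traffic was seen. The sketch below assumes a simplified connection-log layout for illustration; a real implementation would run this query inside your analytics engine against months of records.

```python
# Sketch of retrospective matching: stored connection records are rescanned
# when a new indicator arrives. Record fields here are illustrative.

def retrospective_matches(telemetry, new_indicators):
    """Return past connection records that now match known-bad indicators."""
    return [rec for rec in telemetry if rec["dest"] in new_indicators]

telemetry = [
    {"ts": "2015-04-02T10:14", "src": "10.1.4.22", "dest": "c2.badsite.example"},
    {"ts": "2015-04-02T10:15", "src": "10.1.4.23", "dest": "cdn.example"},
]

# Weeks later, a threat intel feed identifies the C&C domain...
hits = retrospective_matches(telemetry, {"c2.badsite.example"})
print([h["src"] for h in hits])  # ['10.1.4.22'] - that device already phoned home
```

You already missed the initial compromise, but the search tells you which devices to investigate before the attacker takes the next step.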

A Picture Is Worth a Thousand Words

At this point you have alerts, and perhaps some measure of prioritization for them. But one of the most difficult tasks is deciding how to navigate through the hundreds or thousands of alerts that happen in networks at scale. That’s where visualization techniques come into play. A key criterion for choosing a detection offering is getting information presented in a way that makes sense to you and will work in your organization’s culture.

Some like the traditional user experience, which looks like a Top 10 list of potentially compromised devices, with the grid showing details of the alert. Another way to visualize detection data is as a heat map showing devices and potential risks visually, offering drill-down into indicators and alert causes. There is no right or wrong here – it is just a question of what will be most effective for your security operations team.

Operationalizing Detection

As compelling as network-based threat detection is conceptually, a bunch of integration needs to happen before you can provide value and increase your security program’s effectiveness. There are two sides to integration: data you need for detection, and information about alerts that is sent to other operational systems. For the former, these connections to identity systems and external threat intelligence drive analytics for detection. The latter includes the ability to pump the alert and contextual data to your SIEM or other alerting system to kick off your investigation process.

If you get comfortable enough with your detection results you can even configure automated responses such as IPS blocking rules based on these alerts. You might block a compromised device from communicating at all, block only its C&C traffic, or block exfiltration traffic. As described above, prevention demands minimization of false positives, but disrupting attackers can be extremely valuable. Similarly, integration with Network Access Control can move a compromised device onto a quarantine network until it can be investigated and remediated.
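The alert-to-action mapping might look something like the following. The rule text mimics Snort-style syntax purely for illustration, and the confidence threshold and field names are assumptions rather than any product's API – the point is that only high-confidence alerts trigger disruptive responses.

```python
# Sketch: translate alerts into response actions. Only high-confidence
# alerts get blocked/quarantined; everything else goes to an analyst.

def response_actions(alert, block_threshold=90):
    """Return a list of response directives for downstream systems."""
    if alert["confidence"] >= block_threshold:
        return [
            f"ips: drop ip {alert['device_ip']} any -> any any",   # Snort-like, illustrative
            f"nac: move {alert['device_ip']} to quarantine VLAN",
        ]
    return [f"siem: open investigation ticket for {alert['device_ip']}"]

print(response_actions({"device_ip": "10.1.4.22", "confidence": 97}))
print(response_actions({"device_ip": "10.1.4.23", "confidence": 55}))
```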

For network forensics you might integrate with a full packet capture/network forensics platform. In this use case, when a device shows potential compromise, traffic to and from it could be captured for forensic analysis. Such captured network traffic may provide a proverbial smoking gun. This approach could also make you popular with the forensics folks, because you would be able to provide the actual packets from the attack. Prioritized alerts enable you to be more precise and efficient about what traffic to capture, and ultimately what to investigate.

Automation of these functions is still in its infancy. But we expect all sorts of security automation to emerge within the short-term planning horizon (18-24 months). We will increasingly see security controls reconfigured based on alerts, network traffic redirected, and infrastructure quarantined and pulled offline for investigation. Attacks hit too fast to do it any other way, but automation scares many security professionals. We expect to see this play out over the next 5-7 years, but have no doubt that it will happen.

When to Remediate, and When Not to

It may be hard to believe, but there are real scenarios where you might not want to immediately remediate a compromised device. The first – and easiest to justify – is when it is part of an ongoing investigation; HR, legal, senior management, law enforcement, or anyone else may mandate that the device be observed but otherwise left alone. There isn’t much wiggle room in this scenario. With the remediation decision no longer in your hands, and the risk of an actively compromised device on your network determined to be acceptable, you then take reasonable steps to monitor the device closely and ensure it is unable to exfiltrate data.

Another scenario where remediation may not be appropriate is when you need to study and profile your adversary, and the malware and command and control apparatus they use, through direct observation. Obviously you need a sophisticated security program to undertake a detailed malware analysis process (as described in Malware Analysis Quant), but clearly understanding and identifying indicators of compromise can help identify other compromised devices, and enable you to deploy workarounds and other infrastructure protections such as IPS rules and HIPS signatures.

That said, in most cases you will just want to pull the device off the network as quickly as possible, pull a forensic image, and then reimage it. That is usually the only way to ensure the device is clean before letting it back into the general population. If you are going to follow an observational approach, however, that decision and your decision tree need to be documented and agreed on as part of your incident response plan.

With that we wrap up our Network-based Threat Detection series. We will be assembling this series into a white paper, and posting it in the Research Library soon. As always, if you see something here that doesn’t make sense or doesn’t reflect your experience or issues, please let us know in the comments. That kind of feedback makes our research more impactful.

—Mike Rothman

Monday, May 11, 2015

Network-based Threat Detection: Prioritizing with Context

By Mike Rothman

During speaking gigs we ask how many in the audience actually get through their to-do list every day. Usually we get one or two jokers in the crowd between jobs, or maybe just trying to troll us a bit. But nobody in a security operational role gets everything done every day. So the critical success factor is to make sure you are getting the right things done, and not burning time on activities that don’t reduce risk or contain attack damage.

Underpinning this series is the fact that prevention inevitably fails at some point. Along with a renewed focus on network-based detection, that means your monitoring systems will detect a bunch of things. But which alerts are important? Which represent active adversary activity? Which are just noise and need to be ignored? Figuring out which is which is where you need the most help.

To use a physical security analogy, a security fence will alert regularly. But you need to figure out whether it’s a confused squirrel, a wayward bird, a kid on a dare, or the offensive maneuver of an adversary. Just looking at the alert won’t tell you much. But if you add other details and additional context into your analysis, you can figure out which is which. The stakes are pretty high for getting this right, as the postmortems of many recent high-profile breaches indicated alerts did fire – in some cases multiple times from multiple systems – but the organizations failed to take action… and suffered the consequences.

Our last post listed network telemetry you could look for to indicate potential malicious activity. Let’s say you like the approach laid out in that post and decide to implement it in your own monitoring systems. So you flip the switch and the alerts come streaming in. Now comes the art: separating signal from noise and narrowing your focus to the alerts that matter and demand immediate attention. You do this by adding context to general network telemetry and then using an analytics engine to crunch the numbers.

To add context you can leverage both internal and external information. At this point we’ll focus on internal data, because you already have that and can implement it right away. Our next post will tackle external data, typically accessible via a threat intelligence feed.

Device Behavior

You start by figuring out what’s important – not all devices are created equal. Some store very important data. Some are issued to employees with access to important data, typically executives. But not all devices present a direct risk to your organization, so categorizing them provides the first filter for prioritization. You can use the following hierarchy to kickstart your efforts:

  1. Critical devices: Devices with access to protected information and/or particularly valuable intellectual property should bubble to the top. Fast. If a device on a protected and segmented network shows indications of compromise, that’s bad and needs to be dealt with immediately. Even if the device is dormant, traffic on a protected network that looks like command and control constitutes smoke, and you need to act quickly to ensure any fire doesn’t spread. Or enjoy your disclosure activities…
  2. Active malicious devices: If you see device behavior which indicates an active attack (perhaps reconnaissance, moving laterally within the environment, blasting bits at internal resources, or exfiltrating data), that’s your next order of business. Even if the device isn’t considered critical, if you don’t deal with it promptly the compromise might find an exploitable hole to a higher-value device and move laterally within the organization. So investigate and remediate these devices next.
  3. Dormant devices: These devices at some point showed behavior consistent with command and control traffic (typically staying in communication with a C&C network), but aren’t doing anything malicious at the moment. Given the number of other fires raging in your environment, you may not have time to remediate these dormant devices immediately.
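The three-tier hierarchy above can be expressed as a sortable triage function. The device attributes here are simplified flags for illustration, but the ordering follows the post's logic: critical devices with indicators first, then active malicious devices, then dormant ones.

```python
# Sketch of the triage hierarchy as a sort key. Lower rank = investigate sooner.

def triage_rank(device):
    if device["critical"] and device["compromise_indicators"]:
        return 1  # critical device showing indicators: red alert
    if device["active_malicious"]:
        return 2  # active recon, lateral movement, or exfiltration
    if device["dormant_c2"]:
        return 3  # phoned home to C&C, but quiet for now
    return 4      # everything else

devices = [
    {"name": "laptop-7", "critical": False, "compromise_indicators": False,
     "active_malicious": False, "dormant_c2": True},
    {"name": "db-prod", "critical": True, "compromise_indicators": True,
     "active_malicious": False, "dormant_c2": False},
]
queue = sorted(devices, key=triage_rank)
print([d["name"] for d in queue])  # ['db-prod', 'laptop-7']
```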

These priorities are fairly coarse but should be sufficient. You don’t want a complicated multi-tier rating system which is too involved to use on a daily basis. Priorities should be clear. If you have a critical device that is showing malicious activity, that’s a red alert. Critical devices that throw alerts need to be investigated next, and then non-critical devices showing malicious activity. Finally, after you have all the other stuff done, you can get around to dealing with devices you’re pretty sure are compromised. Of course this last bucket might show malicious activity at any time, so you still need to watch it. The question is when you remediate.

This categorization helps, but within each bucket you likely have multiple devices. So you still need additional information and context to make decisions.

Who and Where

Not all employees are created equal either. Another source of context is user identity, and there are a bunch of groups you need to pay attention to. The first is people with elevated privileges, such as administrators and others with entitlements to manage devices that hold critical information. They can add, delete, and change accounts and access rules on the servers, and manipulate data. They can tamper with logs, and basically wreck an environment from the inside. There are plenty of examples of rogue or disgruntled administrators making a real mess, so when you see an administrator’s device behaving strangely, that should bubble up to the top of your list.

The next group of folks to watch closely are executives with access to financials, company strategy, and other key intellectual property. These users are attacked most frequently via phishing and other social engineering, so they need to be watched closely – even trained, they aren’t perfect. This may trigger organizational backlash – some executives get cranky when they are monitored. But that’s not your problem, and without this kind of context it’s hard to do your job. So dig in and make your case to the executives for why it’s important. As you look for indicators that devices are connecting to a C&C server or performing reconnaissance, you are protecting the organization, and executives should know better than to fight that.

The location of your critical data also provides context for priorities. Critical data lives on particular network segments, typically in the data center, so you should be making sure those networks are monitored. But it’s not just PII you need to worry about. Your organization should isolate segments for labs doing cutting-edge R&D, finance networks with preliminary numbers from last quarter, and anything else needing special caution. Isolation is your friend – use different segments, at least logically, to minimize data intermingling.

You can get contextual information from a variety of sources you likely already use. For instance identity information (such as Active Directory users and groups) enables you to map a device to a user and/or group. Then you can profile typical finance department activity and know how it differs from the way marketing and engineering groups communicate with each other and the broader Internet. You could go deeper and profile specific people.

Additionally, network topology information is important in attack path analysis to understand the blast radius of any specific attack. That’s a fancy term for damage assessment in case a device or network is compromised: what else would be directly exposed? Once you figure out which other devices on the network can be reached from the compromised device (during lateral movement), and what potential attacks would succeed, you can use this information to further prioritize your activities.
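Blast radius estimation is, at its core, a reachability question, so it can be sketched as a graph traversal. The topology below is an illustrative adjacency list of which hosts can reach which; real attack path analysis also weighs exploitability, not just reachability.

```python
from collections import deque

# Sketch: estimate blast radius as everything reachable from a compromised
# node via breadth-first search over a simplified topology graph.

def blast_radius(topology, compromised):
    """Return the set of nodes reachable from the compromised device."""
    seen, queue = {compromised}, deque([compromised])
    while queue:
        node = queue.popleft()
        for neighbor in topology.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen - {compromised}

topology = {
    "laptop-7": ["file-server"],
    "file-server": ["db-prod"],
    "db-prod": [],
    "hr-app": ["db-prod"],  # hr-app is not reachable from laptop-7
}
print(sorted(blast_radius(topology, "laptop-7")))  # ['db-prod', 'file-server']
```

A larger reachable set from a compromised device argues for a higher priority on that alert.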

Content

The next area to mine for context is content – as you might guess, not all data is created equal either. You’ll need to be able to analyze the content stream within network traffic to look for protected data, or data identified as critical intellectual property. This rough data classification can be very resource-intensive and hard to keep current (ask anyone trying to implement DLP), so make it as simple as possible. For instance personally identifiable information (PII) may be the most important data to protect in your environment. But intellectual property is the lifeblood of most non-medical high-tech organizations, and thus typically their top priority. It doesn’t really matter what is at the top, so long as it reflects your organization’s priorities.
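Keeping the classification simple might look like the sketch below. Real DLP uses far more robust matching than this; the patterns and labels are deliberately naive illustrations of tagging traffic so that alerts touching sensitive content can be prioritized.

```python
import re

# Sketch of simple content classification for a traffic payload.
# Patterns and labels are illustrative assumptions, not a DLP ruleset.

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def classify_payload(payload):
    """Tag a payload so alerts involving sensitive content rank higher."""
    if SSN_PATTERN.search(payload):
        return "pii"
    if "CONFIDENTIAL" in payload.upper():
        return "intellectual_property"
    return "unclassified"

print(classify_payload("applicant ssn 123-45-6789"))          # pii
print(classify_payload("Confidential: Q3 roadmap attached"))  # intellectual_property
print(classify_payload("lunch at noon?"))                     # unclassified
```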

Compliance remains a factor for many organizations, so potential compliance violations bubble up when figuring out priorities.

The importance of various specific types of content depends on the organization, and you need to do the work to understand how they need to be protected and monitored. That will entail building consensus with executives, because you need clear marching orders for what alerts need to be validated and investigated first.

Math

Armed with network data identifying indicators, and additional context such as identity, location, and content, you now need to figure out what is at greatest risk and react accordingly. This involves crunching numbers and identifying the highest-priority alerts. You are looking to:

  1. Get a verdict on a device and/or a network: whether it has been compromised and to what degree.
  2. Dig deeper into the attack to figure out the extent of the damage and how far it has spread.

This requires math. We aren’t being flippant (okay, maybe a little), but this type of analysis requires fairly sophisticated algorithms to establish a general risk assessment. You will hear a lot of noise about “risk scoring” as you dig into the current state of network-based detection. Coming up with a quantified risk score can be pretty arbitrary, so it’s good to understand how the score is calculated and where the numbers come from. Make sure your numbers pass the sniff test and you can defend where they come from, because they will be used to make decisions.
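A score you can defend is usually a transparent one. The sketch below is a weighted sum of the contextual factors discussed in this post, each normalized to 0.0-1.0; the weights are arbitrary placeholders you would tune to your own priorities, not recommended values.

```python
# Sketch of a transparent risk score: weighted sum of contextual factors.
# Weights are illustrative placeholders - tune them to your own priorities.

WEIGHTS = {
    "device_criticality": 40,    # is the device critical?
    "indicator_confidence": 30,  # how sure are we it's really an attack?
    "user_privilege": 20,        # admin or executive involved?
    "content_sensitivity": 10,   # protected data in the traffic?
}

def risk_score(alert):
    """0-100 score; every input factor is expected in the range 0.0-1.0."""
    return round(sum(weight * alert.get(factor, 0.0)
                     for factor, weight in WEIGHTS.items()))

hot = {"device_criticality": 1.0, "indicator_confidence": 0.9,
       "user_privilege": 1.0, "content_sensitivity": 0.5}
print(risk_score(hot))  # 92
```

Because every factor and weight is visible, the number passes the sniff test: you can explain to anyone exactly why one alert outranks another.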

As discussed above, your organization will have its own ideas about what’s important, and different risk tolerances than other organizations. So you should be able to tune algorithms and weight factors differently to get more meaningful alerts. Your environment is not static – it will change constantly, which means you need to tune your alerting systems on an ongoing basis. Sorry, but there is not much “set it and forget it” any more. We recommend you include a feedback loop in your security alerting process: assess the value of your alerts, identify gaps, and then tune further based on what is really happening in the field.

Once you have a score you need to operationalize the detection process. That entails figuring out how you will visualize the data and integrate it into your security operational processes. Our next post will discuss getting context from external data/threat intelligence sources, and then using it to help you remediate attacks completely and efficiently.

—Mike Rothman

Wednesday, May 06, 2015

Incite 5/6/2015: Just Be

By Mike Rothman

I’m spent after the RSAC. By Friday I have been on for close to a week. It’s nonstop, from the break of dawn until the wee hours of the morning. But don’t feel too bad – it’s one of my favorite weeks of the year. I get to see my friends. I do a bunch of business. And I get a feel for how close our research is to reflecting the larger trends in the industry.

But it’s exhausting. When the kids were smaller I would fly back early Friday morning and jump back into the fray of the Daddy thing. I had very little downtime and virtually no opportunity to recover. Shockingly enough, I got sick or cranky or both. So this year I decided to do it differently. I stayed in SF through the weekend to unplug a bit.

be

I made no plans. I was just going to flow. There was a little bit of structure. Maybe I would meet up with a friend and get out of town to see some trees (yes, Muir Woods was on the agenda). I wanted to catch up with a college buddy who isn’t in the security business, at some point. Beyond that, I’d do what I felt like doing, when I felt like doing it. I wasn’t going to work (much) and I wasn’t going to talk to people. I was just going to be.

Turns out my friend wasn’t feeling great, so I was solo on Friday after the closing keynote. I jumped in a Zipcar and drove down to Pacifica. Muir Woods would take too long to reach, and I wanted to be by the water. Twenty minutes later I was sitting by the ocean. Listening to the waves. The water calms me and I needed that. Then I headed back to the city and saw that an awesome comedian was playing at the Punchline. Yup, that’s what I did. He was funny as hell, and I sat in the back with my beer and laughed. I needed that too.

Then on Saturday I did a long run on the Embarcadero. Turns out a cool farmer’s market is there Saturdays. So I got some fruit to recover from the run, went back to the hotel to clean up, and then headed back to the market. I sat in a cafe and watched people. I read a bit. I wrote some poetry. I did a ZenTangle. I didn’t speak to anyone (besides a quick check-in with the family) for 36 hours after RSA ended. It was glorious. Not that I don’t like connecting with folks. But I needed a break.

Then I had an awesome dinner with my buddy and his wife, and flew back home the next day in good spirits, ready to jump back in. I’m always running from place to place. Always with another meeting to get to, another thing to write, or another call to make. I rarely just leave myself empty space with no plans to fill it. It was awesome. It was liberating. And I need to do it more often.

This is one of the poems I wrote, watching people rushing around the city.

Rush
You feel them before you see
They have somewhere to be
It’s very important
Going around you as quickly as they can.
They are going places.

Then another
And another
And another
Constantly rushing
But never catching up.

They are going places.
Until they see
that right here
is the only place they need to be.
– MSR, 2015

–Mike

Photo credit: “65/365: be. [explored]” originally uploaded by It’s Holly


The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. Take an hour and check it out on YouTube. Your emails, alerts, and Twitter timeline will be there when you get back.


Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.


Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Network-based Threat Detection

Applied Threat Intelligence

Network Security Gateway Evolution

Recently Published Papers


Incite 4 U

  1. Threat intel still smells like poop? I like colorful analogies. I’m sad that my RSAC schedule doesn’t allow me to see some of the more interesting sessions by my smart friends. But this blow-by-blow of Rick Holland’s Threat Intelligence is Like Three-Day Potty Training makes me feel like I was there. I like the maturity model, and know many large organizations invest a boatload of cash in threat intel; as long as they take a process-centric view (as Rick advises) they can get great value from that investment. But I’m fixated on the not-Fortune-500 crowd. You know, organizations with a couple folks on the security team (if that) and a budget of a few Starbucks cards for threat intel. What do those folks do? Nothing right now, but over time they will expect and get threat intel built into their controls. Why should they have to spend time and money they don’t have, to integrate data their products should just use? Oh, does that sound like the way security products have worked for decades? Driven by dynamic updates from the vendor who produces the device? Right, back to the future. But a better future with better data, and possibly even better results. – MR

  2. Backwards: In the current round of vulnerability disclosure lunacy, the FBI detained security researcher Chris Roberts – who recently disclosed major vulnerabilities in airline in-flight WiFi systems – for questioning after he exited a recent flight. What makes this story suspect is that Roberts was cooperating with airlines and the FBI prior to this. He met with both to discuss the issues, so they were fully aware of his findings. From statements it looks like the FBI performed a forensic analysis of the plane’s systems, and given their desire to examine Roberts’ laptop, it looks like this was an attempt to determine whether Roberts stupidly hacked the plane he was on. The disclosure was a month prior, so the FBI could have pulled Roberts aside prior to boarding, or gone to his office, or even called and asked him to come in – but that’s not what they did. So far as we know, none of the executives of the companies that produce the vulnerable WiFi systems have been pulled from their flights; and more troubling, none of those systems were disabled pending investigation prior to Roberts’ flight. If the threat was serious, quietly disabling in-flight entertainment would have been the correct action – not a grandstanding public arrest of a guy openly trying to get vulnerabilities fixed. – AL

  3. Even a mindset shift won’t solve the problem: Working through the round-ups of RSAC 2015, I found some coverage of RSA President Amit Yoran’s keynote. His main contention was that security issues come down to a change in mindset, as opposed to expecting some new widget to solve all problems. I like that message, because I agree that chasing shiny new products and services, seeking a silver bullet, has moved us backwards. Clearly a mindset shift to focus on the people side is necessary, but it’s not sufficient. I think the goal of stopping attackers is a bit misguided, so that’s what we need to shift. It’s about managing loss, not blocking attacks. Some loss is actually necessary, because avoiding loss completely would be too expensive. But how can you find the right balance? That’s the art of doing security: balancing the value of what’s at risk with the cost to protect it. Feel better now? – MR

  4. Busting the confusion: When the cloud was new some experts told us it was nothing more than outsourced mainframe computing. Lots of rubbish like that gets thrown out there when people don’t fully comprehend innovative or disruptive technology. Such is also the case with DevOps, and Gene Kim’s recent myth-busting article for DevOps makes some great points to address some of the big misconceptions I hear frequently. For me his first point is the biggest: DevOps does not replace Agile. DevOps helps make the rest of the organization more Agile. Additionally, the Agile with Scrum development methodology continues to work as before, but with less friction and impediments from outside groups. Sure, automation of many IT and QA tasks into a development pipeline is a big part of that, but focusing on that aspect diminishes the importance of addressing Work in Progress, a bigger source of friction. Gene’s comments are right on the mark and required reading – at least for those of you who don’t take the time to read The Phoenix Project. And yes, you should make that time. – AL

  5. Cheaters. Shocking! It seems a bevy of Chinese anti-virus vendors keep getting caught cheating on effectiveness tests, according to Graham Cluley. I find this pretty entertaining, mostly because anyone who buys an AV product based on the results of an effectiveness test is a joke. Additionally, it seems people forget that China plays business by different rules. They have no issue with taking your intellectual property, because they view it differently. So why would anyone be surprised that they think differently about AV comparison tests? It comes back to something we learned early on: you can’t expect other folks to act like you. Just because you won’t cheat doesn’t mean other folks are bound by the same ethics. You need to understand how to buy these products, and if you’re relying on third-party testing you will get what you deserve. – MR

—Mike Rothman

Monday, May 04, 2015

RSAC wrap-up. Same as it ever was.

By Rich

The RSA conference is over and put up some massive numbers (for security). But what does it all mean? Can all those 450 vendors on the show floor possibly survive? Do any of them add value?

Do bigger numbers mean we are any better than last year? And how can we possibly balance being an industry, community, and profession simultaneously? Not that we answer any of that, but we can at least keep you entertained for 13 minutes.

Watch or listen:


—Rich