It seems we messed up, and last week’s Summary never made it out of draft. So I doubled up and apologize for the spam, but since I already put in all the time, here you go…
As you can tell we are deep in the post-RSA Conference/pre-Summer marsh. I always think I’ll get a little time off, but it never really works out. All of us here at Securosis have been traveling a ton and are swamped with projects. Although some of them are home-related, as we batten down the hatches for the impending summer heat wave here in Phoenix.
Two things really struck me recently as I looked at the portfolio of projects in front of me. First, that large enterprises continue to adopt public cloud computing faster than even my optimistic expectations. Second, they are adopting DevOps almost as quickly.
In both cases adoption is primarily project-based for companies that have been around a while. That makes excellent sense once you spend time with the technologies and processes, because retrofitting existing systems often requires a complete redesign to get the full benefit. You can do it, but preferably as a planned transition.
It looks like even big, slow movers see the potential benefits of agility, resiliency, and economics to be gained by these moves. In my book it all comes down to competitiveness: you simply can’t compete without cloud and DevOps anymore. Not for long.
Nearly all my work these days is focused on them, and they are keeping me busier than any other coverage area in my career (which might say something about my career that I'd rather not think about). Most of it is either end-user focused, or working with vendors and service providers on internal stuff – not the normal analyst product and marketing advice.
I am finding that while it’s intimidating on the surface, there really are only so many ways to skin a cat. I see consistent design patterns emerging among those seeing successes, and a big chunk of what I spend time on these days is adapting them for others who are wandering through the same wilderness. The patterns change and evolve, but once you get them down it’s like that first time you make it from top to bottom on your snowboard. You’re over the learning curve, and get to start having fun.
Although it sure helps if you actually like snowboarding. Or just snow. I meet plenty of people in tech who are just in it for the paycheck, and don’t actually like technology. That’s like being a chef who only drinks Soylent at home. Odds are they won’t get the Michelin Star any time soon. And they probably need to medicate themselves to sleep.
But if you love technology? Oh, man – there’s never been a better time to roll up our sleeves, have some fun, and make a little cash in the process. On that note, I need to go reset some demos, evaluate a client’s new cloud security controls, and finish off a proposal to help someone else integrate security testing into their DevOps process. There are, most definitely, worse ways to spend my day.
On to the Summary:
Webcasts, Podcasts, Outside Writing, and Conferences
Favorite Securosis Posts
- Mike Rothman: Network-based Threat Detection: Prioritizing with Context: Prioritization is still the bane of most security folks’ existence. We’re making slow but steady progress.
- Rich: Incite 5/6/2015: Just Be. I keep picking on Mike because I’m the one from Hippieville (Boulder), but figuring out what grounds you is insanely important, and the only way to really enjoy life. For me it’s moving meditation (crashing my bike or getting my face smashed by a friend). Mike is on a much healthier path.
Other Securosis Posts
Favorite Outside Posts
- Mike Rothman: Google moves its corporate applications to the Internet: This is big. Not the first time we’re seeing it, but the first at this scale. Editor’s note: one of my recent cloud clients has done the same thing. They assume the LAN is completely hostile.
- Rich: CrowdStrike’s VENOM vulnerability writeup. It’s pretty clear and at the right tech level for most people (unless you are a vulnerability researcher working on a PoC). Although I am really tired of everyone naming vulnerabilities – eventually we’ll need to ask George Lucas’ kids to make up names for us.
Research Reports and Presentations
Top News and Posts
Posted at Friday 15th May 2015 5:13 am
By Mike Rothman
As we wrap up our Network-based Threat Detection series, we have already covered why prevention isn’t good enough and how to find indications that an attack is happening, based on what you see on the network. Our last post worked through adding context to collected data to allow some measure of prioritization for alerts. To finish things off we will discuss additional context and making alerts operationally useful.
Leveraging Threat Intelligence for Detection
This analysis is still restricted to your organization. You are gathering data from your networks and adding context from your enterprise systems. Which is great but not enough. Factoring data from other organizations into your analysis can help you refine it and prioritize your activities more effectively. Yes, we are talking about using threat intelligence in your detection process.
For prevention, threat intel can be useful to decide which external sites should be blocked on your egress filters, based on reputation and possibly adversary analysis. This approach helps ensure devices on your network don’t communicate with known malware sites, bot networks, phishing sites, watering hole servers, or other places on the Internet you want nothing to do with. Recent conversations with practitioners indicate much greater willingness to block traffic – so long as they have confidence in the alerts.
But this series isn’t called Network-based Threat Prevention, so how does threat intelligence help with detection? TI provides a view of network traffic patterns used in attacks on other organizations. Learning about these patterns enables you to look for them (Domain Generating Algorithms, for example) within your own environment. You might also see indicators of internal reconnaissance or lateral movement typically used by certain adversaries, and use them to identify attacks in process. Watching for bulk file transfers, for example, or types of file encryption known to be used by particular crime networks, could yield insight into exfiltration activities.
Just as the burden of proof is far lower in civil litigation than in criminal cases, the bar for useful accuracy is far lower in detection mode than in prevention mode. When you are blocking network traffic for prevention, you had better be right. Users get cranky when you block legitimate network sessions, so you will be conservative about what you block. That means you will inevitably miss something – the dreaded false negative: a legitimate attack that gets through. But firing an alert provides more leeway, so you can be a bit less rigid.
That said, you still want to be close – false positives are still very expensive. This is where the approach mapped out in our last post comes into play. If you see something that looks like an attack based on external threat intel, you apply the same contextual filters to validate and prioritize.
What happens when you don’t know an attack is actually an attack when the traffic enters your network? This happens every time a truly new attack vector emerges. Obviously you don’t know about it, so your network controls will miss it and your security monitors won’t know what to look for. No one has seen it yet, so it doesn’t show up in threat intel feeds. So you miss, but that’s life. Everyone misses new attacks. The question is: how long do you miss it?
One of the most powerful concepts in threat intelligence is the ability to use newly discovered indicators and retrospectively look through security data to see if an attack has already hit you. When you get a new threat intel indicator you can search your network telemetry (using your fancy analytics engine) to see if you’ve seen it before. This isn’t optimal because you already missed. But it’s much better than waiting for an attacker to take the next step in the attack chain. In the security game nothing is perfect. But leveraging the hard-won experience of other organizations makes your own detection faster and more accurate.
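The retrospective sweep can be sketched in a few lines. Everything here is illustrative – the telemetry records, their field layout, and the indicator feed are invented; a real deployment would query a flow store or SIEM rather than an in-memory list.

```python
# Hypothetical stored telemetry: (timestamp, source host, destination).
# In practice these records come from flow logs or a metadata store.
telemetry = [
    ("2015-05-01T10:02:00", "10.0.1.15", "update.example.com"),
    ("2015-05-03T22:47:00", "10.0.1.22", "evil-c2.example.net"),
    ("2015-05-04T03:11:00", "10.0.1.22", "evil-c2.example.net"),
]

def retrospective_matches(records, new_indicators):
    """Sweep stored telemetry for destinations matching indicators
    we only learned about today -- i.e., did we already get hit?"""
    return [rec for rec in records if rec[2] in new_indicators]

# A fresh threat intel feed delivers a newly discovered C&C domain...
feed = {"evil-c2.example.net"}
for ts, src, dst in retrospective_matches(telemetry, feed):
    print(f"{src} contacted {dst} at {ts} -- investigate")
```

The point is the workflow, not the lookup: every new indicator triggers a search backwards through history, not just a watch going forward.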
A Picture Is Worth a Thousand Words
At this point you have alerts, and perhaps some measure of prioritization for them. But one of the most difficult tasks is deciding how to navigate through the hundreds or thousands of alerts that happen in networks at scale. That’s where visualization techniques come into play. A key criterion for choosing a detection offering is getting information presented in a way that makes sense to you and will work in your organization’s culture.
Some like the traditional user experience, which looks like a Top 10 list of potentially compromised devices, with the grid showing details of the alert. Another way to visualize detection data is as a heat map showing devices and potential risks visually, offering drill-down into indicators and alert causes. There is no right or wrong here – it is just a question of what will be most effective for your security operations team.
As compelling as network-based threat detection is conceptually, a bunch of integration needs to happen before you can provide value and increase your security program’s effectiveness. There are two sides to integration: data you need for detection, and information about alerts that is sent to other operational systems. For the former, these connections to identity systems and external threat intelligence drive analytics for detection. The latter includes the ability to pump the alert and contextual data to your SIEM or other alerting system to kick off your investigation process.
If you get comfortable enough with your detection results you can even configure workarounds, such as IPS blocking rules, based on these alerts. You might prevent compromised devices from doing anything, block C&C traffic, or block exfiltration traffic. As described above, prevention demands minimization of false positives, but disrupting attackers can be extremely valuable. Similarly, integration with Network Access Control can move a compromised device onto a quarantine network until it can be investigated and remediated.
For network forensics you might integrate with a full packet capture/network forensics platform. In this use case, when a device shows potential compromise, traffic to and from it could be captured for forensic analysis. Such captured network traffic may provide a proverbial smoking gun. This approach could also make you popular with the forensics folks, because you would be able to provide the actual packets from the attack. Prioritized alerts enable you to be more precise and efficient about what traffic to capture, and ultimately what to investigate.
Automation of these functions is still in its infancy. But we expect all sorts of security automation to emerge within the short-term planning horizon (18-24 months). We will increasingly see security controls reconfigured based on alerts, network traffic redirected, and infrastructure quarantined and pulled offline for investigation. Attacks hit too fast to do it any other way, but automation scares many security professionals. We expect to see this play out over the next 5-7 years, but have no doubt that it will happen.
When to Remediate, and When Not to
It may be hard to believe, but there are real scenarios where you might not want to immediately remediate a compromised device. The first – and easiest to justify – is when it is part of an ongoing investigation; HR, legal, senior management, law enforcement, or anyone else may mandate that the device be observed but otherwise left alone. There isn’t much wiggle room in this scenario. With the remediation decision no longer in your hands, and the risk of an actively compromised device on your network determined to be acceptable, you then take reasonable steps to monitor the device closely and ensure it is unable to exfiltrate data.
Another scenario where remediation may not be appropriate is when you need to study and profile your adversary, and the malware and command and control apparatus they use, through direct observation. Obviously you need a sophisticated security program to undertake a detailed malware analysis process (as described in Malware Analysis Quant), but clearly understanding and identifying indicators of compromise can help identify other compromised devices, and enable you to deploy workarounds and other infrastructure protections such as IPS rules and HIPS signatures.
That said, in most cases you will just want to pull the device off the network as quickly as possible, take a forensic image, and then reimage it. That is usually the only way to ensure the device is clean before letting it back into the general population. If you are going to follow an observation scenario, however, both the scenario and the decision tree behind it need to be documented and agreed on as part of your incident response plan.
With that we wrap up our Network-based Threat Detection series. We will be assembling this series into a white paper, and posting it in the Research Library soon. As always, if you see something here that doesn’t make sense or doesn’t reflect your experience or issues, please let us know in the comments. That kind of feedback makes our research more impactful.
Posted at Wednesday 13th May 2015 8:38 pm
By Mike Rothman
During speaking gigs we ask how many in the audience actually get through their to-do list every day. Usually we get one or two jokers in the crowd between jobs, or maybe just trying to troll us a bit. But nobody in a security operational role gets everything done every day. So the critical success factor is to make sure you are getting the right things done, and not burning time on activities that don’t reduce risk or contain attack damage.
Underpinning this series is the fact that prevention inevitably fails at some point. Along with a renewed focus on network-based detection, that means your monitoring systems will detect a bunch of things. But which alerts are important? Which represent active adversary activity? Which are just noise and need to be ignored? Figuring out which is which is where you need the most help.
To use a physical security analogy, a security fence will alert regularly. But you need to figure out whether it's a confused squirrel, a wayward bird, a kid on a dare, or an adversary's offensive maneuver. Just looking at the alert won't tell you much. But if you add other details and additional context into your analysis, you can figure out which is which. The stakes for getting this right are pretty high: the postmortems of many recent high-profile breaches indicate that alerts did fire – in some cases multiple times from multiple systems – but the organizations failed to take action… and suffered the consequences.
Our last post listed network telemetry you could look for to indicate potential malicious activity. Let’s say you like the approach laid out in that post and decide to implement it in your own monitoring systems. So you flip the switch and the alerts come streaming in. Now comes the art: separating signal from noise and narrowing your focus to the alerts that matter and demand immediate attention. You do this by adding context to general network telemetry and then using an analytics engine to crunch the numbers.
To add context you can leverage both internal and external information. At this point we’ll focus on internal data, because you already have that and can implement it right away. Our next post will tackle external data, typically accessible via a threat intelligence feed.
You start by figuring out what’s important – not all devices are created equal. Some store very important data. Some are issued to employees with access to important data, typically executives. But not all devices present a direct risk to your organization, so categorizing them provides the first filter for prioritization. You can use the following hierarchy to kickstart your efforts:
- Critical devices: Devices with access to protected information and/or particularly valuable intellectual property should bubble to the top. Fast. If a device on a protected and segmented network shows indications of compromise, that’s bad and needs to be dealt with immediately. Even if the device is dormant, traffic on a protected network that looks like command and control constitutes smoke, and you need to act quickly to ensure any fire doesn’t spread. Or enjoy your disclosure activities…
- Active malicious devices: If you see device behavior which indicates an active attack (perhaps reconnaissance, moving laterally within the environment, blasting bits at internal resources, or exfiltrating data), that's your next order of business. Even if the device isn't considered critical, if you don't deal with it promptly the attacker might find an exploitable hole to a higher-value device and continue moving laterally within the organization. So investigate and remediate these devices next.
- Dormant devices: These devices at some point showed behavior consistent with command and control traffic (typically staying in communication with a C&C network), but aren’t doing anything malicious at the moment. Given the number of other fires raging in your environment, you may not have time to remediate these dormant devices immediately.
These priorities are fairly coarse but should be sufficient. You don’t want a complicated multi-tier rating system which is too involved to use on a daily basis. Priorities should be clear. If you have a critical device that is showing malicious activity, that’s a red alert. Critical devices that throw alerts need to be investigated next, and then non-critical devices showing malicious activity. Finally, after you have all the other stuff done, you can get around to dealing with devices you’re pretty sure are compromised. Of course this last bucket might show malicious activity at any time, so you still need to watch it. The question is when you remediate.
This categorization helps, but within each bucket you likely have multiple devices. So you still need additional information and context to make decisions.
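The coarse ordering described above is easy to encode. A minimal sketch – the device records and field names below are illustrative assumptions, not anyone's product schema:

```python
def triage_rank(device):
    """Lower rank = investigate sooner. Mirrors the coarse ordering:
    critical + malicious first, then critical with alerts, then
    non-critical malicious, then dormant-but-likely-compromised."""
    if device["critical"] and device["active_malicious"]:
        return 0  # red alert
    if device["critical"]:
        return 1
    if device["active_malicious"]:
        return 2
    return 3      # dormant: keep watching, remediate when you can

devices = [
    {"name": "hr-laptop-07",  "critical": False, "active_malicious": False},
    {"name": "db-server-01",  "critical": True,  "active_malicious": True},
    {"name": "eng-desktop-3", "critical": False, "active_malicious": True},
]

for d in sorted(devices, key=triage_rank):
    print(d["name"])
# db-server-01, then eng-desktop-3, then hr-laptop-07
```

Within each rank you would break ties with the identity, location, and content context discussed next.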
Who and Where
Not all employees are created equal either. Another source of context is user identity, and there are a couple of groups you need to pay attention to. The first is people with elevated privileges, such as administrators and others with entitlements to manage devices that hold critical information. They can add, delete, and change accounts and access rules on the servers, and manipulate data. They have access to tamper with logs, and can basically wreck an environment from the inside. There are plenty of examples of rogue or disgruntled administrators making a real mess, so when you see an administrator's device behaving strangely, it should bubble up to the top of your list.
The next group of folks to watch closely are executives with access to financials, company strategy, and other key intellectual property. These users are attacked most frequently via phishing and other social engineering, so they need to be watched closely – even trained, they aren’t perfect. This may trigger organizational backlash – some executives get cranky when they are monitored. But that’s not your problem, and without this kind of context it’s hard to do your job. So dig in and make your case to the executives for why it’s important. As you look for indicators that devices are connecting to a C&C server or performing reconnaissance, you are protecting the organization, and executives should know better than to fight that.
The location of your critical data also provides context for priorities. Critical data lives on particular network segments, typically in the data center, so you should be making sure those networks are monitored. But it’s not just PII you need to worry about. Your organization should isolate segments for labs doing cutting-edge R&D, finance networks with preliminary numbers from last quarter, and anything else needing special caution. Isolation is your friend – use different segments, at least logically, to minimize data intermingling.
You can get contextual information from a variety of sources you likely already use. For instance identity information (such as Active Directory users and groups) enables you to map a device to a user and/or group. Then you can profile typical finance department activity and recognize that it differs from how marketing and engineering groups communicate with each other and the broader Internet. You could go deeper and profile specific people.
Additionally, network topology information is important in attack path analysis to understand the blast radius of any specific attack. That’s a fancy term for damage assessment in case a device or network is compromised: what else would be directly exposed? Once you figure out which other devices on the network can be reached from the compromised device (during lateral movement), and what potential attacks would succeed, you can use this information to further prioritize your activities.
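Blast radius estimation is essentially a reachability question over the topology graph. A minimal sketch, assuming you have already reduced topology and firewall rules to an adjacency map (the map and host names below are invented):

```python
from collections import deque

# Assumed adjacency map: which devices can open connections to which
# others, derived from network topology and firewall rules.
reachable = {
    "web-01": ["app-01"],
    "app-01": ["db-01", "app-02"],
    "app-02": ["db-01"],
    "db-01":  [],
}

def blast_radius(start, adjacency):
    """Breadth-first sweep of everything reachable from a compromised
    device -- a crude estimate of what lateral movement could expose."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in adjacency.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    seen.discard(start)  # report only the downstream exposure
    return seen

print(sorted(blast_radius("web-01", reachable)))
# everything downstream of web-01 is potentially exposed
```

A real attack-path tool would also weigh whether each hop is actually exploitable, but even plain reachability helps rank which compromises matter most.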
The next area to mine for context is content – as you might guess, not all data is created equal either. You'll need to be able to analyze the content stream within network traffic to look for protected data, or data identified as critical intellectual property. This rough data classification can be very resource-intensive and hard to keep current (ask anyone trying to implement DLP), so make it as simple as possible. For instance personally identifiable information (PII) may be the most important data to protect in your environment. But intellectual property is the lifeblood of most non-medical high-tech organizations, and thus typically their top priority. It doesn't really matter what is at the top, so long as it reflects your organization's priorities.
Compliance remains a factor for many organizations, so potential compliance violations should also bubble up when you figure out priorities.
The importance of various specific types of content depends on the organization, and you need to do the work to understand how they need to be protected and monitored. That will entail building consensus with executives, because you need clear marching orders for what alerts need to be validated and investigated first.
Armed with network data identifying indicators, and additional context such as identity, location, and content, you now need to figure out what is at greatest risk and react accordingly. This involves crunching numbers and identifying the highest-priority alerts. You are looking to:
- Get a verdict on a device and/or a network: whether it has been compromised and to what degree.
- Dig deeper into the attack to figure out the extent of the damage and how far it has spread.
This requires math. We aren’t being flippant (okay, maybe a little), but this type of analysis requires fairly sophisticated algorithms to establish a general risk assessment. You will hear a lot of noise about “risk scoring” as you dig into the current state of network-based detection. Coming up with a quantified risk score can be pretty arbitrary, so it’s good to understand how the score is calculated and where the numbers come from. Make sure your numbers pass the sniff test and you can defend where they come from, because they will be used to make decisions.
As discussed above, your organization has its own ideas about what's important, and different risk tolerances than other organizations. So you should be able to tune algorithms and weight factors differently to get more meaningful alerts. Your environment is not static – it changes constantly, which means you need to tune your alerting systems on an ongoing basis. Sorry, but there is not much "set it and forget it" any more. We recommend you include a feedback loop in your security alerting process: assess the value of your alerts, identify gaps, and then tune further based on what is really happening in the field.
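A transparent weighted-sum score illustrates the point about defensible numbers. The factors and weights below are pure assumptions – the whole point is that you tune them for your own environment and can explain every term:

```python
# Illustrative factors and weights; every number here is an assumption
# you would adjust to reflect your organization's risk tolerance.
weights = {
    "critical_asset":    0.40,
    "active_indicator":  0.35,
    "privileged_user":   0.15,
    "sensitive_content": 0.10,
}

def risk_score(signals, weights):
    """Weighted sum of 0-1 signals, normalized to a 0-100 scale.
    Deliberately simple: you should be able to defend where each
    number comes from when the score drives a decision."""
    raw = sum(weights[k] * signals.get(k, 0) for k in weights)
    return round(100 * raw / sum(weights.values()), 1)

# A critical asset showing an active indicator, non-privileged user.
alert = {"critical_asset": 1, "active_indicator": 1, "privileged_user": 0}
print(risk_score(alert, weights))  # 75.0
```

Opaque vendor scores work the same way underneath, with more factors; the sniff test is whether anyone can tell you what the weights are and why.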
Once you have a score you need to operationalize the detection process. That entails figuring out how you will visualize the data and integrate it into your security operational processes. Our next post will discuss getting context from external data/threat intelligence sources, and then using it to help you remediate attacks completely and efficiently.
Posted at Monday 11th May 2015 7:00 am
By Mike Rothman
I’m spent after the RSAC. By Friday I have been on for close to a week. It’s nonstop, from the break of dawn until the wee hours of the morning. But don’t feel too bad – it’s one of my favorite weeks of the year. I get to see my friends. I do a bunch of business. And I get a feel for how close our research is to reflecting the larger trends in the industry.
But it’s exhausting. When the kids were smaller I would fly back early Friday morning and jump back into the fray of the Daddy thing. I had very little downtime and virtually no opportunity to recover. Shockingly enough, I got sick or cranky or both. So this year I decided to do it differently. I stayed in SF through the weekend to unplug a bit.
I made no plans. I was just going to flow. There was a little bit of structure. Maybe I would meet up with a friend and get out of town to see some trees (yes, Muir Woods was on the agenda). I wanted to catch up with a college buddy who isn’t in the security business, at some point. Beyond that, I’d do what I felt like doing, when I felt like doing it. I wasn’t going to work (much) and I wasn’t going to talk to people. I was just going to be.
Turns out my friend wasn’t feeling great, so I was solo on Friday after the closing keynote. I jumped in a Zipcar and drove down to Pacifica. Muir Woods would take too long to reach, and I wanted to be by the water. Twenty minutes later I was sitting by the ocean. Listening to the waves. The water calms me and I needed that. Then I headed back to the city and saw an awesome comedian was playing at the Punchline. Yup, that’s what I did. He was funny as hell, and I sat in the back with my beer and laughed. I needed that too.
Then on Saturday I did a long run on the Embarcadero. Turns out a cool farmer’s market is there Saturdays. So I got some fruit to recover from the run, went back to the hotel to clean up, and then headed back to the market. I sat in a cafe and watched people. I read a bit. I wrote some poetry. I did a ZenTangle. I didn’t speak to anyone (besides a quick check-in with the family) for 36 hours after RSA ended. It was glorious. Not that I don’t like connecting with folks. But I needed a break.
Then I had an awesome dinner with my buddy and his wife, and flew back home the next day in good spirits, ready to jump back in. I’m always running from place to place. Always with another meeting to get to, another thing to write, or another call to make. I rarely just leave myself empty space with no plans to fill it. It was awesome. It was liberating. And I need to do it more often.
This is one of the poems I wrote, watching people rushing around the city.
You feel them before you see
They have somewhere to be
It’s very important
Going around you as quickly as they can.
They are going places.
But never catching up.
They are going places.
Until they see
that right here
is the only place they need to be.
– MSR, 2015
Photo credit: “65/365: be. [explored]” originally uploaded by It’s Holly
The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. Take an hour and check it out on YouTube. Your emails, alerts, and Twitter timeline will be there when you get back.
Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.
We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.
Network-based Threat Detection
Applied Threat Intelligence
Network Security Gateway Evolution
Recently Published Papers
Incite 4 U
Threat intel still smells like poop? I like colorful analogies. I’m sad that my RSAC schedule doesn’t allow me to see some of the more interesting sessions by my smart friends. But this blow-by-blow of Rick Holland’s Threat Intelligence is Like Three-Day Potty Training makes me feel like I was there. I like the maturity model, and know many large organizations invest a boatload of cash in threat intel; as long as they take a process-centric view (as Rick advises) they can get great value from that investment. But I’m fixated on the not-Fortune-500. You know, organizations with a couple folks on the security team (if that) and a budget of a few Starbucks cards for threat intel. What do those folks do? Nothing right now, but over time they will expect and get threat intel built into their controls. Why should they have to spend time and money they don’t have to integrate data their products should just use? Oh, does that sound like the way security products have worked for decades, driven by dynamic updates from the vendor who produces the device? Right, back to the future. But a better future with better data, and possibly even better results. – MR
Backwards: In the current round of vulnerability disclosure lunacy, the FBI detained security researcher Chris Roberts – who recently disclosed major vulnerabilities in airline in-flight WiFi systems – for questioning after exiting a recent flight. What makes this story suspect is that Roberts was cooperating with airlines and the FBI prior to this. He met with both to discuss the issues, so they were fully aware of his findings. From statements it looks like the FBI performed a forensic analysis of the plane’s systems, and given their desire to examine Roberts’ laptop, it looks like this was an attempt to entrap (sorry, determine) whether Roberts stupidly hacked the plane he was on. The disclosure was a month prior, so the FBI could have pulled Roberts before boarding, or gone to his office, or even called and asked him to come in – but that’s not what they did. So far as we know, none of the executives whose companies produce the vulnerable WiFi systems have been pulled from their flights; and more troubling, none of those systems were disabled pending investigation prior to Roberts’ flight. If the threat was serious, quietly disabling in-flight entertainment would be the correct action – not a grandstanding public arrest of a guy openly trying to get vulnerabilities fixed. – AL
Even a mindset shift won’t solve the problem: Working through the round-ups of RSAC 2015, I found some coverage of RSA President Amit Yoran’s keynote. His main contention was that security comes down to a change in mindset, as opposed to expecting some new widget to solve all our problems. I like that message, because I agree that chasing shiny new products and services, seeking a silver bullet, has moved us backwards. Clearly a mindset shift to focus on the people side is necessary, but it’s not sufficient. I think the goal of stopping attackers is a bit misguided, so that’s what we need to shift. It’s about managing loss, not blocking attacks. Some loss is actually necessary, because avoiding all loss would be too expensive. But how can you find the right balance? That’s the art of doing security: balancing the value of what’s at risk against the cost to protect it. Feel better now? – MR
Busting the confusion: When the cloud was new some experts told us it was nothing more than outsourced mainframe computing. Lots of rubbish like that gets thrown out there when people don’t fully comprehend innovative or disruptive technology. Such is also the case with DevOps, and Gene Kim’s recent myth-busting article on DevOps makes some great points addressing the big misconceptions I hear frequently. For me his first point is the biggest: DevOps does not replace Agile – it helps make the rest of the organization more Agile. Additionally, Agile development with Scrum continues to work as before, but with less friction and fewer impediments from outside groups. Sure, automation of many IT and QA tasks into a development pipeline is a big part of that, but focusing on that aspect diminishes the importance of addressing Work in Progress, a bigger source of friction. Gene’s comments are right on the mark and required reading – at least for those of you who don’t take the time to read The Phoenix Project. And yes, you should make that time. – AL
Cheaters. Shocking! It seems a bevy of Chinese anti-virus vendors keep getting caught cheating on effectiveness tests, according to Graham Cluley. I find this pretty entertaining, mostly because anyone who buys an AV product based on the results of an effectiveness test is a joke. Additionally, it seems people forget that China plays business by different rules. They have no issue with taking your intellectual property, because they view it differently. So why would anyone be surprised that they think differently about AV comparison tests? It comes back to something we learned early on: you can’t expect other folks to act like you. Just because you won’t cheat doesn’t mean other folks are bound by the same ethics. You need to understand how to buy these products, and if you’re relying on third-party testing you will get what you deserve. – MR
Posted at Wednesday 6th May 2015 6:00 am
The RSA conference is over and put up some massive numbers (for security). But what does it all mean? Can all those 450 vendors on the show floor possibly survive? Do any of them add value?
Do bigger numbers mean we are any better than last year? And how can we possibly balance being an industry, community, and profession simultaneously? Not that we answer any of that, but we can at least keep you entertained for 13 minutes.
Watch or listen:
Posted at Monday 4th May 2015 9:36 pm
By Mike Rothman
Now that RSAC is behind us, it’s time to get back to our research agenda. So we pick up Network-based Threat Detection where we left off. In that first post, we made the case that math and context are the keys to detecting attacks from network activity, given that we cannot totally prevent endpoint compromise. Attackers always leave a trail on the network.
So we need to collect and analyze network telemetry to determine whether the communications between devices, and their content, are legitimate or warrant additional investigation. Modern malware relies heavily on the network to initiate the connection between the device and the controller, download attacks, perform automated beaconing, etc. Fortunately these activities show a deterministic pattern, which enables you to pinpoint malicious activity and identify compromised systems.
Attackers bet they will be able to obscure their communications within the tens of billions of legitimate packets traversing enterprise networks on any given day, and on defenders’ general lack of sophistication preventing them from identifying the giveaway patterns. But if you can identify the patterns, you have an opportunity to detect attacks.
Command and Control
Command and Control (C&C) traffic is communication between compromised devices and botnet controllers. Once the device executes malware (by whatever means) and the dropper is installed, the device searches for its controller to receive further instructions. There are two main ways to identify C&C activity: traffic destination, and the communication patterns between devices and controllers.
The industry has been using IP reputation for years to identify malicious destinations on the Internet. Security researchers evaluate each IP address and determine whether it is ‘good’ or ‘bad’ based on activity they observe across a massive network of sensors. IP reputation turns out to be a pretty good indicator that an address has been used for malicious activity at some point. Traffic to known-bad destinations is definitely worth checking out, and perhaps even blocking. But malicious IP addresses (and even domains) are not active for long, as attackers cycle through addresses and domains frequently.
Attackers also use legitimate sites as C&C nodes, which can leave innocent (but compromised) sites with a bad reputation. So the downside to blocking traffic to sites with bad reputation is the risk of irritating users who want to use the legitimate site. Our research shows increasing comfort with blocking sites because the great majority of addresses with bad reputations have legitimately earned it.
Keep in mind that IP reputation is not sufficient to identify all the C&C traffic on your network – many malicious sites don’t show up on IP reputation lists. So next look for other indications of malicious activity on the network, which depends on how compromised devices find their controllers.
With the increasing use of domain generating algorithms (DGA), malware doesn’t need to be hard-coded with specific domains or IP addresses – instead it cycles through a set of domains according to its DGA, searching for a dynamically addressed C&C controller; the addresses cycle daily. This provides tremendous flexibility for attackers to ensure newly-compromised devices can establish contact, despite frequent domain takedowns and C&C interruptions. But these algorithms look for controllers in a predictable pattern, making frequent DNS calls in specific patterns. So DNS traffic analysis has become critical for identification of C&C traffic, along with monitoring packet streams.
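To make that pattern concrete, here is a minimal sketch of both sides: a toy domain generator that cycles daily, and the sort of entropy check DNS analysis uses to flag algorithmically generated lookups. The seeding scheme, domain format, and thresholds here are hypothetical illustrations — real malware families use wildly different schemes, and real detection combines many more signals:

```python
import hashlib
import math
from collections import Counter

def dga_domains(seed: str, day: str, count: int = 5) -> list[str]:
    # Toy DGA: derive a deterministic, date-dependent set of candidate
    # C&C domains. Same seed + same day => same domains, so attacker and
    # malware agree on the day's rendezvous points without hard-coding them.
    domains = []
    for i in range(count):
        digest = hashlib.sha256(f"{seed}:{day}:{i}".encode()).hexdigest()
        domains.append(digest[:12] + ".com")
    return domains

def entropy(label: str) -> float:
    # Shannon entropy of a domain label in bits per character.
    # Machine-generated names tend to score higher than dictionary words.
    counts = Counter(label)
    total = len(label)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_generated(domain: str, threshold: float = 3.0) -> bool:
    # Flag long, high-entropy labels as likely DGA output.
    label = domain.split(".")[0]
    return len(label) >= 10 and entropy(label) > threshold
```

A detector watching DNS would apply a check like `looks_generated` across a device's queries: one odd lookup means little, but dozens of high-entropy lookups in a burst is exactly the predictable pattern described above.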
Identifying C&C traffic before the compromised device becomes a full-fledged member of the botnet is optimal. But if you miss, once the device is part of the botnet you can look for indications that it is being used as part of an attack chain. You do this by looking for outliers: devices acting atypically.
Does this sound familiar? It should – anomaly detection has been used to find attackers for over a decade, typically using Netflow. You profile normal traffic patterns for users on your network (source/destination/protocol), and then look for situations where traffic varies outside your baseline and exceeds tolerances.
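That baseline-and-tolerance idea can be sketched in a few lines. This is a deliberately simplified illustration — real products profile source/destination/protocol tuples rather than raw byte counts, and the three-sigma threshold here is an assumption, not anyone's shipping algorithm:

```python
from statistics import mean, stdev

def build_baseline(history):
    # history: {host: [daily byte counts]} distilled from Netflow records.
    # Baseline is a (mean, standard deviation) pair per host.
    baseline = {}
    for host, samples in history.items():
        sigma = stdev(samples) if len(samples) > 1 else 0.0
        baseline[host] = (mean(samples), sigma)
    return baseline

def find_outliers(baseline, today, k=3.0):
    # Flag hosts whose traffic today exceeds mean + k standard deviations.
    # Unknown hosts get a zero baseline, so any real traffic flags them too.
    flagged = []
    for host, observed in today.items():
        mu, sigma = baseline.get(host, (0.0, 0.0))
        if observed > mu + k * max(sigma, 1.0):  # floor avoids zero-variance noise
            flagged.append(host)
    return flagged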
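```

The tolerance `k` is the tuning knob the rest of this post keeps coming back to: too tight and the alert queue drowns you, too loose and the exfiltration hides inside "normal."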
Network-based anomaly detection was reasonably effective, but as adversaries got more sophisticated detection needed to dig more deeply into traffic. Deep packet inspection and better analytics enabled detection offerings to apply context to traffic. Attack traffic tends to occur in a few cycles:
- Command and Control: As described above, devices communicate with botnet controllers to join the botnet.
- Reconnaissance: After compromising the device and gaining access via the botnet, attackers communicate with internal devices to map the network and determine the most efficient path to their target.
- Lateral Movement: Once the best path to the target is identified, attackers systematically move through your network to approach their intended target, by compromising additional devices.
- Exfiltration: Once the target device is compromised, the attacker needs to move the data from the target device, outside the network. This can be done using tunnels, staging servers, and other means to obfuscate activity.
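One simple way to use those cycles, sketched below with hypothetical stage labels, is to rank devices by how many distinct stages they have exhibited — a device showing C&C plus lateral movement is a much stronger lead than one that merely touched a bad IP:

```python
# The attack-chain stages described above, in order.
STAGES = ["c2", "recon", "lateral", "exfil"]

def chain_score(observations):
    # observations: {device: set of stage names observed for that device}.
    # More distinct stages seen => higher confidence the device is part of
    # an active attack chain, so rank devices by stage count, descending.
    scores = {dev: sum(1 for s in STAGES if s in seen)
              for dev, seen in observations.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

This is only a triage heuristic, not a verdict — which is exactly the point the next paragraph makes about needing additional context to establish intent.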
Each of these cycles includes patterns you can look for to identify potential attacks. But this still isn’t a smoking gun – at some point you will need to apply additional context to understand intent. Analyzing content in the communication stream is the next step in identifying attacks.
One way to glean more context from network traffic is to understand what is being moved. With deep packet inspection and session reassembly, you can perform file-based analysis on content as well. Then you can compare against baselines to look for anomalies in the movement of content within your network.
- File size: For example, if a user moved 2GB of traffic over a 24-hour period, when they normally move no more than 100MB, that should trigger an alert. Perhaps it’s nothing, but it should be investigated.
- Time of day: Similarly, if a user doesn’t normally work in the middle of the night, but does so two days in a row by themselves, that could indicate malicious activity. Of course it might be just a big project, but it bears investigation.
- Simple DLP: You can fingerprint files to look for sensitive content, or regular expressions which match account numbers or other protected data. Of course that isn’t full DLP-style classification and analysis. But it could flag something malicious without the overhead of full DLP.
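As an illustration of the “simple DLP” idea, here is a sketch pairing a loose regular expression with a Luhn checksum to cut false positives when looking for card-number-like content. It is nowhere near full DLP classification and analysis — just the lightweight flagging described above, with hypothetical function names:

```python
import re

# Loose pattern: 13-16 digits, optionally separated by spaces or hyphens.
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(number: str) -> bool:
    # Luhn checksum: doubles every second digit from the right; valid card
    # numbers sum to a multiple of 10. Weeds out most random digit runs.
    digits = [int(ch) for ch in number if ch.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def flag_sensitive(payload: str) -> list[str]:
    # Return normalized candidate card numbers that pass the Luhn check.
    hits = []
    for m in CARD_RE.finditer(payload):
        if luhn_ok(m.group()):
            hits.append(re.sub(r"[ -]", "", m.group()))
    return hits
```

Running `flag_sensitive` over reassembled sessions is cheap enough to do inline, which is exactly the trade-off above: catch the obvious leakage without the overhead of full DLP.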
Content analysis won’t provide a smoking gun either. But combined with the network traffic detection discussed above, it provides more context for discerning intent. This context helps explain some behavior that would otherwise be flagged as anomalous, reducing false positives.
Malware crossing the perimeter does not necessarily mean it executed on any devices. That is a weakness of network-based sandboxes, which just look at and alert on files coming into the network. Those devices fire an alert whenever they see malware, even if the target device is totally protected from the attack. One way to further identify real attacks is to integrate endpoint telemetry into analysis, to verify and validate what actually happened. So we increasingly see a drive for network-based detection coordinated with endpoint detection.
That doesn’t mean you don’t want to know malware entered the network, but you need some way to prioritize whether or not it needs to be dealt with right now. Which brings up the much larger issue of prioritization, and knowing which potential attack to deal with first. It comes down to understanding what presents the most clear and present danger (risk) to your environment, which we’ll tackle in our next post.
Posted at Wednesday 29th April 2015 9:12 pm
By Mike Rothman
Last year Big Data was all the rage at the RSAC in terms of security monitoring and management. So the big theme this year will be…(drum roll, please)…Big Data. Yes, it’s more of the same, though we will see security big data called a bunch of different things—including insider threat detection, security analytics, situational awareness, and probably two or three more where we have no idea what they even mean.
But they all have one thing in common: math. That’s right—remember those differential equations you hated in high school and college? Be glad that helpful freshman in AP Calculus actually liked math. Those are the folks who will save your bacon, because their algorithms are helping detect attackers doing their thing.
Detecting the Insider
It feels a bit like we jumped into a time machine, and ended up back in 1998. Or 2004. Or 2008. You remember—that year when everyone was talking about insiders and how they were robbing your organization blind. We still haven’t solved the problem, because it’s hard. So every 4-5 years the vendors get tired of using black-masked external-attacker icons in their corporate PowerPoint decks, and start talking about catching insiders instead.
This year will be no different—you will hear a bunch of noise at RSAC about the insider threat. The difference this year is that the math folks I mentioned earlier have put their algorithms to work on finding anomalous behaviors inside your network, and profiling what insiders typically do while they are robbing you blind. You might even be able to catch them before Brian Krebs calls to tell you all about your breach.
These technologies and companies are pretty young, so you will see them on the outside rings of the conference hall and in the RSAC Innovation Sandbox, but they are multiplying like [name your favorite pandemic]. It won’t be long before the big SIEM players and other security management folks (yes, vulnerability management vendors, we’re looking at you) start talking about users and insiders to stay relevant. Don’t you just love the game?
Security Analytics: Bring Your PhD
The other epiphany many larger organizations had over the past few years is that they already have a crapton of security data. You can thank PCI-DSS for making them collect and aggregate all sorts of logs over the past few years. Then the forensics guys wanted packets, so you started capturing those too. Then you had the bright idea to put everything into a common data model.
Then what? Your security management strategy probably looked something like this:
- Collect data.
- Put all data in one place.
- Detect attacks.
This year a bunch of vendors will be explaining how they can help you with step 3, using their analytical engines to answer questions you didn’t even know to ask. They’ll use all sorts of buzzwords like ElasticSearch and Cassandra, talk about how cool their Hadoop is, and convince you they have data scientists thinking big thoughts about how to solve the security problem, and their magic platform will do just that.
Try not to laugh too hard at the salesperson. Then find an SE and have them walk you through setup and tuning of the analytics platform. Yes, it needs to be tuned regardless of what the salesperson tells you. How do you start? What data do you need? How do you refine queries? How do you validate a potential attack? Where can you send data for more detailed forensic analysis? If the SE has on dancing shoes, the product probably isn’t ready yet—unless you have your own group of PhDs you can bring to the table. Make sure the analytics tool actually saves time, rather than just creating more detailed alerts you don’t have time to handle.
We’re not saying PhDs aren’t cool—we think it’s great that math folks are rising in prominence. But understand that when your SOC analyst wants you to call them a “Data Scientist” it’s so they can get a 50% raise for joining another big company.
We have finally reached the point as an industry where practitioners don’t actually believe they can stop all attacks any more. We know that story was less real than the tooth fairy, but way too many folks actually believed it. Now that ruse is done, so we can focus on the fact that at some point soon you will be investigating an incident. So you will have forensics professionals onsite, trying to figure out what actually happened.
The forensicators will ask to see your data. It’s good you have a crapton of security data, right? But you will increasingly be equipping your internal team for the first few steps of the investigation. So you will see a lot of forensics tools at the RSAC, and forensics companies repositioning as security shops. They will show their forensics hooks within your endpoint security products and your network security controls. Almost every vendor will have something to say about forensics. Mostly because it’s shiny.
Even better, most vendors are fielding their own incident response service. It is a popular belief that if a company can respond to an incident, they are well positioned to sell product at the back-end of the remediation/recovery. Of course that creates a bull market for folks with forensics skills. These folks can jump from company to company, driving up compensation quickly. They are on the road 5 days a week anyway, if not more, so why would they care which company is on their business cards?
This wave of focus on forensics, and resulting innovation, has been a long time coming. The tools are still pretty raw and cater mostly to highly sophisticated customers, but we see progress. This progress is absolutely essential – there aren’t enough skilled forensics folks, so you need a way to make your less skilled folks more effective with tools and automation. Which is a theme throughout the RSAC-G this year.
SECaaS or SUKRaaS
The other downside to an overheated security environment is that because end-user organizations can’t find skilled staff, they need to supplement with managed services. Of course that assumes your managed services provider will have better luck finding people than you do. Again, it’s just math. There aren’t enough folks who know enough about security. Just because the company is a managed service provider, doesn’t mean they have a secret fountain of security professionals. Nor is a higher being dropping those folks in some field like manna.
So make sure you aren’t buying a Sucker as a Service (SUKRaaS) offering, by contracting a multi-year deal with an organization that has a huge SOC but not enough folks to keep it staffed. Texans would call that “All SOC, no cattle.” Of course there is leverage to be found in this business, and a managed service provider will be able to scale a bit better than an enterprise. But they still have a lot of the same problems as their enterprise clients.
This is where the diligence part of the process comes in. Before you sign that 3-year deal, make sure your SECaaS (Security as a Service) partner actually has the folks. Dig into their HR and staffing plans. Understand how they train new analysts. Get a feel for turnover in their SOC, and what kinds of tools they are investing in to gain leverage in operations.
And be happy when they start talking about all the data scientists they hired and the wonderful security analytics platform they implemented over the past year. Math strikes again!
Posted at Thursday 23rd April 2015 6:00 pm
By Mike Rothman
Identity is one of the more difficult topics to cover in our yearly RSAC Guide, because identity issues and trends don’t grab headlines. Identity and Access Management vendors tend to be light-years ahead of most customers. You may be thinking “Passwords and Active Directory: What else do I need to know?” which is pretty typical. IAM responsibilities sit in a no-man’s land between security, development, and IT… and none of them wants ownership. Most big firms now have a CISO, CIO, and VP of Engineering, but when was the last time you heard of a VP of Identity? Director? No, we haven’t either. That means customers—and cloud providers, as we will discuss in a bit—are generally not cognizant of important advancements. But those identity systems are used by every employee and customer. Unfortunately, despite ongoing innovation, much of what gets attention is somewhat backwards.
The Cutting Edge—Role-Based Access Control for the Cloud
Roles, roles, and more roles. You will hear a lot about Role-Based Access Controls from the ‘hot’ product vendors in cloud, mobile management, and big data. It’s ironic—these segments may be cutting-edge in most ways, but they are decidedly backwards for IAM. Kerberos, anyone? The new identity products you will hear most about at this year’s RSAC show—Azure Active Directory and AWS Access Control Lists—are things most of the IAM segment have been trying to push past for a decade or more. We are afraid to joke about it, because an “identity wizard” to help you create ACLs “in the cloud” could become a real thing. Despite RBAC being outdated, it keeps popping up unwanted, like that annoying paper clip because customers are comfortable with it and even look for those types of solutions. Attribute Based Access Controls, Policy Based Access Controls, real-time dynamic authorization, and fully cloud-based IDaaS are all impressive advances, available today. Heck, even Jennifer Lawrence knows why these technologies are important—her iCloud account was apparently hacked because there was no brute-force replay checker to protect her. Regardless, these vendors sit unloved, on the outskirts of the convention center floor.
We hear it all the time from identity vendors: “Standards-based identity instills confidence in customers,” but the vendors cannot seem to agree on a standard. OpenID vs. SAML vs. OAuth, oh my! Customers do indeed want standards-based identity, but they fall asleep when this debate starts. There are dozens of identity standards in the CSA Guidance, but which one is right for you? They all suffer from the same issue: they are all filled with too many options. As a result interoperability is a nightmare, especially for SAML. Getting any two SAML implementations to talk to each other demands engineering time from both product teams. IAM in general, and SAML specifically, beautifully illustrate Tanenbaum’s quote: “The nice thing about standards is that you have so many to choose from.” Most customers we speak with don’t really care which standard is adopted—they just want the industry to pick one and be done with it. Until then they will focus on something more productive, like firewall rules and password resets. They are waiting for it to be over so they can push a button to interoperate—you do have an easy button, right?
Good Dog, Have a Biscuit
We don’t like to admit it, but in terms of mobile payments and mobile identity, the U.S. is a laggard. Many countries we consider ‘backwards’ were using mobile payments as their principal means to move money long before Apple Pay was announced. But these solutions tend to be carrier-specific; U.S. adoption was slowed by turf wars between banks, carriers, and mobile device vendors. Secure elements or HCE? Generic wallets or carrier payment infrastructure? Tokens or credit cards? Who owns the encryption keys? Do we need biometrics, and if so which are acceptable? Each player has a security vision which depends on, and only supports, their business model. Other than a shared desire to discontinue the practice of sending credit card numbers to merchants over SSL, there has been little agreement.
For several years now the FIDO Alliance has been working on an open and interoperable set of standards to promote mobile security. This standard does not just establish a level playing field for identity and security vendors—it defines a user experience to make mobile identity and payments easier. So the FIDO standard is becoming a thing. It enables vendors to hook into the framework, and provide their solution as part of the ecosystem. You will notice a huge number of vendors on the show floor touting support for the FIDO standard. Many demos will look pretty similar because they all follow the same privacy, security, and ease of use standards, but all oars are finally pulling in the same direction.
Posted at Wednesday 22nd April 2015 6:30 pm
By Mike Rothman
What you’ll see at the RSAC in terms of endpoint security is really more of the same. Advanced attacks blah, mobile devices blah blah, AV-vendor hatred blah blah blah. Just a lot of blah… But we are still recovering from the advanced attacker hangover, which made painfully clear that existing approaches to preventing malware just don’t work. So a variety of alternatives have emerged to do it better. Check out our Advanced Endpoint and Server Protection paper to learn more about where the technology is going. None of these innovations has really hit the mainstream yet, so it looks like the status quo will prevail again in 2015. But the year of endpoint security disruption is coming—perhaps 2016 will be it…
White listing becomes Mission: POSsible
Since last year’s RSAC many retailers have suffered high-profile breaches. But don’t despair—if your favorite retailer hasn’t yet sent you a disclosure notice, it will arrive with your new credit card just as soon as they discover the breach. And why are retailers so easy to pop? Mostly because many Point-of-Sale (POS) systems use modern operating systems like Embedded Windows XP. These devices are maintained using state-of-the-art configuration and patching infrastructures—except when they aren’t. And they all have modern anti-malware protection, unless they don’t have even ineffective signature-based AV. POS systems have been sitting ducks for years. Quack quack.
Clearly this isn’t a really effective way to protect devices that capture credit cards and handle money, which happen to run on circa-1998 operating systems. So retailers and everyone else dealing with kiosks and POS systems have gotten the white listing bug, big-time. And this bug doesn’t send customer data to carder exchanges in Eastern Europe.
What should you look for at the RSAC? Basically a rep who isn’t taking an order from some other company.
Calling Dr. Quincy…
We highlighted a concept last year, which we call endpoint monitoring. It’s a method for collecting detailed and granular telemetry from endpoints, to facilitate forensic investigation after a device compromise. As it turned out, that actually happened—our big research friends who shall not be named have dubbed this function ETDR (Endpoint Threat Detection and Response). And ETDR is pretty shiny nowadays.
As you tour the RSAC floor, pay attention to ease-of-use. The good news is that some of these ETDR products have been acquired by big companies, so they will have a bunch of demo pods in their huge booths. If you want to check out a startup you might have to wait—you can only fit so much in a 10’ by 10’ booth, and we expect these technologies to garner a lot of interest. And since the RSAC has outlawed booth babes (which we think is awesome), maybe the crowded booths will feature cool and innovative technology rather than spandex and leather.
While you are there you might want to poke around a bit, to figure out when your ETDR vendor will add prevention to their arsenal, so you can finally look at alternatives to EPP. Speaking of which…
Don’t look behind the EPP curtain…
The death of endpoint protection suites has been greatly exaggerated. Which continues to piss us off, to be honest. In what other business can you be largely ineffective, cost too much, and slow down the entire system, and still sell a couple billion dollars worth of product annually? The answer is none, but the reason companies still spend money is compliance. If EPP was a horse we would have shot it a long time ago.
So what is going to stop the EPP hegemony? We need something that can protect devices and drive down costs, without killing endpoint performance. It will take a vendor with some cojones. Companies offering innovative solutions tend to be content to position them as complementary solutions to EPP suites. Then they don’t have to deal with things like signature engines (to keep QSAs who are stuck in 2006 happy) or full disk encryption.
Unfortunately cojones will be in short supply at the 2015 RSAC—even in a heavily male-dominated crowd. But at some point someone will muster up the courage to acknowledge the EPP emperor has been streaking through RSAC for 5 years, and finally offer a compelling package that satisfies compliance requirements.
Can you do us a favor on the show floor? Maybe drop some hints that you would be happy to divert the $500k you plan to spend renewing EPP this year to something that doesn’t suck instead.
Mobility gets citizenship…
As we stated last year, managing mobile devices is quite the commodity now. The technology keeps flying off the shelves, and MDM vendors continue to pay lip service to security. But last year devices were not really integrated into the organization’s controls and defenses. That has started to change. Thanks to a bunch of acquisitions, most MDM technology is now controlled by big IT shops, so we will start to see the first linkages between managing and protecting mobile devices, and the rest of infrastructure. Leverage is wonderful, especially now when we have such a severe skills gap in security.
Now that mobile devices are full citizens, what does that even mean? It means MDM environments are now expected to send alerts to the SIEM and integrate with the service/operations infrastructure. They need to speak enterprise language and play nice with other enterprise systems.
Even though there have been some high-profile mobile app problems (such as providing access to a hotel chain’s customer database), there still isn’t much focus on assessing apps and ensuring security before apps hit an app store. We don’t get it. You might check out folks assessing mobile apps (mostly for privacy issues, rather than mobile malware) and report back to your developers so they can ignore you. Again.
IoT: Not so much
It wouldn’t be an RSAC-G if we didn’t do at least a little click baiting. Mostly just to annoy people who are hoping for all sorts of groundbreaking research on protecting the Internet of Things (IoT). At this point there doesn’t seem to be much to protect. But it is another thing to secure, so you will see vendors talking about it. Though it is still a bit early to add IoT to your RSAC buzzword bingo drinking game.
At some point a researcher will do some kind of proof of concept showing how your Roomba is the great-great-great-great-grandfather of the T1000. Click-baiting achievement unlocked! With a gratuitous Terminator reference to boot. Win!
Posted at Wednesday 22nd April 2015 1:00 pm
By Mike Rothman
We had a little trouble coming up with a novel and pithy backdrop for what you will see in the Network Security space at RSAC 2015. We wonder if this year we will see the first IoT firewall, because hacking thermostats and refrigerators has made threat models go bonkers. The truth is that most customers are trying to figure out what to do with the new next-generation devices they already bought. We shouldn’t wonder why the new emperor looks a lot like the old emperor, when we dress our new ruler (NGFW) up in clothes (rules) that look so similar to our old-school port- and protocol-based rulesets.
But the fact is there will be some shiny stuff at this year’s conference, largely focused on detection. This is a very productive and positive trend—for years we have been calling for a budget shift away from ineffective prevention technologies to detecting and investigating attacks. We see organizations with mature security programs making this shift, but far too many others continue to buy the marketing hyperbole, “of course you can block it.” Given that no one really knows what ‘it’ is, we have a hard time understanding how we can make real progress in blocking more stuff in the coming year.
Which means you need to respond faster and better. Huh, where have we heard that before?
Giving up on Prevention…
Talking to many practitioners over the past year I felt like I was seeing a capitulation of sorts. There is finally widespread acknowledgement that it is hard to reliably prevent attacks. And we are not just talking about space alien attacks coming from a hacking UFO. It’s hard enough for most organizations to deal with Metasploit.
Of course we are not going all Jericho on you, advocating giving up on prevention on the network. Can you hear the sigh of relief from all the QSAs? Especially the ones feeling pressure to push full isolation of protected data (as opposed to segmentation) during assessments. Most of those organizations cannot even manage one network, so let’s have them manage multiple isolated environments. That will work out just great.
There will still be a lot of the same old same old—you still need a firewall and IPS to enforce both positive (access control) and negative (attack) policies on your perimeter. You just need to be realistic about what they can block—even shiny NGFW models. Remember that network security devices are not just for blocking attacks. We still believe segmentation is your friend—you will continue to deploy those boxes, both to keep the QSAs happy and to make sure that critical data is separated from not-so-critical data.
And you will also hear all about malware sandboxes at the RSAC this year. Again. Everyone has a sandbox—just ask them. Except some don’t call them sandboxes. I guess they are discriminating against kids who like sand in today’s distinctly un-politically-correct world. They might be called malware detonation devices or services. That sounds shinier, no? But if you want to troll the reps on the show floor (and who doesn’t?), get them to debate an on-premises approach versus a cloud-based approach to detonation. It doesn’t really matter which side of the fence they are on, but it’s fun seeing them get all red in the face when you challenge them.
Finally, you may hear some lips flapping about data center firewalls. Basically just really fast segmentation devices. If they try to convince you they can detect attacks on a 40Gbps data center network, and flash their hot-off-the-presses NSS Labs results, ask what happens when they turn on more than 5 rules at a time. If they bother you, say you plan to run SSL on your internal networks and the device needs to inspect all traffic. But make sure an EMT is close by, as that strategy has been known to cause aneurysms in sales reps.
To Focus on Detection…
So if many organizations have given up trying to block all attacks, what the hell are they supposed to do? Spend tons of money on more appliances to detect attacks they missed at the perimeter, of course. And the security industrial complex keeps chugging along. You will see a lot of focus on network-based threat detection at the show. We ourselves are guilty of fanning the flames a bit with our new research on that topic.
The fact is, the technology is moving forward. Analyzing network traffic patterns, profiling and baselining normal communications, and then looking for stuff that’s not normal gives you a much better chance of finding compromised devices on your networks. Before your new product schematics wind up in some nondescript building in Shanghai, Chechnya, Moscow, or Tel Aviv. What’s new is the depth of analysis possible with today’s better analytics. Booth personnel will bandy about terms like “big data” and “machine learning” like they understand what they even mean. But honestly baselines aren’t based only on NetFlow records or DNS queries any more—they can now incorporate very granular metadata from network traffic including identity, content, frequency of communication, and various other attributes that get math folks all hot and bothered.
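The baselining idea above can be sketched in a few lines. This is a deliberately toy version, assuming nothing about any vendor's product: it profiles which destination/port pairs each host normally talks to, then flags anything outside the profile. Real products use far richer metadata (identity, content, frequency) and actual statistical models rather than set membership.

```python
from collections import defaultdict

def build_baseline(flows):
    """flows: iterable of (src, dst, port) tuples from a 'known good' training window."""
    baseline = defaultdict(set)
    for src, dst, port in flows:
        baseline[src].add((dst, port))
    return baseline

def find_anomalies(baseline, flows):
    """Return flows whose (dst, port) was never seen for that source host."""
    return [f for f in flows if (f[1], f[2]) not in baseline.get(f[0], set())]

training = [("10.0.0.5", "10.0.1.9", 443), ("10.0.0.5", "10.0.1.9", 80)]
live = [("10.0.0.5", "10.0.1.9", 443), ("10.0.0.5", "203.0.113.7", 6667)]

# Flags the IRC-ish connection to an unfamiliar host
print(find_anomalies(build_baseline(training), live))
# → [('10.0.0.5', '203.0.113.7', 6667)]
```

The hard part, of course, is everything this sketch leaves out: deciding what counts as "normal" when normal keeps changing.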
The real issue is making sure these detection devices can work with your existing gear and aren’t just a flash in the pan, about to be integrated as features in your perimeter security gateway. Okay, we would be pulling your leg if we said any aspect of detection won’t eventually become an integrated feature of other network security gear. That’s just the way it goes. But if you really need to figure out what’s happening on your network, visit these vendors on the floor.
While Consolidating Functions…
What hasn’t changed is that big organizations think they need separate devices for all their key functions. Or has it? Is best of breed (finally) dead? Well, not exactly, but it has more to do with politics than technology. Pretty much all the network security players have technologies that allow authorized traffic and block attacks. Back when category names mattered, those functions were called firewalls and IPS respectively. But wait—now everything is a next-generation firewall, right? But it does a lot more than a firewall. It also detonates malware (or integrates with a cloud service that does). And it looks for command-and-control traffic patterns. All within one or many boxes, leveraging a single policy, right?
But that’s a firewall. Just ask Gartner. Sigh. And no, we won’t troll you any more by calling it an Enterprise UTM, for old time’s sake.
Product categories aside, regardless of whether a network security vendor started as a firewall player or with IPS (or both, thanks to the magic of acquisitions), they are all attacking the same real estate: what we call the network security gateway. The real question is: how can you get there? So on the show floor focus on migration. You know you want to enforce both access control and attack policies on the device. You probably want to look for malware on ingress, and C&C indicators on egress. And you don’t want to wrestle with 10 different management interfaces. Challenge the SEs in the booths (you know, the folks who know what they are doing) to sketch out how they’d solve your problem on a piece of paper. Of course they’ll be wrong, but it should be fun to see what they come up with on the fly.
And Looking for Automation…
Another hot topic in network security will be automation. Because managing hundreds of firewalls is a pain in the ass. Actually, managing hundreds of any kind of complicated technology causes ulcers. So a bunch of new startups will be in the Innovation Sandbox detonating malware. No, not that kind of sandbox. ISB is RSAC’s showcase for new companies and technologies, where they will happily show you how to use an alert from your SIEM or a bad IP address from your threat intelligence provider to make changes automagically on your firewalls. They have spent a bunch of time making sure they support vintage 2007 edge routers and lots of other devices to make sure they have you covered.
But all the same, you have been flummoxed by spending 60 percent of your time opening ports for those pesky developers, who cannot seem to understand that port 443 is legitimate and they don’t need a special one. Automating some of those rote functions can free you up for more important and strategic things. As long as the sales rep in the booth isn’t named John Connor, everything should be fine.
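The automation pitch boils down to something like the sketch below: take a feed of bad IPs from your threat intelligence provider and turn it into firewall block rules. The rule syntax here is iptables-style for illustration only; a real product pushes changes through each vendor's own management API, which is exactly the integration work those startups are selling.

```python
def rules_from_feed(bad_ips, chain="INPUT"):
    """Turn a threat-intel feed of bad IPs into iptables-style DROP rules."""
    # Deduplicate and sort so repeated feed pulls produce identical rule sets
    return [f"-A {chain} -s {ip} -j DROP" for ip in sorted(set(bad_ips))]

feed = ["198.51.100.4", "203.0.113.9", "198.51.100.4"]  # hypothetical feed pull
for rule in rules_from_feed(feed):
    print(rule)
# → -A INPUT -s 198.51.100.4 -j DROP
# → -A INPUT -s 203.0.113.9 -j DROP
```

The rule generation is trivial; the value is in the plumbing, change control, and rollback when the feed inevitably flags something you actually need.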
In the Cloud…
Even though you focus on network security, don’t think you can escape the cloud hype monster at RSAC. No chance. All the vendors will be talking about how their fancy 7-layer inspection technology is now available as a virtual machine. Of course unless they are old (like us), they won’t remember that network security appliances happened because granular inspection and policy enforcement in software did not scale. Details, we know. You are allowed to laugh when they position software-based network security as new and innovative.
They also don’t understand that inserting inspection points and bottlenecks in a cloud environment (public, private, or hybrid) breaks the whole cloud computing model. And they won’t be even paying lip service to SDN (Software Defined Networks) for the most part. SDN is currently a bit like voodoo for security people. So we guess avoidance is the best strategy at this point. Sigh, again.
The booth staff will faithfully stick to the talking points marketing gave them about how it’s the same, but just in the cloud… Smile politely and then come to our Pragmatic SecDevOps lab session, where we will tell you how to really automate and protect those cloud-based thingies that are popping up everywhere like Tribbles.
Posted at Tuesday 21st April 2015 2:30 pm
By Mike Rothman
Coming Soon to an Application Near You: DevOps
For several years you have been hearing about the wonders of Agile development and what it has done for software companies. Agile isn’t a product – it is a process change, a new way for developers to communicate and work together. It’s effective enough to have pulled almost every firm we speak with away from traditional waterfall development. Now there is another major change on the horizon, called DevOps. Like Agile it is mostly a process change. Unlike Agile it is more operationally focused, relying heavily on tools and automation for success. That means not just your developers will be Agile – your IT and security teams will be, too!
The reason DevOps is important at RSA Conference – the reason you will hear a lot about it – is that it offers a very clear and positive effect on security. Perhaps for the first time, we can automate many security requirements – embedding them into the daily development, QA, and operational tasks we already perform. DevOps typically goes hand in hand with continuous integration and continuous deployment. For software development teams this means code changes go from idea to development to live production in hours rather than months. Sure, users are annoyed the customer portal never works the same way twice, but IT can deliver new code faster than sales and marketing wanted it, which is itself something of a miracle. Deployment speed makes a leap in the right direction, but the new pipeline provides an even more important foundation for embedding security automation into processes. It’s still early, but you will see the first security tools which have been reworked for DevOps at this year’s RSA conference.
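What "embedding security into the pipeline" looks like in practice is usually a gate: an automated check that runs on every build and fails it when something unacceptable ships. Here is a minimal sketch, assuming a hypothetical vulnerable-package list; real pipelines would query a vulnerability database or a commercial service instead of a hard-coded set.

```python
# Hypothetical known-vulnerable (package, version) pairs -- illustration only
VULNERABLE = {("examplelib", "1.0.2"), ("otherlib", "0.9.0")}

def check_dependencies(pinned):
    """pinned: dict of package -> pinned version. Returns offending package names."""
    return sorted(pkg for pkg, ver in pinned.items() if (pkg, ver) in VULNERABLE)

def gate(pinned):
    """CI gate: fail the build if any pinned dependency is on the vulnerable list."""
    bad = check_dependencies(pinned)
    if bad:
        raise SystemExit(f"build blocked, vulnerable packages: {', '.join(bad)}")
    return "ok"

print(gate({"examplelib": "1.0.3", "otherlib": "1.1.0"}))
# → ok
```

The point is that the check runs on every commit with no human in the loop, which is precisely what makes security automation viable at continuous-deployment speed.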
I Can Hardly Contain Myself
Containers. They’re cool. They’re hot. They… wait, what are they exactly? The new developer buzzword is Docker – the name of both the company and the product – which provides a tidy container for applications and all the associated stuff an application needs to do its job. The beauty of this approach comes from hiding much of the complexity around configuration, supporting libraries, OS support, and the like – all nicely abstracted away from users within the container. In the same way we use abstract concepts like ‘compute’ and ‘storage’ as simple quantities with cloud service providers, a Docker container is an abstract run-anywhere unit of ‘application’. Plug it in wherever you want and run it. Most of the promise of virtualization, without most of the overhead or cost.
Sure, some old-school developers think it’s the same “write once, crash anywhere” concept Java did so well with 20 years ago, and of course security pros fear containers as the 21st-century Trojan Horse. But containers do offer some security advantages: they wrap accepted versions of software up with secure configuration settings, and narrowly define how to interact with the container – all of which reduces the dreaded application “threat surface”. You are even likely to find a couple vendors who now deploy a version of their security appliance as a Docker container for virtualized or cloud environments.
All Your Code-base Belong to Us
As cloud services continue to advance, outsourced security services are getting better, faster, and cheaper than your existing on-premise solution. Last year we saw this at the RSA Conference with anti-malware and security analytics. This year we will see it again with application development. We have already seen general adoption of the cloud for quality assurance testing; now we see services which validate open source bundles, API-driven patching, cloud-based source code scanning, and more dynamic application scanning services. For many the idea of letting anyone outside your company look at your code – much less upload it to a multi-tenant cloud server – is insane. But lower costs have a way of changing opinions, and the automated, API-driven cloud model fits very well with the direction development teams are pulling.
Posted at Monday 20th April 2015 7:00 pm
By Mike Rothman
Data security is the toughest coverage area to write up this year. It reminds us of those bad apocalypse films, where everyone runs around building DIY tanks and improvising explosives to “save the children,” before driving off to battle the undead hordes, leaving the kids with a couple spoons, some dirt, and a can of corned beef hash.
We have long argued for information-centric security—protecting data needs to be an equal or higher priority than defending infrastructure itself. Thanks to a succession of major breaches and a country or two treating our corporate intellectual property like a Metallica song during Napster’s heyday, CEOs and Directors now get it: data security matters. It not only matters—it permeates everything we do across the practice of security (except for DDoS).
But that also means data security appears in every section in this year’s RSAC Guide. But it doesn’t mean anyone has the slightest clue of how to stop the hemorrhaging.
Anyone Have a Bigger Hammer?
From secret-stealing APTs, to credit-card-munching cybercrime syndicates, our most immediate response is… more network and endpoint security.
That’s right—the biggest trends in data security are network and endpoint security. Better firewalls, sandboxes, endpoint whitelisting, and all the other stuff in those two buckets. When a company gets breached the first step (after hiring an incident response firm to quote in the press release, saying this was a “sophisticated attack”) is to double down on new anti-malware and analytics.
It makes sense. That’s how the bad guys most frequently get in. But it also misses the point.
Years ago we wrote up something called the “Data Breach Triangle.” A breach requires three things: an exploit (a way in), something to steal (data), and an egress (a way out). Take away any side of that triangle, and there is no breach. But stopping the exploit is probably the hardest, most expensive side to crack—especially because we have spent the last thirty years working on it… unsuccessfully.
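If you like your security models executable, the triangle fits in one function: remove any side and the breach can't happen.

```python
def breach_possible(exploit, data, egress):
    """Data Breach Triangle: a breach requires all three sides."""
    return exploit and data and egress

assert breach_possible(True, True, True)
assert not breach_possible(True, True, False)  # block the egress, stop the breach
```

Which is the whole argument for diversification: egress monitoring and shrinking what data you keep are the other two sides, and they're often cheaper than yet another exploit-prevention box.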
The vast majority of data security you’ll see at this conference, from presentations to the show floor, will be more of the same stuff we have always seen, but newer and shinier. As if throwing more money at the same failed solutions will really solve the problem. Look—you need network and endpoint security, but doubling down doesn’t seem to be changing the odds. Perhaps a little diversification is in order.
The Cloud Ate My Babies
Data security is still one of the top two concerns we run into when working with clients on cloud projects—the other is compliance. Vendors are listening, so you will see no shortage of banners and barkers offering to protect your data in the cloud.
Which is weird, because if you pick a decent cloud provider the odds are that your data is far safer with them than in your self-managed data center. Why? Economics. Cloud providers know they can easily lose vast numbers of customers if they are breached. The startups aren’t always there, but the established providers really don’t mess around—they devote far more budget and effort to protecting customer data than nearly any enterprise we have worked with.
Really, how many of you require dual authorization to access any data? Exclusively through a monitored portal, with all activity completely audited and two-factor authentication enforced? That’s table stakes for these guys.
Before investing in extra data security for the cloud, ask yourself what you are protecting it from. If the data is regulated you may need extra assurance and logging for compliance. Maybe you aren’t using a major provider. But for most data, in most situations, we bet you don’t need anything too extreme. If a cloud data protection solution mostly protects you from an administrator at your provider, you might want to just give them a fake number.
One area trending down is the concern over data loss from portable devices. It is hard to justify spending money here when we can find almost no cases of material losses or public disclosures from someone using a properly-secured phone or tablet. Especially on iOS, which is so secure the FBI is begging Congress to force Apple to add a back door (we won’t make a joke here—we don’t want to get our editor fired).
You will still see it on the show floor, and maybe a few sessions (probably panels) where there’s a lot of FUD, but we mostly see this being wrapped up into Mobile Device Management and Cloud Security Gateways, and by the providers themselves. It’s still on the list—just not a priority.
Encrypt, Tokenize, or Die (well, look for another job)
Many organizations are beginning to realize they don’t need to encrypt every piece of data in data centers and at cloud providers, but there are still a couple massive categories where you’d better encrypt or you can kiss your job goodbye. Payment data, some PII, and some medical data demand belt and suspenders.
What’s fascinating is that we see encryption of this data being pushed up the stack into applications. Whether in the cloud or on-premise, there is increasing recognition that merely encrypting some hard drives won’t cut it. Organizations are increasingly encrypting or tokenizing at the point of collection. Tokenization is generally preferred for existing apps, and encryption for new ones.
Unless you are looking at payment networks, which use both.
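To make the tokenization option concrete, here is a toy sketch of the idea: swap the sensitive value (say, a card number) for a random token at the point of collection, and keep the mapping locked in a vault. Everything here is simplified for illustration; real token vaults add access control, auditing, high availability, and often format-preserving tokens.

```python
import secrets

class TokenVault:
    """Toy token vault: random tokens, two-way mapping kept server-side."""

    def __init__(self):
        self._forward = {}   # sensitive value -> token
        self._reverse = {}   # token -> sensitive value

    def tokenize(self, value):
        if value in self._forward:          # same value always gets the same token
            return self._forward[value]
        token = "tok_" + secrets.token_hex(8)
        self._forward[value] = token
        self._reverse[token] = value
        return token

    def detokenize(self, token):
        return self._reverse[token]

vault = TokenVault()
t = vault.tokenize("4111-1111-1111-1111")
assert vault.detokenize(t) == "4111-1111-1111-1111"
```

The security and performance of that vault is the whole ballgame, which is why we said not to get hung up on token formats when evaluating products.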
You might actually see this more in sessions than on the show floor. While there are some new encryption and tokenization vendors, it is mostly the same names we have been working with for nearly 10 years. Because encryption is hard.
Don’t get hung up on different tokenization methods; the security and performance of the token vault itself matters more. Walk in with a list of your programming languages and architectural requirements, because each of these products has very different levels of support for integrating with your projects. The lack of a good SDK in the language you need, or a REST API, can set you back months.
Cloud Encryption Gets Funky
Want to use a cloud provider but still control your own encryption keys? Want your cloud provider to offer a complete encryption and key management service? Want to NSA-proof your cloud?
Done. Done. And sort of doable.
The biggest encryption news this year comes from the cloud providers themselves, and you will start seeing it all over the place. Box now lets you manage the encryption keys used by the platform. Amazon has two different customer-managed encryption options, one of them slowly being baked into every one of their services, and the other configurable in a way you can use to prevent government snooping. Even Microsoft is getting into the game with customer-managed keys for Azure (we hear).
None of this makes the independent encryption vendors happy. Especially the startups.
But it is good news for customers, and we expect to see this trend increase every year. It really doesn’t always make sense to try bolting encryption onto the outside of your cloud. Performance and fundamental application functionality become issues. If your provider can offer it while you retain control? Then you are golden.
Posted at Monday 20th April 2015 2:00 pm
By Mike Rothman
Before delving into the world of cloud security we’d like to remind you of a little basic physics. Today’s lesson is on velocity vs. acceleration. Velocity is how fast you are going, and acceleration is how fast velocity increases. They affect our perceptions differently. No one thinks much of driving at 60mph. Ride a motorcycle at 60mph, or plunge down a ski slope at 50mph (not that uncommon), and you get a thrill.
But accelerate from 0mph to 60mph in 2.7 seconds in a sports car (yep, they do that), and you might need new underwear. That’s pretty much the cloud security situation right now.
Cloud computing is, still, the most disruptive force hitting all corners of IT, including security. It has pretty well become a force of nature at this point, and we still haven’t hit the peak. Don’t believe us? That’s cool—not believing in that truck barreling towards you is always a good way to ensure you make it into work tomorrow morning.
(Please don’t try that—we don’t want your family to sue us).
The most surprising cloud security phenomena are how far cloud computing has spread, and the increasing involvement of security teams… sort of. Last year we mentioned seeing ever more large organizations dipping their toes into cloud computing, and this year it’s hard to find any large organization without some active cloud projects. Including some with regulated data.
Companies that told us they wouldn’t use public cloud computing a year or two ago are now running multiple active projects. Not unapproved shadow IT, but honest-to-goodness sanctioned projects. Every one of these cloud consumers also tells us they are planning to move more and more to the cloud over time.
Typically these start as well-defined projects rather than move-everything initiatives. Many of the ones we see involve either data analysis (where the cloud is perfect for bursty workloads) or new consumer-facing web projects. We call these “cloud native” projects because once the customer digs in, they design the architectures with the cloud in mind.
We also see some demand to move existing systems to the cloud, but frequently those are projects where the architecture isn’t going to change, so the customer won’t gain the full agility, resiliency, and economic benefits of cloud computing. We call these “cloud tourists” and consider these projects ripe for failure because all they typically end up doing is virtualizing already paid-for hardware, adding the complexity of remote management, and increasing operational costs to manage the cloud environment on top of still managing just as many servers and apps.
Not that we don’t like tourists. They spend a lot of money.
One big surprise is that we are seeing security teams engaging more deeply, quickly, and positively than in past years, when they sat still and watched the cloud rush past. There is definitely a skills gap, but we meet many more security pros who are quickly coming up to speed on cloud computing. The profession is moving past denial and anger, through bargaining (for budget, of course), deep into acceptance and…DevOps.
Perhaps we pushed that analogy. But the upshot is that this year we feel comfortable saying cloud security is becoming part of mainstream security. It’s the early edge, but the age of denial and willful ignorance is coming to a close.
Wherever You Go, There You Aren’t
Okay, you get it, the cloud is happening, security is engaging, and now it’s time for some good standards and checklists for us to keep the auditors happy and get those controls in place.
Wait, containers, what? Where did everybody go?
Not only is cloud adoption accelerating, but so is cloud technology. Encryption in the cloud too complex? That’s okay—Amazon just launched a simple and cheap key management service, fully integrated with their other services. Nailed down your virtual server controls for VMware? How well do those work with Docker? Okay, which networking stack did you pick for your Docker on AWS deployment? It uses a different management structure than your Docker on VMware deployment.
Your security vendor finally offers their product as a virtual appliance? Great! How does it work in Microsoft Azure, now that you have moved to a PaaS model where you don’t control network flow? You finally got CloudTrail data into your SIEM? Nice job, but your primary competitor now offers live alerts on streaming API data via Lambda. Got those Chef and Puppet security templates set? Darn, the dev team switched everything to custom images and rollouts via autoscaling groups.
None of that makes sense? Too bad—those are all real issues from real organizations.
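One of those real issues, alerting on streaming CloudTrail data, is worth sketching. The handler below is Lambda-shaped but deliberately simplified: the watchlist is hypothetical, and the event structure is a pared-down version of what CloudTrail actually delivers. It scans a batch of events for risky API calls and emits alerts.

```python
# Hypothetical watchlist of API calls worth waking someone up for
RISKY_CALLS = {"DeleteTrail", "StopLogging", "AuthorizeSecurityGroupIngress"}

def handler(event, context=None):
    """Lambda-style handler: return alert strings for risky API calls in a batch."""
    alerts = []
    for record in event.get("Records", []):
        name = record.get("eventName")
        if name in RISKY_CALLS:
            user = record.get("userIdentity", {}).get("userName", "unknown")
            alerts.append(f"ALERT: {name} by {user}")
    return alerts

sample = {"Records": [
    {"eventName": "DescribeInstances", "userIdentity": {"userName": "ops"}},
    {"eventName": "StopLogging", "userIdentity": {"userName": "eve"}},
]}
print(handler(sample))
# → ['ALERT: StopLogging by eve']
```

The difference from the SIEM approach is latency: this fires as events stream in, rather than after a batch export and correlation run.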
Everything is changing so quickly that even vendors trying to keep up are constantly dancing to fit new deployment and operations models. We are past the worst cloudwashing days, but we will still see companies on the floor struggling to talk about new technologies (especially containers), about how they offer value over capabilities Amazon, Microsoft, and other major providers have added to their services, and about why their products are still necessary with new architectural models.
The good news is that not everything lives on the bleeding edge. The bad news is that this rate of change won’t let up any time soon, and the bleeding edge seems to become early mainstream more quickly than it used to.
This theme is more about what you won’t see than what you will. SIEM vendors won’t be talking much about how they compete with a cloud-based ELK stack, encryption vendors will struggle to differentiate from Amazon’s Key Management Service, AV vendors sure won’t be talking about immutable servers, and network security vendors won’t really talk about the security value of their product in a properly designed cloud architecture.
But if you attend the cloud security sessions, or talk to people actively engaged in cloud projects, you will likely see some really interesting, practical ways of managing security for cloud computing that don’t rely on ‘traditional’ approaches.
Bump in the Cloud
Last year we included a section on emerging SaaS security tools, and boy has that market taken off. We call them Cloud Security Gateways and Gartner calls them Cloud Access Security Brokers (hint, you only get to have 3-letter acronyms for product categories, even if you’re Gartner, or a kitten dies).
There are at least a dozen vendors on the market now, and on the surface most of them look exactly the same. That’s because the market has a reasonably clear set of requirements, and there are only so many ways to message that target. You want products to find out what cloud stuff you are using, monitor the stuff you approve, block the stuff you don’t, and add security when your cloud provider doesn’t meet your needs.
There actually is a fair amount of differentiation between these products, but it is hard to see from the surface. Most if not all of these folks will be on the show floor, and if you manage security for a mid-size or large organization, they are worth a look. But, as always, have an idea of what you need before you go in. Discovery is table stakes for this market, but there are many possible directions to take after that. From DLP, to security analysis and alerts (such as detecting account takeovers), all the way up to encryption and tokenization (often a messy approach, but also likely your only option if you do not trust your cloud provider).
One key question to ask is whether they integrate with cloud provider APIs (when available), and which. The alternative is to proxy all your traffic to the cloud, which is a really crappy way to solve the problem—but often the only option. Fortunately some cloud providers offer robust APIs that reduce or eliminate the need for a CSG (see what I did there?) to sniff the connection. If they say ‘yes’ then ask for specific examples.
You might see some other vendors pushing their abilities to kinda-sorta do the same thing as a CSG. Odds are you won’t be happy with their kludges, so if this is on your list, stick with folks whose houses are on the line if the product doesn’t actually work.
Calling Mr. Tufte
One thing you won’t see any shortage of is the same damn charts from every damn SIEM and analytics vendor. Seriously—we have been briefed by pretty much all of them, and they all look the same. Down to the color palette.
The upside is that they now include cloud data. Mostly just Amazon CloudTrail, because no other IaaS platform offers management plane data yet (rumor has it Microsoft is coming soon).
We understand there are only so many ways to visualize this data, but the vendors also seem to be struggling to explain how their cloud data and analytics are superior to competitors’. Pretty charts are great, but you look at these things to find actionable information—probably not because you enjoy staring at traffic graphs. Especially now that Amazon allows you to directly set security alerts and review activity in their own consoles.
Cloud Taylor Swift
You have probably noticed that we tend to focus on Amazon Web Services. That isn’t bias—simply a reflection of Amazon’s significant market dominance. After AWS we see a lot of Microsoft Azure, and then a steep dropoff after that.
The interesting trend is that demand for information on other providers has actually declined from previous years.
So don’t be surprised if vendors and sessions skew the same. Amazon really does have a big lead on everyone else, and only Microsoft (and maybe Google) is in the ballpark. That will show through in sessions and on the show floor.
DevOps, Automation, Blah, Blah, Blah
We hate to dump our favorite topics into a side note at the bottom of this section, but we already went long, and are covering those topics… in pretty much every other section of this Guide. DevOps and automation are as disruptive to process as cloud is to infrastructure and architecture.
It’s the future of our profession, folks—there is no shortage of things to talk about. Which you probably figured out 500 words ago, about when you stopped reading this drivel.
Posted at Sunday 19th April 2015 7:00 pm
By Mike Rothman
With lots of folks (including us) at the RSA Conference this week, we figured we’d post the deep dives we wrote for the RSAC Guide and give those of you not attending a taste of what you’re missing. Though we haven’t figured out how to relay the feel of the meat market at the W bar after 10 PM, the ear-deafening bass at any number of conference parties, or the sharp pain you feel in your gut after a night of being way too festive. We’re working on that for next year’s guide.
While everyone likes to talk about the “security market” or the “security industry,” in practice security is more a collection of markets, tools, and practices all competing for our time, attention, and dollars. Here at Securosis we have a massive coverage map (just for fun, which doesn’t say much now that you’ve experienced some of our sense of humor), which includes seven major focus areas (like network, endpoint, and data security), and dozens of different practice and product segments.
It’s always fun to whip out the picture when vendors are pitching us on why CISOs should spend money on their single-point defense widget instead of the hundreds of other things on the list, many of them mandated by auditors using standards that get updated once every decade or so.
In our next sections we dig into the seven major coverage areas and detail what you can expect to see, based in large part on what users and vendors have been talking to us about for the past year. You’ll notice there can be a bunch of overlap. Cloud and DevOps, for example, affect multiple coverage areas in different ways, and cloud is a coverage area all on its own.
When you walk into the conference, you are likely there for a reason. You already have some burning issues you want to figure out, or specific project needs. These sections will let you know what to expect, and what to look for.
The information is based in many cases on dozens of vendor briefings and discussions with security practitioners. We try to help illuminate what questions to ask, where to watch for snake oil, and what key criteria to focus on, based on successes and failures from your peers who tried it first.
Posted at Sunday 19th April 2015 5:00 pm
By Mike Rothman
Holy crap! The RSA Conference starts on Monday. Which means… you don’t have much time left to register for the 7th annual Disaster Recovery Breakfast.*
Once again we have to provide a big shout out to our DRB partners, MSLGROUP, Kulesa Faul, and LEWIS PR. We’re expecting a crapton of folks to show up at the breakfast this year, and without their support there would be no breakfast for you.
As always, the breakfast will be Thursday morning 8-11 at Jillian’s in the Metreon. It’s an open door – come and leave as you want. We will have food, beverages, and assorted recovery items (non-prescription only) to ease your day.
See you there.
To help us estimate numbers, please RSVP to rsvp (at) securosis (dot) com.
*But don’t get nuts if you forget to RSVP – the bouncers will let you in… Right, there are no bouncers.
Posted at Friday 17th April 2015 8:30 am