Wednesday, August 31, 2016

Nuke It from Orbit

By Rich

I had a call today that went pretty much like all my other calls.

An organization wants to move to the cloud. Scratch that – they are moving, quickly. The team on the phone was working hard to figure out their architectures and security requirements. These weren’t ostriches sticking their heads in the sand; they were very cognizant of many of the changes cloud computing forces, and were working hard to enable their organization to move as quickly and safely as possible. They were not blockers. The company was big.

I take a lot of these calls now.

The problem was that, as much as they had learned, and as open-minded as they were, the team was both getting horrible advice (mostly from their security vendors) and facing internal pressure taking them down the wrong path.

This wasn’t a complete lift and shift, but it wasn’t really cloud-native, and it’s the sort of thing I now see frequently. The organization was setting up a few cloud environments at their provider, directly connecting everything to extend their network, and each one was at a different security level. Think Dev/Test/Prod, but using their own classification.

The problem is that this really isn’t a best practice. You cannot segregate out privileged users well at the cloud management level. It adds a bunch of security weaknesses and has a very large blast radius if an attacker gets into anything. Even network security controls become quite complex. Especially since their existing vendors were promising they could just drop virtual appliances in and everything would work just like it does on-premise – no, it really doesn’t. This is before we even get into using PaaS, serverless architectures, application-specific requirements, tag and security group limitations, and so on.

It doesn’t work. Not at scale. And by the time you notice, you are very deep inside a very expensive hole.

I used to say the cloud doesn’t really change security. That the fundamentals are the same and only the implementation changes. As of about 2-3 years ago, that is no longer true. New capabilities started to upend existing approaches.

Many security principles are the same, but all the implementation changes. Process and technology. It isn’t just security – all architectures and operations change.

You need to take what you know about securing your existing infrastructure, and throw it away. You cannot draw useful parallels to existing constructs. You need to take the cloud on its own terms – actually, on your particular provider’s terms – and design around that. Get creative. Learn the new best practices and patterns. Your skills and knowledge are still incredibly important, but you need to apply them in new ways.

If someone tells you to build out a big virtual network and plug it into your existing network, and just run stuff in there, run away. That’s one of the biggest signs they don’t know what the f— they are talking about, and it will cripple you. If someone tells you to take all your existing security stuff and just virtualize it, run faster.

How the hell can you pull this off? Start small. Pick one project, set it up in its own isolated area, rework the architecture and security, and learn. I’m no better than any of you (well, maybe some of you – this is an election year), but I have had more time to adapt.

It’s okay if you don’t believe me. But only because your pain doesn’t affect me. We all live in the gravity well of the cloud. It’s just that some of us crossed the event horizon a bit earlier, that’s all.

—Rich

Incite 8/31/2016: Meetings: No Thanks

By Mike Rothman

It’s been a long time since I had an office job. I got fired from my last one in November 2005. I had another job since then, but I commuted to Boston. So I was in the office maybe 2-3 days a week. But usually not. That means I rarely have a bad commute. I work from wherever I want, usually some coffee shop with headphones on, or in a quiet enough corner to take a call. I spend some time in the home office when I need to record a webcast or a video with Rich and Adrian.

So basically I forgot what it’s like to work in an office every day. To be clear, I don’t have an office job now. But I am helping out a friend and providing some marketing coaching and hands-on operational assistance in a turn-around situation. I show up 2 or 3 days a week for part of the day, and I now remember what it’s like to work in an office.


Honestly, I have no idea how anyone gets things done in an office. I’m constantly being pulled into meetings, many of which don’t have to do with my role at the company. I shoot the breeze with my friends and talk football and family stuff. We do some work, which usually involves getting 8 people in a room to tackle some problem. It’s horribly inefficient, but seems to be the way things get done in corporate life.

Why have 2 people work through an issue when you can have 6? Especially since the 4 not involved in the discussion are checking email (maybe) or Facebook (more likely). What’s the sense of actually making decisions when you have to then march them up the flagpole to make sure everyone agrees? And what if they don’t? Do Not Pass Go, Do Not Collect $200.

Right, I’m not really cut out for an office job. I’m far more effective with a very targeted objective, with the right people to make decisions present and engaged. That’s why our strategy work is so gratifying for me. It’s not about sitting around in a meeting room, drawing nice diagrams on a whiteboard wall. It’s about digging into tough issues and pushing through to an answer. We’ve got a day. And we get things done in that day.

As an aside, whiteboard walls are cool. It’s like an entire wall is a whiteboard. Kind of blew my mind. I stood on a chair and wrote maybe 12 inches from the ceiling. Just because I could, and then I erased it! It’s magic. The little things, folks. The little things.

But I digress. As we continue to move forward with our cloud.securosis plans, I’m going to carve out some time to do coaching and continue doing strategy work. Then I can be onsite for a day, help define program objectives and short-term activities, and then get out before I get pulled into an infinite meeting loop. We follow up each week and assess progress, address new issues, and keep everything focused. And minimal meetings.

It’s not that I don’t relish the opportunity to connect with folks on an ongoing basis. It’s fun to catch up with my friends. I also appreciate that someone else pays for my coffee and snacks, especially since I drink a lot of coffee. But I’ve got a lot of stuff to do, and meetings in your office aren’t helping with that.

–Mike

Photo credit: “no meetings” from autovac


Security is changing. So is Securosis. Check out Rich’s post on how we are evolving our business.

We’ve published this year’s Securosis Guide to the RSA Conference. It’s our take on the key themes of this year’s conference (which is really a proxy for the industry), as well as deep dives on cloud security, threat protection, and data security. And there is a ton of meme goodness… Check out the blog post or download the guide directly (PDF).

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour. Your emails, alerts, and Twitter timeline will be there when you get back.


Securosis Firestarter

Have you checked out our video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.


Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Managed Security Monitoring

Evolving Encryption Key Management Best Practices

Maximizing WAF Value

Recently Published Papers


Incite 4 U

  1. Deputize everyone for security: Our friend Adrian Sanabria sent up an interesting thought balloon on Motherboard, basically saying we’re doing security wrong. And we are. Or at least a lot of people are. His contention is that having security separate from IT creates a perception that security is the security team’s job – no one else’s. Adrian’s point is that you can’t have enough security folks, so you’d better get everyone in the organization thinking about it. It’s really everyone’s job. He’s right, but it’s an uphill battle. The cloud and DevOps promise to address this problem. You don’t have a choice but to build security in when you are doing 10 deployments per day. There is no room for Carbon (that means you) in that kind of workflow. Yes, you’ll have policy folks. You’ll have auditors. Separation of duties is still kind of a thing. But you probably won’t have folks with hands on keyboards making security changes. The machines do it a lot faster and better, if you architect for that. So I agree with Sanabria, we need a different mindset, but I think the path of least resistance is going to be building it from the ground up better and more secure, which is what the cloud and DevOps are all about. – MR

  2. Time to move on: Thanks to widespread misuse of the term across my profession, I have a personal rule to never call any technology ‘dead’, but it’s hard to argue with Bernard Golden’s position in Why private clouds will suffer a long, slow death. Especially because he echoes our thinking. We’ve been talking about the lack of automation, orchestration, and built-in security in private clouds for the better part of 4 years, but Bernard highlights a lack of innovation that’s also worth considering: public cloud providers “create new functionality that legacy vendors with a private cloud could never discover the need for – and wouldn’t be able to create even if they understood the need.” Which means private cloud platforms (and the vendors who support that model) focus resources on the wrong problems. Oops. If you’ve gone through the pain of setting up OpenStack, standing up your first public cloud is like a dream come true. The leading PaaS and IaaS vendors offer the vast majority of the security you need, on demand, through public APIs. Public clouds are demonstrably secure, so as Rich likes to say, private cloud is a form of immersion therapy for server huggers. Time to get over it and move on. – AL

  3. Good luck hiring your next CISO: You think it’s hard finding talented security practitioners? Try to hire someone to lead them. You know, someone with credibility to sit in a board meeting. Someone with enough business chops to make sure security doesn’t get in the way of organizational velocity. Someone who can understand enough about the technology to call out poor architecture and even worse process. And finally someone who can develop their team and keep them engaged when lots of companies throw crazy money at junior security folks. Those folks aren’t quite unicorns. But they are close. This NetworkWorld article goes into some of the challenges, especially around compensation. It’s a relatively new role which has dramatically gained importance. So its economic value is not yet clear, and it will take time for Ms. Market to balance supply and demand to find equilibrium. There really isn’t a compelling training program for emerging CISOs, and that’s something the industry needs to think about. There is no way to address the skills gap without addressing the leadership gap within security teams. – MR

  4. Rip and replace: As we talk to more IT and development teams who are taking initial steps into the cloud and DevOps, one of the hardest parts is overcoming the mindset of many long-standing IT traditions. Boyd Hemphill captures several such issues in his recent post The Disposable Development Environment. Traditionally, IT staff are geared towards server longevity, keeping machines running at all costs, but that is the opposite of what you should be doing in a DevOps environment. Servers in the cloud can be like on-premise ones in one respect – occasionally they get a bit flaky. But the idea of logging into a server and diagnosing problems should be stricken from your normal repertoire. It’s easier and safer to spin another one up from a known-good recipe. Hardware is no longer a restriction – you can stand up dozens of instances and shut them down in a matter of seconds. We understand it takes time to shift to a disposable environment mindset, but when you orchestrate through scripts and trusted images, you can ensure server consistency every time. – AL

  5. Nightmare on MSSP Street: Nick Selby relates a story of a company that got sold a bill of goods on a security monitoring service, and it’s not pretty. MSSP cashes the check for years, while having the sensor outside the firewall. Company has an incident, the MSSP claims they don’t have to do any monitoring, and the Tier 2 contact runs off to another meeting. While the customer is responding to an incident. It makes my blood boil that any company would do that to a customer. But it happens all the time, and we talk about buyer beware frequently. Ensure your SLAs protect you. Ensure you understand how to escalate an issue, and that you have a contact within the service provider who knows who you are. And most of all practice. Make sure your folks are ready when the brown stuff hits the fan. Because we’ve all been in this business long enough to know that it’s not a matter of if – but when. – MR
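The “disposable development environment” idea in item 4 can be reduced to a replace-don’t-repair loop. Here is a minimal sketch in Python, using an in-memory stand-in for a real cloud SDK – the class and method names are illustrative, not any vendor’s actual API:

```python
# A toy "cloud" so the reconcile loop below is runnable without real credentials.
class FakeCloud:
    def __init__(self):
        self._instances = {}
        self._next_id = 0

    def launch(self, image_id):
        self._next_id += 1
        inst = {"id": f"i-{self._next_id}", "image": image_id, "status": "healthy"}
        self._instances[inst["id"]] = inst
        return inst

    def terminate(self, instance_id):
        self._instances.pop(instance_id, None)

    def list_instances(self):
        return list(self._instances.values())


def reconcile(cloud, image_id, desired_count):
    """Replace anything flaky or stale instead of logging in to fix it."""
    healthy = []
    for inst in cloud.list_instances():
        if inst["status"] == "healthy" and inst["image"] == image_id:
            healthy.append(inst)
        else:
            cloud.terminate(inst["id"])  # nuke it; never diagnose in place
    while len(healthy) < desired_count:
        healthy.append(cloud.launch(image_id))  # known-good recipe
    return healthy
```

The point of the sketch is that remediation is re-creation: anything flaky, or built from a stale image, gets terminated and relaunched from the trusted image, rather than patched by hand.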

—Mike Rothman

Monday, August 29, 2016

New Paper: Understanding and Selecting RASP

By Adrian Lane

We are pleased to announce the availability of our Understanding RASP (Runtime Application Self-Protection) research paper. We would like to heartily thank Immunio for licensing this content. Without this type of support we could not bring this level of research to you, both free of charge and without requiring registration. We think this research paper will help developers and security professionals who are tackling application security from within.

Our initial motivation for this paper was questions we got from development teams during our Agile Development and DevOps research efforts. During each interview we received questions about how to embed security into the application and the development lifecycle. The people asking us wanted security, but they needed it to work within their development and QA frameworks. Tools that don’t offer RESTful APIs, or cannot deploy within the application stack, need not apply. During these discussions we were asked about RASP, which prompted us to dive in.

As usual, during this research project we learned several new things. One surprise was how much RASP vendors have advanced the application security model. Initial discussions with vendors showed several used a plug-in for Tomcat or a similar web server, which allows developers to embed security as part of their application stack. Unfortunately that falls a bit short on protection. The state of the art in RASP is to take control of the runtime environment – perhaps using a fully custom JVM, or the JVM’s instrumentation API – to enable granular and internal inspection of how applications work. This model can provide assessments of supporting code, monitoring of activity, and blocking of malicious events. As some of our blog commenters noted, the plug-in model offers a good view of the “front door”. But full access to the JVM’s internal workings additionally enables you to deploy very targeted protection policies where attacks are likely to occur, and to see attacks which are simply not visible at the network or gateway layer.

This in turn caused us to re-evaluate how we describe RASP technology. We started this research in response to developers looking for something suitable for their automated build environments, so we spent quite a bit of time contrasting RASP with WAF, to spotlight the constraints WAF imposes on development processes. But for threat detection, these comparisons are less than helpful. Discussions of heuristics, black and white lists, and other detection approaches fail to capture some of RASP’s contextual advantages when running as part of an application. Compared to a sandbox or firewall, RASP’s position inside an application alleviates some of WAF’s threat detection constraints. In this research paper we removed those comparisons; we offer some contrasts with WAF, but do not constrain RASP’s value to WAF replacement.

We believe this technological approach will yield better results and provide the hooks developers need to better control application security.

You can download the research paper, or get a copy from our Research Library.

—Adrian Lane

Wednesday, August 17, 2016

Endpoint Advanced Protection: The State of the Endpoint Security Union

By Mike Rothman

Innovation comes and goes in security. Back in 2007 network security had been stagnant for more than a few years. It was the same old, same old. Firewall does this. IPS does that. Web proxy does a third thing. None of them did their jobs particularly well, struggling to keep up with attacks encapsulated in common protocols. Then the next generation firewall emerged, and it turned out that regardless of what it was called, it was more than a firewall. It was the evolution of the network security gateway.

The same thing happened a few years ago in endpoint security. Organizations were paying boatloads of money to maintain their endpoint protection, because PCI-DSS required it. It certainly wasn’t because the software worked well. Inertia took root, and organizations continued to blindly renew their endpoint protection, mostly because they didn’t have any other options.

But in technology, inertia tends not to last more than a decade or so (yes, that’s sarcasm). When there are billions of [name your favorite currency] in play, entrepreneurs, investors, shysters, and lots of other folks flock to try getting some of the cash. So endpoint security is the new hotness, and not only because some folks think they can make a buck displacing old and ineffective endpoint protection.

The fact is that adversaries continue to improve, both in the attacks they use and the way they monetize compromised devices. One example is ransomware, which some organizations discover several times each week. We know of some organizations which tune their SIEM to watch for file systems being encrypted. Adversaries continue to get better at obfuscating attacks and exfiltration tactics. As advanced malware detection technology matures, attackers have discovered many opportunities to evade detection. It’s still a cat and mouse game, even though both cats and mice are now much better at it. Finally, every organization is still dealing with employees, who are usually the path of least resistance. Regardless of how much you spend on security awareness training, knuckleheads with access to your sensitive data will continue to enjoy clicking pictures of cute kittens (and other stuff…).

So what about prevention? That has been the holy grail for decades. To stop attacks before they compromise devices. It turns out prevention is hard, so the technologies don’t work very well. Or they work, but in limited use cases. The challenge of prevention is also compounded by the shysters I mentioned above, who claim nonsense like “products that stop all zero days” – of course with zero, or bogus, evidence. Obviously they have heard you never let truth get in the way of marketing. Yes, there has been incremental progress, and that’s good news. But it’s not enough.

On the detection side, someone realized more data could help detect attacks. Both close to the point of compromise, and after the attack during forensic investigation. So endpoint forensics is a thing now. It even has its own category, ETDR (Endpoint Threat Detection and Response), as named by the analysts who label these technology categories. The key benefit is that as more organizations invest in incident response, they can make use of the granular telemetry offered by these solutions. But they don’t really provide visibility for everyone, because they require security skills which are not ubiquitous. For those who understand how malware really works, and can figure out how attacks manipulate kernels, these tools provide excellent visibility. Unfortunately these capabilities are useless to most organizations.

But we have still been heartened to see a focus on more granular visibility, which provides skilled incident responders (who we call ‘forensicators’) a great deal more data to figure out what happened during attacks. Meanwhile operating system vendors continue to improve their base technologies to be more secure and resilient. Not only are offerings like Windows 10 and OS X 10.11 far more secure, but top applications (primarily office automation and browsers) have been locked down and/or re-architected for stronger security. We also have seen add-on tools to further lock down operating systems, such as Microsoft’s EMET.

State of the Union: Sadness

We have seen plenty of innovation. But the more things change, the more they stay the same. It’s a different day, but security professionals will still be spending a portion of it cleaning up compromised endpoints. That hasn’t changed. At all.

The security industry also faces the intractable security skills shortage. As mentioned above, granular endpoint telemetry doesn’t really help if you don’t have staff who understand what the data means, or how similar attacks can be prevented. And most organizations don’t have that skill set in-house.

Finally, users are still users, so they continue to click on things. Basically until you take away the computers. It is really the best of times and the worst of times. But if you ask most security folks, they’ll tell you it’s the worst.

Thinking Differently about Endpoint Protection

But it’s not over. Remember that “Nothing is over until we say it is.” (hat tip to Animal House – though be aware there is strong language in that clip). If something is not working, you had better think differently, unless you want to be having the same discussions in 10 years.

We need to isolate the fundamental reason it’s so hard to protect endpoints. Is it that our ideas of how to do it are wrong? Or is the technology not good enough? Or have adversaries changed so dramatically that all the existing ways to do endpoint security (or security in general) need to be tossed out? Fortunately technology which can help has existed for a few years. It’s just that not enough organizations have embraced the new endpoint protection methods. And many of the same organizations continue to be operationally challenged in security, which doesn’t help – you’re pretty well stuck if you cannot keep devices patched, or take too long to figure out someone is running a remote access trojan on your endpoints (and networks).

So in this Endpoint Advanced Protection series, we will revisit and update the work we did a few years ago in Advanced Endpoint and Server Protection. We will discuss the endpoint advanced protection lifecycle, which includes gaining visibility, reducing attack surface, preventing threats, detecting malicious activity, investigating and responding to attacks, and remediation.

We would like to thank Check Point, who has agreed to potentially license this content when we finish developing it. Through our licensees we can offer this research for a good [non-]price, and have the freedom to make Animal House references in our work.

So in the immortal words of Bluto, “Let’s do it!”

—Mike Rothman

Thursday, August 04, 2016

Thoughts on Apple’s Bug Bounty Program

By Rich

It should surprise no one that Apple is writing their own playbook for bug bounties. Both bigger, with the largest potential payout I’m aware of, and smaller, focusing on a specific set of vulnerabilities with, for now, a limited number of researchers. Many, myself included, are definitely surprised that Apple is launching a program at all. I never considered it a certainty, nor even necessarily something Apple had to do.

Personally, I cannot help but mention that this news hits almost exactly 10 years after Securosis started… with my first posts on, you guessed it, a conflict between Apple and a security researcher.

For those who haven’t seen the news, the nuts and bolts are straightforward. Apple is opening a bug bounty program to a couple dozen select researchers. Payouts go up to $200,000 for a secure boot hardware exploit, and down to $25,000 for a sandbox break. They cover a total of five issues, all on iOS or iCloud. The full list is below. Researchers have to provide a working proof of concept and coordinate disclosure with Apple.

Unlike some members of our community, I don’t believe bug bounties always make sense for the company. Especially for ubiquitous, societal, and Internet-scale companies like Apple. First, they don’t really want to get into bidding wars with governments and well-funded criminal organizations, some willing to pay a million dollars for certain exploits (including some in this program). On the other side is the potential deluge of low-quality, poorly validated bugs that can suck up engineering and communications resources. That’s a problem more than one vendor mentions to me pretty regularly.

Additionally, negotiation can be difficult. For example, I know of situations where a researcher refused to disclose any details of the bug until they were paid (or guaranteed payment), without providing sufficient evidence to support their claims. Most researchers don’t behave like this, but it only takes a few to sour a response team on bounties.

A bug bounty program, like any corporate program, should be about achieving specific objectives. In some situations finding as many bugs as possible makes sense, but not always, and not necessarily for a company like Apple.

Apple’s program sets clear objectives. Find exploitable bugs in key areas. Because proving exploitability with a repeatable proof of concept is far more labor-intensive than merely finding a vulnerability, pay the researchers fair value for their work. In the process, learn how to tune a bug bounty program and derive maximum value from it. The result: high-quality exploits discovered and engineered by researchers and developers who Apple believes have the skills and motivation to help advance product security.

It’s the Apple way. Focus on quality, not quantity. Start carefully, on their own schedule, and iterate over time. If you know Apple, this is no different than how they release their products and services.

This program will grow and evolve. The iPhone in your pocket today is very different from the original iPhone. More researchers, more exploit classes, and more products and services covered.

My personal opinion is that this is a good start. Apple didn’t need a program, but can certainly benefit from one. This won’t motivate the masses or those with ulterior motives, but it will reward researchers interested in putting in the extremely difficult work to discover and work through engineering some of the really scary classes of exploitable vulnerabilities.

Some notes:

  • Sources at Apple mentioned that if someone outside the program discovers an exploit in one of these classes, they could then be added to the program. It isn’t completely closed.
  • Apple won’t be publishing a list of the invited researchers, but they are free to say they are in the program.
  • Apple may, at its discretion, match any awarded dollars the researcher donates to charity. That discretion is to avoid needing to match a donation to a controversial charity, or one against their corporate culture.
  • macOS isn’t included yet. It makes sense to focus on the much more widely used iOS and iCloud, both of which are much harder to find exploitable bugs on, but I really hope Macs start catching up to iOS security. As much as Apple can manage without such tight control of hardware.
  • I’m very happy iCloud is included. It is quickly becoming the lynchpin of Apple’s ecosystem. It makes me a bit sad all my cloud security skills are defensive, not offensive.
  • I’m writing this in the session at Black Hat, which is full of more technical content, some of which I haven’t seen before.

And here are the bug categories and payouts:

  • Secure boot firmware components: up to $200,000.
  • Extraction of confidential material protected by the Secure Enclave: up to $100,000.
  • Execution of arbitrary code with kernel privileges: up to $50,000.
  • Unauthorized access to iCloud account data on Apple servers: up to $50,000.
  • Access from a sandboxed process to user data outside that sandbox: up to $25,000.

I have learned a lot more about Apple over the decade since I started covering the company, and Apple itself has evolved far more than I ever expected. From a company that seemed fine just skating by on security, to one that now battles governments to protect customer privacy.

It’s been a good ten years, and thanks for reading.

—Rich

Thursday, July 28, 2016

Incident Response in the Cloud Age [new paper]

By Mike Rothman

Incident response is tough enough today. But when you need to deal with faster networks, an increasingly mobile workforce, and that thing called cloud computing, IR gets even harder. Sure, there are new technologies like threat intelligence, better network and endpoint telemetry, and analytics to help you investigate faster. But don’t think you’ll be able to do the same thing tomorrow as you did yesterday. You will need to evolve your incident response process and technology to handle the cloud age, just like you have had to adapt many of your other security functions to this new reality.


Our Incident Response in the Cloud Age paper digs into impacts of the cloud, faster and virtualized networks, and threat intelligence on your incident response process. Then we discuss how to streamline response in light of the lack of people to perform the heavy lifting of incident response. Finally we bring everything together with a scenario to illuminate the concepts.

We would like to thank SS8 for licensing this paper. Our Totally Transparent Research method provides you with access to forward-looking research without paywalls.

Check out our research library or download the paper directly (PDF).

—Mike Rothman

Wednesday, July 27, 2016

Incite 7/27/2016: The 3 As

By Mike Rothman

One of the hardest things for me to realize has been that I don’t control everything. I spent years railing against the machine, and getting upset when nothing changed. Active-minded people (as opposed to passive) believe they make their own opportunities and control their destiny, sometimes by force of will. Over the past few years, I needed a way to handle this reality and not make myself crazy. So I came up with 3 “A” words that make sense to me. The first ‘A’, Acceptance, is very difficult for me because it goes against most of what I believe. When you think about it, acceptance seems so defeatist. How can you push things forward and improve them if you accept the way they are now? I struggled with this for the first 5 years I practiced mindfulness.

What I was missing was the second ‘A’, Attachment. Another very abstract concept. But acceptance of what you can’t control is really contingent on not getting attached to how it works out. I would get angry when things didn’t work out the way I thought they should have. As if I were the arbiter of everything right and proper. LOL. If you are OK with however things work out, then there is no need to rail against the machine. Ultimately I had to acknowledge that everyone has their own path, and although their path may not make sense to me on my outsider’s perch, it’s not my place to judge whether it’s the right path for that specific person. Just because it’s not what I’d do, doesn’t mean it’s the wrong choice for someone else.

AAA Neon

In order to evolve and grow, I had to acknowledge there are just some things that I can’t change. I can’t change how other people act. I can’t change the decisions they make. I can’t change their priorities. Anyone with kids has probably banged heads with them because the kids make wrong-headed decisions and constantly screw up such avoidable situations. If only they’d listen, right? RIGHT? Or is that only me?

This impacts every relationship you have. Your spouse or significant other will do things you don’t agree with. At work you’ll need to deal with decisions that don’t make sense to you. But at the end of the day, you can stamp your feet all you want, and you’ll end up with sore feet, but that’s about it. Of course in my role as a parent, advisor, and friend, I can make suggestions. I can offer my perspectives and opinions about what I’d do. But that’s about it. They are going to do whatever they do.

This is hardest when that other person’s path impacts your own. In all aspects of our lives (both personal and professional) other people’s decisions have a significant effect on you. Both positive and negative. But what made all this acceptance and non-attachment work for me was that I finally understood that I control what I do. I control how I handle a situation, and what actions I take as a result. This brings us to the 3rd ‘A’, Adapt. I maintain control over my own situation by adapting gracefully to the world around me. Sometimes adapting involves significant alterations of the path forward. Other times it’s just shaking your head and moving on.

I did my best to do all of the above as I moved forward in my personal life. I do the same on a constant basis as we manage the transition of Securosis. My goal is to make decisions and act with kindness and grace in everything I do. When I fall short of that ideal, I have an opportunity to accept my own areas of improvement, let go, and not beat myself up (removing Attachment), and Adapt to make sure I have learned something and won’t repeat the same mistake again.

We all have plenty of opportunity to practice the 3 As. Life is pretty complicated nowadays, with lots of things you cannot control. This makes many people very unhappy. But I subscribe to the Buddhist proverb, “Pain is inevitable. Suffering is optional.” Acceptance, removing attachment, and adapting accordingly help me handle these situations. Maybe they can help you as well.

–Mike

Photo credit: “AAA” from Dennis Dixson


Security is changing. So is Securosis. Check out Rich’s post on how we are evolving our business.

We’ve published this year’s Securosis Guide to the RSA Conference. It’s our take on the key themes of this year’s conference (which is really a proxy for the industry), as well as deep dives on cloud security, threat protection, and data security. And there is a ton of meme goodness… Check out the blog post or download the guide directly (PDF).

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour. Your emails, alerts, and Twitter timeline will be there when you get back.


Securosis Firestarter

Have you checked out our video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.


Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Managed Security Monitoring

Evolving Encryption Key Management Best Practices

Incident Response in the Cloud Age

Understanding and Selecting RASP

Maximizing WAF Value

Recently Published Papers


Incite 4 U

  1. Ant security man: I enjoyed the Ant-Man movie. Very entertaining. Though I’m not such a big fan of real ants. They are annoying and difficult to get rid of. Like kids. But I guess I shouldn’t say that out loud. Anyway, ants bumping into each other can yield interesting information about the density of anything the ants are looking for. So you could have a virtual ant (a sensor in IT parlance) looking for a certain pattern of activity, which might indicate an attack. And you could see a bunch of these virtual ants gathering within a certain network segment or application stack, which might indicate something which warrants further investigation. Would this work? I have no idea – this is based on some MIT dude’s doctoral thesis. But given how terrible most detection remains, perhaps we need to get smaller to be more effective. – MR

  2. SQL security in NoSQL: Jim Scott over at LinkedIn offers a great presentation on how architects need to change their mindset when Evolving from RDBMS to NoSQL + SQL platforms. The majority of the post covers how to free yourself from relational constraints and mapping needs to NoSQL capabilities. With most disruptive technologies (including the cloud & mobile), “lift and shift” is rarely a good idea, and re-architecting your applications free of the dogma associated with older platforms is the way to go. Surprisingly, that does not seem to apply to SQL – Hive, Impala and other technologies add SQL queries atop Hadoop, making SQL the preferred type of query engine. Additionally we are seeing the recreation of views and view-based data masks – in this case with the Drill module – to remove sensitive data from data sets. There are many ways to provide masking with NoSQL platforms, but Drill is a simple tool to help developers shield sensitive data without changing queries. The view presented depends on the user’s credentials, making security invisible to the user. – AL

  3. Cloud migration challenges? Start from scratch instead: At SearchSecurity Dave Shackleford outlined cloud migration challenges, including making sure only the ‘right’ data is moved off-premise, a bunch of limitations involving the cloud provider’s available controls, and ensuring they have an audited data processing environment. Dave concludes the security team should be involved in migration planning, which is true. But we’d say the entire idea of migration is a bit askew. In reality you are likely to start over as you move key applications to the cloud, so you can take advantage of its unique architecture and services. We understand that you need to accept and work within real-world constraints, but rather than trying to replicate your data center in the cloud you should be recreating applications to leverage the cloud as much as possible. – MR

  4. Take this, it’s good for you: Our friend Vinnie Liu interviewed the CSO of Dun & Bradstreet on integrating Agile techniques into security management and deployments. This is a textbook case, worth reading. All too often firms get Agile right when it comes to development, and then find every other organization in the company is decidedly not Agile. Mr. Rose relates that many security tools of the past few years have been pretty crappy; they have had to evolve in both their core capabilities and how they work, as teams become more Agile. We talk a lot about the cutting edge of technologies, but much of the industry is still coming to grips with how to integrate security into IT and development. A bit like getting a flu shot: you know you need to, but there is some inevitable pain in the process. – AL

  5. Stop the presses! Ransomware works! Sometimes I just need to poke fun at the masters of the obvious out there. Evidently the MS-ISAC (which represents state and county governments in the US) has proclaimed that Ransomware is the top threat. To be clear, it’s malware. So that’s a bit like saying malware is the top threat. OK, it’s special malware, which uses a diabolical method of stealing money, by encrypting data and holding the key hostage. It’s newer and can be more damaging, but it’s still malware. Their guidance is to make sure your files are backed up, and that’s a good idea as well. Not just because you could get popped by ransomware, but also because you should just have backups. That’s simple operational stuff. Ugh. Though I guess I should give the MS-ISAC some props for educating smaller government IT shops about basic security stuff. So here are your props. – MR

—Mike Rothman

Friday, July 15, 2016

Summary: News…. and pulling an AMI from Packer and Jenkins

By Rich

Rich here.

Before I get into tech content, a quick personal note. I just signed up for my first charity athletic event, and will be riding 250 miles in 3 days to support challenged athletes. I’ve covered the event costs, so all donations go right to the cause. Click here if you are interested in supporting the Challenged Athletes Foundation (and my first attempt at fundraising since I sold lightbulbs for the Boy Scouts. Seriously. Lightbulbs. Really crappy ones which burned out in months, making it very embarrassing to ever hit that neighborhood again. Then again, that probably prepared me for a career in security sales).

Publishing continues to be a little off here at Securosis as we all wrestle with summer vacations and work trips. That said, instead of the Tool of the Week I’m going with a Solution of the Week this time, because I ran into what I think is a common technical issue I couldn’t find covered well anyplace else.

With that, let’s jump right in…

Top Posts for the Week

Solution of the Week

As I was building the deployment pipeline lab for our cloud security training at Black Hat, I ran into a small integration issue that I was surprised I could not find documented anyplace else. So I consider it my civic duty to document it here.

The core problem comes when you use Jenkins and Packer to build Amazon Machine Images (AMIs). I previously wrote about Jenkins and Packer. The flow is that you make a code (or other) change, which triggers Jenkins to start a new build, which uses Packer to create the image. The problem is that there is no built-in way to pull the image ID out of Packer/Jenkins and pass it on to the next step in your process.

Here is what I came up with. This won’t make much sense unless you actually use these tools, but keep it as a reference in case you ever go down this path. I assume you already have Jenkins and Packer working together.

When Packer runs it outputs the image ID to the console, but that isn’t made available as a variable you can access in any way. Jenkins is also weird about how you create variables to pass on to other build steps. This process pulls the image ID from the stored console output, stores it in a file in the workspace, then allows you to trigger other builds and pass the image ID as a parameter.

  • Install the following additional plugins in Jenkins:
    • Post-Build Script Plugin
    • Parameterized Trigger plugin
  • Get your API token for Jenkins by clicking on your name > configure.
  • Make sure your job cleans the workspace before each build (it’s an environment option).
  • Create a post-build task and choose “Execute a set of scripts”.
  • Adjust the following code and replace the username and password with your API credentials. Then paste it into the “Execute Shell” field. This was for a throwaway training instance I’ve already terminated so these embedded credentials are worthless. Give me a little credit please:

    wget --auth-no-challenge --user=<username> --password=<api-token> http://127.0.0.1:8080/job/Website/lastBuild/consoleText
    export IMAGE_ID=$(grep -P -o -m 1 '(?<=AMI:\s)ami-.{8}' consoleText)
    echo IMAGE_ID=$IMAGE_ID >> params.txt

The wget calls the API for Jenkins, which provides the console text, which includes the image ID (which we grep out). Jenkins can run builds on slave nodes, but the console text is stored on the master, which is why it isn’t directly accessible some other way.

  • The image ID is now in the params.txt file in the workspace, so any other post-build steps can access it. If you want to pass it to another job you can use the Parameterized Trigger plugin to pass the file. In our training class we add other AWS-specific information in that file to run automated deployment using some code I wrote for rolling updates.
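For reference, the file the Parameterized Trigger plugin consumes is a plain key=value properties file. A sketch of what ours might look like follows; everything beyond IMAGE_ID is a hypothetical example of the kind of AWS-specific extras mentioned above, not the actual contents of our training file:

```properties
IMAGE_ID=ami-0a1b2c3d
# Hypothetical extras a downstream deployment job might read
AWS_REGION=us-west-2
AUTOSCALING_GROUP=demo-web-asg
```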

This isn’t hard, and I saw blog posts saying “pull it from the console text”, but without any specifics of how to access the text or what to do with the ID afterwards so you can access it in other post-build steps or jobs. In our case we do a bunch more, including launching an instance from the image for testing with Gauntlt, and then the rolling update itself if all tests pass.

Securosis Blog Posts This Week

Other Securosis News and Quotes

Training and Events

—Rich

Thursday, June 30, 2016

Managed Security Monitoring: Selecting a Service Provider

By Mike Rothman

Based on the discussion in our first post, you have decided to move toward a managed security monitoring service. Awesome! That was the easy part. Now you need to figure out what kind of deployment model makes sense, and then do the hard work of actually selecting the best service provider for you.

That’s an important distinction to get straight up front. Vendor selection is about your organization. We know it can be easier to just go with a brand name. Or a name in the right quadrant to pacify senior management. Or the cheapest option. But none of those might be the best choice for your requirements. So the selection process requires an open mind and doing the work. You may end up with the brand name. Or the cheapest one. But at least you’ll know you got the best fit.

Deployment Options

The deployment decision really comes down to two questions:

  1. Who owns the security monitoring platform? Who buys the monitoring platform? Is it provided as part of a service, or do you have to buy it up front? Who is in charge of maintenance? Who pays for upgrades? What about scaling up? Or are you jumping onto a multi-tenant monitoring platform owned by your service provider?
  2. Where is the SOC? Who staffs it? The other key question concerns operation of the security monitoring platform. Is the central repository and console on your premises? Does it run in your service provider’s data center? Does it run in the cloud? Who fields the staff, especially if some part of the platform will run at your site?

MSM Deployment models

To break down the chart above, here are the options, which depend on how you answered the questions above:

  1. Traditional: The customer buys and operates the security monitoring platform. Alternatively the provider might buy the platform and charge the customer monthly, but that doesn’t affect operations. Either way the monitoring platform runs on the customer premises, staffed by the customer. This is not managed security monitoring.
  2. Hybrid: The customer owns the monitoring platform, which resides on-premise at the customer, but the service provider manages it. The provider handles alerts and is responsible for maintenance and uptime of the system.
  3. Outsourced: The service provider owns the platform that resides on the customer’s premises. Similar to the hybrid model, the provider staffs the SOC and assumes responsibility for operation and maintenance.
  4. Single-tenant: The service provider runs the monitoring platform in their SOC (or the cloud), but each customer gets its own instance, and there is no comingling of security data.
  5. Multi-tenant: The service provider has a purpose-built system to support many clients within the same environment, running in their SOC or the cloud. The assumption is that application security controls are built into the system to ensure customer data is accessible only to authorized users, but that’s definitely something to check as part of your due diligence on the provider’s architecture.

Selecting Your Provider

We could probably write a book about selecting (and managing) a security monitoring service provider, and perhaps someday we will. But for now here are a few things to think about:

  • Scale: You want a provider who can support you now and scale with you later. Having many customers roughly your size, as well as a technology architecture capable of supporting your plans, should be among your first selection criteria.
  • Viability: Similarly important is your prospective provider’s financial stability. Given the time and burden of migration, and the importance of security monitoring, having a provider go belly up would put you in a precarious situation. Many managed security monitoring leaders are now part of giant technology companies, so this isn’t much of an issue any more. But if you are working with a smaller player, make sure you are familiar with their financials.
  • Technology architecture: Does the provider use their own home-grown technology platform to deliver the service? Is it a commercial offering they customized to meet their needs as a provider – perhaps adding capabilities such as multi-tenancy? Did they design their own collection device, and does it support all your security/network/server/database/application requirements? Where do they analyze and triage alerts? Is it all within their system, or do they run a commercial monitoring platform? How many SOCs do they have, and how do they replicate data between sites? Understand exactly how their technology works so you can assess whether it fits your particular use cases and scalability requirements.
  • Staff Expertise: It’s not easy to find and retain talented security analysts, so be sure to vet the background of the folks the provider will use to handle your account. Obviously you can’t vet them all, but understand the key qualifications of the analyst team – things like years of experience, years with the provider, certifications, ongoing training requirements, etc. Also make sure to dig into their hiring and training regimens – over time they will need to hire new analysts and quickly get them productive, to deal with industry growth and the inevitable attrition.
  • Industry specialization: Does this provider have many clients in your industry? This is important because there are many commonalities to both traffic dynamics and attack types within an industry, and you should leverage the provider’s familiarity. Given the maturity of most managed security offerings, it is reasonable to expect a provider to have a couple dozen similar customers in your industry.
  • Research capabilities: One reason to consider a managed service is to take advantage of resources you couldn’t afford yourself, which a provider can amortize across their customers. Security research and the resulting threat intelligence are good examples. Many providers have full-time research teams investigating emerging attacks, profiling them, and keeping their collection devices up to date. Get a feel for how large and capable a research team a provider has, how their services leverage their research, and how you can interact with the research team to get the answers you need.
  • Customization: A service provider delivers a reasonably standard service – leveraging a core set of common features is key to their profitability. That means you might not get as much customizability with a managed offering. Or it might be expensive. Some providers may argue, but be very wary of those offering to highly customize their environment just for you, because it’s hard to make that model work at scale.
  • Service Level Agreements: Finally make sure your SLAs provide realistic assurances. Look for a dedicated account team, reasonable response times, clear escalation procedures, criteria for scope expansion/contraction, and a firm demarcation of responsibility before you sign anything. Once the deal is signed you have no leverage to change terms, so use your leverage during courting to make sure your SLAs reflect the way you do business. Ultimately you will need to trust that your provider will do their job, and resolve issues as they emerge.

You may also want to consider taking the service for a spin as part of your selection process. Similar to the Proof of Concept (PoC) process we outlined above, start small by collecting data from a handful of devices and running through the typical use cases driving your purchase. With a service offering it is as much about the interface and user experience as anything else, but be sure to test the alerting process, as well as escalation procedures for when the provider doesn’t meet your service level.

Checking References

There are at least two sides to every story. We have seen very successful security monitoring engagements, with customers large and small. We have also seen train wrecks. Of course the customer can be as responsible as the service provider when things go off the rails, but ultimately it’s your responsibility to perform sufficient due diligence to learn the good, the bad, and the ugly about your potential provider before you sign.

That means talking to both happy and unhappy customers. Obviously a provider is unlikely to introduce you to disgruntled customers, but they are always happy to find happy customers who chose them over another provider. Leverage all the vendors competing for your business to assemble a set of both positive and not-so-positive references for potential providers.

Specifically, dig into a few areas:

  • Deployment & migration: Make sure you understand the process to move to this provider’s platform. How will they deploy collectors? Can they import your existing data? What kind of project management oversight governs deployment and cutover? These are key questions to bring up during your reference calls. Ask for a very specific migration plan up front.
  • Responsiveness: What kind of experience have customers had getting alerts and investigating issues? Have their analysts been helpful? Do they provide enough information to perform your own investigation? When the bad guys come knocking you won’t have time to fuss with bureaucracy or issues getting in your way. You’ll need the data, and to get your investigation moving – the provider must not hinder that process. Build responsiveness metrics into your agreement, along with escalation policies and penalties for violations.
  • Expertise: Do they know what they are talking about? Did they do a bait and switch with the analysts monitoring customer networks? How precise and accurate are their alerts? Everything looks great during the sales cycle, but you want to make sure the A team (or at least the B+ team) is working your account on a daily basis.
  • SLA violations: Not everything goes well. So learn how the provider deals with issues. Are they responsive? Do they work until the problem is solved? Have they been sued for breach of contract by other customers? This is where discussions with former clients can be very useful. There is usually a reason they are former clients, so find out. The provider should have a standard SLA for you to review.
  • Account management: How does the relationship evolve over time? Is the account rep just there to sell you more services? Does the provider check in periodically to see how things are going? Do they take your feedback on product shortcomings and feature requests seriously? A service provider is really a partnership, so make sure this provider actually acts like a partner to their customers.

Mismatched Expectations

As when an on-premise security monitoring implementation goes awry, the root cause can usually be traced back to mismatched expectations. With a monitoring service always keep in mind what the service does, and don’t expect it to be something it’s not. Don’t count on deep customization or deep off-menu capabilities, unless they are agreed to up front.

Using a service provider for security monitoring can help provide resources and capabilities you don’t have in-house. That said, you need to perform due diligence to ensure you have both the right choice, and the right structure in place to manage them.

—Mike Rothman

Building a Threat Intelligence Program [New Paper]

By Mike Rothman

Threat Intelligence has made a real difference in how organizations focus resources on their most significant risks. Yet far too many organizations continue to focus on very tactical use cases for external threat data. These help, but they underutilize the intelligence’s capabilities and potential. The time has come to advance threat intelligence into a broader and more structured TI program, to ensure systematic, consistent, and repeatable value. A program must account for ongoing changes in attack indicators and keep up with the evolution of adversaries’ tactics.

Our Building a Threat Intelligence Program paper offers guidance for designing a program and systematically leveraging threat intelligence. This paper is all about turning tactical use cases into a strategic TI capability to enable your organization to detect attacks faster.

TIPR Cover

We would like to thank our awesome licensees, Anomali, Digital Shadows, LookingGlass Cyber Solutions and BrightPoint Security for supporting our Totally Transparent Research. It enables us to think objectively about how to leverage new technology in systematic programs to make your security consistent and reproducible.

You can get the paper in our research library.

—Mike Rothman

Incite 6/29/16: Gone Fishin’ (Proverbially)

By Mike Rothman

It was a great Incite. I wrote it on the flight to Europe for the second leg of my summer vacation. I said magical stuff. Such depth and perspective, I even amazed myself. When I got to the hotel in Florence and went to post the Incite on the blog, it was gone. That’s right: G. O. N. E.

And it’s not going to return. I was sore for a second. But I looked at Mira (she’s the new love I mentioned in a recent Incite) and smiled. I walked outside our hotel and saw the masses gathered to check out the awe-inspiring Duomo. It was hard to be upset, surrounded by such beauty.

It took 3 days to get our luggage after Delta screwed up a rebooking because our flight across the pond was delayed, which made us upset. But losing an Incite? Meh. I was on vacation, so worrying about work just wasn’t on the itinerary.

Over the years, I usually took some time off during the summer when the kids were at camp. A couple days here and there. But I would work a little each day. Convincing myself I needed to stay current, or I didn’t want things to pile up and be buried upon my return. It was nonsense. I was scared to miss something. Maybe I’d miss out on a project or a speaking gig.

Gone fishin'

It turns out I can unplug, and no one dies. I know that because I’m on my way back after an incredible week in Florence and Tuscany, and then a short stopover in Amsterdam to check out the city before re-entering life. I didn’t really miss anything. Though I didn’t really totally unplug either. I checked email. I even responded to a few. But only things that were very critical and took less than 5 minutes.

Even better, my summer vacation isn’t over. It started with a trip to the Jersey shore with the kids. We visited Dad and celebrated Father’s Day with him. That was a great trip, especially since Mira was able to join us for the weekend. Then it was off to Europe. And the final leg will be another family trip for the July 4th holiday. All told, I will be away from the day-to-day grind close to 3 weeks.

I highly recommend a longer break to regain sanity. I understand that’s not really feasible for a lot of people. Fortunately getting space to recharge doesn’t require you to check out for 3 weeks. It could be a long weekend without your device. It could just be a few extra date nights with a significant other. It could be getting to a house project that just never seems to get done. It’s about breaking out of routine, using the change to spur growth and excitement when you return.

So gone fishin’ is really a metaphor, about breaking out of your daily routine to do something different. Though I will take that literally over the July 4 holiday. There will be fishing. There will be beer. And it will be awesome.

For those of you in the US, have a safe and fun July 4. For those of you not, watch the news – there are always a few Darwin Awards given out when you mix a lot of beer with fireworks.

–Mike

Photo credit: “Gone Fishing” from Jocelyn Kinghorn


Incite 4 U

  1. More equals less? Huh? Security folks are trained that ‘more’ is rarely a good thing. More transactions means more potential fraud. More products means more integration and maintenance cost. Even more people can challenge efficiency. But do more code deploys mean fewer security headaches? Of course the folks from Puppet want you to believe that’s the case, because they are highlighted in this article referring to some customer successes. It turns out our research (including building pipelines and prototyping our own applications) shows that automation and orchestration do result in fewer security issues. It’s about reducing human error. To be clear, if you set up the deployment pipeline badly and screw up the configurations, automation will kill your app. But if you do it right there are huge gains to be had in efficiency, and in reduced attack surface. – MR

  2. A bird in the hand: Jim Bird has a new O’Reilly book called DevOpsSec: Securing Software through Continuous Delivery (PDF). It’s a good primer on the impact of continuous deployment on secure code development. Jim discusses several success stories of early DevOps security initiatives, outlining the challenges of integrating security into the process, the culture, and the code. Jim has contributed a ton of research back to the community over the years, and he is asking for feedback and corrections on the book. So download a free copy, and please help him out. – AL

  3. Stimulating the next security innovations: I mentioned in the last Incite that DARPA was funding some research into the next iteration of DDoS technologies. Not to be outdone, the Intelligence Advanced Research Projects Activity (IARPA) office is looking for some ideas on the evolution of intruder deception. Rich has been interested in these technologies for years, and this is one of the disruptions he laid out in the Future of Security research. It’s clear we won’t be able to totally stop attackers, but we can and should be able to set more traps for them. At least make their job a little harder, and then you aren’t the path of least resistance. And kudos to a number of government agencies putting money up to stimulate innovation needed to keep up with bad folks. – MR

  4. Relevance falling: The PCI Guru asks Is the PCI-DSS even relevant any more?, a question motivated by the better-late-than-never FTC investigation of breaches at major retailers. He argues that with ubiquitous point-to-point and end-to-end encryption (P2PE and E2EE respectively) and tokenization removing credit cards from transactions, the value of a CC# goes down dramatically. Especially because CC# is no longer used for other business processes. We think this assertion is accurate – replace credit card numbers with systemic tokens from issuing banks, and PCI-DSS’s driver goes out the window. By the way, this is what happens with mobile payments and chipped credit cards: real CC#s no longer pass between merchants and processors. To be clear, without the credit card data – and so long as the Primary Account Reference (PAR) token is not present in the transaction stream – encryption no longer solves a security problem. Neither will PCI-DSS, so we expect it to die gradually, as the value of credit card numbers becomes nil. – AL

  5. Mailing it in: Everyone has too much to do, and not enough skilled resources to do it all. But when you contract with a professional services firm to perform something like an incident response, it would be nice if they actually did the work and professionally documented what they found. It’s not like you aren’t trying to figure out what happened during an attack – both to communicate to folks who lost data, and to make important funding and resource allocation decisions so you can move forward. But what happens when the consultant just does a crappy job? You sue them, of course. Ars Technica offers a good article on how a firm sued TrustWave after their allegedly crappy job was rescued by another services firm. I wasn’t there and didn’t see the report, so I can’t really judge the validity of the claims. But I think a customer standing up to a response firm and calling them out is positive. Consultants beware: you just can’t mail it in – it’s not fair to the customer, and likely to backfire. – MR

—Mike Rothman

Monday, June 27, 2016

Managed Security Monitoring: Use Cases

By Mike Rothman

Many security professionals feel the deck is stacked against them. Adversaries continue to improve their techniques, aided by plentiful malware kits and botnet infrastructures. Continued digitization at pretty much every enterprise means everything of interest is on some system somewhere. Don’t forget the double whammy of mobile and cloud, which democratizes access without geographic boundaries, and takes the one bastion of control, the traditional data center, out of your direct control. Are we having fun yet?

Of course the news isn’t all bad – security has become very high profile. Getting attention and resources can sometimes be a little too easy – life was simpler when we toiled away in obscurity bemoaning that senior management didn’t understand or care about security. That’s clearly not the case today, as you get ready to present the security strategy to the board of directors. Again. And after that’s done you get to meet with the HR team trying to fill your open positions. Again.

In terms of fundamentals of a strong security program, we have always believed in the importance of security monitoring to shorten the window between compromise and detection of compromise. As we posted in our recent SIEM Kung Fu paper:

Security monitoring needs to be a core, fundamental, aspect of every security program.

There are a lot of different concepts of what security monitoring actually is. It certainly starts with log aggregation and SIEM, although many organizations are looking to leverage advanced security analytics (either built into their SIEM or using third-party technology) to provide better and faster detection. But that’s not what we want to tackle in this new series, titled Managed Security Monitoring. It’s not about whether to do security monitoring, but about the most effective way to monitor your resources.

Given the challenges of finding and retaining staff, the increasingly distributed nature of data and systems that need to be monitored, and the rapid march of technology, it’s worth considering whether a managed security monitoring service makes sense for your organization. The fact is that, under the right circumstances, a managed service presents an interesting alternative to racking and stacking another set of SIEM appliances. We will go through drivers, use cases, and deployment architectures for those considering managed services. And we will provide cautions for areas where a service offering might not meet expectations.

As always, our business model depends on forward-looking companies who understand the value of objective research. We’d like to thank IBM Security Systems for agreeing to potentially license this paper once completed. We’ll publish the research using our Totally Transparent Research methodology, which ensures our work is done in an open and accessible manner.

Drivers for Managed Security Monitoring

We have no illusions about the amount of effort required to get a security monitoring platform up and running, or what it takes to keep one current and useful, given the rapid adaptation of attackers and automated attack tools in use today. Many organizations feel stuck in a purgatory of sorts, reacting without sufficient visibility, yet not having time to invest to gain that much-needed visibility into threats. A suboptimal situation, often the initial trigger for discussion of managed services. Let’s be a bit more specific about situations where it’s worth a look at managed security monitoring.

  • Lack of internal expertise: Even having people to throw at security monitoring may not be enough. They need to be the right people – with expertise in triaging alerts, validating exploits, closing simple issues, and knowing when to pull the alarm and escalate to the incident response team. Reviewing events, setting up policies, and managing the system all take skills that come with training and time with the security monitoring product. Clearly this is not a skill set you can just pick up anywhere – finding and keeping talented people is hard – so if you don’t have sufficient expertise internally, that’s a good reason to check out a service-based alternative.
  • Scalability of existing technology platform: You might have a decent platform, but perhaps it can’t scale to what you need for real-time analysis, or has limitations in capturing network traffic or other voluminous telemetry. And for organizations still using a first generation SIEM with a relational database backend (yes, they are still out there), you face a significant and costly upgrade to scale the system. With a managed service, scale is not an issue – any sizable provider handles billions of events per day, so scalability of the technology isn’t your problem – as long as the provider hits your SLAs.
  • Predictable Costs: To be the master of the obvious, the more data you put into a monitoring system, the more storage you’ll need. The more sites you want to monitor and the deeper you want visibility into your network, the more sensors you need. Scaling up a security monitoring environment can become costly. One advantage of managed offerings is predictable costs. You know what you’re monitoring and what it costs. You don’t have variable staff costs, nor do you have out-of-cycle capital expenses to deal with new applications that need monitoring.
  • Technology Risk Transference: You have been burned before by vendors promising the world without delivering much of anything. That’s why you are considering alternatives. A managed monitoring service enables you to focus on the functionality you need, instead of trying to determine which product can meet your needs. Ultimately you only need to be concerned with the application and the user experience – all that other stuff is the provider’s problem. Selecting a provider becomes effectively an insurance policy to minimize your technology investment risk. Similarly, if you are worried about your ops team’s ability to keep a broad security monitoring platform up and running, you can transfer operational risk to the provider, who assumes responsibility for uptime and performance – so long as your SLAs are structured properly.
  • Geographically dispersed small sites: Managed services also interest organizations needing to support many small locations without a lot of technical expertise. Think retail and other distribution-centric organizations. This presents a good opportunity for a service provider who can monitor remote sites.
  • Round the clock monitoring: As security programs scale and mature, some organizations decide to move from an 8-hour/5-day monitoring schedule to a round-the-clock approach. Soon after making that decision, the difficulty of staffing a security operations center (SOC) 24/7 sets in. A service provider can leverage a 24/7 staffing investment to deliver round-the-clock services to many customers.

Of course you can’t outsource thinking or accountability, so ultimately the buck stops with the internal team, but under the right circumstances managed security monitoring services can address skills and capabilities gaps.

Favorable Use Cases

The technology platform used by the provider may be the equal of an in-house solution, as many providers use commercial monitoring platforms as the basis for their managed services. This is a place for significant diligence during procurement, as we will discuss in our next post. As mentioned above, there are a few use cases where managed security monitoring makes a lot of sense, including:

  • Device Monitoring/Alerting: This is the scaling and skills issue. If you have a ton of network and security devices, but you don’t have the technology or people to properly monitor them, managed security monitoring can help. These services are generally architected to aggregate data on your site and ship it to the service provider for analysis and alerting, though a variety of different options are emerging for where the platform runs and who owns it. Central to this use case is a correlation system to identify issues, a means to find new attacks (typically via a threat intelligence capability) and a bunch of analysts who can triage and validate issues quickly, and then provide an actionable alert.
  • Advanced Detection: With the increasing sophistication of attackers, it can be hard for an organization’s security team to keep pace. A service provider has access to threat intelligence, presumably multiple clients across which to watch for emerging attacks, and the ability to amortize advanced security analytics across customers. Additionally, specialized (and expensive) malware researchers can be shared among many customers, making it more feasible for a service provider to employ those resources than for most individual organizations.
  • Compliance Reporting: Another no-brainer for a managed security monitoring alternative is basic log aggregation and reporting – typically driven by a compliance requirement. This isn’t a very complicated use case, and it fits service offerings well. It also gets you out of the business of managing storage and updating reports when a requirement/mandate changes. The provider should take care of all that for you.
  • CapEx vs. OpEx: As much as it may hurt a security purist, buying decisions come down to economics. Depending on your funding model and your organization’s attitude toward capital expenses, leasing a service may be a better option than buying outright. Of course there are other ways to turn a capital purchase into an operational expense, and we’re sure your CFO will have plenty of ideas on that front, but buying a service can be a simple option for avoiding capital expenditure. Obviously, given the long and involved process to select a new security monitoring platform, you must make sure the managed service meets your needs before economic considerations come into play – especially if there’s a risk of Accounting’s preferences driving you to spend big on an unsuitable product. No OpEx vs. CapEx tradeoff can make a poorly matched service offering meet your requirements.

There are other offerings and situations where managed security monitoring makes sense, which have nothing to do with the nice clean buckets above. We have seen implementations of all shapes and sizes, and we need to avoid overgeneralizing. But the majority of service implementations fit these general use cases.

Unfavorable Use Cases

Of course there are also situations where a monitoring service may not be a good fit. That doesn’t mean you can’t use a service because of extenuating circumstances, typically having to do with a staffing and skills gap. But generally these situations don’t make for the best fit for a service:

  • Dark Networks: Due to security requirements, some networks are dark, meaning no external access is available. These are typically highly sensitive military and/or regulated environments. Clearly this is problematic for a security monitoring service because the provider cannot access the customer network. To address skills gaps you’d instead consider a dedicated onsite resource, and either buy a security monitoring platform yourself or lease it from the provider.
  • Highly Sensitive IP: On networks where the intellectual property is particularly valuable, the idea of providing access to external parties is usually a non-starter. Again, this situation would call for dedicated on-site resources helping to run your on-premise security monitoring platform.
  • Large Volumes of Data: If your organization is very large and has a ton of logs and other telemetry for security monitoring, this can challenge a service offering that requires data to be moved to a cloud-based service, including network forensics and packet analytics. In this case an on-premise monitoring service will likely be the best solution. Note the new hybrid offerings which capture data and perform security analytics on-premise using resources in a shared SOC. We’ll discuss these hybrid offerings in our next post.

As with the favorable use cases, the unfavorable use cases are strong indicators but not absolute. It really depends on the specific requirements of your situation, your ability to invest in technology, and the availability of skilled resources.

These generalizations should give you a starting point to consider a managed security monitoring service. Our next post will get into specifics of selection criteria, service levels, and deployment models.

—Mike Rothman

Friday, June 24, 2016

Summary: Modifying rsyslog to Add Cloud Instance Metadata

By Rich

Rich here.

Quick note: I basically wrote an entire technical post for Tool of the Week, so feel free to skip down if that’s why you’re reading.

Ah, summer. As someone who works at home and has children, I’m learning the pains of summer break. Sure, it’s a wonderful time without homework fights and after-school activities, but it also means all 5 of us are in the house nearly every day. It’s a bit distracting. I mean, do you have any idea how to tell a 3-year-old you cannot ditch work to play Disney Infinity on the Xbox?

Me neither, which explains my productivity slowdown.

I’ve actually been pretty busy at ‘real work’, mostly building content for our new Advanced Cloud Security course (it’s sold out, but we still have room in our Hands-On class). Plus a bunch of recent cloud security assessments for various clients. I have been seeing some interesting consistencies, and will try to write those up after I get these other projects knocked off. People are definitely getting a better handle on the cloud, but they still tend to make similar mistakes.

With that, let’s jump right in…

Top Posts for the Week

Tool of the Week

I’m going to detour a bit and focus on something all you admin types are very familiar with: rsyslog. Yes, this is the default system logger for a big chunk of the Linux world, something most of us don’t think that much about. But as I build out a cloud logging infrastructure I found I needed to dig into it to make some adjustments, so here is a trick to insert critical Amazon metadata into your logs (usable on other platforms, but I can only show so many examples).

Various syslog-compatible tools generate standard log files and allow you to ship them off to a remote collector. That’s the core of a lot of performance and security monitoring. By default log lines look something like this:

 Jun 24 00:21:27 ip-172-31-40-72 sudo: ec2-user : TTY=pts/0 ; PWD=/var/log ; USER=root ; COMMAND=/bin/cat secure

That’s the line outputting the security log from a Linux instance. See a problem?

This log entry includes the host name (internal IP address) of the instance, but in the cloud a host name or IP address isn’t nearly as canonical as in traditional infrastructure. Both can be quite ephemeral, especially if you use auto scale groups and the like. Ideally you capture the instance ID or equivalent on other platforms, and perhaps also some other metadata such as the internal or external IP address currently associated with the instance. Fortunately it isn’t hard to fix this up.

The first step is to capture the metadata you want. In AWS just visit:

 http://169.254.169.254/latest/meta-data/

to see everything available. Or use something like:

 curl http://169.254.169.254/latest/meta-data/instance-id

to get just the instance ID. Then you have a couple options. One is to change the host name to be the instance ID. Another is to append it to log entries by changing the rsyslog configuration (/etc/rsyslog.conf on CentOS systems), as in the template below, which adds a %INSTANCEID% environment variable to the hostname (yes, this means you need to set INSTANCEID as an environment variable, and I haven’t tested this because I need to post the Summary before I finish, so you might need a little more text manipulation to make it work… but this should be close):

 template(name="forwardFormat" type="string"
          string="<%PRI%>%TIMESTAMP:::date-rfc3339% %INSTANCEID%-%HOSTNAME% %syslogtag:1:32%%msg:::sp-if-no-1st-sp%%msg%"
         )

There are obviously a ton of ways you could slice this, and you need to add it to your server build configurations to make it work (using Ansible/Chef/Puppet/packer/whatever). But the key is to capture and embed the instance ID and whatever other metadata you need. If you don’t care about strict syslog compatibility, you have more options. The nice thing about this approach is that it will capture all messages from all the system sources you normally log, and you don’t need to modify individual message formats.
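To make the first half of that concrete, here is a minimal Python sketch of fetching the instance ID with a timeout and a safe default, so a boot script can export it for rsyslog to use. This is purely illustrative – the "i-unknown" fallback value is made up, and your build tooling (Ansible/Chef/Puppet/packer) will dictate the real mechanism:

```python
import urllib.request

# AWS instance metadata endpoint for the instance ID.
METADATA_URL = "http://169.254.169.254/latest/meta-data/instance-id"

def get_instance_id(url=METADATA_URL, timeout=2):
    """Return the instance ID, or a placeholder when the endpoint is
    unreachable (e.g., running outside EC2). "i-unknown" is an arbitrary
    fallback chosen for this sketch."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.read().decode().strip()
    except OSError:  # covers URLError, timeouts, and connection failures
        return "i-unknown"

# A boot script could then persist the value where rsyslog can read it, e.g.:
#   echo "INSTANCEID=$(python get_instance_id.py)" >> /etc/environment
```

The same approach extends to any other metadata you want in your logs (local IP, availability zone, and so on) by changing the path under /latest/meta-data/.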

If you use something like the native Amazon/Azure/Google instance logging tools… you don’t need to bother with any of this. Those tools tend to capture the relevant metadata for you (e.g., using Amazon’s CloudWatch Logs agent, Azure’s Log Analytics, or Google’s Stackdriver). Check the documentation to make sure you get them configured correctly. But many clients want to leverage existing log management, so this is one way to get the essential data.

Securosis Blog Posts this Week

Other Securosis News and Quotes

Another quiet week…

Training and Events

—Rich

Wednesday, June 15, 2016

Shining a Light on Shadow Devices [New Paper]

By Mike Rothman

Visible devices are only some of the network-connected devices in your environment. There are hundreds, quite possibly thousands, of other devices you don’t know about on your network. You don’t scan them periodically, and you have no idea of their security posture. Each one can be attacked, and might provide an adversary with an opportunity to gain a presence in your environment. Your attack surface is much larger than you thought. In our Shining a Light on Shadow Devices paper, we discuss how attacks on these devices can become an issue on your network, along with some tactics to gain visibility, and then control, over all these network-connected devices.

We would like to thank ForeScout Technologies for licensing the content in this paper. Our unique Totally Transparent Research model enables us to think objectively about future attack vectors and speculate a bit on the impact to your organization, without paywalls or other such gates restricting access to research you may need.

You can get the paper from the landing page in our research library.

—Mike Rothman

Monday, June 13, 2016

Understanding and Selecting RASP: Buyers Guide

By Adrian Lane

Before we jump into today’s post, we want to thank Immunio for expressing interest in licensing this content. This type of support enables us to bring quality research to you, free of charge. If you are interested in licensing this Securosis research as well, please let us know. And we want to thank all of you who have been commenting throughout this series – we have received many good comments and questions. We have in fact edited most of the posts to integrate your feedback, and added new sections to address your questions. This research is certainly better for it! And it’s genuinely helpful that the community at large can engage in an open discussion, so thanks again to all of you who have participated.

We will close out this series by directing your attention to several key areas for buyers to evaluate, in order to assess suitability for your needs. With new technologies it is not always clear where the ‘gotchas’ are. We find many security technologies meet basic security goals, but after they have been on-premise for some time, you discover management or scalability nightmares. To help you avoid some of these pitfalls, we offer the following outline of evaluation criteria. The product you choose should provide application protection, but it should also be flexible enough to work in your environment. And not just during Proof of Concept (PoC) – every day.

  • Language Coverage: Your evaluation should ensure that the RASP platforms you are considering all cover the programming languages and platforms you use. Most enterprises we speak with develop applications on multiple platforms, so ensure that there is appropriate coverage for all your applications – not just the ones you focus on during the evaluation process.
  • Blocking: Blocking is a key feature. Sure, some of you will use RASP for monitoring and instrumentation – at least in the short term – but blocking is a huge part of RASP’s value. Without blocking there is no protection – even more to the point, get blocking wrong and you break applications. Evaluating how well a RASP product blocks is essential. The goal here is twofold: make sure the RASP platform is detecting the attacks, and then determine whether its blocking action negatively affects the application. We recommend penetration testing during the PoC, both to verify that common attack vectors are handled, and to gauge RASP behavior when attacks are discovered. Some RASPs simply block the request and return an error message to the user. In some cases RASP can alter a request to make it benign, then proceed as normal. Some products alter user sessions and redirect users to login again, or jump through additional hoops before proceeding. Most RASP products provide customers a set of options for how they should respond to different types of attacks. Most vendors consider attack detection techniques part of their “secret sauce”, so we are unable to offer insight into the differences. But just as important is how well application continuity is preserved when responding to threats, which you can monitor directly during evaluation.
  • Policy Coverage: It’s not uncommon for one or more members of a development team to be proficient with application security. That said, it’s unreasonable to expect developers to understand the nuances of new attacks and the details behind every CVE. Vulnerability research, methods of detection, and appropriate methods to block attacks are large parts of the value each RASP vendor provides. Your vendor spends days – if not weeks – developing each policy embedded into their tool. During evaluation, it’s important to ensure that critical vulnerabilities are addressed. But it is arguably more important to determine how – and how often – vendors update policies, and verify they include ongoing coverage. A RASP product cannot be better than its policies, so ongoing support is critical as new threats are discovered.
  • Policy Management: Two facets of policy management come up most often during our discussions. The first is identification of which protections map to specific threats. Security, risk, and compliance teams all ask, “Are we protected against XYZ threat?” You will need to show that you are. Evaluate policy lookup and reporting. The other is tuning how to respond to threats. As we mentioned above under ‘Blocking’, most vendors allow you to tune responses either by groups of issues, or on a threat-by-threat basis. Evaluate how easy this is to use, and whether you have sufficient options to tailor responses.
  • Performance: Being embedded into applications enables RASP to detect threats at different locations within your app, with context around the operation being performed. This context is passed, along with the user request, to a central enforcement point for analysis. The details behind detection vary widely between vendors, so performance varies as well. Each user request may generate dozens of checks, possibly including multiple external references. This latency can easily impact user experience, so sample how long analysis takes. Each code path will apply a different set of rules, so you will need to test several different paths, measuring both with and without RASP. You should do this under load to ensure that detection facilities do not bottleneck application performance. And you’ll want to understand what happens when some portion of RASP fails, and how it responds – does it “fail open”?
  • Scalability: Most web applications scale by leveraging multiple application instances, distributing user requests via a load balancer. As RASP is typically built into the application, it scales right along with it, without need for additional changes. But if RASP leverages external threat intelligence, you will want to verify this does not hamper scalability. For RASP platforms where the point of analysis – as opposed to the point of interception – is outside your application, you need to verify how the analysis component scales. For RASP products that work as a cloud service using non-deterministic code inspection, evaluate how their services scale.
  • API Compatibility: Most interest in RASP is prompted by a desire to integrate into application development processes, automating security deployment alongside application code, so APIs are a central feature. Ensure the RASP products you consider are compatible with Jenkins, Ansible, Chef, Puppet, or whatever automated build tools you employ. On the back end make sure RASP feeds information back into your systems for defect tracking, logging, and Security Information and Event Management (SIEM). This data is typically available in JSON, syslog, and other formats, but ensure each product provides what you need.
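To illustrate the back-end integration point – with entirely hypothetical field names, since every vendor’s event schema differs – a RASP alert delivered as JSON might be flattened into a syslog-style line for SIEM ingestion along these lines:

```python
import json

# Hypothetical RASP alert – real vendors' field names and schemas vary.
raw_event = json.dumps({
    "timestamp": "2016-06-13T14:02:11Z",
    "app": "billing-api",
    "rule": "sql-injection",
    "action": "blocked",
    "source_ip": "198.51.100.7",
})

def to_syslog_line(event_json, pri="<134>"):
    """Flatten a JSON alert into a single key=value line, prefixed with a
    syslog priority (134 = facility local0, severity informational)."""
    event = json.loads(event_json)
    fields = " ".join("{}={}".format(k, v) for k, v in event.items())
    return "{}{} rasp: {}".format(pri, event["timestamp"], fields)

print(to_syslog_line(raw_event))
# e.g. <134>2016-06-13T14:02:11Z rasp: timestamp=... app=billing-api rule=sql-injection ...
```

The point of a sketch like this is simply to confirm, during evaluation, that whatever format the product emits can be mapped cleanly into your defect tracking, logging, and SIEM pipelines.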

That concludes our series on RASP. As always, we encourage comments, questions and critique, so please let us know what’s on your mind.

—Adrian Lane