Wednesday, July 15, 2015

Incite 7/15/15 — On Top of the Worlds

By Mike Rothman

I discussed my love of exploring in the last Incite, and I have been fortunate to have time this summer to actually explore a bit. The first exploration was a family vacation to NYC. Well, kind of NYC. My Dad has a place on the Jersey shore, so we headed up there for a couple days and took day trips to New York City to do the tourist thing.

For a guy who grew up in the NY metro area, it’s a bit weird that I had never been to the Statue of Liberty. The twins studied the history of the Statue and Ellis Island this year in school, so I figured it was time. That was the first day trip, and we were fortunate to be accompanied by Dad and his wife, who spent a bunch of time in the archives trying to find our relatives who came to the US in the early 1900s. We got to tour the base of Lady Liberty’s pedestal, but I wasn’t on the ball enough to get tickets to climb up to the crown. There is always next time.

WTC

A few days later we went to the new World Trade Center. I hadn’t been to the new building yet and hadn’t seen the 9/11 memorial. The memorial was very well done, a powerful reminder of the resilience of NYC and its people. I made it a point to find the name of a fraternity brother who passed away in the attacks, and it gave me an opportunity to personalize the story for the kids. Then we headed up to the WTC observation deck. That really did put us on top of the world. It was a clear day and we could see for miles and miles and miles. The elevators were awesome, showing the skyline from 1850 to the present day as we rose 104 stories. It was an incredible effect, and the rest of the observation deck was well done. I highly recommend it for visitors to NY (and locals playing hooky for a day).

Then the kids went off to camp and I hit the road again. Rich was kind enough to invite me to spend the July 4th weekend in Boulder, where he was spending a few weeks over the summer with family. We ran a 4K race on July 4th, and drank what seemed to be our weight in beer (Avery Brewing FTW) afterwards. It was hot and I burned a lot of calories running, so the beer was OK for my waistline. That’s my story and I’m sticking to it.

The next day Rich took me on a ‘hike’. I had no idea what he meant until it was too late to turn back. We did a 2,600’ elevation change (or something like that) and summited Bear Peak. We ended up hiking about 8.5 miles in a bit over 5 hours. At one point I told Rich I was good, about 150’ from the summit (facing a challenging climb). He let me know I wasn’t good, and I needed to keep going. I’m glad he did because it was both awesome and inspiring to get to the top.

Mike on Bear Peak

I’ve never really been the outdoorsy type, so this was way outside my comfort zone. But I pushed through. I got to the top, and as Rich told me would happen before the hike, everything became crystal clear. It was so peaceful. The climb made me appreciate how far I’ve come. I had a similar feeling when I crossed the starting line during my last half marathon. I reflected on how unlikely it was that I would be right there, right then. Unlikely according to both who I thought I was and what I thought I could achieve.

It turns out those limitations were in my own mind. Of my own making. And not real. So now I have been to the top of two different worlds, exploring and getting there via totally different paths. Those experiences provided totally different perspectives. All I know right now is that I don’t know. I don’t know what the future holds. I don’t know how many more hills I’ll climb or races I’ll run or businesses I’ll start or places I’ll live, or anything for that matter. But I do know it’s going to be very exciting and cool to find out.

–Mike

Photo credits: “One World Trade Center Observatory (5)” originally uploaded by Kai Brinker, and “Mike Selfie on top of Bear Peak”.


The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour to watch it. Your emails, alerts, and Twitter timeline will be there when you get back.


Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.


Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Threat Detection Evolution

Network-based Threat Detection

Network Security Gateway Evolution

Recently Published Papers


Incite 4 U

  1. It takes a data scientist to know one: Data science is hot, hot, hot. Especially in security, where the new hotness is analytics to detect space alien attackers. And the data scientists have the keys to find them. Of course, then you actually have to hire these folks. And it’s not like when I ran marketing teams, and knew the jobs of my team as well as they did. So if you’re not a math person, how do you hire a math person? The good news is that one of my favorite math people, Jay Jacobs (now of BitSight) has listed 5 things to think about when hiring a data scientist. His first suggestion is to give them data and let them do their stuff. Which makes a huge amount of sense. That’s what I did for every job I interviewed for. I either prepared a research report or presentation, or built a marketing plan. You also need to ask questions (even if you think they are dumb questions), understand what they’ve done, and see if they can communicate the value of their efforts in business terms. Jay’s last point is the most critical. Data scientists are kind of like unicorns. If you hold out for the perfect one, you will be looking for a long time. As in every emerging field, you need to balance substance and experience with intelligence and drive, because the function will change and you will need your hires to grow along with it. – MR

  2. Tortoise and Hare: Our own Dave Lewis’ recent post on Forbes – The Opportunity Presented By Shadow IT – mirrors a trend I am seeing with CISOs. Several CISOs I heard from during a recent panel said much the same thing. They had come to view rogue IT as an opportunity to learn. It showed them their users’ (their real customers’) pain points, and where resources should be allocated to address these issues. It showed the delta between IT-governed rollouts and rogue IT, and made very clear the cost differential between the two. Shadow IT showed where security controls went unnoticed, and which users fought or ignored/avoided ‘real’ IT altogether. Dave’s point that the rogue project put the company at risk is on the mark, but it should be clear that a lack of agility within IT – across all industries – is an issue which IT and operations teams need to work on. The status quo is not working. But that’s not news – the status quo has been broken for a long time. – AL

  3. Sucking less at security operations: When I’m doing a talk, I usually get big laughs when I state the obvious: most organizations suck at security ops. Of course the laughs are a bit forced: “Is he talking about me?” Odds are I am, because security ops, like consistent patch and configuration management, is hard. Hygiene is not sexy, but neither is flossing your teeth. Until you lose all your teeth, as my dentist constantly reminds me. SecurityWeek ran a good reminder of the challenges of patching consistently a while ago. But it’s worth revisiting, especially given that almost every major software company has some kind of patching process for their stuff. Of course, as we enter cloud-based reality, patching and ops take on different connotations (and we have a lot to say about that), but for now you need to continue paying attention to the security ops side of the house. Which is a reminder that never gets old, mostly because we as an industry still can’t seem to figure it out. – MR

  4. Bit Split Reduce: Homomorphic encryption essentially lets you do real work on data while it remains encrypted, including sorting and summing values. A recent Wired article, MIT’s Bitcoin-Inspired ‘Enigma’ Lets Computers Mine Encrypted Data, discusses a new take. We have seen many of these claims in the past, including many variants which force cryptographic compromises to enable computation. And we’ve seen the real thing too, but only in laboratory experiments – the processing overhead is about 100k times higher than normal data processing, so not feasible for normal usage. The MIT team’s approach sounds like a combination of the ‘bitsplitting’ storage strategies used by some cloud providers to obfuscate customer data, and big data style distributed processing. With a big data MapReduce function, they use the reduce part to arrange or filter data, protecting its integrity by assigning each node tiny data elements that – on their own – are meaningless. In the aggregate they can produce real results. But the real question is “Is this secure?” Unfortunately I have no clue from the white paper, because security issues are more likely to pop up in practical application, rather than in general concepts. That said, statements like “Thanks to some mathematical tricks the Enigma creators implemented” make me very nervous… so the jury is still out, and will remain so until we have something we can test. – AL

  5. It’s bad. Trust me. Ever the contrarian, Shack goes after the ‘valuations tank in the wake of a breach’ bogeyman. A key message in most security vendor pitches is that breaches are bad for market cap. But what if that’s not really the case? What if the data shows that over time a breach can actually be good for business, if only to shine a spotlight on broken processes and force the business to be much more strategic and effective about how they do things? Like most transformation catalysts, it really sucks at the time. Anyone who has lived through a breach response and the associated public black eye knows it sucks. But if that results in positive change and a stronger company at the end of the process, maybe it’s not the worst thing. Nah, never mind. That’s crazy talk. What would all the vendors talk about if they couldn’t scare you with FUD? They’d actually have to address the fact their products don’t help (for the most part). Oh, did I actually write that down? Oops. – MR

—Mike Rothman

EMV and the Changing Payment Space: the Basics

By Adrian Lane

This is the second post in our series on the “liability shift” proposed by EMVCo – the joint partnership of Visa, Mastercard, and Europay. Today we will cover the basics of what the shift is about, requirements for merchants, and what will happen to those who do not comply. But to aid understanding we will also go into a little detail about the payment providers working behind the scenes.

To set the stage, what exactly are merchants being asked to adopt? The EMV migration, or the EMV liability shift, or the EMV chip card mandate – pick your favorite marketing term – is geared toward US merchants who use payment terminals designed to work only with magnetic stripe cards. It requires them to adopt terminals capable of validating payment cards with embedded EMV compliant ‘smart’ chips. This rule goes into effect on October 1, 2015, and – a bit like my tardiness in drafting this research series – I expect many merchants to be a little late adopting the new standards.

Merchants are being asked to replace their old magstripe-only terminals with more advanced, and significantly more expensive, EMV chip compatible terminals. EMVCo has created three main rules to drive adoption:

  1. If an EMV ‘chipped’ card is used in a fraudulent transaction with one of the new EMV compliant terminals, just like today the merchant will not be liable.
  2. If a magnetic stripe card is used in a fraudulent transaction with one of the new EMV compliant terminals, just like today the merchant will not be liable.
  3. If a magnetic stripe card is used in a fraudulent transaction with one of the old magstripe-only terminals, the merchant – instead of the issuing bank – will be liable for the fraud.

That’s the gist of it: a merchant that uses an old magstripe terminal pays for any fraud. There are a few exceptions to the basic rules – for example the October date I noted above only applies to in-store terminals, and won’t apply to kiosks and automated systems like gas pumps until 2017.
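
Those three rules boil down to a simple two-input decision table, which the short Python sketch below makes explicit. The function is purely illustrative – card brands don’t publish liability logic as code – and the fourth case (chip card swiped at a magstripe-only terminal) is my assumption about how the shift plays out in practice, not something spelled out in the rules above.

```python
def fraud_liability(card: str, terminal: str) -> str:
    """Who absorbs a fraudulent transaction under the three rules above.

    card: 'chip' (EMV) or 'magstripe'
    terminal: 'emv' (chip-capable) or 'magstripe' (swipe-only)
    """
    if terminal == "emv":
        # Rules 1 and 2: a compliant terminal preserves the status quo,
        # so the issuing bank (not the merchant) remains liable.
        return "issuing bank"
    if card == "magstripe":
        # Rule 3: magstripe card at an old terminal -- after October 1,
        # 2015 the merchant picks up the tab.
        return "merchant"
    # Chip card at a magstripe-only terminal falls back to a swipe;
    # assumption: the liability shift treats this like rule 3 as well.
    return "merchant"

assert fraud_liability("magstripe", "emv") == "issuing bank"    # rule 2
assert fraud_liability("magstripe", "magstripe") == "merchant"  # rule 3
```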

So what’s all the fuss about? Why is this getting so much press? And why has there been so much pushback from merchants against adoption? Europe has been using these terminals for over a decade, and it seems like a straightforward calculation: projected fraud losses from card-present magstripe cards over some number of years vs. the cost of new terminals (and software and supporting systems). But it’s not quite that simple. Yes, cost and complexity are increased for merchants – and for the issuing banks when they send customers new ‘chipped’ credit cards. But it is not actually clear that merchants will be free of liability. I will go into reasons later in this series, but for now I can say that EMV does not secure the Primary Account Number, or PAN (the credit card number to you and me) sufficiently to protect merchants. It’s also not clear what data will be shared with merchants, and whether they can fully participate in affiliate programs and other advanced features of EMV. And finally, the effort to market security under threat of US federal regulation masks the real advantages for merchants and card brands.

But before I go into details, some background is in order. People within the payment industry who read this know it all, but most security professionals and IT practitioners – even those working for merchants – are not fully conversant with the payment ecosystem and how data flows. Further, it’s not useful for security to focus solely on chips in cards, when security comes into play in many other places in the payment ecosystem. Finally, it’s not easy to understand the liability shift without first understanding where liability might shift from. These things – liability and insecurity – go hand in hand, so it’s time to talk about the payment ecosystem, and some other areas where security comes into play.

When a customer swipes a card, it is not just the merchant who is involved in processing the transaction. There are potentially many different banks and service providers who help route the request and send money to the right places. And the merchant never contacts your bank – also known as the “issuing bank” – directly. When you swipe your card at the terminal, the merchant may well rely on a payment gateway to connect to their bank. In other cases the gateway may not link directly to the merchant’s bank; instead it may enlist a payment processor to handle transactions. The payment processor may be the merchant bank or a separate service provider. The processor collects funds from the customer’s bank and provides transaction approval. Here is a bit more detail on the major players, followed by a simplified sketch of how a transaction hops between them.

  • Issuing Bank: The issuer typically maintains customer relationships (and perhaps affinity branding) and issues credit cards. They offer affiliate branded payment cards, such as for charities. There are thousands of issuers worldwide. Big banks have multiple programs with many third parties, credit unions, small regional banks, etc. And just to complicate things, many ‘issuers’ outsource actual issuance to other firms. These third parties, some three hundred strong, are all certified by the card brands. Recently cost and data mining have been driving some card issuance back in-house. The banks are keenly aware of the value of customer data, and security concerns (costs) can make outsourcing less attractive. Historically most smart card issuance was outsourced because EMV was new and complicated, but advances in software and services have made it easier for issuing banks. But understand that multiple parties may be involved.
  • Payment Gateway: Basically a leased gateway linking a merchant to a merchant bank for payment processing. Their value is in maintaining networks and orchestrating process and communication. They check with the merchant bank to see whether the card is stolen or over its limit. They may check with anti-fraud detection software or services to validate transactions. Firms like PayJunction are both gateway and processor, and there are hundreds of Internet-only gateways/processors.
  • Payment Processor: A company appointed by a merchant to handle credit card transactions. It may be an acquiring bank or a designated service provider that deposits funds into merchant accounts. They help collect funds from issuers.
  • Acquiring Bank: They provide a form of capital to merchants by floating payments, then reconciling customer payments and accepting deposits on the back end. Many process credit and debit payments directly; others outsource that service to their own payment processor. They also accept credit card transactions from card issuing banks. They exchange funds with issuing banks on behalf of merchants. Basically they handle transaction authorization, routing, and settlement. The acquirer is really the merchant’s partner, and assumes the risk of merchant insolvency and non-payment.
  • Merchant Bank: The merchant’s bank. Usually the same as the acquiring bank.
  • Merchant Account: A contract between the merchant and the acquiring bank. The arrangement is actually a line of credit.
  • Card Brand: Visa, Mastercard, AmEx, and similar. Sometimes called an ‘association’.
  • ISO: Independent Sales Organizations for various banking relationships. They are not a card brand, but are vouched for by the brand as an official ‘associate’, and authorized to provide third-party support services for issuance, point-of-swipe devices, and acquiring functions. These firms are part of the association, usually with direct banking relationships.
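
To make that chain concrete, here is a deliberately simplified Python sketch of an authorization request hopping between the players above. Every class and method name is invented for illustration – real payment interfaces are far messier – but the routing order (terminal to gateway to processor to issuer) matches the description.

```python
class IssuingBank:
    """The customer's bank: issues the card, approves or declines."""
    def __init__(self, credit_lines):
        self.credit_lines = credit_lines      # toy map: PAN -> available credit
    def authorize(self, pan, amount):
        return self.credit_lines.get(pan, 0) >= amount

class PaymentProcessor:
    """Appointed to handle transactions; obtains the approval decision
    and collects funds from the issuer."""
    def __init__(self, issuer):
        self.issuer = issuer
    def process(self, pan, amount):
        return self.issuer.authorize(pan, amount)

class PaymentGateway:
    """Leased link from the merchant's terminal into processing;
    a real gateway would also run anti-fraud checks here."""
    def __init__(self, processor):
        self.processor = processor
    def swipe(self, pan, amount):
        return self.processor.process(pan, amount)

issuer = IssuingBank({"4111-toy-pan": 500.00})
gateway = PaymentGateway(PaymentProcessor(issuer))
print(gateway.swipe("4111-toy-pan", 42.50))   # True: approved
```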

These are the principal players. Our next post will cover data flow on the merchant side and talk about some security issues that persist despite EMV.

—Adrian Lane

Tuesday, July 14, 2015

Threat Detection Evolution: Quick Wins

By Mike Rothman

As we wrap up this series on Threat Detection Evolution, we’ll work through a quick scenario to illustrate how these concepts come together to improve your ability to detect attacks. Let’s assume you work for a mid-sized super-regional retailer with 75 stores, 6 distribution centers, and an HQ. Your situation may be a bit different, especially if you work in a massive enterprise, but the general concepts are the same.

Each of your locations is connected via an Internet-based VPN that works well. You’ve been gradually upgrading the perimeter network at HQ and within the distribution centers by implementing NGFW technology and turning on IPS on the devices. Each store has a low-end security gateway that provides separate networks for internal systems (requiring domain authentication) and customer Internet access. There are minimal IT staff and capabilities outside HQ. A technology lead is identified for each location, but they can barely tell you which lights are blinking on the boxes, so the entire environment is built to be remotely managed.

In terms of other controls, the big project over the past year has been deploying whitelisting on all fixed function devices in distribution centers and stores, including PoS systems and warehouse computers. This was a major undertaking to tune the environment so whitelisting did not break systems, but after a period of bumpiness the technology is working well. The high-profile retail attacks of 2014 freed up budget for the whitelisting project, but aside from that your security program is right out of the PCI-DSS playbook: simple logging, vulnerability scanning, IPS, and AV deployed to pass PCI assessment; but not much more.

Given the sheer number of breaches reported by retailer after retailer, you know that the fact you haven’t suffered a successful compromise is mostly good luck. Getting ahead of PoS attacks with whitelisting has helped, but you’ve been doing this too long to assume you are secure. You know the simple logging and vulnerability scanning you are doing can easily be evaded, so you decide it’s time to think more broadly about threat detection. But with so many different technologies and options, how do you get started? What do you do first?

Getting Started

The first step is always to leverage what you already have. The good news is that you’ve been logging and vulnerability scanning for years. The data isn’t particularly actionable, but it’s there. So you can start by aggregating it into a common place. Fortunately you don’t need to spend a ton of money to aggregate your security data. Maybe it’s a SIEM, or possibly an offering that aggregates your security data in the cloud. Either way you’ll start by putting all your security data in one place, getting rid of duplicate data, and normalizing your data sources, so you can start doing some analysis on a common dataset.
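
Whatever aggregation technology you pick, the normalization step looks roughly like this sketch: map each source’s fields onto one common schema, then drop exact duplicates. The field names here are assumptions for illustration, not any product’s actual schema.

```python
import hashlib

def normalize(event: dict, source: str) -> dict:
    """Map source-specific fields onto one common event schema."""
    if source == "firewall":
        return {"ts": event["time"], "host": event["src_ip"],
                "action": event["action"], "detail": event["rule"]}
    if source == "server":
        return {"ts": event["timestamp"], "host": event["hostname"],
                "action": event["event_type"], "detail": event["message"]}
    raise ValueError(f"unknown source: {source}")

def dedupe(events):
    """Drop events that are exact duplicates of ones already seen."""
    seen, unique = set(), []
    for e in events:
        key = hashlib.sha1(repr(sorted(e.items())).encode()).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(e)
    return unique
```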

Once you have your data in one place, you can start setting up alerts to detect common attack patterns in your data. The good news is that all the aggregation technologies (SIEM and cloud-based monitoring) offer options. Some capabilities are more sophisticated than others, but you’ll be able to get started with out-of-the-box capabilities. Even open source tools offer alerting rules to get you started. Additionally, security monitoring vendors invest significantly in research to define and optimize the rules that ship with their products.

One of the most straightforward attack patterns to look for involves privilege escalation after obvious reconnaissance. Yes, this is simple detection, but it illustrates the concept. Now that you have server and IPS logs in one place, you can look for increased network port scans (usually indicating reconnaissance) and then privilege escalation on a server on one of the networks being searched. This is a typical rule/policy that ships with a SIEM or security monitoring service. But you could just as easily build this into your system to get started. Odds are that once you start looking for these patterns you’ll find something. Let’s assume you don’t because you’ve done a good job so far on security fundamentals.
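
Here’s a hedged sketch of what such a rule might look like in code, assuming events normalized as above: flag any privilege escalation on a host that saw a burst of port probes within the previous hour. The thresholds and field names are placeholders you would tune.

```python
from collections import defaultdict

SCAN_THRESHOLD = 20      # distinct ports probed before we call it a scan
WINDOW_SECONDS = 3600    # look-back window between recon and escalation

def correlate(events):
    """events: time-ordered dicts with ts (epoch seconds), host,
    action ('port_probe' or 'priv_escalation'), and port (for probes)."""
    probes = defaultdict(list)   # host -> [(ts, port), ...]
    alerts = []
    for e in events:
        if e["action"] == "port_probe":
            probes[e["host"]].append((e["ts"], e["port"]))
        elif e["action"] == "priv_escalation":
            # Count distinct ports probed on this host within the window.
            recent = {port for ts, port in probes[e["host"]]
                      if e["ts"] - ts <= WINDOW_SECONDS}
            if len(recent) >= SCAN_THRESHOLD:
                alerts.append((e["host"], e["ts"]))
    return alerts
```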

Once you have worked through your first group of alerts, you can look for assets in your environment which you don’t know about. That entails either active or passive discovery of devices on the network. Start by scanning your entire address space to see what’s there. You probably shouldn’t do that during business hours, but a habit of checking consistently – perhaps weekly or monthly – is helpful. In between active scans you can also passively listen for network devices sending traffic, by either looking at network flow records or deploying a passive scanning capability specifically to look for new devices.

Let’s say you discover your development shop has been testing out private cloud technologies to make better use of hardware in the data center. The only reason you noticed was passive discovery of a new set of devices communicating with back-end datastores. Armed with this information, you can meet with that business leader to make sure they took proper precautions to securely deploy their systems.

Between alerts generated from new rules and dealing with the new technology initiative you didn’t know about, you feel pretty good about your new threat detection capability. But you’re still looking for stuff you already know you should look for. What really scares you is what you don’t know to look for.

More Advanced Detection

To look for activity you don’t know about, you need to first define normal for your environment. Traffic that is not ‘normal’ provides a good indicator of potential attack. Activity outliers are a good place to start because network traffic and transaction flows tend to be reasonably stable in most environments. So you start with anomaly detection by spending a week or so training your detection system, setting baselines for network traffic and system activity.

Once you start getting alerts based on anomalies, you will spend a bit of time refining thresholds and decreasing the noise you see from alerts. This tuning time may be irritating, but it’s a necessary evil to optimize the system and ensure your alerts identify activity you need to investigate. And it turns out to be a good thing you set up the baselines, because you were able to detect emerging adversary activity in a distribution center. The attackers got in by targeting a warehouse manager with a phishing message, and they were burrowing deeper into your environment when you saw strange traffic from that distribution center, targeting the Finance group to access payment information.
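
Conceptually, the baselining behind that alert looks something like the sketch below: learn per-host traffic norms during training, then flag samples that blow past the learned mean by a tunable number of standard deviations. This is a toy model, not any vendor’s actual analytics.

```python
from statistics import mean, stdev

class TrafficBaseline:
    def __init__(self, sigma=3.0):
        self.sigma = sigma        # tolerance: how many std-devs count as normal
        self.history = {}         # host -> list of bytes/hour samples

    def train(self, host, sample):
        self.history.setdefault(host, []).append(sample)

    def is_anomalous(self, host, sample):
        samples = self.history.get(host, [])
        if len(samples) < 24:     # not enough data to judge yet
            return False
        mu, sd = mean(samples), stdev(samples)
        return sample > mu + self.sigma * sd

baseline = TrafficBaseline()
for hour_bytes in [1.2e6, 0.9e6, 1.1e6] * 10:    # toy training data
    baseline.train("dc-whse-07", hour_bytes)
print(baseline.is_anomalous("dc-whse-07", 9.5e6))  # True: worth a look
```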

As you expected, there was malicious activity within your environment. You just didn’t have the optics to see it until you deployed your new detection capability. With the new detection system and some time wading through the initial alerts, you got a quick and substantial win from your investment.

Threat Intelligence

On the back of your high-profile win detecting attackers, you now want to start taking advantage of attacks you haven’t seen. That means integrating threat intelligence to benefit from the misfortune of others. You first need to figure out what external data sources make sense for your environment. Your detection/monitoring vendor offers an open source threat intelligence service, so that first decision was pretty easy. At least for initial experimenting, lower cost options are better.

Over time, as you refine your use of threat intel, it may make sense to integrate other commercially available data – especially relating to trading communities because adversaries often target companies in the same industry. But for now your initial vendor feed will do the trick. So you turn on the feed and start working through alerts. Again, this requires an investment of time to tune the alerts, but can yield specific results. Let’s say you are able to detect a traffic pattern typical of an emerging malware attack kit based on alerts from your IPS. Without those specific indicators, you wouldn’t have known that traffic was malicious.
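
Under the hood, consuming a feed like that is mostly indicator matching. A minimal sketch follows, using RFC 5737 documentation addresses as stand-ins for real indicators – actual feeds arrive as STIX, CSV, or vendor-specific JSON rather than hard-coded sets.

```python
MALICIOUS_IPS = {"203.0.113.7", "198.51.100.22"}   # from the vendor feed
MALICIOUS_DOMAINS = {"cdn-update-check.example"}

def match_indicators(events):
    """Yield events whose destination appears in the threat intel feed."""
    for e in events:
        if e.get("dst_ip") in MALICIOUS_IPS or \
           e.get("dst_domain") in MALICIOUS_DOMAINS:
            yield {**e, "alert": "threat-intel match"}
```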

Once you get comfortable with your vendor-supplied threat intel and have your system sufficiently tuned you can start thinking about other sources. Given your presence in the retail space, and the fact that you already sold senior management on the need to participate in the Retail Information Sharing and Analysis Center (ISAC), using their indicators is a logical next step.

Keep in mind that the objective for leveraging this external data is to start looking for attacks you don’t know exist because you haven’t seen them. Nothing is perfect, so you’ll want to also keep using out-of-the-box alerts and baselines on your monitoring systems. But if you can get ahead of the game a bit by looking for emerging attacks, you can shorten the window between attack and detection.

Taking Detection to the Next Level

The good news is that your new detection capability has shown value almost immediately. But as we discussed, it required significant tuning and demands considerable care and feeding over time. And you still face significant resource constraints, both at headquarters and in distribution centers and stores. So it makes sense to look for places where you can automate remediation.

Automation based on your evolved detection capability is about containing damage. So you want to get potentially compromised devices out of harm’s way as quickly as possible. You can quarantine devices as soon as they behave suspiciously. You can directly integrate your monitoring system with either network switches or some type of Network Access Control for this level of automation. Further, you could integrate with egress firewalls to block traffic to destinations with poor IP reputations and packets that look like command and control activity.
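
The containment logic itself is a thin policy layer over whatever switch, NAC, or firewall integration you have. In this sketch the nac and firewall objects, and their method names, are stand-ins I invented for illustration – they don’t correspond to any real product’s API.

```python
CONFIDENCE_FLOOR = 0.9   # only act automatically on alerts we strongly trust

def contain(alert, nac, firewall):
    """Quarantine a suspect device and cut off its suspicious egress.

    nac / firewall: adapters wrapping your real network controls;
    quarantine() and block_egress() are hypothetical method names.
    """
    if alert["confidence"] < CONFIDENCE_FLOOR:
        return "queued for human triage"       # trust comes before automation
    nac.quarantine(alert["host"])              # move device to isolated VLAN
    for dst in alert.get("c2_destinations", []):
        firewall.block_egress(dst)             # stop command-and-control traffic
    return "contained"
```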

The key to any automation is trust. You need to trust the automation before you can let it block traffic or quarantine devices. Obviously the downside to blocking legitimate traffic can be severe, so you first need to be comfortable with the validity of alerts, and then with your integration, before you are ready to actually block traffic or quarantine devices programmatically.

We suggest a slow road to automation, recognizing the need to both tune and refine your detection system, and to integrate it with active network controls. Of course automation’s potential is awesome. Imagine being able to see a device acting outside of normal parameters, take it off the network, start an investigation, and block any other traffic to destinations the suspect device was communicating to – all automatically. Yes, it takes time and sophistication to get there. But it’s possible today, and the technologies are maturing rapidly.

With that we wrap up our Threat Detection Evolution series. We explained the need for more advanced data collection and analytics, and to integrate external threat intelligence to improve time to detection for new attacks. Remember that detection is an ongoing process, and requires consistent tuning and optimization. But the investment can dramatically shorten the window between attack and detection, and that’s about the best you can do in today’s environment of advanced attackers and defenders limited in both skills and resources.

—Mike Rothman

Wednesday, July 01, 2015

Incite 7/1/2015: Explorers

By Mike Rothman

When I take a step back I see I am pretty lucky. I’ve seen a lot of very cool places. And experienced a lot of different cultures through my business travels. And now I’m at a point in life where I want to explore more. Not just do business hotels and see the sights from the front seat of a colleague’s car or taxi. I want to explore and see all the cool things this big world has to offer.

It hasn’t always been this way. For the first two decades of my career, I was so focused on getting to the next rung on the career ladder that I forgot to take in the sights. And forget about smelling the roses. That would take time away from my plans for world domination. In hindsight that was ridiculous. I’m certainly not going to judge others who still strive for world domination, but that does not interest me any more.

I’m also at a point in life where my kids are growing up, and I only have a few more years to show them what I’ve learned is important to me. They’ll need to figure out what’s important to them, but in the meantime I have a chance to instill a love of exploration. An appreciation of cultures. And a yearning to see and experience the world. Not from the perspective of their smartphone screen, but by getting out there and experiencing life.

Dora is an explorer

XX1 left for a teen tour last Saturday. Over the next month she’ll see a huge number of very cool things in the Western part of the US. The itinerary is fantastic, and made me wonder if I could take a month off to tag along. It’s not cheap and I’m very fortunate to be able to provide her with that opportunity. All I can do is hope that she becomes an explorer, and explores throughout her life. I have a cousin who just graduated high school. He’s going to do two years of undergrad in Europe to learn international relations – not in a classroom on a sheltered US campus (though there will be some of that), but out in the world. He’s also fortunate and has already seen some parts of the world, and he’s going to see a lot more over the next four years. It’s very exciting.

You can bet I’ll be making at least two trips over there so we can explore Europe together. And no, we aren’t going to do backpacks and hostels. This boy likes hotels and nice meals.

Of course global exploring isn’t for everyone. But it’s important to me, and I’m going to try my damnedest to impart that to my kids. But I have multiple goals. First, I think individuals who see different cultures and different ways of thinking are less likely to judge people with different views. Every day we see the hazards of judgmental people who can’t understand other points of view and think the answer is violence and negativity.

But it’s also clear that we move in a global business environment. Which means to prosper they will need to understand different cultures and appreciate different ways of doing things. It turns out the only way to really gain those skills is to get out there and explore.

Coolest of all is the fact that we all need travel buddies. I can’t wait for the days when I explore with my kids – not as a parent/child thing, but as friends going to check out cool places.

–Mike

Photo credit: “Dora the Explorer” originally uploaded by Hakan Dahlstroem


The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour to watch it. Your emails, alerts, and Twitter timeline will be there when you get back.


Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.


Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Threat Detection Evolution

Network-based Threat Detection

Applied Threat Intelligence

Network Security Gateway Evolution

Recently Published Papers


Incite 4 U

  1. Polishing the crystal ball: Justin Somaini offers an interesting perspective on The Future of Security Solutions. He highlights a lot of disruptive forces poised to fundamentally change how security happens over the next couple of years. To make the changes somewhat tangible and less overwhelming, Justin breaks the security world into a few buckets: Network Controls Management, Monitoring and Threat Response, Software Development, Application Management, Device Management, and Risk Management/GRC. Those buckets are as good as any others. We could quibble a bit about where the computing stack resides, which is really about the data. But he covers a lot of the concepts we published in our own Future of Security research. Suffice it to say, it really makes no difference whose version of the future world you believe, because we will all be wrong somehow. Just understand that things are changing for security folks, and you’ll either go headlong into the change or get run over. – MR

  2. Less bad: Bruce Schneier offered a personal look into his selection of full disk encryption options for Windows machines. Surprised he didn’t write his own? Don’t be. Design principles and implementation details make this a hard problem to simplify, and that’s what most users need. He calls his selection “the least bad option”, but honestly it’s noteworthy that the industry has (mostly) progressed past some kid fresh out of school forming a new company based on an algorithm he cobbled together during his graduate studies. Historically you couldn’t audit this superduper new encryption code, because it was someone’s intellectual property and might compromise security if anyone else could see it. The good news is that most of you will be fine with any of Bruce’s options, because you just need to make sure the contents of your drive can’t be copied by whoever steals your laptop. As long as you’re not worried about governments breaking into your stuff, you’re good. If you are worried about governments, then you understand how hard it is to defend against an adversary with vast resources, and why “the least bad option” is really the only option for you. – AL

  3. Due care and the profit motive: Given the breach du jour we seem to read about every day, Trey Ford on the Rapid7 blog reiterates a reasonable question he heard at a recent convention from a government employee: “How do you build a standard of due care?” The Feds think putting Mudge in charge of a CyberUL initiative is a good place to start. I can’t disagree – yet. But I still believe we (as an industry) cannot legislate our way out of the issues of crap security and data protection. Trey mentions the need for information sharing (an NTSB of sorts for breaches) and cyberinsurance underwriting based on data instead of voodoo. I agree on both counts, but add that we need a profit driver to focus the innovation on options that make sense for enterprises, large and small. NIST puts out a bunch of great stuff, but it’s not always relevant to everyone. But if they had to pay their own way, Mr. Market says they’d figure out something that works for a large swath of businesses. Or they’d go away. We have threat intel as a business, and have always talked about the need for metrics/benchmarking businesses to help organizations know how they compare to others, and to optimize their limited resources accordingly. Needing to generate money to keep the lights on tends to help organizations narrow their efforts down to what matters, which legislation doesn’t. – MR

  4. The failure of documentation: I had a peer to peer (P2P) session at the RSA Conference this year on moving security into the Agile development process. But that is not what happened – instead security played a small part, and general process failures a much larger one. In fact it was a room filled mostly with people who had recently tried to move to Agile, and were failing miserably. The number one complaint? “How do we handle documentation?” QA, design, and all the other groups demand their specifications. I stepped on my instinct to say “You’re doing it wrong” – documentation is one of the things you are striving to get rid of, but a lack of agility across the rest of the company trips up many Agile efforts. A handful of people in the room had adopted continuous integration and continuous deployment, which offer one or more solutions to the group’s problems. I am not saying all problems are solved by DevOps – just that the common failure modes in that P2P discussion can be traced back to the silos we created in the days of waterfall, which need to be broken up for Agile processes to thrive. Darknet’s discussion on Agile Security raises the same concerns, and reached a similar conclusion. Security – and the rest of the team for that matter – needs to be better integrated with development. Which we have known for a long time. – AL

  5. Bootstrapping the IR report: Too many incident response reports are pretty short. Slide 1: We got owned. Slide 2: Please don’t fire me. Ugh. Okay, maybe not quite that short, but it’s not like the typical practitioner has models and guides to help document an incident – and, more importantly, to learn from what happened. So thank Lenny Zeltser, who posted a template which combines a bunch of threat, intrusion, and response models into a somewhat coherent whole. It is obviously valuable to have a template for documentation, and you can refine the pieces that work for you after a response or ten. Additionally you can use his template to guide your response if you don’t have an established incident response process. Which is really the first thing you should create. But failing that, Lenny’s template can help you understand the information you should be gathering and its context. – MR

—Mike Rothman

Tuesday, June 30, 2015

New Series: EMV, Tokenization, and the Changing Payment Space

By Adrian Lane

October 1st, 2015, is the deadline for merchants to upgrade “Point of Sale” and “Point of Swipe” terminals to recommended EMV compliant systems. To quote Wikipedia, “EMV (Europay MasterCard Visa) is a technical standard for smart payment cards and for payment terminals and automated teller machines which can accept them.” These new terminals can validate an EMV specific chip in a customer’s credit card on swipe, or validate a secure element in a mobile device when it is scanned by a terminal. The press is calling this transition “The EMV Liability Shift” because merchants who do not adopt the new standard for payment terminals are being told that they – not banks – will be responsible for fraudulent transactions. There are many possible reasons for this push.

But why should you care? I know some of you don’t care – or at least don’t think you should. Maybe your job does not involve payments, or perhaps your company doesn’t have payment terminals, or you could be a merchant who only processes “card not present” transactions. But the reality is that mobile payments and their supporting infrastructure will be a key security battleground in the coming years.

Talking about the EMV shift and payment security is difficult; there is a lot of confusion about what this shift means, what security is really being delivered, and the real benefits for merchants. Some of the confusion stems from the press focusing on value statement marketing by card brands, rather than digging into what these specifications and rollouts really involve. Stated another way, the marketed consumer value seldom matches the business intent driving the effort. So we are kicking off this new research series to cover the EMV shift, its impact on security and operations for merchants, and what they need to do beyond the specifications for security and business continuity – as part of the shift and beyond.

Every research paper we write at Securosis has the core goal of helping security practitioners get their jobs done. It’s what we do. And that’s usually a clear task when we are talking about how to deploy DLP, what DAM can and cannot do, or how to get the most out of your SIEM platform. With this series, it’s more difficult. First, payment terminals are not security appliances, but transaction processing devices which depend on security to work properly. The irony is that – from the outside – technologies that appear security-focused are only partially related to security. They are marketed as security solutions, but really intended to solve business problems or maintain competitive advantages. Second, the ecosystem is highly complex, with many different companies providing services along the chain, each having access to payment information. Third, we will discuss some security issues you probably haven’t considered – perhaps in the news or on the horizon, but likely not yet fully in your sphere of influence. Finally, many of the most interesting facets of this research, including details we needed to collect so we could write this series, are totally off the record. We will do our best to provide insights into issues merchants and payment service providers are dealing with behind the scenes (without specifically describing the scenarios that raised the issues) to help you make decisions on payment deployment options.

To amass sufficient background for this series we have spoken with merchants (both large and mid-sized), merchant banks, issuing banks, payment terminal manufacturers, payment gateway providers, card manufacturers, payment security specialists, and payment security providers. Each stakeholder has a very different view of the payment world and how they want it to work. We remain focused on helping end users get their (security) jobs done, but some of this research is background to help you understand how the pieces all fit together – and just as importantly, the business issues driving these changes.

  1. The Stated Goals: We will set the stage by explaining what EMV is, and what the card brands are demanding of merchants. We will discuss how EMV and “smart card” technologies have changed the threat landscape in Europe and other parts of the world, and the card brands’ vision for the US. This is the least interesting part of the story, but it is necessary to understand the differences between what is being requested and what is being required – between security benefits and other things marketed as security benefits.
  2. The Landscape: We will sketch out the complicated payment landscape and where the major players fit. We do not expect readers to know the difference between an issuing bank and a merchant bank, so we will briefly explain the major players (merchants, gateways, issuers, acquirers, processors, and affiliates); showing where data, tokens, and other encrypted bits move. We will introduce each party along with their role. Where appropriate we will share public viewpoints on how each player would like access to consumer and payment data for various business functions.
  3. The Great EMV Migration: We will discuss the EMV-mandated requirements in some detail, the security problems they are intended to address, and how merchants should comply. We will examine some of the issues surrounding adoption, along with how deployment choices affect security and liability. We will also assess concerns over Chip & PIN vs. Chip & Signature, and why merchants and consumers should care.
  4. The P2P Encryption Conundrum: We will consider P2P encryption and the theory behind it. We will then examine the difference between theory and practice, specifically between acquirer-based encryption solutions and P2P encryption, and the different issues when the endpoint is the gateway vs. the processor vs. the acquirer. We will explain why P2P is not part of the EMV mandate, and show how the models create weak links in the chain, possibly creating liability for merchants, and how this creates opportunities for fraud and grey areas of responsibility.
  5. The Tokens: Tokenization is a reasonably new subject in security circles, but it has demonstrated value for credit card (PAN) data security. With recent mobile payment solutions, we now see new types of tokens used to obfuscate account numbers and other pieces of financial data. We will briefly compare tokenization in merchant vs. banking systems, show how PAN data enters the system, and how it is replaced with tokens (a minimal sketch of the vault model follows this list). There are three main deployment models: on-premise, Tokenization as a Service, and third-party interception. We will explain how this improves security and helps reduce compliance burden and liability. We will review the impact on analysis and anti-fraud measures. Tokenization also impacts merchant operations, repayment, and dispute resolution, and has produced services to address these requirements. We will review how Apple Pay brought tokenization to the attention of consumers, and largely blind-sided the industry. We will discuss the consumer side of payment systems, as well as how the model works, how tokens are created, where PAN data is stored, and how it fits in with merchant systems. This alternative approach brings new wrinkles to payment tokenization. The new mobile platforms and applications bring new risks to merchants, which must be considered when rolling out mobile payment solutions.
  6. Mobile Payments: We need to briefly discuss the principal security components for mobile payments, and perhaps just as importantly the operational adjustments needed to support mobile payments. We will review the Apple Pay and Starbucks mobile payment hacks, and the need for a fresh look at non-technical issues.
  7. Who Is to Blame? We will briefly address the liability shift and what happens when everything goes wrong, contrasting EMV against non-EMV deployments. Card brands offer a very succinct message to merchants: adopt EMV or accept liability. This is intended as a simple binary choice, but liability is not always that clear. We will explain the merchant liability waiver, how deployment choices help determine who is really responsible, and how liability is still an open question for some.
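
As a preview of the tokenization discussion in item 5 above, here is a minimal sketch of the basic vault model: swap the PAN for a random surrogate at entry, and keep the real value only in a hardened mapping. Real token services add format preservation, access controls, and auditing, none of which appear in this toy.

```python
import secrets

class TokenVault:
    """Toy PAN vault: random surrogate out, real value locked away."""
    def __init__(self):
        self._vault = {}                  # token -> PAN (the protected store)

    def tokenize(self, pan: str) -> str:
        token = secrets.token_hex(8)      # random, so it reveals nothing
        self._vault[token] = pan
        return token                      # safe to store in merchant systems

    def detokenize(self, token: str) -> str:
        return self._vault[token]         # only the vault can reverse it

vault = TokenVault()
t = vault.tokenize("4111111111111111")
print(t)                     # e.g. '9f86d081884c7d65' -- useless to a thief
print(vault.detokenize(t))   # original PAN, available only via the vault
```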

This research project was originally intended to be a short, focused look at EMV and the need for point-to-point encryption, but the investigation has produced some of our most interesting research over the past several years, so we will cover various related areas. Stay tuned for our next post, which will cover EMV’s goals.

—Adrian Lane

Monday, June 29, 2015

Threat Detection: Analysis

By Mike Rothman

As discussed in our last post, evolved threat detection’s first step is gathering internal and external security data. Once you have the data aggregated you need to analyze it to look for indications that you have compromised devices and/or malicious activity within your organization.

Know Your Assets

You know the old business adage: you can’t manage it if you can’t see it. In security monitoring parlance, you need to discover new assets – and changes to existing ones – to monitor them, and ultimately to figure out when a device has been compromised. A key aspect to threat detection remains discovery. The enemy of the security professional is surprise, so it is essential to always be aware of network topology and devices on the network. All devices, especially those pesky rogue wireless access points and other mobile devices, provide attack surface to adversaries.

How can you make sure you are continuously discovering these devices? You scan your address space. Of course there is active scanning, but that runs periodically. To fill in between active scans, passive scanning watches network traffic streaming by to identify devices you haven’t seen or which have changed. Once a device is identified passively, you can launch an active scan to figure out what it’s doing (and whether it is legitimate). Don’t forget to discover your entire address space – which means both IPv4 and IPv6.
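
A toy sketch of combining the two approaches: passively note hosts appearing in flow records, then actively probe anything not already in inventory. The flow records here are hard-coded stand-ins for your collector’s output, and a real implementation would lean on a proper scanner such as nmap rather than raw connect attempts.

```python
import socket

known_hosts = {"10.0.1.5"}   # existing inventory

# Stand-in for your flow collector's output.
flow_records = [{"src_ip": "10.0.4.17", "dst_ip": "10.0.1.5"}]

def passive_discovery(records):
    """Return hosts seen in traffic that aren't in inventory yet."""
    seen = {r["src_ip"] for r in records} | {r["dst_ip"] for r in records}
    return seen - known_hosts

def active_probe(ip, ports=(22, 80, 443, 3389)):
    """Cheap TCP connect scan to see what a newly found host runs."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)
            if s.connect_ex((ip, port)) == 0:
                open_ports.append(port)
    return open_ports

for host in passive_discovery(flow_records):
    print(host, active_probe(host))   # triage the newcomer, then inventory it
    known_hosts.add(host)
```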

Most discovery efforts focus on PCs and servers on the internal network. But that may not be enough anymore; it is typically endpoints that end up compromised, so you might want to discover both full computers and mobile devices. Finally, you will need to figure out how to discover assets in your cloud computing environments. This requires integration with cloud consoles to ensure you know about new cloud-based resources and can monitor them appropriately.

After you have a handle on the devices within your environment, the next step is to classify them. We recommend a simple classification, involving roughly 4 groupings. The most important bucket includes critical devices with access to private information and/or valuable intellectual property. Next look for devices behaving maliciously. These devices may not have sensitive information, but adversaries can move laterally from compromised devices to critical devices. Then you have dormant devices, which may have connected to a command and control infrastructure but aren’t currently doing anything malicious. Finally, there are all the other devices which aren’t doing anything suspicious – which you likely don’t have time to worry about. We introduced this categorization in the Network-based Threat Detection series – check it out if you want more detail.
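
Expressed as code, the four groupings are just a priority ordering. A minimal sketch follows, where the device flags are assumed inputs from your inventory and monitoring systems rather than anything a real product exposes.

```python
def classify(device: dict) -> str:
    """Bucket a device per the four groupings above, highest priority first.

    device: dict of boolean flags supplied by inventory/monitoring --
    has_sensitive_data, acting_maliciously, contacted_c2.
    """
    if device.get("has_sensitive_data"):
        return "critical"         # access to private data or valuable IP
    if device.get("acting_maliciously"):
        return "malicious"        # active bad behavior; lateral-movement risk
    if device.get("contacted_c2"):
        return "dormant"          # likely compromised but currently quiet
    return "everything else"      # not worth scarce analyst time today

print(classify({"contacted_c2": True}))   # dormant
```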

Finally, we continue to harp on the criticality of a consistent process for threat detection. This includes discovery and classification. As with data collection, your technology environment is dynamic, so what you saw 10 minutes ago will have changed by 20 minutes in the future – or sooner. You need a strong process to ensure you always understand what is happening in your environment.

The C Word

Correlation has always been a challenge for security folks. It’s not because the math doesn’t work. Math works just fine. Event correlation has been a challenge because you needed to know what to look for at a very granular level. Given the kinds of attacks and advanced adversaries many organizations face, you cannot afford to count on knowing what’s coming, so it’s hard to find new and innovative attacks via traditional correlation. This has led to generally poor perceptions of SIEMs and IDS/IPS.

But that doesn’t mean correlation is useless for security. Quite the opposite. Looking for common attributes, and linking events together into meaningful models of possible attacks, provides a practical way to investigate security events. And you don’t want to succumb to the same attacks over and over again, so it is still important to look for indicators of attacks that have been used against you. Even better if you can detect indicators reported by other organizations, via threat intelligence, and avoid those attacks entirely.

Additionally you can (and should) stage out a number of reasonable attack patterns via threat modeling to look for common attacks. In fact, your vendor or service provider’s research team has likely built in some of these common patterns to kickstart your efforts at building out correlation rules, based on their research. These research teams also keep their correlation rules current, based on what they see in the wild.

Of course you can never know all possible attacks. So you also need to apply behavioral and other advanced analytical techniques to catch attacks you have not seen.

Looking for Outliers

Technology systems have typical activity patterns. Whether network traffic, log events, transactions, or any other kind of data source, you can establish an activity profile for how systems normally behave. Once the profile is established you look for anomalous activity, or outliers, that may represent malicious activity. These outliers could be anything, from any data source you collect.

With a massive trove of data, you can take advantage of advanced “Big Data” analytics (no, we don’t like that overly vague term). New technologies can sift through huge amounts of data to find abnormal activity patterns. You need an iterative process to refine thresholds and baselines over time. Yes, that means ongoing care and feeding of your security analytics. Activity evolves over time, so today’s normal might be anomalous in a month.
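
For the curious, one common trick for flagging outliers at scale is a modified z-score built on the median and MAD rather than the mean and standard deviation, so extreme values can’t drag the baseline along with them. A sketch, not a substitute for a real analytics pipeline:

```python
from statistics import median

def robust_outliers(samples, threshold=3.5):
    """Flag values whose modified z-score exceeds the threshold.

    Uses median / median-absolute-deviation instead of mean / stdev,
    so a handful of extreme values can't skew the baseline itself.
    """
    med = median(samples)
    mad = median(abs(x - med) for x in samples) or 1e-9  # avoid divide-by-zero
    return [x for x in samples
            if abs(0.6745 * (x - med) / mad) > threshold]

logins_per_hour = [12, 9, 14, 11, 10, 13, 8, 11, 240]   # one obvious spike
print(robust_outliers(logins_per_hour))                  # [240]
```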

Setting up these profiles and maintaining the analytics typically requires advanced skills. The new term for these professionals is data scientists. Yes, it’s a shiny term, and practitioners are expensive. But a key aspect of detecting threats is looking for outliers, and that requires data scientists, so you’ll need to pay up. Just ensure you have sufficient resources to investigate alerts coming from your analytics engine, because if you aren’t staffed to triage and validate alerts, you waste the benefit of earlier threat detection.

Alternatively, organizations without these sophisticated internal resources should consider allowing a vendor or service provider to update and tune their correlation rules and analytics for detection. This is especially helpful as organizations embrace more advanced analytics without internal data scientists to run the math.

Visualization and Drill down

Given the challenges of finding skilled resources for triage and validation, you’ll need to supplement internal skills with technology-accelerated functions. That means better visualization, and a built-in workflow to validate and triage alerts. You want a straightforward graphical metaphor to help categorize and prioritize alerts, and then a way to dig into an alert to really understand what is happening and identify root cause.

The only way to get a feel for whether a visual metaphor will work for you is to actually use it. That’s why a proof of concept (PoC) is so important when looking at detection technologies and services. You’ll be able to pump some of your data into the tool, generate alerts, and validate them as you would in a production deployment. Even better, you’ll have skilled resources from the vendor or channel partner to help stand up the system, perform initial configuration, and work through some alerts. Take advantage of these resources to kickstart your efforts.

Integration

Standalone analytics can work, especially for very specialized use cases such as large financial institutions addressing the insider threat, but we believe a more generic detection platform can make a significant impact in resource-constrained environments. Not having to perform manual triage and validation of issues can save a ton of time and supplement your internal skill sets, especially if you leverage a vendor’s security research and/or threat intelligence services.

So another key criterion for evolving threat detection is flexible integration with additional security data sources, emerging analytic techniques, advanced visualization engines, and operational workflow tools. Over time we expect the threat detection capability to morph into the core security monitoring platform: collecting internal security data, absorbing threat intelligence from a number of external sources, providing analytics to detect attacks, and ultimately sending information on to operational systems and controls to change the environment.

Next we will wrap up this series with a Quick Wins scenario, presenting this theory in the context of an attack to see how evolved threat detection works in practice.

—Mike Rothman

Friday, June 19, 2015

Summary: I Am Now a Security Risk

By Rich

Rich here,

Yep, it looks very likely my personal data is now in the hands of China, or someone pretending to be China, or someone who wants it to look like China. While I can’t go into details, as many of you know I’ve done things with the federal government related to my rescue work. It isn’t secret or anything, but I never feel comfortable talking specifics because it’s part-time and I’m not authorized to represent any agency.

I haven’t been directly notified, but I have to assume that any of my records OPM had, someone… else… has. To be honest, based on what details have come out, I’d be surprised if it wasn’t multiple someone elses – this level of nation-state espionage certainly isn’t limited to any one country.

Now, on the upside, if I lose my SSN, I have it backed up overseas. Heck, I’m really bad at keeping copies of all my forms, which I seem to have to resubmit every few years, so hopefully whoever took them will set up a help desk I can call to request copies. I’d pay not to have to redo that stuff all over.

Like many of you, my data has been breached multiple times. The worst so far was the student health service at the University of Colorado, because I know my SSN and student medical records were in that one (mostly sprained ankles and a bad knee, if you were wondering – nothing exciting). That one didn’t seem to go anywhere, but the OPM breach is more serious. There is a lot more info than my SSN in there, including things like my mother’s maiden name.

This will hang over my head for the rest of my life. Long beyond the 18 months of credit monitoring I may or may not receive. I’m not worried about a foreign nation mucking with my credit, but they may well have enough to compromise my credentials for a host of services. Not by phishing me, but by walking up the long chain of identity and interconnected services until they can line up the one they want.

I am now officially a security risk for any organization I work with. Even mine.

And now on to the Summary…

We are deep into the summer, with large amounts of personal and professional travel, so this week’s will be a little short – and you probably already noticed we’ve been a bit inconsistent. Hey, we have lives, ya know!

Webcasts, Podcasts, Outside Writing, and Conferences

Securosis Posts

Research Reports and Presentations

Top News and Posts

—Rich

Threat Detection Evolution: Data Collection

By Mike Rothman

The first post in this series set the stage for the evolution of threat detection. Now that we’ve made the case for why detection must evolve, let’s work through the mechanics of what that actually means. It comes down to two functions: security data collection, and analytics of the collected data. First we’ll go through what data is helpful and where it should come from.

Threat detection requires two main types of security data. The first is internal data, security data collected from your devices and other assets within your control. It’s the stuff the PCI-DSS has been telling you to collect for years. Second is external data, more commonly known as threat intelligence. But here’s the rub: there is no useful intelligence in external threat data without context for how that data relates to your organization. But let’s not put the cart before the horse. We need to understand what security data we have before worrying about external data.

Internal Data

You’ve likely heard a lot about continuous monitoring because it is such a shiny and attractive term to security types. The problem we described in Vulnerability Management Evolution is that ‘continuous’ can have a bunch of different definitions, depending on who you are talking to. We have a rather constructionist view (meaning, look at the dictionary) and figure the term means “without cessation.” But in many cases, monitoring assets continually doesn’t really add much value over regular and reliable daily monitoring.

So we prefer consistent monitoring of internal resources. That may mean truly continuous, for high-profile assets at great risk of compromise. Or possibly every week for devices/servers that don’t change much and don’t access high-value data. But the key here is to be consistent about when you collect data from resources, and to ensure the data is reliable.

There are many data sources you might collect from for detection, including:

  • Logs: The good news is that pretty much all your technology assets generate logs in some way, shape, or form. Whether it’s a security or network device, a server, an endpoint, or even mobile. Odds are you can’t manage to collect data from everything, so you’ll need to choose which devices to monitor, but pretty much all devices generate log data.
  • Vulnerability Data: When trying to detect a potential issue, knowing which devices are vulnerable to what can be important for narrowing down your search. If you know a certain attack targets a certain vulnerability, and you only have a handful of devices that haven’t been patched to address the vulnerability, you know where to look.
  • Configuration Data: Configuration data yields similar information to vulnerability data for providing context to understand whether a device could be exploited by a specific attack.
  • File Integrity: File integrity monitoring provides important information for figuring out which key files have changed. If a system file has been tampered with outside of an authorized change, it may indicate nefarious activity and should be checked out.
  • Network Flows: Network flow data can identify patterns of typical (normal) network activity, which enables you to look for patterns that aren’t exactly normal and could represent reconnaissance, lateral movement, or even exfiltration.

Once you decide what data to collect, you have to figure out where to collect it from, and how much to gather. This involves selecting logical collection points and deciding where to aggregate the data, which depends on the architecture of your technology stack. Many organizations opt for a centralized aggregation point to facilitate end-to-end analysis, but that is contingent on the size of the organization. Large enterprises may not be able to handle the scale of collecting everything in one place, and should consider some kind of hierarchical collection/aggregation strategy, where data is stored and analyzed locally and a subset is sent upstream for central analysis.
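
Here is a rough sketch of that hierarchical approach, with a made-up severity threshold: each local collector stores everything at full fidelity, but forwards only the higher-severity subset upstream for central analysis.

```python
SEVERITY_FLOOR = 7  # hypothetical threshold for upstream forwarding

def local_collect(events, local_store, upstream_queue):
    """Keep full-fidelity data locally; forward a subset for central analysis."""
    for event in events:
        local_store.append(event)                # everything stays local
        if event["severity"] >= SEVERITY_FLOOR:  # only the subset goes upstream
            upstream_queue.append(event)

local_store, upstream = [], []
local_collect(
    [{"src": "10.1.1.5", "severity": 3},
     {"src": "10.1.1.9", "severity": 9}],
    local_store,
    upstream,
)
print(len(local_store), len(upstream))  # 2 1
```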

Finally, we need to mention the role of the cloud in collection and aggregation, because almost everything is being offered either in the cloud or as a Service nowadays. The reality is that cloud-based aggregation and analysis depend on a few things. The first is the amount of data. Moving logs or flow records is not a big deal because they are pretty small and highly compressible. Moving network packets is a much larger endeavor, and hard to shift to a cloud-based service. The other key determinant is data sensitivity – some organizations are not comfortable with their key security data outside their control in someone else’s data center/service. That’s an organizational and cultural issue, but we’ve seen a much greater level of comfort with cloud-based log aggregation over the past year, and expect it to become far more commonplace inside a 2-year planning horizon.

The other key aspect of internal data collection is integration and normalization of the data. Different data sources have different data formats, which creates a need to normalize data to compare datasets. That involves compromise in terms of granularity of common data formats, and can favor an integrated approach where all data sources are already integrated into a common security data store. Then you (as the practitioner) don’t really need to worry about making all those compromises – instead you can bet that your vendor or service provider has already done the work.
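
If you do take on normalization yourself, the compromise looks something like this sketch: map each source’s native fields onto a minimal common schema, accepting that you lose source-specific detail along the way. The log formats and field names here are invented for illustration.

```python
import json

def normalize_firewall(line):
    """Parse a hypothetical 'timestamp|src|dst|action' firewall log line."""
    ts, src, dst, action = line.split("|")
    return {"time": ts, "source_ip": src, "dest_ip": dst, "event": action}

def normalize_endpoint(raw):
    """Flatten a hypothetical JSON endpoint event onto the same schema."""
    record = json.loads(raw)
    return {
        "time": record["@timestamp"],
        "source_ip": record.get("host_ip"),
        "dest_ip": None,              # endpoint events have no destination --
        "event": record["activity"],  # one of the granularity compromises
    }

events = [
    normalize_firewall("2015-06-19T10:00:00Z|10.0.0.5|203.0.113.7|deny"),
    normalize_endpoint('{"@timestamp": "2015-06-19T10:00:02Z", '
                       '"host_ip": "10.0.0.9", "activity": "process_start"}'),
]
# Both sources can now be searched, sorted, and correlated on common keys.
```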

Also consider the availability of resources for dealing with these disparate data sources. The key issue, mentioned in the last post, remains the skills shortage, so starting a data aggregation/collection effort that depends on skilled resources to manage normalization and integration of data may not be the best idea. This doesn’t really have much to do with the size of the organization – it’s about the sophistication of staff. Security data integration is an advanced function that can be beyond even large organizations with less mature security efforts.

Ultimately your goal is visibility into your entire technology infrastructure. An end-to-end view of what’s happening in your environment, wherever your data is, gives you a basis for evolving your detection capabilities.

External Data

We have published a lot of research on threat intel to date, most recently a series on Applied Threat Intelligence, which summarized the three main use cases we see for external data.

There are plenty of sources of external data nowadays. The main types are:

  • Commercial integrated: It seems every security vendor has a research group providing some type of intelligence. This data is usually very tightly integrated into the product or service you buy from the vendor. There may be a separate charge for intelligence, beyond the base cost of the product or service.
  • Commercial standalone: Standalone threat intel is an emerging security market. These vendors typically offer an aggregation platform to collect external data and integrate it into your controls and monitoring systems. Some also gather industry-specific data because attacks tend to cluster in specific industries.
  • ISAC: Information Sharing and Analysis Centers are industry-specific organizations that aggregate data from an industry and share it among members. The best known ISAC is for the financial industry, although many other industry associations are spinning up their own ISACs as well.
  • OSINT: Finally there is open source intel, publicly available sources for things like malware samples and IP reputation, which can be queried and/or have intel integrated directly into user systems.

How does this external data play into an evolved threat detection capability? As mentioned above, external data without context isn’t very helpful. You don’t know which of the alerts or notifications apply to your environment, so you just create a lot of extra work to figure it out. And the idea is not to create more work.

How can you provide that context? Use the external threat data to look for specific instances of an attack. As we described in Threat Intelligence and Security Monitoring, you can use indicators from other attacks to pinpoint that activity in your network, even if you’ve never seen the attack before. Historically you were restricted to only alerting on conditions/correlations you knew about, so this is a big deal.

To use a more tangible example, let’s refer back to the concept of retrospection. Say you didn’t yet know about a new attack (like Duqu 2.0), and then received a set of indicators from a threat intelligence provider. You could then look for those indicators within your network. Even if you don’t find that specific attack immediately, you could set your monitoring system (typically a SIEM or an IPS) to watch for those indicators. Basically you can jump time, looking for attacks that haven’t happened to you – yet.
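
A minimal sketch of retrospection, assuming you have already aggregated connection logs and just received a set of indicators (the IPs and hash below are made up):

```python
# Hypothetical indicators received from a threat intelligence provider
indicators = {
    "ips": {"198.51.100.23", "203.0.113.44"},
    "file_hashes": {"d41d8cd98f00b204e9800998ecf8427e"},
}

def retrospective_sweep(history, indicators):
    """Look back through stored events for indicators you just learned about."""
    return [
        event for event in history
        if event.get("dest_ip") in indicators["ips"]
        or event.get("file_hash") in indicators["file_hashes"]
    ]

history = [{"host": "web-03", "dest_ip": "198.51.100.23"}]
print(retrospective_sweep(history, indicators))  # hit: web-03 talked to a bad IP

# Then register the same indicators with your SIEM or IPS to watch for
# attacks that haven't happened to you -- yet.
```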

Default to the Process

As usual, it all comes back to process. We mapped that out in TI+SM and Threat Intelligence and Incident Response. You need a process to procure, collect, and utilize threat intelligence, wherever it comes from.

Then use external data as triggers or catalysts to mine your internal data using advanced analytics, to see if you find those indicators in your network. We’ll dig into that part of the process next.

—Mike Rothman

Thursday, June 11, 2015

My 2015 Personal Security Guiding Principles and the New Rand Report

By Rich

In 2009, I published My Personal Security Guiding Principles. They hold up well, but my thinking has evolved over six years. Some due to personal maturing, and a lot due to massive changes in our industry.

It’s time for an update. The motivation today comes thanks to Juniper and Rand. I want to start with my update, so I will cover the report afterwards.

Here is my 2015 version:

  1. Don’t expect human behavior to change. Ever.
  2. Simple doesn’t scale.
  3. Only economics really changes security.
  4. You cannot eliminate all vulnerabilities.
  5. You are breached. Right now.

In 2009 they were:

  1. Don’t expect human behavior to change. Ever.
  2. You cannot survive with defense alone.
  3. Not all threats are equal, and all checklists are wrong.
  4. You cannot eliminate all vulnerabilities.
  5. You will be breached.

The big changes are dropping numbers 2 and 3. I think they still hold true, and they would now come in at 6 and 7 if I wasn’t trying to keep to 5 total. The other big change is #5, which was “You will be breached” and is now “You are breached.”

Why the changes? I have always felt economics is what really matters in inciting security change, and we have more real-world examples showing that it’s actually possible. Take a look at Apple’s iOS security, Amazon Web Services, Google, and Microsoft (especially Windows). In each case we see economic drivers creating very secure platforms and services, and keeping them there.

Want to fix security in your organization? Make business units and developers pay the costs of breaches – don’t pay for them out of central budget. Or at least share some liability.

As for simple… I’m beyond tired of hearing how “If company X just did Y basic security thing, they wouldn’t get breached that particular way this particular time.” Nothing is simple at scale; not even the most basic security controls. You want secure? Lock things down and compartmentalize to the nth degree, and treat each segment like its own little criminal cell. It’s expensive, but it keeps groups of small things manageable. For a while.

Lastly, let’s face it, you are breached. Assume the bad guys are already behind your defenses and then get to work. Like one client I have, who treats their entire employee network as hostile, and makes them all VPN in with MFA to connect to anything.

Motivated by Rand

The impetus for finally writing this up is a Rand report sponsored by Juniper. I still haven’t gotten through the entire thing, but it reads like a legitimate critical analysis of our entire industry and profession from the outside, not the usual introspection or vendor-driven nonsense FUD.

Some choice quotes from the summary:

  • Customers look to extant tools for solutions even though they do not necessarily know what they need and are certain no magic wand exists.
  • When given more money for cybersecurity, a majority of CISOs choose human-centric solutions.
  • CISOs want information on the motives and methods of specific attackers, but there is no consensus on how such information could be used.
  • Current cyberinsurance offerings are often seen as more hassle than benefit, useful in only specific scenarios, and providing little return.
  • The concept of active defense has multiple meanings, no standard definition, and evokes little enthusiasm.
  • A cyberattack’s effect on reputation (rather than more-direct costs) is the biggest cause of concern for CISOs. The actual intellectual property or data that might be affected matters less than the fact that any intellectual property or data are at risk.
  • In general, loss estimation processes are not particularly comprehensive.
  • The ability to understand and articulate an organization’s risk arising from network penetrations in a standard and consistent manner does not exist and will not exist for a long time.

Most metrics? Crap. Loss metrics? Crap. Risk-based approaches? All talk. Tools? No one knows if they work. Cyberinsurance? Scam.

Overall conclusion? A marginally functional shitshow.

Those are my words. I’ve used them a lot over the years, but this report lays it out cleanly and clearly. It isn’t that we are doing everything wrong – far from it – but we are stuck in an endless cycle of blocking and tackling, and nothing will really change until we take a step back.

Personally I am quite hopeful. We have seen significant progress over the past decade, and I feel like we are at an inflection point for change and improvement.

—Rich

Incite 6/10/2015: Twenty Five

By Mike Rothman

This past weekend I was at my college reunion. It’s been twenty five years since I graduated. TWENTY FIVE. It’s kind of stunning when you think about it. I joked after the last reunion in 2010 that the seniors then were in diapers when I was graduating. The parents of a lot of this year’s seniors hadn’t even met. Even scarier, I’m old enough to be their parent. It turns out a couple friends who I graduated with actually have kids in college now. Yeah, that’s disturbing.

It was great to be on campus. Life is busy, so I only see some of my college friends every five years. But it seems like no time has passed. We catch up about life and things, show some pictures of our kids, and fall right back into the friendships we’ve maintained for almost thirty years. Facebook helps people feel like they are still in touch, but we aren’t. Facebook isn’t real life – it’s what you want to show the world. Fact is, everything changes, and most of that you don’t see. Some folks have been through hard times. Others are prospering.

Dunbar's Ithaca NY

Even the campus has evolved significantly over the past five years. The off-campus area is noticeably different. Some of the buildings, restaurants, & bars have the same names; but they aren’t the same. One of our favorite bars, called Rulloff’s, shut down a few years back. It was recently re-opened and pretty much looked the same. But it wasn’t. They didn’t have Bloody Marys on Thursday afternoon. The old Rulloff’s would have had gallons of Bloody Mary mix ready for reunion, because that’s what many of us drank back in the day. The new regime had no idea. Everything changes.

Thankfully a bar called Dunbar’s was alive and well. They had a drink called the Combat, which was the root cause of many a crazy night during college. It was great to go into D-bars and have it be pretty much the same as we remembered. It was a dump then, and it’s a dump now. We’re trying to get one of our fraternity brothers to buy it, just to make sure it remains a dump. And to keep the Combats flowing.

It was also interesting to view my college experience from my new perspective. Not to overdramatize, but I am a significantly different person than I was at the last reunion. I view the world differently. I have no expectations for my interactions with people, and am far more accepting of everyone and appreciative of their path. Every conversation is an opportunity to learn, which I need. I guess the older I get, the more I realize I don’t know anything.

That made my weekend experience all the more gratifying. The stuff that used to annoy me about some of my college friends was no longer a problem. I realized it has always been my issue, not theirs. Some folks could tell something was different when talking to me, and that provided an opportunity to engage at a different level. Others couldn’t, and that was fine by me; it was fun to hear about their lives.

In 5 years more stuff will have changed. XX1 will be in college herself. All of us will undergo more life changes. Some will grow, others won’t. There will be new buildings and new restaurants. And I’ll still have an awesome time hanging out in the dorms until the wee hours drinking cocktails and enjoying time with some of my oldest friends. And drinking Combats, because that’s what we do.

–Mike

Photo credit: “D-bars” taken by Mike in Ithaca NY


The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. Take an hour and check it out on YouTube. Your emails, alerts and Twitter timeline will be there when you get back.


Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.


Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Threat Detection Evolution

Network-based Threat Detection

Applied Threat Intelligence

Network Security Gateway Evolution

Recently Published Papers


Incite 4 U

  1. Vulnerabilities are not intrusions: Richard Bejtlich is a busy guy. As CSO of FireEye, I’m sure his day job keeps him pretty busy, as do all his external responsibilities to gladhand big customers. So when he writes something on his personal blog you know he’s pissed off. And he’s really pissed that parties within the US federal government don’t seem to understand the difference between vulnerabilities and intrusions. In the wake of the big breach at the Office of Personnel Management (yeah, the Fed HR department), people are saying that the issue was the lack of implementation of CDM (continuous diagnostic monitoring). But that just tells you what’s vulnerable, and we all know that’s not a defense against advanced adversaries. Even the lagging Einstein system would have had limited success, but at least it’s focusing on the right stuff: who is in your network. Richard has been one of the most fervent evangelists for hunting adversaries, and his guidance is pretty straightforward: “find the intruders in the network, remove them, and then conduct counter-intrusion campaigns to stop them from accomplishing their mission when they inevitably return.” Easier said than done, of course. But you will never get there if your answer is a vulnerability management program. – MR

  2. De-Googled: The Internet is a means for people to easily find information, but many large firms use the Internet to investigate you, and leverage it to monitor pretty much everything users do online. Every search, every email, every purchase, every blog comment, all the time – from here to eternity. I know a lot of privacy advocates who read the blog. Heck, I talk to many of them at security conferences, and read their comments on the stuff we post. If that’s you, a recent post from ExpressVPN on How to delete everything Google knows about you should be at the top of your reading list. It walks you through a process to collect and then delete your past Google history. I can’t vouch for the accuracy of the steps – frankly I am too busy to try it out – but it’s novel that Google provided the means, and someone has documented the obfuscated steps to delete your history. Bravo! Of course if you continue to use the embedded Google search bar, or Google+, or Gmail, or any of the other stuff Google offers, you will still be tracked. – AL

  3. What point are you trying to make? There have always been disagreements over the true cost of a lost data record. Ponemon has been publishing numbers in the hundreds of dollars per record for years (this year’s number was $350), and Verizon Business recently published a $0.58 number in the 2015 DBIR. So CSO asks: is it $350 or $0.58? The answer is neither. There is no standard cost. There is only what it costs you, and how much you want to bury in that number to create FUD internally. Ponemon includes pretty much everything (indirect costs) and then some. Verizon includes pretty much nothing, and bases their numbers on insurance claims, which can be supported by objective data. Security vendors love Ponemon’s numbers. Realists think Verizon’s are closer. Again, what are you trying to achieve? If it’s to scare the crap out of the boardroom, Ponemon is your friend. If it’s to figure out what you’ll get from your cyber-insurance policy, you need the DBIR. As we have always said, you can make numbers dance and tell whatever story you want them to. Choose wisely. – MR

  4. Barn door left open: Apache ZooKeeper is a configuration management and synchronization tool commonly used in Hadoop clusters. It’s a handy tool to help you manage dynamic databases, but it moves critical data between nodes, so the privacy and integrity of its data are critical to safe and secure operations. Evan Gilman of PagerDuty posted a detailed write-up of a session encryption bug affecting ZooKeeper, found in an Intel extension to Linux kernel modules on XEN hypervisors, which essentially disables checksums. In a nutshell, Intel’s AES support in the aesni-intel encryption module, which is used for VPNs and SSL traffic, will – under certain circumstances – disable checksums on the TCP headers. That’s no bueno. The bug should be simple to fix, but at this time there is no patch from Intel. Thanks to the guys at PagerDuty for taking the time to find and document this bug for the rest of us! – AL

  5. Cyber all the VC things…: Mary Meeker survived the Internet bubble as the Internet’s highest profile stock analyst, and then moved west to work with VC big shots Kleiner Perkins. She still writes the annual Internet Trends report and this year security has a pretty prominent place. Wait, what? So, in case you were wondering whether security is high-profile enough, it is. We should have been more careful about what we wished for. She devoted two pages to security in the report. Of course her thoughts are simplistic (Mobile devices are used to harvest data and insiders cause breaches. Duh.) and possibly even wrong. (Claiming MDM is critical for preventing breaches. Uh, no.) But she pinpoints the key issue: the lack of security skills. She is right on the money with that one. Overall, we should be pleased with the visibility security is getting. And it’s not going to stop any time soon. – MR

—Mike Rothman

Wednesday, June 10, 2015

Threat Detection Evolution: Why Evolve? [New Series]

By Mike Rothman

As we discussed recently in Network-based Threat Detection, prevention isn’t good enough any more. Every day we see additional proof that adversaries cannot be reliably stopped. So we have started to see the long-awaited movement of focus and funding from prevention, to detection and investigation. That said, for years security practitioners have been trying to make sense of security data to shorten the window between compromise and detection – largely unsuccessfully.

Not to worry – we haven’t become the latest security Chicken Little, warning everyone that the sky is falling. Mostly because it fell a long time ago, and we have been working to pick up the pieces ever since. It can be exhausting to chase alert after alert, never really knowing which are false positives and which indicate real active adversaries in your environment. Something has to change – it is time to advance the practice of detection, to provide better and more actionable alerts. This requires thinking more broadly about detection, and starting to integrate the various different security monitoring systems in use today.

So it’s time to bring our recent research on detection and threat intelligence together within the context of Threat Detection Evolution. As always, we are thankful that some forward-looking organizations see value in licensing our content to educate their customers. AlienVault plans to license the resulting paper at the conclusion of the series, and we will build the content using our Totally Transparent Research methodology.

(Mostly) Useless Data

There is no lack of security data. All your devices stream data all the time. Network devices, security devices, servers, and endpoints all generate a ton of log data. Then you collect vulnerability data, configuration data, and possibly network flows or even network packets. You look for specific attacks with tools like intrusion detection devices and SIEM, which generate lots of alerts.

You probably have all this security data in a variety of places, with separate policies to generate alerts implemented within each monitoring tool. It’s hard enough to stay on top of a handful of consoles generating alerts, but when you get upwards of a dozen or more, getting a consistent view of your environment isn’t really feasible.

It’s not that all this data is useless. But it’s not really useful either. There is value in having the data, but you can’t really unlock its value without performing some level of integration, normalization, and analytics on the data. We have heard it said that finding attackers is like finding a needle in a stack of needles. It’s not a question of whether there is a needle there – you need to figure out which needle is the one poking you.

This amount of traffic and activity generates so much data that it is trivial for adversaries to hide in plain sight, obfuscating their malicious behavior in a morass of legitimate activity. You cannot really figure out what’s important until it’s too late. And it’s not getting easier – cloud computing and mobility promise to disrupt the traditional order of how technology is delivered and information is consumed by employees, customers, and business partners, so there will be more data and more activity to further complicate threat detection.

Minding the Store…

In the majority of our discussions with practitioners, sooner or later we get around to the challenge of finding skilled resources to implement the security program. It’s not a funding thing – companies are willing to invest, given the high profile of threats. The challenge is resource availability, and unfortunately there is no easy fix. The security industry is facing a large enough skills gap that there is no obvious answer.

Why can’t more security practitioners be found? What are the constraints on training more people to do security? It is actually pretty counter-intuitive, because security isn’t a typical job. It’s hard for a n00b to come in and be productive in their first couple years. Even those with formal (read: academic) training in security disciplines need a couple years of operational experience before they start to become productive. And a particular mindset is required to handle a job where true success is a myth. It’s not a matter of whether an organization will be breached – it’s when, and that is hard for most people to deal with day after day.

Additionally, if your organization is not a Global 1000 company or major consulting firm, finding qualified staff is even harder, because you have many of the same problems as a large enterprise, but far less budget and fewer available skills to solve them.

Clearly what we are doing is insufficient to address the issue moving forward. So we need to look at the problem differently. It’s not a challenge that can be fixed by throwing people at it, because there aren’t enough people. It’s not a challenge that can be fixed by throwing products at it either, because organizations both large and small have been trying for years with poor results. Our industry needs to evolve its tactics to focus on doing the most important things more efficiently.

Efficiency and Integration

When you don’t have enough staff you need to make your existing staff far more efficient. That typically involves two different tactics:

  1. Minimize False Positives and Negatives: The thing that burns up more time than anything else is chasing alerts into ratholes, only to find out they are false positives. So making sure alerts represent real risk is the best efficiency increase you can manage. Obviously you also want to minimize false negatives, because when you miss an attack you will spend a ton of time cleaning it up. Overall you need to focus on minimizing errors to get better utilization out of your limited staff.
  2. Automate: The other aspect of increasing efficiency is automation of non-strategic functions where possible. There isn’t much value in a human making individual IPS rule changes based on reliable threat intel or vulnerability data (see the sketch after this list). Once you can trust your automation, your folks can focus on tasks that aren’t suited to automation, like triaging possible attacks.
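
To make the automation point concrete, here is a minimal sketch assuming a hypothetical threat intel feed that returns one IP per line, and a gateway that accepts a text blocklist (the URL and rule format are invented):

```python
import ipaddress
import urllib.request

FEED_URL = "https://intel.example.com/blocklist.txt"  # hypothetical feed

def fetch_malicious_ips(url):
    """Pull a one-IP-per-line feed, dropping anything that doesn't parse."""
    with urllib.request.urlopen(url) as resp:
        lines = resp.read().decode().splitlines()
    ips = []
    for line in lines:
        try:
            ips.append(str(ipaddress.ip_address(line.strip())))
        except ValueError:
            continue  # a malformed entry shouldn't break the pipeline
    return ips

def to_block_rules(ips):
    """Emit rules in a made-up blocklist format a gateway might ingest."""
    return [f"deny ip from {ip} to any" for ip in ips]

# Once the feed is trusted, this runs on a schedule with no human in the
# loop, keeping your analysts on triage, where automation doesn't fit.
for rule in to_block_rules(fetch_malicious_ips(FEED_URL)):
    print(rule)
```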

The other way to make better use of your staff is integration. The security business has grown incrementally to address specific problems. For example, when first connecting to the Internet you needed a firewall to provide access control for inbound connections. Soon enough your network was being attacked, so you deployed an IPS to address those attacks. Then you wanted to control employee web use, so you installed a web filter. Then you needed to see which devices were vulnerable, so you bought a vulnerability scanner, and so on and so forth.

This security sprawl continues in earnest today, with new advanced technologies to be deployed on the network, on endpoints, within your data center, and in the cloud. Of course you can’t turn off the old controls, so a smaller organization may need to manage 7-10 different security products and services. Larger organizations can have dozens. Obviously an integrated solution provides leverage by not having all those policies separated out, and providing a streamlined user experience for faster response.

The Goal: Risk-based Prioritization

To delve a bit into the land of motherhood and apple pie, organizations have been trying to allocate scarce resources based on potential impact to the organization. Yes, the mythical unicorn of security: prioritized alerts with context on what is actually at risk within your organization. There is no generic answer. What presents risk to one organization might not to another. What’s important to one organization clearly differs from others. Your threat detection approach needs to reflect these differences.

An evolved view of threat detection isn’t just about finding attacks. It’s about finding the attacks that present the biggest risk to your organization, and enabling an efficient and effective response. This involves integrating a bunch of existing security data sources (both internal and external) and monitors, then performing contextual analysis on that data to prioritize based on importance to your organization.

So how can you do that? We’re glad you asked – that is our subject for this series. First touching on data collection, and then the analytics necessary to detect threats accurately and efficiently. We will wrap up with a Quick Win scenario, which we use to describe tactics you can use right now to kick-start your efforts and build toward evolved threat detection.

—Mike Rothman

Contribute to the Cloud Security Alliance Guidance: Community Drives, Securosis Writes

By Rich

This week we start one of the cooler projects in the history of Securosis. The Cloud Security Alliance contracted Securosis to write the next version of the CSA Guidance.

(Okay, the full title is “Guidance for Critical Areas of Focus in Cloud Computing”). The Guidance is a foundational document at the CSA, used by a ton of organizations to define security programs when they start jumping into the world of cloud. It’s currently on version 3, which is long in the tooth, so we are starting version 4.

One of the problems with the previous version is that it was compiled from materials developed by over a dozen working groups. The editors did their best, but there are overlaps, gaps, and readability issues. To address those the CSA hired us to come in and write the new version. But a cornerstone of the CSA is community involvement, so we have come up with a hybrid approach for the next version. During each major stage we will combine our Totally Transparent Research process with community involvement. Here are the details:

  • Right now the CSA is collecting feedback on the existing Guidance. The landing page is here, and it directs you to a Google document of the current version where anyone can make suggestions. This is the only phase of the project in Google Docs, because we only have a Word version of the existing Guidance.
  • We (Securosis) will take the public feedback and outline each domain for the new version. These will be posted for feedback on GitHub (exact project address TBD).
  • After we get input on the outlines we will write first drafts, also on GitHub. Then the CSA will collect another round of feedback and suggestions.
  • Based on those, we will write a “near final” version and put that out for final review.

GitHub not only allows us to collect input, but also to keep the entire writing and editing process public.

In terms of writing, most of the Securosis team is involved. We have also contracted two highly experienced professional tech writers and editors to maintain voice and consistency. Pure community projects are often hard to manage, keep on schedule, and keep consistent… so we hope this open, transparent approach, backed by professional analysts and writers with cloud security experience, will help keep things on track, while still fully engaging the community.

We won’t be blogging this content, but we will post notes here as we move between major phases of the project. For now, take a look at the current version and let the CSA know about what major changes you would like to see.

—Rich

Tuesday, June 09, 2015

Network Security Gateway Evolution [New Series]

By Mike Rothman

(Note: We’re restarting this series over the next week, so we are reposting the intro to get things moving again. – Mike )

When is a firewall not a firewall? I am not being cute – that is a serious question. The devices that masquerade as firewalls today provide much more than just an access control on the edge of your network(s). Some of our influential analyst friends dubbed the category next generation firewall (NGFW), but that criminally undersells the capabilities of these devices.

The “killer app” for NGFW remains enforcement of security policies by application (and even functions within applications), rather than merely by ports and protocols. This technology has matured since we last covered the enterprise firewall space in Understanding and Selecting an Enterprise Firewall. Virtually all firewall devices being deployed now (except very low-end gear) have the ability to enforce application-level policies in some way. But, as with most new technologies, having new functionality doesn’t mean the capabilities are being used well. Taking full advantage of application-aware policies requires a different way of thinking about network security, which will take time for the market to adapt to.
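
The difference is easier to see in data. Below is an illustrative sketch (invented structure, not any vendor’s policy syntax) contrasting a port/protocol rule with application-aware rules:

```python
# Old-style rule: everything on TCP/443 looks the same
port_rule = {"action": "allow", "protocol": "tcp", "port": 443}

# Application-aware rules: same port, very different policies, down to
# individual functions within an application
app_rules = [
    {"action": "allow", "app": "salesforce"},
    {"action": "allow", "app": "webmail", "function": "read"},
    {"action": "deny",  "app": "webmail", "function": "send-attachment"},
    {"action": "deny",  "app": "dropbox", "function": "upload"},
]
```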

At the same time many network security vendors continue to integrate their previously separate FW and IPS devices into common architectures/platforms. They have also combined network-based malware detection and some light identity and content filtering/protection features. If this sounds like UTM, that shouldn’t be surprising – the product categories (UTM and NGFW) provide very similar functionality, just handled differently under the hood.

Given this long-awaited consolidation, we see rapid evolution in the network security market. Besides additional capabilities integrated into NGFW devices, we also see larger chassis-based models, smaller branch office devices, and even virtualized and cloud-based configurations to extend these capabilities to every point in the network. Improved threat intelligence integration is also available to block current threats.

Now is a good time to revisit our research from a couple years ago. The drivers for selection and procurement have changed since our last look at the field. But, as mentioned above, these devices are much more than firewalls. So we use the horribly pedestrian Network Security Gateway moniker to describe what network security devices look like moving forward. We are pleased to launch the Network Security Gateway Evolution series, describing how to most effectively use the devices for the big 3 network security functions: access control (FW), threat prevention (IPS), and malware detection.

Given the forward-looking nature of our research, we will dig into a few additional use cases we are seeing – including data center segmentation, branch office protection, and protecting those pesky private/public cloud environments.

As always, we develop our research using our Totally Transparent Research methodology, ensuring no hidden influence on the research.

The Path to NG

Before we jump into how the NSG is evolving, we need to pay our respects to where it has been. The initial use case for NGFW was sitting next to an older port/protocol firewall and providing visibility into which applications are being used, and by whom. The reports at the end of a product test – showing in gory detail all the nonsense employees get up to on the corporate network (much of it using corporate devices) – tend to be pretty enlightening for the network security team and executives.

Once your organization saw the light with real network activity, you couldn’t unsee it. So you needed to take action, enforcing policies on those applications. This meant leveraging capabilities such as blocking email access via a webmail interface, detecting and stopping file uploads to Dropbox, and detecting/preventing Facebook photo uploads. It all sounds a bit trivial nowadays, but a few years ago organizations had real trouble enforcing these kinds of policies on web traffic.

Once the devices were enforcing policy-based control over application traffic, and had matured to offer feature parity with existing devices in areas like VPN and NAT, we started to see significant migration. Some of the existing network security vendors couldn’t keep up with these NGFW competitive threats, so we have seen a dramatic shift in enterprise market share over the past few years, creating a catalyst for multi-billion dollar M&A.

The next step has been the move from NGFW to NSG through adding non-FW capabilities such as threat prevention. Yes, that means not only enforcement of positive policies (access control), but also detecting attacks the way a network intrusion prevention system (IPS) does. The first versions of these integrated devices could not compare to a ‘real’ (standalone) IPS, but as time marches on we expect NSGs to reach feature parity for threat prevention. Likewise, these gateways increasingly integrate detection of malware files as they enter the network, to provide additional value.

Finally, some companies couldn’t replace their existing firewalls (typically for budget or political reasons), but had more flexibility to replace their web filters. Given the ability of NSGs to enforce policies on web applications, block bad URLs, and even detect malware, standalone web filters took a hit. As with IPS, NSGs do not yet provide full feature parity with standalone web filters. But many companies don’t need the differentiating features of a dedicated web filter – making an NSG a good fit.

The Need for Speed

We have shown how NSGs have and will continue to integrate more and more functionality. Enforcing all these policies at wire speed requires increasing compute power. And it’s not like networks are slowing down. So first-generation NGFW reached scaling constraints pretty quickly. Vendors continue to invest in bigger iron, including more capable chassis and better distributed policy management, to satisfy scalability requirements.

As networks continue to get faster, will the devices be able to keep pace, retaining all their capabilities on a single device? And do you even need to run all your stuff on the same device? Not necessarily. This raises an architectural question we will consider later in the series. Just because you can run all these capabilities on the same device, doesn’t mean you should…

Alternatively you can run an NSG in “firewall” mode, just enforcing basic access control policies. Or you can deploy another NSG in “threat prevention” mode, looking for attacks. Does that sound like your existing architecture? Of course – and there is value in separating functions, depending on the scale of the environment. More important is the ability to manage all these policies from a single console, and to change a box’s capabilities through software, without needing a forklift.

Graceful Migration

We will also cover how you can actually migrate to this evolved network security platform. Budgets aren’t unlimited, so unless your existing network security vendor isn’t keeping pace (there are a few of those), your hand may not be forced into immediate migration. That gives you time to figure out the best timing. We will wrap up this series with a process for deciding how and when to introduce these new capabilities, which deployment architectures make sense, and how to select your key vendor.

The next post will dig into the firewall features of the NSG, how they continue to evolve, and why that matters to you.

—Mike Rothman

Tuesday, May 26, 2015

We Don’t Know Sh—. You Don’t Know Sh—.

By Rich

Once again we have a major security story slumming in the headlines. This time it’s Hackers on a Plane, but without all that Samuel L goodness. But what’s the real story? It’s time to face the fact that the only people who know are the ones who aren’t talking, and everything you hear is most certainly wrong.

Watch or listen:


—Rich

Friday, May 22, 2015

Summary: Ginger

By Rich

Rich here.

As a redhead (what little is left) I have spent a large portion of my life answering questions about red hair. Sometimes it’s about pain tolerance/wound healing (yes, there are genetic differences), but most commonly I get asked if the attitude is genetic or environmental.

You know, the short temper/bad attitude.

Well, here’s a little insight for those of you that lack the double recessive genes.

Yesterday I was out with my 4-year-old daughter. The one with the super red, super curly hair. You ever see Pixar’s Brave? Yeah, they would need bigger computers to model my daughter’s hair, and a movie projector with double the normal color gamut.

In a 2-hour shopping trip, at least 4 people commented on it (quite loudly and directly), and many more stared. I was warned by no less than two probable-grandmothers that I should “watch out for that one… you’ll have your hands full”. There was one “oh my god, what wonderful hair!” and another “how do you like your hair”.

At REI and Costco.

This happens everywhere we go, all the time. My son also has red hair, and we get nearly the same thing, but without the curls it’s not quite as bad. I also have an older daughter without red hair. She gets the “oh, your hair is nice too… please don’t grow up to be a serial killer because random strangers like your sister more”. At least that’s what I hear.

Strangers even come up and start combing their hands through her hair. Strangers. In public. Usually older women. Without asking.

I went through a lot of this myself growing up, but it’s only as an adult, with red-haired kids, that I see how bad it is. I thought I was a bit of an a-hole because, as a boy, I had more than my fair share of fights due to teasing over the hair. Trust me, I’ve heard it all. Yeah, fireball, very funny you —-wad, never heard that one before. I suppose I blocked out how adults reacted when I tried to buy a camping flashlight with my dad.

Maybe there is a genetic component, but I don’t think scientists could possibly come up with an ethical, deterministic study to figure it out. And if my oldest, non-red daughter ever shivs you in a Costco, now you’ll know why.

We have been so busy the past few weeks that this week’s Summary is a bit truncated. Travel has really impacted our publishing, sorry.

Securosis Posts

Favorite Outside Posts

  • Mike: Advanced Threat Detection: Not so SIEMple: Aside from the pithy title, Arbor’s blog post does a good job highlighting differences between the kind of analysis SIEM offers and the function of security analytics…
  • Rich: Cloudefigo. This is pretty cool: it’s a cloud security automation project based on some of my previous work. One of the people behind it, Moshe, is one of our better Cloud Security Alliance CCSK instructors.

Research Reports and Presentations

Top News and Posts

—Rich