Securosis Research

Building a Threat Intelligence Program: Using TI

As we dive back into the Threat Intelligence Program, we have summarized why a TI program is important and how to gather intelligence. Now we need a programmatic approach for using TI to improve your security posture and accelerate your response & investigation functions. To reiterate (because it has been a few weeks since the last post), TI allows you to benefit from the misfortune of others, meaning it’s likely that other organizations will get hit with attacks before you, so you should learn from their experience. Like the old quote, “Wise men learn from their mistakes, but wiser men learn from the mistakes of others.” But knowing what’s happened to others isn’t enough. You must be able to use TI in your security program to gain any benefit.

First things first. We have plenty of security data available today. So the first step in your program is to gather the appropriate security data to address your use case. That means taking a strategic view of your data collection process, both internally (collecting your data) and externally (aggregating threat intelligence). As described in our last post, you need to define your requirements (use cases, adversaries, alerting or blocking, integrating with monitors/controls, automation, etc.), select the best sources, and then budget for access to the data. This post will focus on using threat intelligence. First we will discuss how to aggregate TI, then how to use it to solve key use cases, and finally how to tune your ongoing TI gathering process to get maximum value from the TI you collect.

Aggregating TI

When aggregating threat intelligence the first decision is where to put the data. You need it somewhere it can be integrated with your key controls and monitors, and provide some level of security and reliability. Even better if you can gather metrics regarding which data sources are the most useful, so you can optimize your spending. Start by asking some key questions:

To platform or not to platform? Do you need a standalone platform, or can you leverage an existing tool like a SIEM? Of course it depends on your use cases, and the amount of manipulation & analysis you need to perform on your TI to make it useful.

Should you use your provider’s portal? Each TI provider offers a portal you can use to get alerts, manipulate data, etc. Will it be good enough to solve your problems? Do you have an issue with some of your data residing in a TI vendor’s cloud? Or do you need the data to be pumped into your own systems, and how will that happen?

How will you integrate the data into your systems? If you do need to leverage your own systems, how will the TI get there? Are you depending on a standard format like STIX/TAXII? Do you expect out-of-the-box integrations?

Obviously these questions are pretty high-level, and you’ll probably need a couple dozen follow-ups to fully understand the situation.

Selecting the Platform

In a nutshell, if you have a dedicated team to evaluate and leverage TI, have multiple monitoring and/or enforcement points, or want more flexibility in how broadly you use TI, you should probably consider a separate intelligence platform or ‘clearinghouse’ to manage TI feeds. Assuming that’s the case, here are a few key selection criteria to consider when selecting a stand-alone threat intelligence platform:

Open: The TI platform’s task is to aggregate information, so it must be easy to get information into it. Intelligence feeds are typically just data (often XML), and increasingly distributed in industry-standard formats such as STIX, which make integration relatively straightforward. But make sure any platform you select will support the data feeds you need. Be sure you can use the data that’s important to you, and not be restricted by your platform.

Scalable: You will use a lot of data in your threat intelligence process, so scalability is essential. But computational scalability is likely more important than storage scalability – you will be intensively searching and mining aggregated data, so you need robust indexing. Unfortunately scalability is hard to test in a lab, so ensure your proof of concept testbed is a close match for your production environment, and that you can extrapolate how the platform will scale in your production environment.

Search: Threat intelligence, like the rest of security, doesn’t lend itself to absolute answers. So make TI the beginning of your process of figuring out what happened in your environment, and leverage the data for your key use cases as we described earlier. One clear requirement for all use cases is search. Be sure your platform makes searching all your TI data sources easy.

Scoring: Using Threat Intelligence is all about betting on which attackers, attacks, and assets are most important to worry about, so a flexible scoring mechanism offers considerable value. Scoring factors should include assets, intelligence sources, and attacks, so you can calculate a useful urgency score. It might be as simple as red/yellow/green, depending on the sophistication of your security program.
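To make the scoring idea more concrete, here is a minimal sketch in Python. The factor names, weights, and red/yellow/green thresholds are illustrative assumptions on our part, not the scoring model of any particular platform; a real deployment would tune all of them.

```python
# Minimal sketch of an indicator urgency score. The weights, 0-10 input
# ratings, and thresholds are illustrative assumptions for this post.

def urgency_score(asset_criticality, source_reliability, attack_severity):
    """Each input is a 0-10 rating; returns a red/yellow/green urgency."""
    weighted = (0.4 * asset_criticality +
                0.3 * source_reliability +
                0.3 * attack_severity)
    if weighted >= 7:
        return "red"
    if weighted >= 4:
        return "yellow"
    return "green"

# Example: critical server, moderately reliable feed, high-severity attack
print(urgency_score(asset_criticality=9, source_reliability=6, attack_severity=8))  # "red"
```

Even a simple calculation like this forces you to decide which assets and sources matter most, which is the real value of a scoring exercise.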
Key Use Cases

Our previous research has focused on how to address these key use cases, including preventative controls (FW/IPS), security monitoring, and incident response. But a programmatic view requires expanding the general concepts around use cases into a repeatable structure, to ensure ongoing efficiency and effectiveness. The general process to integrate TI into your use cases is consistent, with some variations we will discuss below under specific use cases.

Integrate: The first step is to integrate the TI into the tools for each use case, which could be security devices or monitors. That may involve leveraging the management consoles of the tools to pull in the data and apply the controls. For simple TI sources such as IP reputation, this direct approach works well. For more complicated data sources you’ll want to perform some aggregation and analysis on the TI before updating rules running on the tools. In that case you’ll expect your TI platform to integrate with the tools.

Test and Trust: The key concept here is trustable automation. You want to make sure any rule changes driven by TI go
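To illustrate the Integrate and Test and Trust steps described above, here is a minimal sketch that turns scored indicators into a plain-text blocklist a gateway could poll, with a staged, alert-only mode until the TI-driven rules earn trust. The file names, JSON format, and the ‘staged’ flag are illustrative assumptions, not any product’s interface.

```python
# Minimal sketch: publish high-urgency IP indicators as a plain-text
# blocklist, with a staged mode so new TI-driven rules alert only until
# they earn trust. File names, input format, and the 'staged' flag are
# illustrative assumptions.
import json

def publish_blocklist(indicator_file, block_path, alert_path, staged=True):
    with open(indicator_file) as f:
        indicators = json.load(f)  # e.g. [{"ip": "203.0.113.10", "urgency": "red"}, ...]

    block, alert = [], []
    for ind in indicators:
        if ind.get("urgency") == "red" and not staged:
            block.append(ind["ip"])
        else:
            # While staged, even high-urgency indicators only generate alerts.
            alert.append(ind["ip"])

    with open(block_path, "w") as f:
        f.write("\n".join(block))
    with open(alert_path, "w") as f:
        f.write("\n".join(alert))

# publish_blocklist("ti_indicators.json", "blocklist.txt", "watchlist.txt", staged=True)
```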


Incite 9/23/2015: Friday Night Lights

I didn’t get the whole idea of high school football. When I was in high school, I went to a grand total of zero point zero (0.0) games. It would have interfered with the Strat-o-Matic and D&D parties I had with my friends on Fridays, listening to Rush. Yeah, I’m not kidding about that. A few years ago one of the local high school football teams went to the state championship. I went to a few games with my buddy, who was a fan, even though his kids didn’t go to that school. I thought it was kind of weird, but it was a deep playoff run so I tagged along. It was fun going down to the GA Dome to see the state championship. But it was still weird without a kid in the school. Then XX1 entered high school this year. And the twins started middle school and XX2 is a cheerleader for the 6th grade football team and the Boy socializes with a lot of the players. Evidently the LAX team and the football team can get along. Then they asked if I would take them to the opener at another local school one Friday night a few weeks ago. We didn’t have plans that night, so I was game. It was a crazy environment. I waited for 20 minutes to get a ticket and squeezed into the visitor’s bleachers. The kids were gone with their friends within a minute of entering the stadium. Evidently parents of tweens and high schoolers are there strictly to provide transportation. There will be no hanging out. Thankfully, due to the magic of smartphones, I knew where they were and could communicate when it was time to go. The game was great. Our team pulled it out with a TD pass in the last minute. It would have been even better if we were there to see it. Turns out we had already left because I wanted to beat traffic. Bad move. The next week we went to the home opener and I didn’t make that mistake again. Our team pulled out the win in the last minute again and due to some savvy parking, I was able to exit the parking lot without much fuss. It turns out it’s a social scene. I saw some buddies from my neighborhood and got to check in with them, since I don’t really hang out in the neighborhood much anymore. The kids socialized the entire game. And I finally got it. Sure it’s football (and that’s great), but it’s the community experience. Rooting for the high school team. It’s fun. Do I want to spend every Friday night at a high school game? Uh no. But a couple of times a year it’s fun. And helps pass the time until NFL Sundays. But we’ll get to that in another Incite. –Mike Photo credit: “Punt” originally uploaded by Gerry Dincher Thanks to everyone who contributed to my Team in Training run to support the battle against blood cancers. We’ve raised almost $6000 so far, which is incredible. I am overwhelmed with gratitude. You can read my story in a recent Incite, and then hopefully contribute (tax-deductible) whatever you can afford. Thank you. The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour and check it out. Your emails, alerts and Twitter timeline will be there when you get back. Securosis Firestarter Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail. Aug 12 – Karma July 13 – Living with the OPM Hack May 26 – We Don’t Know Sh–. You Don’t Know Sh– May 4 – RSAC wrap-up. Same as it ever was. March 31 – Using RSA March 16 – Cyber Cash Cow March 2 – Cyber vs. 
Terror (yeah, we went there) February 16 – Cyber!!! February 9 – It’s Not My Fault! January 26 – 2015 Trends January 15 – Toddler December 18 – Predicting the Past November 25 – Numbness October 27 – It’s All in the Cloud October 6 – Hulk Bash September 16 – Apple Pay Heavy Research We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too. Pragmatic Security for Cloud and Hybrid Networks Cloud Networking 101 Introduction Building Security into DevOps Introduction Building a Threat Intelligence Program Gathering TI Introduction Network Security Gateway Evolution Introduction Recently Published Papers EMV Migration and the Changing Payments Landscape Applied Threat Intelligence Endpoint Defense: Essential Practices Cracking the Confusion: Encryption & Tokenization for Data Centers, Servers & Applications Security and Privacy on the Encrypted Network Monitoring the Hybrid Cloud Best Practices for AWS Security Securing Enterprise Applications Secure Agile Development The Future of Security Incite 4 U Monty Python and the Security Grail: Reading Todd Bell’s CSO contribution “How to be a successful CISO without a ‘real’ cybersecurity budget” was enlightening. And by enlightening, I mean WTF? This quote made me shudder: “Over the years, I have learned a very important lesson about cybersecurity; most cybersecurity problems can be solved with architecture changes.” Really? Then he maps out said architecture changes, which involve segmenting every valuable server and using jump boxes for physical separation. And he suggests application layer encryption to protect data at rest. The theory behind the architecture works, but very few can actually implement it. I guess this could be done for very specific projects, but across the entire enterprise? Good luck with that. It’s kind of like searching for the Holy Grail. It’s only a flesh wound, I’m sure. Though there is some stuff of value in here. I do agree that fighting the malware game doesn’t make sense and assuming devices are compromised is a good thing. But without a budget, the


Incite 8/26/2015: Epic Weekend

Sometimes I have a weekend when I am just amazed. Amazed at the fun I had. Amazed at the connections I developed. And I’m aware enough to be overcome with gratitude for how fortunate I am. A few weekends ago I had one of those experiences. It was awesome. It started on a Thursday. After a whirlwind trip to the West Coast to help a client out with a short-term situation (I was out there for 18 hours), I grabbed a drink with a friend of a friend. We ended up talking for 5 hours and closing down the bar/restaurant. At one point we had to order some food because they were about to close the kitchen. It’s so cool to make new friends and learn about interesting people with diverse experiences. The following day I got a ton of work done and then took XX1 to the first Falcons pre-season game. Even though it was only a pre-season game it was great to be back in the Georgia Dome. But it was even better to get a few hours with my big girl. She’s almost 15 now and she’ll be driving soon enough (Crap!), so I know she’ll prioritize spending time with her friends in the near term, and then she’ll be off to chase her own windmills. So I make sure to savor every minute I get with her. On Saturday I took the twins to Six Flags. We rode roller coasters. All. Day. 7 rides on 6 different coasters (we did the Superman ride twice). XX2 has always been fearless and willing to ride any coaster at any time. I don’t think I’ve seen her happier than when she was tall enough to ride a big coaster for the first time. What’s new is the Boy. In April I forced him onto a big coaster up in New Jersey. He wasn’t a fan. But something shifted over the summer, and now he’s the first one to run up and get in line. Nothing makes me happier than to hear him screaming out F-bombs as we careen down the first drop. That’s truly my happy place. If that wasn’t enough, I had to be on the West Coast (again) Tuesday of the following week, so I burned some miles and hotel points for a little detour to Denver to catch both Foo Fighters shows. I had a lot of work to do, so the only socializing I did was in the pit at the shows (sorry Denver peeps). But the concerts were incredible, I had good seats, and it was a great experience. So my epic weekend was epic. And best of all, I was very conscious that not a lot of people get to do these kinds of things. I was so appreciative of where I am in life. That I have my health, my kids want to spend time with me, and they enjoy doing the same things I do. The fact that I have a job that affords me the ability to travel and see very cool parts of the world is not lost on me either. I guess when I bust out a favorite saying of mine, “Abundance begins with gratitude,” I’m trying to live that every day. I realize how lucky I am. And I do not take it for granted. Not for one second. –Mike Photo credit: In the pit picture by MSR, taken 8/17/2015 Thanks to everyone who contributed to my Team in Training run to support the battle against blood cancers. We’ve raised almost $6000 so far, which is incredible. I am overwhelmed with gratitude. You can read my story in a recent Incite, and then hopefully contribute (tax-deductible) whatever you can afford. Thank you. The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour and check it out. Your emails, alerts and Twitter timeline will be there when you get back. Securosis Firestarter Have you checked out our new video podcast? 
Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail. Aug 12 – Karma July 13 – Living with the OPM Hack May 26 – We Don’t Know Sh–. You Don’t Know Sh– May 4 – RSAC wrap-up. Same as it ever was. March 31 – Using RSA March 16 – Cyber Cash Cow March 2 – Cyber vs. Terror (yeah, we went there) February 16 – Cyber!!! February 9 – It’s Not My Fault! January 26 – 2015 Trends January 15 – Toddler December 18 – Predicting the Past November 25 – Numbness October 27 – It’s All in the Cloud October 6 – Hulk Bash September 16 – Apple Pay Heavy Research We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too. Building a Threat Intelligence Program Gathering TI Introduction EMV and the Changing Payment Space Mobile Payment Systemic Tokenization The Liability Shift Migration The Basics Introduction Network Security Gateway Evolution Introduction Recently Published Papers Endpoint Defense: Essential Practices Cracking the Confusion: Encryption & Tokenization for Data Centers, Servers & Applications Security and Privacy on the Encrypted Network Monitoring the Hybrid Cloud Best Practices for AWS Security Securing Enterprise Applications Secure Agile Development Trends in Data Centric Security Leveraging Threat Intelligence in Incident Response/Management The Future of Security Incite 4 U Can ‘em: If you want better software quality, fire your QA team – that’s what one of Forrester’s clients told Mike Gualtieri. That tracks to what we have been seeing from other firms, specifically when the QA team is mired in an old way of doing things and won’t work with developers to write test scripts and integrate them into the build process. This is one of the key points we learned earlier this year on


Applied Threat Intelligence [New Paper]

Threat Intelligence remains one of the hottest areas in security. With its promise to help organizations take advantage of information sharing, early results have been encouraging. We have researched Threat Intelligence deeply, focusing on where to get TI and the differences between gathering data from networks, endpoints, and general Internet sources. But we come back to the fact that having data is not enough – not now and not in the future. It is easy to buy data but hard to take full advantage of it. Knowing what attacks may be coming at you doesn’t help if your security operations functions cannot detect the patterns, block the attacks, or use the data to investigate possible compromise. Without those capabilities it’s all just more useless data, and you already have plenty of that. Our Applied Threat Intelligence paper focuses on how to actually use intelligence to solve three common use cases: preventative controls, security monitoring, and incident response. We start with a discussion of what TI is and isn’t, where to get it, and what you need to deal with specific adversaries. Then we dive into use cases. We would like to thank Intel Security for licensing the content in this paper. Our licensees enable us to provide our research at no cost to you, so we should all thank them. As always, we developed this paper using our objective Totally Transparent Research methodology. Visit the Applied Threat Intelligence landing page in our research library, or download the paper directly (PDF).


Incite 8/12/2015: Transitions

The depths of summer heat in Atlanta can only mean one thing: the start of the school year. The first day of school is always the second Monday in August, so after a week of frenetic activity to get the kids ready, and a day’s diversion for some Six Flags roller coaster goodness, the kids started the next leg of their educational journey. XX1 started high school, which is pretty surreal for me. I remember her birth like it was yesterday, but her world has gotten quite a bit bigger. She spent the summer exploring the Western US and is now in a much bigger school. Of course her world will continue to get bigger with each new step. It will expand like a galaxy if she lets it. The twins also had a big change of scene, starting middle school. So they were all fired up about getting lockers for the first time. A big part of preparing them was to make sure XX2’s locker was decorated and that the Boy had an appropriately boyish locker shelf. The pink one we had left over from XX1 was no bueno. Dark purple shelves did the trick. Their first day started a bit bumpy for the twins, with some confusion about the bus schedule – much to our chagrin, when we headed out to meet the bus, it was driving right past. So we loaded them into the car and drove them on the first day. But all’s well that ends well, and after a couple days they are settling in. As they transition from one environment to the next, the critical thing is to move forward understanding that there will be discomfort. It’s not like they have a choice about going to the next school. Georgia kind of mandates that. But as they leave the nest to build their own lives they’ll have choices – lots of them. Stay where they are, or move forward into a new situation, likely with considerable uncertainty. A quote I love is: “In any given moment we have two options: to step forward into growth or to step back into safety.” If you have been reading the Incite for any length of time you know I am always moving forward. It’s natural for me, but might not be for my kids or anyone else. So I will continue ensuring they are aware that during each transition they can decide what to do. There are no absolutes; sometimes they will need to pause, and other times they should jump in. And if they take Dad’s lead they will keep jumping into an ever-expanding reality. –Mike Photo credit: “Flickrverse, Expanding Ever with New Galaxies Forming” originally uploaded by cobalt123 Thanks to everyone who contributed to my Team in Training run to support the battle against blood cancers. We have raised over $5,000 so far, which is incredible. I am overwhelmed with gratitude. You can read my story in a recent Incite, and then hopefully contribute (tax-deductible) whatever you can afford. Thank you. The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour and check it out. Your emails, alerts and Twitter timeline will be there when you get back. Securosis Firestarter Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail. Aug 12 – Karma July 13 – Living with the OPM Hack May 26 – We Don’t Know Sh–. You Don’t Know Sh– May 4 – RSAC wrap-up. Same as it ever was. March 31 – Using RSA March 16 – Cyber Cash Cow March 2 – Cyber vs. Terror (yeah, we went there) February 16 – Cyber!!! February 9 – It’s Not My Fault! 
January 26 – 2015 Trends January 15 – Toddler December 18 – Predicting the Past November 25 – Numbness October 27 – It’s All in the Cloud October 6 – Hulk Bash September 16 – Apple Pay Heavy Research We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too. Building a Threat Intelligence Program Gathering TI Introduction EMV and the Changing Payment Space Mobile Payment Systemic Tokenization The Liability Shift Migration The Basics Introduction Network Security Gateway Evolution Introduction Recently Published Papers Endpoint Defense: Essential Practices Cracking the Confusion: Encryption & Tokenization for Data Centers, Servers & Applications Security and Privacy on the Encrypted Network Monitoring the Hybrid Cloud Best Practices for AWS Security Securing Enterprise Applications Secure Agile Development Trends in Data Centric Security Leveraging Threat Intelligence in Incident Response/Management The Future of Security Incite 4 U Business relevance is still important: Forrester’s Peter Cerrato offers an interesting analogy at ZDNet about not being a CISO dinosaur, and avoiding extinction. Instead try to be an eagle, whose ancestors survived the age of the dinosaurs. How do you do that? By doing a lot of the things I’ve been talking about for, um, 9 years at this point. Be relevant to business? Yup. Get face time with executives and interface with the rank and file? Yup. Plan for failure? Duh. I don’t want to minimize the helpfulness or relevance of this guidance. But I do want to make clear that the only thing new here is the analogy. – MR The Dark Tangent is right: What did I learn at Black Hat? That people can hack cars. Wait, I am pretty sure I already knew this was possible. Maybe it was the new Adobe Flash bugs? Or IoT vulnerabilities? Mobile hacks or browser vulnerabilities? Yeah, same old parade of vulnerable crap. What I really learned is that Jeff Moss is right: Software liability is coming. Few vendors – Microsoft being the notable exception – have really put in the effort to address vulnerable software. Mary Ann Davidson’s insulting rant reinforces that vendors really don’t want to fix vulnerabilities –


Incite 7/29/2015: Finding My Cause

When you have resources you are supposed to give back. That’s what they teach you as a kid, right? There are folks less fortunate than you, so you help them out. I learned those lessons. I dutifully gave to a variety of charities through the years. But I was never passionate about any cause. Not enough to get involved beyond writing a check. I would see friends of mine passionate about whatever cause they were pushing. I figured if they were passionate about it I should give, so I did. Seemed pretty simple to me, but I always had a hard time asking friends and associates to donate to something I wasn’t passionate about. It seemed disingenuous to me. So I didn’t. I guess I’ve always been looking for a cause. But you can’t really look. The cause has to find you. It needs to be something that tugs at the fabric of who you are. It has to be something that elicits an emotional response, which you need to be an effective fundraiser and advocate. It turns out I’ve had my cause for over 10 years – I just didn’t know it until recently. Cancer runs in my family. Mostly on my mother’s side or so I thought. Almost 15 years ago Dad was diagnosed with Stage 0 colon cancer. They were able to handle it with a (relatively) minor surgery because they caught it so early. That was a wake-up call, but soon I got caught up with life, and never got around to getting involved with cancer causes. A few years later Dad was diagnosed with Chronic Lymphocytic Leukemia (CLL). For treatment he’s shied away from western medicine, and gone down his own path of mostly holistic techniques. The leukemia has just been part of our lives ever since, and we accommodate. With a compromised immune system he can’t fly. So we go to him. For big events in the South, he drives down. And I was not exempt myself, having had a close call back in 2007. Thankfully due to family history I had a colonoscopy before I was 40 and the doctor found (and removed) a pre-cancerous polyp that would not have ended well for me if I hadn’t had the test. Yet I still didn’t make the connection. All these clues, and I was still spreading my charity among a number of different causes, none of which I really cared about. Then earlier this year another close friend was diagnosed with lymphoma. They caught it early and the prognosis is good. With all the work I’ve done over the past few years on being aware and mindful in my life, I finally got it. I found my cause – blood cancers. I’ll raise money and focus my efforts on finding a cure. It turns out the Leukemia and Lymphoma Society has a great program called Team in Training to raise money for blood cancer research by supporting athletes in endurance races. I’ve been running for about 18 months now and already have two half marathons under my belt. This is perfect. Running and raising money! I signed up to run the Savannah Half Marathon in November as part of the TNT team. I started my training plan this week, so now is as good a time as any to gear up my fundraising efforts. I am shooting to run under 2:20, which would be a personal record.   Given that this is my cause, I have no issue asking you to help out. It doesn’t matter how much you contribute, but if you’ve been fortunate (as I have) please give a little bit to help make sure this important research can be funded and this terrible disease can be eradicated in our lifetime. Dad follows the research very closely as you can imagine, and he’s convinced they are on the cusp of a major breakthrough. 
Here is the link to help me raise money to defeat blood cancers: Mike Rothman’s TNT Fund Raising Page. I keep talking about my cause, but this isn’t about me. This is about all the people suffering from cancer and specifically blood cancers. I’m raising money for all the people who lost loved ones or had to put their lives on hold as people they care about fight. Again, if you can spare a few bucks, please click the link above and contribute. –Mike The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour and check it out. Your emails, alerts and Twitter timeline will be there when you get back. Securosis Firestarter Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and.. hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail. July 13 – Living with the OPM Hack May 26 – We Don’t Know Sh–. You Don’t Know Sh– May 4 – RSAC wrap-up. Same as it ever was. March 31 – Using RSA March 16 – Cyber Cash Cow March 2 – Cyber vs. Terror (yeah, we went there) February 16 – Cyber!!! February 9 – It’s Not My Fault! January 26 – 2015 Trends January 15 – Toddler December 18 – Predicting the Past November 25 – Numbness October 27 – It’s All in the Cloud October 6 – Hulk Bash September 16 – Apple Pay Heavy Research We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too. Building a Threat Intelligence Program Gathering TI Introduction EMV and the Changing Payment Space Mobile Payment Systemic Tokenization The Liability Shift Migration The Basics Introduction Network Security Gateway Evolution Introduction Recently Published Papers Endpoint Defense: Essential Practices Cracking the Confusion: Encryption & Tokenization for Data Centers, Servers & Applications Security and Privacy on the


Building a Threat Intelligence Program [New Series]

Security practitioners have been falling behind their adversaries, who launch new attacks using new techniques daily. Furthermore, defenders remain hindered by the broken negative security model of looking for attacks they have never seen before (well done, compliance mandates), and so they consistently miss these attacks. If your organization hasn’t seen the attack or updated your controls and monitors to look for these new patterns… oh, well. Threat Intelligence has made a significant difference in how organizations focus their resources. Our Applied Threat Intelligence paper highlighted how organizations can benefit from the misfortune of others and leverage this external information in use cases such as security monitoring/advanced detection, incident response, and even within some active controls to block malicious activity. These tactical uses certainly help advance security, but we ended Applied Threat Intelligence with a key point: the industry needs to move past tactical TI use cases. The typical scenario goes something like this: Get hit with attack. Ask TI vendor whether they knew about attack before you did. Buy data and pump into monitors/controls. Repeat. But that’s not how we roll. Our philosophy drives a programmatic approach to security. So it’s time to advance the use of threat intelligence into a broader and more structured TI program to ensure systematic, consistent, and repeatable value. We believe this Building a Threat Intelligence Program report can act as the map to build this program and leverage threat intelligence within your security program. That’s what this new series is all about: turning tactical use cases into a strategic TI capability. We’d like to thank our potential licensee on this project, BrightPoint Security, who supports our Totally Transparent Research methodology for conducting and publishing research. As always we’ll post everything to the blog first, and take feedback from folks who know more about this stuff than we do (yes, you).

The Value of TI

We have published a lot of research on TI, but let’s revisit the basics. What do we even mean when we say “benefiting from the misfortune of others”? Odds are that someone else will be hit by any attack before you. By leveraging their experience, you can see attacks without being directly attacked first, learning from higher profile targets. Those targets figure out how they were attacked and how to isolate and remediate the attack. With that information you can search your environment to see if that attack has already been used against you, and cut detection time. Cool, huh? If you haven’t seen the malicious activity yet, it’s likely just a matter of time; so you can start looking for those indicators within your active controls and security monitors. Let’s briefly revisit the use cases we have highlighted for Threat Intelligence:

Active Controls: In this use case, threat intelligence gives you the information to block malicious activity using your active controls. Of course since you are actually blocking traffic, you’ll want to be careful about what you block versus what you merely alert on, but some activities are clearly malicious and should be stopped.

Security Monitoring: An Achilles’ Heel of security monitoring is the need to know what you are looking for. TI balances the equation a bit by expanding your view. You use the indicators found by other organizations to look for malicious activity within your environment, even if you’ve never seen it.

Incident Response: The last primary use case is streamlining incident response with TI. Once adversary activity is detected within your environment, you have a lot of ground to cover to find the root cause of the attack and contain it quickly. TI provides clues as to who is attacking you, their motives, and their tactics – enabling the organization to focus its response.
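The “search your environment” point above is easy to demonstrate. Here is a minimal sketch that checks historical logs against a list of externally reported indicators; the file names, log layout (IP in the third column, domain in the fourth), and indicator list are illustrative assumptions, not a reference to any specific product or feed.

```python
# Minimal sketch: retrospective hunt for externally reported indicators
# (bad IPs and domains) in historical logs. The log format and file
# names are illustrative assumptions.

def load_indicators(path):
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

def hunt(log_path, indicators):
    hits = []
    with open(log_path) as f:
        for lineno, line in enumerate(f, 1):
            fields = line.split()
            if len(fields) >= 4 and (fields[2] in indicators or fields[3] in indicators):
                hits.append((lineno, line.rstrip()))
    return hits

# indicators = load_indicators("external_indicators.txt")
# for lineno, entry in hunt("proxy.log", indicators):
#     print(f"possible prior compromise at line {lineno}: {entry}")
```

The point is not the code; it is that indicators learned from someone else’s incident become a question you can ask of your own history, which is what cuts detection time.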
The TI Team

As mentioned above, TI isn’t new. Security vendors have been using dynamic data within their own products and services for a long time. What’s different is treating the data as something separate from the product or service. But raw data doesn’t help detect adversaries or block attacks, so mature security organizations have been staffing up threat intelligence groups, tasking them with providing context for which of the countless threats out there actually need to be dealt with now, and what needs to be done to prevent, detect, and investigate potential attacks. These internal TI organizations consume external data to supplement internal collection and research efforts, and their willingness to pay for it has created a new market for security data.

The TI Program

Organizations which build their own TI capability eventually need a repeatable process to collect, analyze, and apply the information. That’s what this series is all about. We’ll outline the structure of the program here, and dig into each aspect of the process in subsequent posts.

Gathering Threat Intelligence: This step involves focusing your efforts on reliably finding intelligence sources that can help you identify your adversaries, as well as the most useful specific data types such as malware indicators, compromised devices, IP reputation, command and control indicators, etc. Then you procure the data you need and integrate it into a system/platform to use TI. A programmatic process involves identifying new and interesting data sources, constantly tuning the use of TI within your controls, and evaluating sources based on effectiveness and value.

Using TI: Once you have aggregated the TI you can put it into action. The difference when structuring activity within a program is the policies and rules of engagement that govern how and when you use TI. Tactically you can be a little less structured about how data is used, but when evolving to a program this structure becomes a necessity.

Marketing the Program: When performing a tactical threat intelligence initiative you focus on solving a specific problem and then moving on to the next. Broadening the use of TI requires specific and ongoing evaluation of effectiveness and value. You’ll need to define externally quantifiable success for the program, gather data to substantiate results, and communicate those results – just like any other business function.

Sharing Intelligence: If there is one thing that tends to be overlooked when focusing on how the intelligence can help you, it is how sharing


EMV and the Changing Payment Space: Migration

Moving to EMV compliant terminals is not a plug-and-play endeavor. You can’t simply plug them in, turn them on, and expect everything to work. Changes are needed to the software for supporting point-of-sale systems (cash registers). You will likely need to provision keys to devices; if you manage keys internally you will also need to make sure everything is safely stored in an HSM. There are often required changes to back-office software to sync up with the POS changes. IT staff typically need to be trained on the new equipment. Merchants who use payment processors or gateways that manage their terminals for them face less disruption, but it’s still a lot of work and rollouts can take months. Much of the merchant pushback we heard was due to the cost, time, and complexity of this conversion. Merchants see basically the old payment system they have today, with one significant advantage: cards can be validated at swipe. But merchants have not been liable for counterfeit cards, so they have had little motivation to embrace this cumbersome change.

PINs vs. Signatures

Another issue we heard was the lack of a requirement for “Chip and PIN”, meaning that in conjunction with the chipped card, users must punch in their PIN after swiping their card. This verifies that the person using the card owns it. But US banks generally do not use PINs, even for chipped cards like the ones I carry. Instead in the US signatures are typically required for purchases over a certain dollar amount, which has proven to be a poor security control. PINs could be required in the future, but the issuers have not published any such plans.

Point to Point Encryption

The EMV terminal specification does not mandate the use of point-to-point encryption (P2PE). That means that, as before, PAN data is transferred in the clear, along with any other data being passed. For years the security community has been asking merchants to encrypt the data from card swipe terminals to ensure it is not sniffed from the merchant network or elsewhere as the PAN is passed upstream for payment processing. Failure to activate this basic technology, which is built into the terminals, outrages security practitioners and creates a strong impression that merchants are cavalier with sensitive data; recent breaches have not improved this perception. But of course it is a bit more complicated. Many merchants need data from terminals for fraud and risk analytics. Others use the data to seed back-office customer analytics for competitive advantage. Still others do not want to be tied to a specific payment provider, such as by provisioning gateways or provider payment keys. Or the answer may be all of the above, but we do not anticipate general adoption of P2PE any time soon.
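To show what encrypting at the point of capture means in practice, here is a minimal conceptual sketch in which the PAN is encrypted at the terminal so only ciphertext ever crosses the merchant network. This is illustrative only: real P2PE deployments rely on PCI-validated solutions with hardware key management and per-transaction key derivation (commonly DUKPT), not a static key held in software as shown here, and the field names are our own.

```python
# Conceptual sketch of point-to-point encryption: the PAN is encrypted at
# the point of capture so only ciphertext crosses the merchant network.
# Real P2PE uses validated hardware and per-transaction key derivation
# (e.g. DUKPT); the static key and field names here are illustrative only.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

terminal_key = AESGCM.generate_key(bit_length=256)  # would live in the terminal's secure hardware

def encrypt_pan(pan: str, key: bytes) -> dict:
    aesgcm = AESGCM(key)
    nonce = os.urandom(12)  # unique per transaction
    ciphertext = aesgcm.encrypt(nonce, pan.encode(), b"txn-v1")
    # Only the ciphertext and nonce leave the terminal; the clear PAN never does.
    return {"nonce": nonce.hex(), "pan_ct": ciphertext.hex()}

print(encrypt_pan("4111111111111111", terminal_key))  # well-known test PAN
```

The tradeoff merchants cite is visible right in the sketch: once the PAN is ciphertext on the wire, back-office analytics and fraud systems that expected clear card data need the decryption point, or a token, instead.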
Why Move?

The key question behind this series is: why should merchants move to EMV terminals? During our conversations each firm mentioned a set of goals they’d like to see, and a beef with some other party in the payment ecosystem. The card brands strongly desire any changes that will make it easier for customers to use their credit cards and grease the skids of commerce, and are annoyed at merchants standing in the way of technical progress. The merchants are generally pissed at the fees they pay per transaction, especially for the level of service they receive, and want the whole security and compliance mess to go away because it’s not part of their core business. These two factors are why most merchants wanted a direct Merchant-Customer Exchange (MCX) based system that did away with credit cards and allowed merchants to have direct connections with customer bank accounts. The acquirers were angry that they have been forced to shoulder a lot of the fraud burden, and want to maintain their relationships with consumers rather than abdicating them to merchants. And so on. Security was never a key issue in any of these discussions. And nobody is talking about point-to-point encryption as part of the EMV transition, so it will not really protect the PAN. Additionally, the EMV transition will not help with one of the fastest growing types of fraud: Card Not Present transactions. And remember that PINs are not required – merely recommended, sometimes. For all these reasons it does not appear that security is driving the EMV shift.

This section will be a bit of a spoiler for our conclusion, but I think you’ll see from the upcoming posts where this is all heading. There are several important points to stress here. First, EMV terminal adoption is not mandatory. Merchants are not being forced to update. But the days of “nobody wanting EMV” are behind us – especially if you take a broad view of what the EMV specifications allow. Citing the lack of EMV cards issued to customers is a red herring. The vast majority of card holders have smart phones today, which can be fully capable “smart cards”, and many customers will happily use them to replace plastic cards. We see it overseas, especially in Africa, where some countries process around 50% of payments via mobile devices. Starbucks has shown definitively that consumers will use mobile phones for payment, and also do other things like order via an app. Customers don’t want better cards – they want better experiences, and the card brands seem to get this. Security will be better, and that is one reason to move. The liability waiver is an added benefit as well. But both are secondary. The payment technology change may look simple, but the real transition underway is from magnetic plastic cards to smartphones, and it’s akin to moving from horses to automobiles. I could say this is all about mobile payments, but that would be a gross oversimplification. It is more about what mobile devices – powerful pocket computers – can and will do to improve the entire sales experience. New technology enables complex affinity and pricing plans, facilitates the consumer experience, provides geolocation, and offers an opportunity to bring the underlying system into the modern age (with modern security). If


Incite 7/15/15 — On Top of the Worlds

I discussed my love of exploring in the last Incite, and I have been fortunate to have time this summer to actually explore a bit. The first exploration was a family vacation to NYC. Well, kind of NYC. My Dad has a place on the Jersey shore, so we headed up there for a couple days and took day trips to New York City to do the tourist thing. For a guy who grew up in the NY metro area, it’s a bit weird that I had never been to the Statue of Liberty. The twins studied the history of the Statue and Ellis Island this year in school, so I figured it was time. That was the first day trip, and we were fortunate to be accompanied by Dad and his wife, who spent a bunch of time in the archives trying to find our relatives who came to the US in the early 1900s. We got to tour the base of Lady Liberty’s pedestal, but I wasn’t on the ball enough to get tickets to climb up to the crown. There is always next time.   A few days later we went to the new World Trade Center. I hadn’t been to the new building yet and hadn’t seen the 9/11 memorial. The memorial was very well done, a powerful reminder of the resilience of NYC and its people. I made it a point to find the name of a fraternity brother who passed away in the attacks, and it gave me an opportunity to personalize the story for the kids. Then we headed up to the WTC observation deck. That really did put us on top of the world. It was a clear day and we could see for miles and miles and miles. The elevators were awesome, showing the skyline from 1850 to the present day as we rose 104 stories. It was an incredible effect, and the rest of the observation deck was well done. I highly recommend it for visitors to NY (and locals playing hooky for a day). Then the kids went off to camp and I hit the road again. Rich was kind enough to invite me to spend the July 4th weekend in Boulder, where he was spending a few weeks over the summer with family. We ran a 4K race on July 4th, and drank what seemed to be our weight in beer (Avery Brewing FTW) afterwards. It was hot and I burned a lot of calories running, so the beer was OK for my waistline. That’s my story and I’m sticking to it. The next day Rich took me on a ‘hike’. I had no idea what he meant until it was too late to turn back. We did a 2,600’ elevation change (or something like that) and summited Bear Peak. We ended up hiking about 8.5 miles in a bit over 5 hours. At one point I told Rich I was good, about 150’ from the summit (facing a challenging climb). He let me know I wasn’t good, and I needed to keep going. I’m glad he did because it was both awesome and inspiring to get to the top.   I’ve never really been the outdoorsy type, so this was way outside my comfort zone. But I pushed through. I got to the top, and as Rich told me would happen before the hike, everything became crystal clear. It was so peaceful. The climb made me appreciate how far I’ve come. I had a similar feeling when I crossed the starting line during my last half marathon. I reflected on how unlikely it was that I would be right there, right then. Unlikely according to both who I thought I was and what I thought I could achieve. It turns out those limitations were in my own mind. Of my own making. And not real. So now I have been to the top of two different worlds, exploring and getting there via totally different paths. Those experiences provided totally different perspectives. All I know right now is that I don’t know. I don’t know what the future holds. 
I don’t know how many more hills I’ll climb or races I’ll run or businesses I’ll start or places I’ll live, or anything for that matter. But I do know it’s going to be very exciting and cool to find out. –Mike Photo credit: “One World Trade Center Observatory (5)” originally uploaded by Kai Brinker, and a Mike selfie on top of Bear Peak. The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the 2014 conference. You can check it out on YouTube. Take an hour and check it out. Your emails, alerts and Twitter timeline will be there when you get back. Securosis Firestarter Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail. July 13 – Living with the OPM Hack May 26 – We Don’t Know Sh–. You Don’t Know Sh– May 4 – RSAC wrap-up. Same as it ever was. March 31 – Using RSA March 16 – Cyber Cash Cow March 2 – Cyber vs. Terror (yeah, we went there) February 16 – Cyber!!! February 9 – It’s Not My Fault! January 26 – 2015 Trends January 15 – Toddler December 18 – Predicting the Past November 25 – Numbness October 27 – It’s All in the Cloud October 6 – Hulk Bash September 16 – Apple Pay August 18 – You Can’t Handle the Gartner Heavy Research We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too. Threat Detection Evolution Quick Wins Analysis Data Collection Why Evolve? Network-based Threat Detection Operationalizing Detection Prioritizing with Context Looking for Indicators Overcoming the Limits of Prevention Network Security Gateway Evolution Introduction Recently


Threat Detection Evolution: Quick Wins

As we wrap up this series on Threat Detection Evolution, we’ll work through a quick scenario to illustrate how these concepts come together to improve your ability to detect attacks. Let’s assume you work for a mid-sized super-regional retailer with 75 stores, 6 distribution centers, and an HQ. Your situation may be a bit different, especially if you work in a massive enterprise, but the general concepts are the same.

Each of your locations is connected via an Internet-based VPN that works well. You’ve been gradually upgrading the perimeter network at HQ and within the distribution centers by implementing NGFW technology and turning on IPS on the devices. Each store has a low-end security gateway that provides separate networks for internal systems (requiring domain authentication) and customer Internet access. There are minimal IT staff and capabilities outside HQ. A technology lead is identified for each location, but they can barely tell you which lights are blinking on the boxes, so the entire environment is built to be remotely managed.

In terms of other controls, the big project over the past year has been deploying whitelisting on all fixed-function devices in distribution centers and stores, including PoS systems and warehouse computers. This was a major undertaking to tune the environment so whitelisting did not break systems, but after a period of bumpiness the technology is working well. The high-profile retail attacks of 2014 freed up budget for the whitelisting project, but aside from that your security program is right out of the PCI-DSS playbook: simple logging, vulnerability scanning, IPS, and AV deployed to pass PCI assessment, but not much more.

Given the sheer number of breaches reported by retailer after retailer, you know that the fact you haven’t suffered a successful compromise is mostly good luck. Getting ahead of PoS attacks with whitelisting has helped, but you’ve been doing this too long to assume you are secure. You know the simple logging and vulnerability scanning you are doing can easily be evaded, so you decide it’s time to think more broadly about threat detection. But with so many different technologies and options, how do you get started? What do you do first?

Getting Started

The first step is always to leverage what you already have. The good news is that you’ve been logging and vulnerability scanning for years. The data isn’t particularly actionable, but it’s there. So you can start by aggregating it into a common place. Fortunately you don’t need to spend a ton of money to aggregate your security data. Maybe it’s a SIEM, or possibly an offering that aggregates your security data in the cloud. Either way you’ll start by putting all your security data in one place, getting rid of duplicate data, and normalizing your data sources, so you can start doing some analysis on a common dataset.

Once you have your data in one place, you can start setting up alerts to detect common attack patterns in your data. The good news is that all the aggregation technologies (SIEM and cloud-based monitoring) offer options. Some capabilities are more sophisticated than others, but you’ll be able to get started with out-of-the-box capabilities. Even open source tools offer alerting rules to get you started. Additionally, security monitoring vendors invest significantly in research to define and optimize the rules that ship with their products.

One of the most straightforward attack patterns to look for involves privilege escalation after obvious reconnaissance. Yes, this is simple detection, but it illustrates the concept. Now that you have server and IPS logs in one place, you can look for increased network port scans (usually indicating reconnaissance) and then privilege escalation on a server on one of the networks being searched. This is a typical rule/policy that ships with a SIEM or security monitoring service. But you could just as easily build this into your system to get started.
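Here is a minimal sketch of that recon-then-escalation correlation, assuming events have already been normalized into a common record format. The field names, the 10-scan threshold, and the 24-hour window are illustrative choices, not defaults from any particular SIEM.

```python
# Minimal sketch of the recon-then-privilege-escalation pattern described
# above. Events are assumed to be normalized dicts with 'time' (epoch
# seconds), 'type', 'src', and 'network' fields; the scan threshold and
# correlation window are illustrative.
from collections import defaultdict

SCAN_THRESHOLD = 10
WINDOW_SECONDS = 24 * 3600

def correlate(events):
    scans = defaultdict(list)  # network -> timestamps of observed port scans
    alerts = []
    for e in sorted(events, key=lambda e: e["time"]):
        if e["type"] == "port_scan":
            scans[e["network"]].append(e["time"])
        elif e["type"] == "priv_escalation":
            recent = [t for t in scans[e["network"]] if e["time"] - t <= WINDOW_SECONDS]
            if len(recent) >= SCAN_THRESHOLD:
                alerts.append(f"Privilege escalation on {e['src']} after "
                              f"{len(recent)} scans of network {e['network']}")
    return alerts

# alerts = correlate(normalized_events)  # normalized_events comes from your aggregation step
```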
Odds are that once you start looking for these patterns you’ll find something. Let’s assume you don’t, because you’ve done a good job so far on security fundamentals. After working through your first group of alerts, you can next look for assets in your environment which you don’t know about. That entails either active or passive discovery of devices on the network. Start by scanning your entire address space to see what’s there. You probably shouldn’t do that during business hours, but a habit of checking consistently – perhaps weekly or monthly – is helpful. In between active scans you can also passively listen for network devices sending traffic, by either looking at network flow records or deploying a passive scanning capability specifically to look for new devices.

Let’s say you discover your development shop has been testing out private cloud technologies to make better use of hardware in the data center. The only reason you noticed was passive discovery of a new set of devices communicating with back-end datastores. Armed with this information, you can meet with that business leader to make sure they took proper precautions to securely deploy their systems.

Between alerts generated from new rules and dealing with the new technology initiative you didn’t know about, you feel pretty good about your new threat detection capability. But you’re still looking for stuff you already know you should look for. What really scares you is what you don’t know to look for.

More Advanced Detection

To look for activity you don’t know about, you first need to define normal for your environment. Traffic that is not ‘normal’ provides a good indicator of potential attack. Activity outliers are a good place to start, because network traffic and transaction flows tend to be reasonably stable in most environments. So you start with anomaly detection by spending a week or so training your detection system, setting baselines for network traffic and system activity. Once you start getting alerts based on anomalies, you will spend a bit of time refining thresholds and decreasing the noise you see from alerts. This tuning time may be irritating, but it’s a necessary evil to optimize
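As a deliberately simplified illustration of the baselining step described above, here is a sketch that learns a per-host traffic baseline over a training period and then flags outliers. The hourly byte-count input format and the three standard deviation threshold are assumptions for illustration; in practice the threshold is exactly the knob you tune to cut alert noise.

```python
# Minimal sketch of baseline-then-flag anomaly detection for network
# traffic volume. Input is assumed to be per-host byte counts per hour;
# the 3-standard-deviation threshold is an illustrative starting point.
import statistics

def build_baseline(training_samples):
    """training_samples: dict of host -> list of hourly byte counts."""
    return {host: (statistics.mean(vals), statistics.pstdev(vals))
            for host, vals in training_samples.items()}

def flag_anomalies(current, baseline, threshold=3.0):
    anomalies = []
    for host, observed in current.items():
        mean, stdev = baseline.get(host, (0.0, 0.0))
        if stdev and abs(observed - mean) > threshold * stdev:
            anomalies.append((host, observed, mean))
    return anomalies

# baseline = build_baseline({"pos-17": [1.2e6, 1.1e6, 1.3e6, 1.2e6]})
# print(flag_anomalies({"pos-17": 9.8e6}, baseline))  # pos-17 flagged as an outlier
```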


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.