Mindfulness Works

Back in November I learned I would be giving a talk on Neuro-Hacking at RSA with Jennifer Minella. We will be discussing how mindfulness practices can favorably impact the way you view things, basically allowing you to hack your brain. But I am pretty sure you can't sell my synapses on an Eastern European carder forum.

Over the last few months Jen and I have been doing a lot of research to substantiate the personal experience we have both had with mindfulness practices. We know security folks tend to be tough customers, reasonably skeptical about pretty much everything unless there is data to back up a position. The good news is that there is plenty of data about how mindfulness can impact stress, job performance, and work/life balance. And big companies are jumping on board – Aetna is the latest to provide a series of Evidence-based Mind-Body Stress Management Programs based on mindfulness meditation and yoga. Meditation and yoga are becoming big business (yoga pants FTW), so it is logical for big companies to jump on the bandwagon. The difference here, and the reason I don't believe this is a fad, is the data. That release references a recent study in the Journal of Occupational Health Psychology. That's got to be legitimate, right?

Participants in the mind-body stress reduction treatment groups (mindfulness and Viniyoga) showed significant improvements in perceived stress, with 36 and 33 percent decreases in stress levels respectively, as compared to an 18 percent reduction for the control group, as measured with the Perceived Stress Scale. Participants in the mind-body interventions also saw significant improvements in various heart rate measurements, suggesting that their bodies were better able to manage stress.

The focus of our talk will be solutions and demystifying some of these practices. It's not about how security people are grumpy. We all know that. We will focus on how to start and develop a sustainable practice. Mindfulness doesn't need to be hard or take a long time. In as little as 5-15 minutes a day you can dramatically improve your ability to deal with life. Seriously. But don't take our word for it. Show up for the session and draw your own conclusions.

We just recorded a podcast for the RSA folks, and I'll link to it later this week, once it's available. Jen and I will also be posting more mindfulness material on our respective blogs in the lead-up to the conference (much to Rich's chagrin).

Photo credit: "6 Instant Ways To Stress Less And Smile More – Flip Your Perspective" originally uploaded by UrbaneWomenMag


Eliminate Surprises with Security Assurance and Testing [New Paper]

We have always been fans of making sure applications and infrastructure are ready for prime time before letting them loose on the world. And it's important not to limit yourself to basic scanner functions – your adversaries are unlikely to limit their tactics to things you can find with an open source scanner. Security Assurance and Testing enables organizations to limit the unpleasant surprises that crop up when launching new stuff or upgrading infrastructure.

Adversaries continue to innovate and improve their tactics at an alarming rate. They have clear missions, typically involving exfiltrating critical information or impacting the availability of your technology resources. They have the patience and resources to achieve their missions by any means necessary. And it's your job to make sure deployment of new IT resources doesn't introduce unnecessary risk.

In our Eliminating Surprises with Security Assurance and Testing paper, we talk about the need for a comprehensive process to identify issues – before hackers do it for you. We list a number of critical tactics and programs to test in a consistent and repeatable manner, and finally go through a couple use cases to show how the process would work at both the infrastructure and application levels.

To avoid surprises we suggest a security assurance and testing process to ensure the environment is ready to cope with real traffic and real attacks. This goes well beyond what development organizations typically do to 'test' their applications, or ops does to 'test' their stacks. It is also different from a risk assessment or a manual penetration test. Those point-in-time assessments aren't necessarily comprehensive. The testers may find a bunch of issues, but they will miss some. So remediation decisions get made with incomplete information about the true attack surface of your infrastructure and applications.

We would like to thank our friends at Ixia for licensing this content. Without the support of our clients, our open research model wouldn't be possible.

Direct Download (PDF): Eliminate Surprises with Security Assurance and Testing
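To make the "real traffic" idea a bit more concrete, here is a minimal sketch of the kind of scalability probe such a testing process might include. It is illustrative only – the target URL, concurrency levels, and request counts are hypothetical, and a real SA&T program would replay captured production traffic with purpose-built tools rather than synthetic GETs.

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# Hypothetical target: a staging copy of the application under test.
# Never point load tests at production without planning and approval.
TARGET = "http://staging.example.com/health"

def probe(_):
    """Issue one request and record latency, or the failure mode."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(TARGET, timeout=10) as resp:
            return resp.status, time.monotonic() - start
    except Exception as exc:
        return type(exc).__name__, time.monotonic() - start

# Ramp concurrency to find the point where latency or errors spike.
for workers in (10, 50, 100):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(probe, range(workers * 10)))
    failures = sum(1 for status, _ in results if not isinstance(status, int))
    worst = max(latency for _, latency in results)
    print(f"{workers} workers: {failures} failures, worst latency {worst:.2f}s")
```

Ramping concurrency until latency or error rates spike is the whole point: you want to find the breaking point in a test window, not in production.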


Reducing Attack Surface with Application Control: Use Cases and Selection Criteria

In the first post in our Application Control series we discussed why it is so hard to protect endpoints, and some of the emerging alternative technologies that promise to help us do better. Mostly because it is probably impossible to do a worse job of protecting endpoints, right? We described Application Control (also known as Application Whitelisting), one of these alternatives, while being candid about both the perception and the reality of this technology after years of use. Our conclusion was that Application Control makes a lot of sense in a variety of use cases, and can work in more general situations if the organization is willing to make some tradeoffs. This post describes the "good fit" use cases and mentions some of the features and functions that can make a huge difference to security and usability.

Use Cases

Given the breadth of ways computing devices are used in a typical enterprise, trying to apply a generic set of security controls to every device doesn't make much sense. So first you spend some time profiling the main usage models of these devices and defining some standard 'profiles', for which you can then design appropriate defenses. There are quite a few attributes you can use to define these use cases, but here are the main ones we usually see:

• Operating System: You protect Windows devices differently than Macs, and differently than Linux servers, because each has a different security model and different available controls. When deciding how to protect a device, the operating system is a fundamental factor.
• Usage Model: Next look at how the device is used. Is it a desktop, kiosk, server, laptop, or mobile device? We protect personal desktops differently than kiosks, even if the hardware and operating system are the same.
• Application Variability: Consider what kind of applications run on the device, as well as how often they change and are updated.
• Geographic Distribution: Where is the device located? Do you have dedicated IT and/or security staff there? What is the culture, and do you have the ability to monitor and lock down the device? Some countries don't allow device monitoring, and some security controls require permission from government organizations, so this must be a consideration as well.
• Access to Sensitive Data: Do the users of these devices have access to sensitive and/or protected data? If so you may need to protect them differently. Likewise, a public device in an open area, with no access to corporate networks, may be able to get by with much looser security controls.

Using these attributes you should be able to define a handful (or two) of use cases, which you can use to determine the most appropriate means of protecting each device, trading off security against usability. Let's list a few of the key use cases where application control fits well.

OS Lockdown

When an operating system is at the end of its life and no longer receiving security updates, it is a sitting duck. Attackers have free rein to keep finding exploitable defects with no fear of patches ruining their plans. Windows XP security updates officially end in April 2014 – after that, organizations still using XP are out of luck. (Like luck has anything to do with it...) We know you wonder why on Earth any organization serious about security – or even not so serious – would still use XP. It is a legitimate question, with reasonable answers. For one, some legacy applications still only run on XP. It may not be worth the investment – or even possible, depending on legal/ownership issues – to migrate them to a modern operating system, so on XP they stay. A similar situation arises with compliance requirements to have applications qualified by a government agency. We see this a lot in healthcare, where the OS cannot even be patched without going through a lengthy and painful qualification process. That doesn't happen, so on XP it stays. Despite Microsoft's best efforts, XP isn't going away any time soon. Unfortunately that means XP will remain a common target for attackers, and organizations will have little choice but to protect vulnerable devices somehow. Locking them down may be one of the few viable options. In this situation using application control in default-deny mode, allowing only authorized applications to run, works well.

Fixed Function Devices

Another use case we see frequently for application control is fixed function devices, such as kiosks running embedded operating systems. Think of an ATM or payment station, where you don't see the underlying operating system. These devices only run a select few applications, built specifically for the device. In this scenario there is no reason for any software besides the authorized applications to run. Customers shouldn't be browsing the Internet on an ATM. So application control works well to lock down kiosks. Similarly, some desktop computers in places like call centers and factory floors run only small, very stable sets of applications. Locking them down to run only those applications protects against both malware and employees loading unauthorized software or stealing data. In both this use case and OS lockdown you will get little to no pushback from employees about their inability to load software. Nothing in their job description says they should be loading software or accessing anything but the applications they need to do their jobs. In these scenarios application control is an excellent fit.

Servers

Another clear use case for application control is servers. Servers tend to be dedicated to a handful of functions, so they can be locked down to those specific applications. Servers don't call the Help Desk to request access to iTunes, and admins can be expected to understand and navigate the validation process when they have a legitimate need for new software. Locking down servers can work very well – especially appealing because servers, as the repositories of most sensitive data, are the ultimate target of most attacks.

General Purpose Devices

There has always been a desire to lock down general-purpose devices, which are among the most frequently compromised.
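As a thought experiment, the default-deny decision at the heart of all these use cases boils down to something like the following sketch. It is purely illustrative – the hash list is a stand-in (its single entry is just the SHA-256 of an empty file), and a real product makes this decision in a kernel driver that intercepts process creation, backed by a centrally managed, tamper-resistant policy.

```python
import hashlib
import sys

# Stand-in allowlist: SHA-256 hashes of the binaries this device is
# authorized to run. The single entry below is the hash of an empty
# file, so every real binary will be denied by default.
AUTHORIZED_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def file_sha256(path: str) -> str:
    """Hash the executable image before deciding whether it may run."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def may_execute(path: str) -> bool:
    """Default deny: only binaries on the allowlist are permitted."""
    return file_sha256(path) in AUTHORIZED_HASHES

if __name__ == "__main__":
    for target in sys.argv[1:]:
        verdict = "ALLOW" if may_execute(target) else "DENY"
        print(f"{verdict} {target}")
```

The interesting engineering is everywhere else: protecting the allowlist, handling patches and updaters that legitimately change file hashes, and deciding what happens when a user needs something that isn't on the list. Those tradeoffs are exactly why the use cases above – where the application set barely changes – are the natural fit.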


Incite 1/15/2014: Declutter

As I discussed last week, the beginning of the year is a time for ReNewal and a look at what you will do over the next 12 months. Part of that renewal process should be clearing out the old so the new has room to grow. It's kind of like forest fires. The old dead stuff needs to burn down so the new can emerge. I am happy to say the Boss is on board with this concept of renewal – she has been on a rampage, reducing the clutter around the house.

The fact is that we accumulate a lot of crap over the years, and at some point we kind of get overrun by stuff. Having been in our house almost 10 years, since the twins were infants, we have stuff everywhere. It's just the way it happens. Your stuff expands to take up all available space. So we still have stuff from when the kids were small. Like FeltKids and lots of other games and toys that haven't been touched in years. It's time for that stuff to go. Because we have a niece a few years younger than our twins, and a set of nephews (yes, twins run rampant in our shop) who just turned 3, we have been able to get rid of some of it. There is nothing more gratifying than showing up with a huge box of action figures that had been gathering dust in our basement, and seeing the little guys' eyes light up. When we delivered our care package over Thanksgiving, they played with the toys for hours.

The benefit of decluttering is twofold. First, it gets the stuff out of our house and clears room for the next wave of stuff tweens need. I don't quite know what that will be, because iOS games don't seem to take up much room. But I'm sure they will accumulate something now that we have more room. And it's an ongoing process. If we can get through this stuff over the next couple months, that will be awesome. As I said, you accumulate a bunch of crap over 10 years. The other benefit is the joy these things bring to others. We don't use this stuff any more. It's just sitting around. But another family without our good fortune could use it. If these things bring half the joy and satisfaction they brought our kids, that's a huge win.

And it's not just stuff that you have. XX1 collected over 1,000 books for her Mitzvah project to donate to Sheltering Books, a local charity that provides books to homeless people living in shelters. She and I loaded up the van with boxes and boxes of books on Sunday, and when we delivered them there was great satisfaction in knowing that these books, which folks kindly donated to declutter their own homes, would go to good use with people in need. And the books were out of my garage. So it was truly a win-win-win. Karma points and a decluttered garage. I'll take it.

–Mike

Photo credit: "home-office-reorganization-before-after" originally uploaded by Melanie Edwards

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with all our content in its unabridged glory. And you can get all our research papers too.

• Reducing Attack Surface with Application Control: The Double-Edged Sword
• Security Management 2.5: You Buy a New SIEM Yet? – Negotiation; Selection Process; The Decision Process; Evaluating the Incumbent; Revisiting Requirements; Platform Evolution; Changing Needs; Introduction
• Advanced Endpoint and Server Protection: Assessment; Introduction

Newly Published Papers

• What CISOs Need to Know about Cloud Computing
• Defending Against Application Denial of Service
• Security Awareness Training Evolution
• Firewall Management Essentials
• Continuous Security Monitoring
• API Gateways
• Threat Intelligence for Ecosystem Risk Management
• Dealing with Database Denial of Service
• Identity and Access Management for Cloud Services
• The 2014 Endpoint Security Buyer's Guide
• The CISO's Guide to Advanced Attackers

Incite 4 U

Don't take it personally: Stephen Covey has been gone for years, but his 7 habits live on and on. Our friend George Hulme did a piece for CSO Online detailing the 7 habits of effective security pros. The first is communication and the second is business acumen. I'm not sure you even need to get to #3. Without the ability to persuade folks that security is important, within the context of a critical business imperative, nothing else matters. Of course then you have squishy stuff like creativity, and some repetitious stuff like "actively engaging with business stakeholders" – but how is that different from business acumen? I guess it wouldn't have resonated as well if it were 5 habits, right? Another interesting one is problem solving. Again, not unique to security, but if you don't like to investigate stuff and solve problems, security isn't for you. One habit that isn't on the list is don't take it personally. Security success depends on a bunch of other things going right, so even if you are blamed for a breach or outage, it is not necessarily your fault. Another might be "wear a mouthguard", because many security folks get kicked in the teeth pretty much every day. – MR

Out-of-control ad frenzy: Safari on my iPad died three times Saturday morning, and the culprit was advertisement plug-ins. My music stream halted when a McDonald's ad screeched at me from another site. I was not "lovin' it!" The 20-megabit pipe into my home and a new iPad were unable to manage fast page loads because of the turd parade of third-party ads hogging my bandwidth. It seems that in marketers' frenzy to know everything you do and push their crap on you, they forgot to serve you what you asked for. The Yoast blog offers a nice analogy, comparing online ads to brick-and-mortar merchants tagging customers with stickers, but it's more like carrying around a billboard. And that analogy


Advanced Endpoint and Server Protection: Assessment

As we described in the introduction to the Advanced Endpoint and Server Protection series, given the inability of most traditional security controls to defend against advanced attacks, it is time to reimagine how we do threat management. This new process has 5 phases; we call the first phase Assessment. We described it as:

Assessment: The first step is gaining visibility into all devices, data sources, and applications that present risk to your environment. And you need to understand the current security posture of anything you need to protect. You need to know what you have, how vulnerable it is, and how exposed it is. With this information you can prioritize and design a set of security controls to protect it.

What's at Risk?

As we described in the CISO's Guide to Advanced Attackers, you need to understand what attackers would be trying to access in your environment, and why. Before you launch into a long monologue about how you don't have anything to steal, forget it. Every organization has something that is interesting to some adversary. It could be as simple as compromising devices to launch attacks on other sites, or as focused as gaining access to your environment to steal the schematics for your latest project. You cannot afford to assume adversaries will not use advanced attacks – you need to be prepared either way.

We call this Mission Assessment, and it involves figuring out what's important in your environment, which in turn identifies the targets attackers are most likely to pursue. When trying to understand what an advanced attacker will probably be looking for, there is a pretty short list:

• Intellectual property
• Protected customer data
• Business operational data (proposals, logistics, etc.)
• Everything else

To learn where this data lives within the organization, you need to get out from behind your desk and talk to senior management and your peers. Once you understand the potential targets, you can begin to profile the adversaries likely to be interested in them. Again, we can put together a short list of likely attackers:

• Unsophisticated: These folks favor smash-and-grab attacks, using publicly available exploits (perhaps leveraging attack tools such as Metasploit and the Social Engineer's Toolkit) or packaged attack kits they buy on the Internet. They are opportunists who take what they can get.
• Organized Crime: The next step up the food chain is organized criminals. They invest in security research, test their exploits, and always have a plan to exfiltrate and monetize what they find. They are also opportunistic, but can be quite sophisticated in attacking payment processors and large-scale retailers. They tend to be most interested in financial data, but have been known to steal intellectual property if they can sell it, and/or use brute force approaches like DDoS threats for extortion.
• Competitor: Competitors sometimes use underhanded means to gain advantage in product development and competitive bids. They tend to be most interested in intellectual property and business operations.
• State-sponsored: Of course we all hear the familiar fretting about alleged Chinese military attackers, but you can bet every large nation-state has a team practicing offensive tactics. They are all interested in stealing all sorts of data – from both commercial and government entities. And some of them don't care much about concealing their presence.

Understanding likely attackers provides insight into their tactics, which enables you to design and implement security controls to address the risk. But before you can design the security control set, you need to understand where the devices are, as well as the vulnerabilities of devices within your environment. Those are the next two steps in the Assessment phase.

Discovery

This process finds the endpoints and servers on your network, and makes sure everything is accounted for. Performed early in the endpoint and server protection process, it helps avoid "oh crap" moments. It is no good to stumble over a bunch of unknown devices – with no idea what they are, what they have access to, or whether they are steaming piles of malware. Additionally, an ongoing discovery process shortens the window between something popping up on your network, you discovering it, and figuring out whether it has been compromised.

There are a number of techniques for discovery, including actively scanning your entire address space for devices and profiling what you find. This works well enough and is traditionally the main way to do initial discovery. You can supplement active discovery with a passive discovery capability, which monitors network traffic and identifies new devices based on their network communications. Depending on the sophistication of the passive analysis, devices can be profiled and vulnerabilities identified (as we will discuss below), but the primary goal of passive monitoring is to find new unmanaged devices faster. Passive discovery is also helpful for identifying devices hidden behind firewalls and on protected segments which active discovery cannot reach.

Another complicating factor for discovery – especially for servers – is cloud computing. With the ability to spin up and take down virtual instances, perhaps outside your data center, your platform needs to both track and assess cloud resources, which requires some means of accessing the cloud console(s) and figuring out which instances are in use. Finally, make sure to also pull data from existing asset repositories such as your CMDB, which Operations presumably uses to track all the stuff they think is out there. It is difficult to keep these data stores current, so this is no substitute for an active scan, but it provides a cross-check on what's in your environment.

Determine Security Posture

Once you know what's out there, you need to figure out whether it's secure. Or more realistically, how vulnerable it is. That typically requires some kind of vulnerability scan of the devices you discovered. There are many aspects to vulnerability scanning – at the endpoint, server, and application layers – so we won't rehash all the research from Vulnerability Management Evolution. Check it out to understand how a


Reducing Attack Surface with Application Control: The Double-Edged Sword [New Series]

The problems of protecting endpoints are pretty well understood. As we described in The 2014 Guide to Endpoint Security, you have stuff (private data and/or intellectual property) that others want. On the other hand, you have employees who need to do their jobs, which requires access to said private data and/or intellectual property. Those employees have sensitive data on their devices, so you need to protect their endpoints. It's not like this is anything new. Protecting endpoints has been a focus of security professionals since, well, always – with decidedly unimpressive results.

Why is protecting endpoints so hard? It can't be a matter of effort, right? Billions have been spent on research to identify better ways to protect these devices. Organizations have spent tens of billions on endpoint security products and services. Yet every minute more devices are compromised, more data is stolen, and security folks have to keep explaining to senior management, regulators, and ultimately customers why this keeps happening.

The lack of demonstrable progress comes down to two intertwined causes. First, devices are built using software that has defects attackers can exploit. Nothing is perfect, especially not software, so every line of code presents attack surface. Second, employees can be fooled into taking action (such as installing software or clicking a link) that results in a successful attack. These two causes can't really be separated. If the device isn't vulnerable, then nothing an employee does should result in a successful attack. Likewise, if the employee doesn't allow delivery of the attack/exploit code by clicking things, vulnerable software is less of an issue. So if you can disrupt either cause, your endpoints will be far better protected. Of course that is much easier said than done.

In this new series, "Reducing Attack Surface with Application Control," we will dig into the good and bad of application control (also known as application whitelisting) technology, talking about how AppControl can stop malware in its tracks and mitigate the risks of both vulnerable software and gullible users. We won't shy away from addressing head-on the perception issues of endpoint lockdown, which cause many organizations to disregard the technology as infeasible in their environments. Finally, we will discuss use cases where AppControl makes a lot of sense, and how it can favorably impact security posture – both reducing the attack surface of vulnerable devices and protecting users from themselves.

Accelerating Attacker Innovation

We mentioned the billions of dollars spent on research to protect endpoint devices more effectively. It is legitimate to ask why these efforts haven't really worked. It comes back to attackers innovating faster than defenders. Even when technology emerges to protect devices more effectively, it takes years for new technologies to become pervasive enough to blunt the impact of attackers across a broad market. The reactive nature of traditional malware defenses – finding an attack, profiling it, and developing a signature to block it on the device – makes existing mitigations too little, too late. Attackers now randomly change what attacks look like using polymorphic malware, so looking for known malware files cannot solve the problem. Additionally, attackers have new and increasingly sophisticated means to contact their command and control (C&C) systems and obscure data during exfiltration, making detection all the harder.
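To see why file signatures lose this race, consider a toy illustration (the "signature database" and payloads below are obviously made up): changing a single character of a payload produces a completely different hash, so a signature keyed to the original file simply never fires.

```python
import hashlib

# Toy signature database: hashes of files already identified as malicious.
known_bad = {hashlib.sha256(b"malicious payload, variant 1").hexdigest()}

original = b"malicious payload, variant 1"
print(hashlib.sha256(original).hexdigest() in known_bad)  # True - caught

# A polymorphic engine repackages the same behavior with trivial changes.
mutated = b"malicious payload, variant 2"  # one character differs
print(hashlib.sha256(mutated).hexdigest() in known_bad)   # False - missed
```

Real signature engines are smarter than a pure hash match, but the arms race is the same: the defender needs a new signature for every variant, while the attacker gets new variants essentially for free.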
Attackers also do a lot more testing now to make sure their attacks work before they use them. Endpoint security technologies can be bought for a very small investment, so attackers refine their malware to ensure it works against the majority of defenses in use. This pushes security professionals to look at different ways of breaking the kill chain, as we described in The CISO's Guide to Advanced Attackers. You can do this a few different ways:

• Impede Delivery: If the attacker cannot deliver the attack to a vulnerable device, the chain is broken. This involves effectively stopping tactics like phishing, either by blocking the email before it gets to an employee or by training employees not to click things that would result in malware delivery.
• Stop Compromise: Even if the attack does reach a device, if it cannot execute and exploit the device, the chain is broken. This involves a different approach to protecting endpoints, and will be the main focus of this series.
• Block C&C: If the device is compromised but cannot contact the command and control infrastructure to receive instructions and additional attack code, the impact of the attack is reduced. This requires the ability to analyze all outbound network traffic for C&C patterns, as well as watching for contact with networks with bad reputations. We discussed many of these tactics in our Network-based Threat Intelligence research.
• Block Exfiltration: The last line of defense is to stop the exfiltration of data from your environment. Whether via data leak prevention technology or some other means of content or egress filtering to detect protected content, if you can stop data from leaving your environment there is no loss.

The earlier you break the kill chain, the better. But in the real world you are best served by a multi-faceted approach encompassing all the options listed above. Now let's dig into the Stop Compromise strategy for breaking the kill chain, which is where application control fits into the security control hierarchy.

Stop Code Execution. Stop Malware.

The main focus of anti-virus and anti-malware technology since the beginning has been to stop malicious code from executing on a device, thus stopping compromise. What has evolved is how the malware is detected, and which parts of the device the software can access. There are currently a handful of approaches:

• Block the Bad: This is the traditional AV approach of matching malware signatures against code executing on the device. The problem is scale – there is so much bad that you cannot possibly expect an endpoint to check for every attack since the beginning of time.
• Improve Heuristics: It is impossible to block all malware because it is constantly changing, so you need to focus on what


Incite 1/8/2014: ReNew Year

Since I'm on the East Coast of the US, when the ball drops in Times Square, that's it. The old year is done. The new year begins. With some of Dublin's finest coursing through my veins, I get a little nostalgic. I don't think about years in terms of "good" or "bad" anymore – instead I realize that 2013 is now merely a memory that will inevitably fade away. The new year brings a time of renewal. A time to thoughtfully consider the possibilities of the coming 12 months, because 2014 is a blank slate. Not exactly blank, because my responsibilities didn't disappear as the ball descended – nor have yours. But we have the power to make 2014 whatever we want. That's exciting to me. I don't fear change; I embrace it. Which is a good thing, because change comes every year without fail. You grow. You evolve. You change.

I can't wait to try new stuff. As Rich said in Thank You, we will do new things in 2014. Some of them will work, and some won't. I don't have the foggiest idea which will fall into each category. Uncertainty makes some folks uncomfortable. Not me. The idea of a certain future is not interesting at all. That would mean never getting an unexpected call to work on something that could be very, very cool. But I also might not get any calls at all. You just don't know. And that's what makes it exciting.

I can't wait to learn new things. About technology, because security continues to evolve quickly, which means if you sit still you are actually falling behind. I am also learning a lot about myself, which is kind of strange for a guy in his mid-40s, but it's true. In researching the Neuro-Hacking talk I'm doing with JJ at RSA, I am adding to my current practices to improve as a person. Like everyone else, I find that being reminded of my ideals helps keep them at the forefront of my mind. So over the holiday I treated myself to a few Hugh MacLeod prints to hang in my office. The first is called Abundance, and the quote on the picture is: "Abundance begins with gratitude." It's true. I need to remain thankful for what I have. That appreciation, and a dedication to helping others, will keep me on a path to achieve bigger things. The other is One Day is Dead, a reminder to make the most of every day and focus on living right now. This has been a frequent theme in my writing lately, and it will remain one. I write the weekly Incite for me as much as for anyone else. It is a public journal of my thoughts and ideas each week. I also spent some time looking back through the archives, and it's fascinating to see how I have changed over the past few years. But not half as fascinating as imagining how much I'll change over the next few.

So I jump into 2014 with both feet. Happy ReNew Year.

–Mike

Photo credit: "Renewing shoe" originally uploaded by Adam Fagen

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with all our content in its unabridged glory. And you can get all our research papers too.

• Security Management 2.5: You Buy a New SIEM Yet? – Evaluating the Incumbent; Revisiting Requirements; Platform Evolution; Changing Needs; Introduction
• Advanced Endpoint and Server Protection: Introduction
• What CISOs Need to Know about Cloud Computing: Adapting Security for Cloud Computing; How the Cloud is Different for Security; Introduction

Newly Published Papers

• Defending Against Application Denial of Service
• Security Awareness Training Evolution
• Firewall Management Essentials
• Continuous Security Monitoring
• API Gateways
• Threat Intelligence for Ecosystem Risk Management
• Dealing with Database Denial of Service
• Identity and Access Management for Cloud Services
• The 2014 Endpoint Security Buyer's Guide
• The CISO's Guide to Advanced Attackers

Incite 4 U

FireEye's Incident Response Play: Of course the one day I decide to take vacation over the holidays, the FireEye folks buy Mandiant for a cool billion-ish. Lots of folks have weighed in on the deal already, so I won't repeat their analysis. Clearly FireEye realizes they need to become more than just a malware detection box/service, because only a broad network security platform player could generate the revenue to support their current valuation. Obviously this won't be their last deal. Were there other things they could have bought for less money that would have fit better? Probably. But Mandiant brings a ton of expertise and a security brand juggernaut to FireEye. Was it worth $1 BILLION? That depends on whether you think FireEye was worth $5 billion before the deal, because the price was mostly FireEye stock, which is, uh, generously valued. The question is whether forensics (both services and products) has become a sustainable mega-growth segment of security. That will depend on whether the technology becomes simple enough for companies without dedicated forensics staff to use. It ain't there yet. – MR

Me Too: It's Tuesday as I write this, right after Mike harassed me to get my Incites in. I open up Pocket to check out the stories I have collected over the past couple weeks. Number four on my list is a post by Luke Chadwick on how his Amazon Web Services account was hacked after he accidentally left his Access Keys in some code he published online. That seems strangely familiar. It seems bad guys are indeed scraping online code repositories to find cloud service keys, and then using them to mine Litecoins. It also seems that even security-aware developers and analysts like myself, despite our best efforts, can mess up and accidentally make life easy for attackers. I encapsulated my lessons in my post, but the thing I learned all of two minutes ago is


Advanced Endpoint and Server Protection [New Series]

Endpoint protection has become the punching bag of security. Every successful attack seems to be blamed on a failure of endpoint protection. Not that this is totally unjustified – most endpoint protection solutions have failed to keep pace with attackers. In The 2014 Endpoint Security Buyer's Guide, we discussed many of the issues around endpoint hygiene and mobility. We also explored the human element underlying many of these attacks, and how to prepare your employees for social engineering attacks, in Security Awareness Training Evolution. But realistically, hygiene and awareness won't deter an advanced attacker for long. We frequently say advanced attackers are only as advanced as they need to be – they take the path of least resistance. But the converse is also true: when this class of adversaries needs advanced techniques, they use them. Traditional malware defenses such as antivirus don't stand much chance against a zero-day attack.

So our new series, Advanced Endpoint and Server Protection, will dig into protecting devices against advanced attackers. We will highlight a number of new alternatives for preventing and detecting advanced malware, and examine new techniques and tools to investigate attacks and search for indicators of compromise within your environment. But first let's provide some context on what has been happening with traditional endpoint protection, because you need to understand the current state of AV technology for perspective on how these advanced alternatives help.

AV Evolution

Signature-based AV no longer works. Everyone has known that for years. It is not just that blocking a file you know is bad isn't enough any more. There are simply too many bad files, and new ones crop up too quickly, for it to be possible to compare every file against a list of known-bad files. The signature-matching algorithm still works as well as it ever did, but it is no longer even remotely adequate, nor comprehensive enough to catch the variety of attacks in the wild today.

So the industry adapted, broadening the endpoint protection suite to include host intrusion prevention, which blocks known-bad actions at the kernel level. The industry also started sharing information across its broad customer base to identify IP addresses known to do bad things, and files containing embedded malware. That shared information is known as threat intelligence, and it can help you learn from attacks targeting other organizations. Endpoint security providers also keep adding modules to their increasingly broad and heavy endpoint protection suites – things like server host intrusion prevention, patch/configuration management, and even full application whitelisting – all attempting to ensure no unauthorized executables run on protected devices.

To be fair, the big AV vendors have not been standing still. They are adapting and working to broaden their protection to keep pace with attackers. But even with all their tools packaged together, it cannot be enough. It's software, and software will never be perfect or defect-free. Their tools will always be vulnerable and under attack. We need to rethink how we do threat management as an industry, in light of these attacks and the cold hard reality that not all of them can be stopped. We have been thinking about what the threat management process will come to look like. We presented some ideas in the CISO's Guide to Advanced Attackers, but that was focused on what needs to happen in response to an advanced attack.
Now we want to document a broader threat management process, which we will refine through 2014.

Threat Management Reimagined

Threat management is a hard concept to get your arms around. Where does it start? Where does it end? Isn't threat management really just another way of describing security? Those are hard questions without absolute answers. For the purposes of this research, threat management is about dealing with an attack. It's not about compliance, even though most mandates are responses to attacks that happened 5 years ago. It's not really about hygiene – keeping your devices properly configured and patched is good operational practice, not tied to a specific attack. It's not about finding resources to execute on these plans, nor about communicating the value of the security team. Those are all responsibilities of the broader security program. Threat management is a subset of the larger security program – typically the most highly visible capability. So let's explain how we think about threat management (for the moment, anyway) and let you pick it apart.

• Assessment: You cannot protect what you don't know about – that hasn't changed. So the first step is gaining visibility into all devices, data sources, and applications that present risk to your environment. And you need to understand the current security posture of anything you need to protect.
• Prevention: Next you try to stop an attack from succeeding. This is where most of the security effort has gone for the past decade, with mixed (okay, lousy) results. A number of new tactics and techniques are modestly increasing effectiveness, but the simple fact is that you cannot prevent every attack. It has become a question of reducing your attack surface as much as practical. If you can stop the simplistic attacks, you can focus on the more advanced ones.
• Detection: You cannot prevent every attack, so you need a way to detect attacks after they get through your defenses. There are a number of options for detection – most based on watching for patterns that indicate a compromised device. The key is to shorten the time between when a device is compromised and when you discover the compromise.
• Investigation: Once you detect an attack you need to verify the compromise and understand what it actually did. This typically involves a formal investigation, including a structured process to gather forensic data from devices, triage to determine the root cause of the attack, and a search to determine how broadly the attack has spread within your environment.
• Remediation: Once you understand what happened you can put a


New Paper: Defending Against Application Denial of Service Attacks

Just in case you had nothing else to do during the holiday season, you can check out our latest research on Application Denial of Service attacks. This paper continues our research into Denial of Service attacks, following last year's Defending Against Denial of Service Attacks research. As we stated back then, DoS encompasses a number of different tactics, all aimed at impacting the availability of your applications or infrastructure. In this paper we dig much deeper into application DoS attacks. For good reason – as the paper says:

These attacks require knowledge of the application and how to break or game it. They can be far more efficient than just blasting traffic at a network, requiring far fewer attack nodes and less bandwidth. A side benefit of requiring fewer nodes is simplified command and control, allowing more agility in adding new application attacks. Moreover, the attackers often take advantage of legitimate application features, making defense considerably harder.

We expect a continued focus on application DoS attacks over time, so we offer both an overview of the common types of attacks you will see and possible mitigations for each. After reading this paper you should have a clear understanding of how your application's availability will be attacked – and more importantly, what you can do about it. We would like to thank our friends at Akamai for licensing this content. Without the support of our clients our open research model wouldn't be possible.

To sum up, here are some thoughts on defense: Defending against AppDoS requires a multi-faceted approach that typically starts with a mechanism to filter attack traffic, either via a web protection service running in the cloud or an on-premise anti-DoS device. The next layer of defense includes operational measures to ensure the application stack is hardened, including timely patching and secure configuration of components. Finally, developers must play their part by optimizing database queries and providing sufficient input validation, so the application itself cannot be overwhelmed through its legitimate capabilities.

Keeping applications up and running requires significant collaboration between development, operations, and security. This ensures not only that sufficient defenses are in place, but also that a well-orchestrated response maintains and/or restores service as quickly as possible. It is not a matter of if but when you are targeted by an Application Denial of Service attack.

Check out the landing page for the paper, or download the Defending Against Application Denial of Service Attacks PDF directly.
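As a small illustration of the "harden the stack" layer, here is a minimal sketch of bounding how long a client may take to deliver its request, which blunts slow HTTP (Slowloris-style) attacks, plus a cap on request body size so a legitimate feature can't be abused to exhaust memory. It uses Python's standard library purely for demonstration – in production you would enforce these limits in your web server, load balancer, or anti-DoS tier, and the 5-second and 64 KB values are arbitrary.

```python
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class TimeboxedHandler(BaseHTTPRequestHandler):
    # Socket timeout: a client that dribbles its request line or headers
    # byte by byte (Slowloris-style) gets cut off instead of pinning a
    # worker indefinitely.
    timeout = 5

    # Cap request bodies so legitimate features can't be abused to
    # exhaust memory; 64 KB is an arbitrary illustrative limit.
    MAX_BODY = 64 * 1024

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        if length > self.MAX_BODY:
            self.send_error(413, "Payload too large")
            return
        self.rfile.read(length)  # consume the bounded body
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok\n")

if __name__ == "__main__":
    ThreadingHTTPServer(("127.0.0.1", 8080), TimeboxedHandler).serve_forever()
```

The point is not these particular numbers, but that every resource a client can consume – connections, worker threads, memory, database time – needs an explicit bound.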


Security Assurance & Testing: Quick Wins

We started this Security Assurance and Testing (SA&T) series by laying out the need for testing and the tactics that make sense within an SA&T program. But it is always helpful to see how the concepts apply to more tangible situations. So we will now show how an SA&T program can provide a quick win for the security team, with two (admittedly contrived) scenarios showing how SA&T can be used – both at the front end of a project, and on an ongoing basis – to ensure the organization is well aware of its security posture.

Infrastructure Upgrade

For the first scenario let's consider an organization's move to a private cloud environment to support a critical application – a common situation these days. The business driver is better utilization of data center resources and more agility in deploying hardware resources to meet organizational needs. Obviously this is a major departure from the historical rack-and-provision approach. It is attractive to organizations because it enables better operational orchestration, allowing new devices ('instances' in cloud land) to be spun up and taken down automatically according to the application's scalability requirements. The private cloud architecture folks aren't totally deaf to security, so some virtualized security tools are implemented to enforce network segmentation within the data center and block some attacks from insiders.

Without an SA&T program you would probably sign off on the architecture (which does provide some security) and move on to the next thing on your list. There would be no way to figure out whether the environment is really secure until it went live – and then attackers would let you know quickly enough. Using SA&T techniques you can identify issues at the beginning of implementation, saving everyone a bunch of heartburn. Let's enumerate some of the tests, to get a feel for what you might find:

• Infrastructure scalability: You can capture network traffic to the application, and then replay it to test the scalability of the environment. After increasing traffic into the application, you might find the cloud's auto-scaling capability is inadequate. Or it might scale a bit too well, spinning up new instances too quickly or failing to take down instances fast enough. All these issues impact the value the private cloud delivers to the organization, and handling them properly can save Ops a lot of grief.
• Security scalability: Another aspect of the infrastructure you can test is its security – especially the virtualized security tools. By blasting the environment with a ton of traffic, you might discover your virtual security tools crumble rather than scale – perhaps because VMs lack the custom silicon of dedicated appliances. Such a failure normally either fails open, allowing attacks through, or fails closed, impacting availability. You may need to change your network architecture to expose your security tools only to the amount of traffic they can handle. Either way, better to identify a potential bottleneck before it impairs availability or security. A quick win for sure.
• Security evasion: You can also test the security tools to see how they deal with evasion. If the new tools don't use the same policy as the perimeter, which has been tuned to deal effectively with evasion, the new virtual devices may require substantial tuning to ensure security within the private cloud.
• Network hopping: Another feature of private clouds is the ability to define network traffic flows and segmentation in software – "Software Defined Networks". But if the virtual network isn't configured correctly, it may be possible to jump across logical segments to access protected information.
• Vulnerability testing of new instances: One of the really cool (and disruptive) aspects of cloud computing is the elimination of the need to change/tune configurations and patch running systems. Just spin up a new instance, fully patched and configured correctly, move the workload over, and take down the old one. But if new instances spin up with vulnerabilities or poor configurations, auto-scaling is not your friend. Test new instances on an ongoing basis to ensure proper security. Again, a win if something was amiss.

As you can see, many things can go wrong with any kind of infrastructure upgrade. A strong process to find breaking points in the infrastructure before going live mitigates much of the deployment risk – especially if you are dealing with new equipment. Given the dynamic nature of technology, you will want to keep testing the environment on an ongoing basis as well, ensuring that change doesn't add unnecessary attack surface. This scenario points out places where many issues can be found. What happens if you can't find any issues? Does that reduce the value of the SA&T program? Actually, if anything, it enhances its value – by providing peace of mind that the infrastructure is ready for production.

New Application Capabilities

For a second scenario, let's move up the stack a bit to discuss how SA&T applies to adding new capabilities to an application serving a large user community, to enable commerce on a web site. Business folks like to sell stuff, so they like these kinds of new capabilities. This initiative involves providing access to a critical data store previously inaccessible directly from an Internet-facing application, which is an area of concern. The development team has run some scans against the application to identify application layer issues such as XSS, and mitigated them before deployment by front-ending the application with a WAF. So a lot of the low-hanging fruit of application testing is gone. But that shouldn't be the end of testing. Let's look at some other areas which could uncover issues, by focusing on realistic attack patterns and tactics:

• Attack the stack: You could use a slow HTTP attack to see whether the application can defend against availability attacks on the stack. These attacks are very hard to detect at the network layer, so you need to make sure the underlying stack is configured to deal with them.
• Shopping cart attack: Another type of availability attack uses the application's legitimate functionality against it. It's a bit like


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.