Securosis

Research

Summary: Physicality

Writing is an oddly physical act. Technically you are just sitting there, clanking away on the keyboard, while your bottom loses circulation and gets sore. (Maybe I need a new chair.) But keeping your brain running at the right tempo for effective writing involves a complicated dance of nutrition, sleep, physical movement, and environmental management. The past few days I have been cranking through some projects, writing one or two major pieces a day. While sometimes the words flow, this run was more the molasses sort. I never seemed to maintain the right combination of sleep, caffeine, food, and activity to hammer through the content effectively. But deadlines are deadlines, so I pushed through as best I could.

Take today, for example. I felt better than any other morning this week, so I ran to a coffee shop and carefully managed my food-to-caffeine ratio in an effort to maintain a productivity-enhancing caffeine buzz. Too much and I can’t focus. Too little and I… can’t focus. I did manage to keep it going for a few hours and finished one deliverable, but then it was time for lunch. If I didn’t eat I’d crash. But I knew once I did, I’d crash in a different way. Lose/lose situation. So I ate, then had more coffee, then wasted an hour before I could write again. But at that point it was mid-afternoon, when I tend to be at my worst. Normally I’d go work out to clear my head, but that wasn’t an option. So I muscled through. As a result, my 600-800 word piece is now clocking in at 1,800 words, and I cannot figure out whether it’s better than what I mapped out in my head last night. I knew I should have written it right then and there. And 1,800 words takes a certain amount of time, no matter how fast you write.
Leaving me at 6pm to write this summary sitting on the floor, watching Peppa Pig with my two youngest kids, barely able to hold my head up, but knowing that if I don’t go for a run when my wife gets home I won’t sleep well tonight, and will be even less productive tomorrow. Yes, there are worse work-related problems out there. I have held far more outwardly physical jobs, some putting me at great physical risk. But never doubt that writing is physical. And unlike rescue or manual labor, you don’t get to release any of the stress through movement.

I am not thrilled with most of what I wrote this week. I’m hoping that’s just my usual self-criticism, but nothing really came out as I intended, and that is a direct result of being unable to properly manage my physical state to optimize my focus. Sounds silly, but in the end I might have blown an article because a cat decided to sleep on my face the other night.

In unrelated news, the rest of the Securosis team is completely out this week, so the rest of this summary is slimmed down. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

Adrian will be presenting Pragmatic WAF Management October 15.

Favorite Securosis Posts

Adrian Lane: Deployment Pipelines and DevOps. Rich does a great job tying the series together and showing how and where DevOps is making development and security more Agile.

Other Securosis Posts

Firestarter: Hulk bash. Like I said: everyone is out.

Favorite Outside Posts

A special note first – Brian Krebs is releasing his book, Spam Nation. I haven’t read it, but I guarantee you it will be good. Brian knows more than anyone about the computer underground. Well, more than anyone who can talk about it without getting shot. I mean, he probably won’t get shot. Er, I hope he doesn’t get shot.

Adrian Lane: A State of Xen – Chaos Monkey & Cassandra. Keeping a 2,600-node Cassandra cluster up and running is hard.
Keeping it fully functional while 10% of the cluster is rebooted is fracking astounding! Chaos Monkey is one of the few truly Rugged approaches to software development I have seen.

Rich: Have most analysts completely given up doing “research”? An interesting take, especially because Securosis is quite profitable, and doesn’t do a single thing they talk about. Then again I’m not sure you could scale us.

Research Reports and Presentations

Leveraging Threat Intelligence in Incident Response/Management.
Pragmatic WAF Management: Giving Web Apps a Fighting Chance.
The Security Pro’s Guide to Cloud File Storage and Collaboration.
The 2015 Endpoint and Mobile Security Buyer’s Guide.
Analysis of the 2014 Open Source Development and Application Security Survey.
Defending Against Network-based Distributed Denial of Service Attacks.
Reducing Attack Surface with Application Control.
Leveraging Threat Intelligence in Security Monitoring.
The Future of Security: The Trends and Technologies Transforming Security.
Security Analytics with Big Data.

Top News and Posts

The Horror of a ‘Secure Golden Key’.
Hackers’ Attack Cracked 10 Financial Firms in Major Assault.
BadUSB ‘Patch’ Skirts More Effective Options.


The New Agile: Deployment Pipelines and DevOps

Our last post reviewed key tools for conducting security tests in the development process, and before that we discussed big-picture process adjustments to accommodate security testing, but we didn’t fully cover how to integrate them. Agile itself is in the middle of a major disruptive evolution, transforming into a new variant called DevOps, with significant long-term implications which are beneficial to security. The evolution of development security and Agile are closely tied together, so we can start by specifying how to integrate into the deployment pipeline, then discuss the implications of DevOps.

Understanding the Deployment Pipeline

The best way to integrate security testing into the development process is by integrating with the deployment pipeline. This is the series of tools an organization uses to take developed code from the brain of a developer into the hands of a customer. While products vary greatly, the toolchains themselves are relatively consistent, although not all organizations use all components.

Integrated Development Environment (IDE): The IDE is where developers write code. It typically consists of a source code editor (a text editor), a compiler or an interpreter, a debugger, and other tools to help the programmer write code and build applications (such as a user interface editor, code snippet library, version control browser, etc.).

Issue Tracker: A tracker is basically a project management tool designed to integrate directly into the development process. User stories are entered directly, broken down into features, and broken down again into specific developer tasks/assignments. Detected bugs also go into the issue tracker. This is the central tool for tracking the status of the development project – from earliest concepts, to updates, to production bugs.

Version Control System/Source Code Management: Managing constantly changing code for even a small application is challenging. Source code is mostly a bunch of text files.
And we mean a lot of files, which may be worked on by teams of tens, hundreds, or thousands of developers. The version control system/source code management tool keeps track of all changes and handles checkout, checkin, branching, forking, and otherwise keeping the code consistent and manageable. Whichever tool is used, this is typically referred to as the source code repository, or repo for short.

Build Automation: Automation tools convert the text of source code into compiled applications. Most modern applications include many components which need to be compiled, integrated, and linked in the correct order. A build automation tool handles both simple and complex scenarios, according to scripts created by developers.

Continuous Integration (CI) Server: A CI server is the next iteration of build automation. It connects to the source code repository and, based on rules, automatically integrates and compiles code as it is committed. Rather than someone manually running a build automation tool, the CI server grabs code, creates a build, and runs automated testing when triggered – such as when a developer commits code from an IDE. CI servers can also automate the deployment process, pushing updated code onto production systems.

There is an unlimited range of possible deployment pipelines, and the pipeline is often actually a series of manual processes. But the broad steps are:

The product owner enters requirements for a feature into the issue tracker.

The product owner or someone else on the development team (such as the program manager) breaks the user story and features down into a set of developer assignments, which are then added to the backlog.

The program manager assigns specific tasks to developers.

A developer checks out the latest code, writes/edits in an IDE, tests and debugs locally, and then commits it to the source code repository using the version control system.
The developer might fork the existing code for independent development and testing, depending on the nature of the feature.

The build automation tool compiles the code into the main application and may perform automated testing. The compiled product is then sent to QA/testing and eventually to operations to push into production. If something breaks, that is marked as a bug in the issue tracker.

If the organization uses continuous integration the code will be automatically compiled, integrated, and tested by the CI server. It may be pushed into deployment or handed off for additional manual testing, such as user acceptance testing. Again, if something breaks it becomes a bug in the issue tracker, probably automatically.

Not every organization follows even this general process, but just about everyone running Agile uses some variation of it.

Integrating Security

If you map our security toolchain to the deployment pipeline there are clear opportunities for integration. The ones we most commonly see are:

Security manages security issues and bugs in the issue tracker. Security features are often entered as user stories or feature requirements, in cooperation with the product owner or program manager. Security-sensitive bugs are tagged as security issues. In some cases security teams monitor the issue tracker to help identify potential vulnerabilities that might have been entered as simple bug reports.

Static analysis is integrated in the IDE, build automation tool, or CI server – sometimes all of the above. For example, when a developer commits code locally it can undergo static analysis, with issues highlighted back in the IDE for easy identification and remediation. Static analysis may also be triggered when code is committed to the source code repository.

Dynamic analysis is also typically integrated at the build automation or CI server, using tests defined by security.
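The "static analysis at the CI server" integration point boils down to a simple gate: run the analyzer on commit, file the findings, and fail the build when anything serious turns up. A rough sketch of that decision logic follows – the JSON shape, severity names, and `gate` function are all illustrative, not the format of any particular analysis product:

```python
# Hypothetical CI gate: parse static-analyzer findings (as JSON) and fail
# the build when any finding meets or exceeds a severity threshold.
import json

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def gate(findings_json, threshold="high"):
    """Return (passed, blocking): blocking lists findings at/above threshold."""
    findings = json.loads(findings_json)
    floor = SEVERITY_RANK[threshold]
    blocking = [f for f in findings
                if SEVERITY_RANK.get(f.get("severity", "low"), 1) >= floor]
    return (len(blocking) == 0, blocking)

if __name__ == "__main__":
    sample = json.dumps([
        {"rule": "sql-injection", "file": "api/users.py", "severity": "critical"},
        {"rule": "unused-import", "file": "api/util.py", "severity": "low"},
    ])
    passed, blocking = gate(sample)
    # → build blocked: 1 finding(s)
    print("build passed" if passed else f"build blocked: {len(blocking)} finding(s)")
```

In a real pipeline the CI server would run this after the analyzer step, push the non-blocking findings into the issue tracker, and mark the build failed so the commit never reaches deployment.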
Other security tests, such as unit, component, and regression testing, are also often best integrated at the build or CI server.

Vulnerability analysis may be automated if the organization uses a CI server, but otherwise is often a manual or periodic process.

Any problems discovered by the testing tools generate entries in the issue tracker, just like any other bugs. Ideally security signs off on any unremediated security bugs before release.

Security and DevOps

There is no single definition of DevOps, but essentially it means deeper integration of development and operations in the software deployment process. A better way to phrase it is


Firestarter: Hulk bash

Mike, Adrian, and I start off a little rough around the edges, but eventually get to the point. Travel is taking its toll so we won’t be able to keep our usual weekly schedule, but we will stay as close to it as possible – until I run off to Amsterdam for a week, for Black Hat Europe. We catch up on the inane for a few minutes before jumping into a discussion of the bash vulnerability and disclosure debacle. We agree it is often valuable to analyze an event after the initial shock waves (See what I did there? Shellshock? Shock waves?). Today we focus on the deeper implications and how the heck a disclosure could be so bungled. Plus a little advice on where to focus your patching efforts. The audio-only version is up too.


Friday Summary: October 3, 2014 cute puppy edition

I was going to write more this week on Apple Pay security and its use of tokenization because more details have come out, but I won’t bother because TUAW beat me to it. They did a good job explaining how tokenization is used by Apple, and went on to discuss one of the facets I had been trying to get details on: the CVV code. Apple is dynamically generating a new CVV for each transaction, which can be verified by the payment processor to ensure it is coming from an authorized device. In a nutshell: a fingerprint scan to verify the user is present, a token that represents the card/device combination, and a unique CVV to verify the device in use. That is not just beyond magstripes – it is better than EMV-style smart cards. No wonder the banks were happy to work with Apple. Tip of the cap to Yoni Heisler for a well-written article.

It is interesting to watch events unfold when you knew exactly how they would occur beforehand. Try as you might, you cannot avoid the inevitable, even when you know it’s coming a long way off. In this case, six months ago a very dear friend – someone we had not spoken with in quite a while – called my wife and asked her to have lunch. The first thing that popped into my mind was, “Oh crap, we’re getting a new puppy!” See, this friend is a breeder of Boston Terriers, and we have owned many of her dogs over the years. She was thinking of breeding two of her stock, and would be looking for good homes to place the puppies in. I guarantee you that landing in the Lane household is the puppy equivalent of winning the lottery – our home is a bit sought after by many dog breeders and rescue shelters. And this friend and my wife are both aware that our current Boston is 12 – he is still feisty but clearly in elder statesman territory. Keep in mind that none of the above factoids are ever discussed. No need. But you don’t have to be prescient to see what’s coming.
Now that the puppies are on the ground, my wife was invited back to “help socialize” a litter of puppies with a cuteness index of 11. So I have no doubt that within several weeks we will be back to the all-too-familiar nighttime runs outside before the puppy wets the blanket again. Who needs sleep? I need to proactively pick out some puppy names!

As Mike’s weekly Incite discussed, it has been a dizzying week for all of us here, but we have come out the other side unscathed. And next week will be no different. I am presenting at the Akamai Edge conference in Miami, so if you’ll be in town let me know! Now let’s move on to the summary…

Webcasts, Podcasts, Outside Writing, and Conferences

Adrian’s webcast on Hadoop Security.
James Arlen’s Derbycon talk.

Favorite Securosis Posts

Adrian Lane: Stranger in my own town. A glimpse into what it’s like to be Mike, and a good feel for what this year has been like.
David Mortman: Security and Privacy on the Encrypted Network: The Future is Encrypted.
Mike Rothman: Friday Summary: September 26, 2014. Slim pickings this week, but I like the round-up of stuff on Adrian’s mind. Always entertaining.

Favorite Outside Posts

David Mortman: Four Interactions That Could Have Gone Better.
Adrian Lane: Top 10 Web Hacking Techniques of 2013 – OWASP AppSecUSA 2014. My fave this week is a video from last week’s OWASP event – I was not able to go this year but it’s always a great con.
Mike Rothman: The Truth About Ransomware. Great post by Andrew Hay about the fact that you’re on your own if you get infected with ransomware. You might get your files back, you might not. So make sure you back up the important stuff. And don’t click things. Truth.

Research Reports and Presentations

Pragmatic WAF Management: Giving Web Apps a Fighting Chance.
The Security Pro’s Guide to Cloud File Storage and Collaboration.
The 2015 Endpoint and Mobile Security Buyer’s Guide.
Analysis of the 2014 Open Source Development and Application Security Survey.
Defending Against Network-based Distributed Denial of Service Attacks.
Reducing Attack Surface with Application Control.
Leveraging Threat Intelligence in Security Monitoring.
The Future of Security: The Trends and Technologies Transforming Security.
Security Analytics with Big Data.
Security Management 2.5: Replacing Your SIEM Yet?

Top News and Posts

NoSQL SSJI Authentication Bypass.
Today’s laboratory hack, tomorrow’s Hadoop data breach.
The shockingly obsolete code of bash.
Cops Are Handing Out Spyware to Parents—With Zero Oversight. Mind. Blowingly. Stupid.
More Evidence Found in JPMorgan Chase Breach (Updated).
Apple Releases Patches for Shellshock Bug.
Inside the NSA’s Private Cloud.
OpenVPN vulnerable to Shellshock Bash vulnerability.
A Comprehensive Outline of the Security Behind Apple Pay.
EC2 Maintenance Update II.
Oracle, Cisco step up cloud battle.
Apache Drill is ready to use and part of MapR’s distro. SQL queries for Hadoop.
Three critical changes to PCI DSS 3.0.

Blog Comment of the Week

This week’s best comment goes to nobody, because our comment system is broken, but we’re working on it. Promise!


Incite 10/1/2014: Stranger in my own town

I had a bit of a surreal experience earlier this week. Rich probably alluded to it a few times on the Twitter, but we are all as busy as we have been since we started the new Securosis 5 years ago. I’m traveling like a madman and it’s getting hard to squeeze in important meetings with long-time clients. But you do what you need to – we built this business on relationships, and that means we pay attention to the ones that matter. So when a Monday meeting on the west coast is the only window to meet with a client before an important event, you take it. I flew out Sunday and had a good meeting Monday.

But there was a slight complication. I was scheduled to do the mindfulness talk with JJ at the ISC2 Congress Tuesday morning in Atlanta. I had agreed to speak months ago and it’s my favorite talk, so there was no way I was bailing on JJ. That means the red-eye. Bah! I hate the red-eye. I have friends who thrive on it because they hate the idea of spending a working day in the air. I relish a working day in the air, because I don’t have calls and can mute the Tweeter. I get half a day of solid thinking, writing, or relaxing time. With in-flight networking I can catch up on emails and reading if I choose. So day flights let me be productive, and compensate for my challenges sleeping on planes. If I get a crappy night’s sleep the next couple of days are hosed, and that’s not really an option right now.

Thankfully I got an upgrade to first class, which is about as rare as sniffing unicorn dust. I poured my exhausted self into my first-class seat, plugged in my headphones, and slept pretty well, all things considered. It wasn’t solid sleep, but it was sleep. When we landed in ATL I felt decent – a lot better than I expected. So what now? Normally I’d get in the car and drive home to get all pretty for the conference. But that wouldn’t work this week because I needed to be in another city Tuesday afternoon, ahead of another strategy day on Wednesday.
I didn’t have time to go home, clean up, and then head back downtown for my talk. I made some calls to folks who would be at the ISC2 conference and was graciously offered the use of a shower. But that would involve wading into some man soup in a flop room, so I was grateful for the offer but kept looking for alternatives. Then I realized the ATL airport has showers in some of its Sky Clubs. So I trudged down to the International Terminal and found a very spacious, comfortable changing room and shower. It was bigger than some hotel rooms I’ve had in Europe.

I became a stranger in my own town. Showering at my home airport to do a talk in my city before heading back to the airport to grab another flight to another city. The boy told me it was cool to be in 3 cities in less than a day. I told him not so much, but it’s what I do. It’s a strange nomadic existence. But I’m grateful that I have clients who want to meet with me, and a family who is understanding of the fact that I love my job…

–Mike

Photo credit: “Darth Shower” originally uploaded by _Teb

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the conference this year. Take an hour and check it out on YouTube. Your emails, alerts, and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.
September 16 – Apple Pay
August 18 – You Can’t Handle the Gartner
July 22 – Hacker Summer Camp
July 14 – China and Career Advancement
June 30 – G Who Shall Not Be Named
June 17 – Apple and Privacy
May 19 – Wanted Posters and SleepyCon
May 12 – Another 3 for 5: McAfee/OSVDB, XP Not Dead, CEO head rolling
May 5 – There Is No SecDevOps
April 28 – The Verizon DBIR

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Security and Privacy on the Encrypted Network: The Future is Encrypted
Secure Agile Development: Building a Security Tool Chain; Process Adjustments; Working with Development; Agile and Agile Trends; Introduction
Trends in Data Centric Security: Deployment Models; Tools; Introduction; Use Cases

Newly Published Papers

The Security Pro’s Guide to Cloud File Storage and Collaboration
The 2015 Endpoint and Mobile Security Buyer’s Guide
Open Source Development and Application Security Analysis
Advanced Endpoint and Server Protection
Defending Against Network-based DDoS Attacks
Reducing Attack Surface with Application Control
Leveraging Threat Intelligence in Security Monitoring
The Future of Security

Incite 4 U

Gorillas in the mist: In case you missed it, another important vulnerability was disclosed last week, aside from Shellshock. It was a flaw in the network security library used by Firefox and Google’s Chrome that allows an attacker to create forged RSA signatures to confuse browsers. In practice someone can fake a certificate for eBay or Amazon – or any other SSL connection – and act as a man-in-the-middle, collecting any private data sent down the pipe. You’d think that we would have beaten on SSL libraries enough to uncover these types of flaws, but just as with the bash shell vulnerability we will


Security and Privacy on the Encrypted Network: The Future is Encrypted

The cloud and mobility are disrupting how IT builds and delivers value to the organization. Whether you are moving computing workloads to the cloud, with your data now on a network outside your corporate perimeter, or an increasingly large portion of your employees are accessing data outside your corporate network, you no longer have control over networks or devices. Security teams need to adapt their security models to protect data. For details see our recent Future of Security research.

But this isn’t the only reason organizations are being forced to adapt their security postures. The often discussed yet infrequently addressed insider threat is another. Given how attackers compromise devices, perform reconnaissance to find vulnerable targets, and sniff network traffic to steal credentials, at some point during every attack the adversary becomes an insider with credentials to access your most sensitive stuff. Regardless of whether an adversary is external or internal, at some point they will be inside your network.

Finally, tighter collaboration between business partners means people outside your organization need access to your systems, and vice-versa. You don’t want this access to add significant risk to your environment, so those connections need to be protected to ensure data is not stolen.

Given these overarching trends, organizations have no choice but to encrypt more traffic on their networks. Encrypting the network prevents adversaries from sniffing traffic to steal credentials, and ensures data moving outside the organization is protected from man-in-the-middle attacks. But no good deed goes unpunished. Encrypting network traffic impacts traffic inspection and enforcement of security policies. Encrypted networks also complicate security monitoring, because traffic needs to be decrypted at wire speed for capture and forensics.
Encrypted traffic also presents compliance issues and raises human resources considerations around decryption, which must be factored into your plans as you contemplate driving encryption deeper into the network.

In our new series, Security and Privacy on the Encrypted Network, we will discuss how to enforce security policies – decrypting traffic as needed – to ensure data isn’t leaking out over encrypted tunnels and employees adhere to corporate acceptable use policies. Then we will dive into the security monitoring and forensics use case, discussing traffic decryption strategies to ensure you can properly alert on security events and investigate incidents. Finally we will wrap up with guidance on handling human resources and compliance issues as an increasing fraction of network traffic is encrypted.

We would like to thank Blue Coat Systems for potentially licensing the paper when this project is complete. Without our clients’ willingness to license our research you wouldn’t be able to access it for the low low price of $0…


Why Amazon is Rebooting Your Instances (Updated)

Update: Amazon published some details. Less than 10% of AWS systems are affected, and the vulnerability will be disclosed October 1st. As suspected this is about Xen – not the bash vulnerability.

Yesterday I received notice that Amazon Web Services is force rebooting one of my instances. Then more emails started rolling in, and it looks like many (or all) of them will be rebooted during a single maintenance window. It has been a few years since this happened, and the reason ties into how AWS updates the servers your instances run on. We actually teach this in our cloud security training class, including how to architect your own cloud so you might not have to do the same thing – with, of course, many caveats.

My initial assumption was application of a quiet security patch, and that looks dead on. From @ClipperChip via Matt Green on Twitter: Amazon rebooting all AWS instances (https://t.co/xg2XoXDdEe) + an undisclosed advisory on http://t.co/PdLqk8qXSE http://t.co/Fo1beT7xrN 🙂

And here is what looks like that vuln: XSA-108 | 2014-10-01 12:00 | none (yet) assigned | (Prereleased, but embargoed)

How AWS updates servers

Amazon uses a modified version of the Xen hypervisor. Our understanding of their architecture indicates they do not support live migration. Live migration, available under VMware as vMotion, allows you to move a running virtual machine from one physical host to another without shutting it down. When you build a cloud, host servers consist of (at least) a hypervisor with management and connectivity components. Sometimes, as with OpenStack, you even have a usable operating system. All these components need to be updated periodically. Some updates require rebooting the host server. To update the hypervisor you typically need to shut down the virtual machines (instances) running on top of it.
There are two common ways to manage these updates to reduce downtime:

Update a host without any virtual machines running on it, then live migrate instances from a vulnerable host to a patched one. Then update the vulnerable host once all its instances are running elsewhere.

If you cannot live migrate, do the same thing by shutting down and restarting the instances. If you built your cloud properly you can set a rule in the controller not to launch instances on the vulnerable host while preparing to reboot. Then the simple act of shutting down and relaunching the instance will automatically migrate it to a patched host.

In case you didn’t realize, every time you shut an instance down and start it again you likely move to a new host server. That is just normal cloud automation at work. When AWS has a large security patch like this they cannot rely on all customers conveniently relaunching during the desired window, so they need to take a maintenance window and do it for all affected users. Simple reboots generally do not trigger a host migration because a reboot doesn’t actually shut down the entire instance – the virtual machine just executes the operating system shutdown and reboot procedures, but the instance is never destroyed or completely halted.

Many people don’t architect resilient servers to handle reboots, which is the problem. Or the reboots require some manual testing. This is why I am a massive fan of DevOps – its techniques provide extra resiliency for situations like this – but that’s for another post. Our cloud security training covers this, and one critical security requirement when building a private (or public) cloud is to understand your patching requirements and their implications for instances. For example if you architect for live migration you can reduce required reboots by accepting different implications and constraints.
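If you want to know which of your instances are scheduled for one of these forced reboots, the EC2 DescribeInstanceStatus call (with IncludeAllInstances set) returns per-instance scheduled events. Here is a rough offline sketch of filtering a response of that shape – the sample data is invented, and in real use the dict would come from the AWS API rather than being hard-coded:

```python
# Sketch: filter an EC2 DescribeInstanceStatus-shaped response for instances
# with pending scheduled events (such as the forced reboots described above).
def pending_maintenance(response):
    hits = []
    for status in response.get("InstanceStatuses", []):
        # AWS prefixes completed events' descriptions with "[Completed]"
        events = [e for e in status.get("Events", [])
                  if not e.get("Description", "").startswith("[Completed]")]
        if events:
            hits.append((status["InstanceId"], [e["Code"] for e in events]))
    return hits

sample = {"InstanceStatuses": [
    {"InstanceId": "i-0abc123", "Events": [
        {"Code": "system-reboot", "Description": "scheduled reboot",
         "NotBefore": "2014-09-26T00:00:00Z"}]},
    {"InstanceId": "i-0def456", "Events": []},
]}
print(pending_maintenance(sample))  # → [('i-0abc123', ['system-reboot'])]
```

Pointing this at the real API output gives you a list of instances to relaunch on your own schedule, before Amazon reboots them for you.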


Why the bash vulnerability is such a big deal (updated)

Updated: I made a mistake and gave Akamai credit. Stephane doesn’t work for them – I misread the post. Fixed.

Critical update: Red Hat confirmed their patch is incomplete, and patched bash is still exploitable. The technical term is “cluster fuck”. Anything you patch now will need to be repatched later. For critical systems consider the workaround in their post. For everything else, wait until your vendors release complete patches.

Earlier today details of a vulnerability in the UNIX/Linux/OS X tool bash, discovered by Stephane Chazelas, became public with a disclosure and patch by Red Hat. It is called Shellshock, and it might be worse than Heartbleed. Most of you reading this are likely extremely familiar with bash, but in case you aren’t, it is the most popular command-line shell program in the UNIX world, installed on pretty much anything and everything. From Red Hat:

Coming back to the topic, the vulnerability arises from the fact that you can create environment variables with specially-crafted values before calling the bash shell. These variables can contain code, which gets executed as soon as the shell is invoked. The name of these crafted variables does not matter, only their contents.

You might be thinking that someone needs to log in before they can ever reach bash, so no big deal, right? Wrong. Access to bash is embedded in a ton of applications, from CGI scripts running on Apache web sites to all sorts of random applications. Here is the short explanation of why this is so bad, and why we will likely be dealing with it for years: bash is embedded and accessed in so many ways that we cannot fully understand its depth of use. Many systems you would never think of as having a command line use bash to run other programs. I have used it myself, a bunch, in programs I have written – and I barely code. We cannot possibly understand all the ways an attacker could interact with bash to exploit this vulnerability.
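The crafted-environment-variable mechanism Red Hat describes is easy to demonstrate. This is the widely-circulated public test string, wrapped in Python here so the environment setup is explicit (it assumes bash is installed on the system):

```python
# Run bash with an environment variable whose value is a function definition
# followed by a trailing command. Vulnerable bash executes the trailing
# "echo vulnerable" while importing the environment; patched bash ignores it.
import os
import subprocess

env = dict(os.environ, x="() { :;}; echo vulnerable")
result = subprocess.run(["bash", "-c", "echo shell ran"],
                        env=env, capture_output=True, text=True)
# Patched bash prints only "shell ran"; vulnerable bash prints
# "vulnerable" first, then "shell ran".
print(result.stdout)
```

This is also exactly why CGI is such a good attack vector: web servers pass attacker-controlled request headers to child processes as environment variables, so anything that eventually invokes bash can trigger the bug.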
As Rob Graham has discovered, this is likely wormable. That places it into Code Red/Nimda territory. A wormable bug that can exploit public web servers is scary. We don’t know for sure, Rob doesn’t know for sure, but it looks very very possible. Potential worms are like staring at the smoking volcano while the earthquakes stir your martini – they aren’t the sort of thing you can wait for definitive proof on before taking seriously. There are rumors the patch may be incomplete. There is already a Metasploit module. Gee, thanks guys… you couldn’t give us a day? I strongly suggest keeping up with Rob’s analysis.

There is really only one option: patch. It isn’t a fancy patch, but fragile systems could still suffer downtime. And you may need to re-patch if the original patch turns out to be faulty, which is always terrible. I will patch my systems and keep my ears open for any updates.

Don’t trust any security vendor who claims they can block this. Patching is the only way to fix the core problem, which likely includes multiple exploit vectors. I will give bonus points to anyone who finds a vendor using Shellshock in their marketing, which then turns out to have a vulnerable product. Any security product based on UNIX/Linux is potentially vulnerable, although not necessarily exploitable.

I suspect the Microsoft Security Response Center is very much enjoying their quiet evening.


Friday Summary: September 26, 2014

I have a great job. The combination of extended coverage areas, spanning business to tech and everything in between, makes it so. In this week alone I have talked to customers about Agile development and process adjustments, the technical details of how to deploy masking for Hadoop, and how to choose between two SIEM vendors, and talked to a couple vendors about Oracle and SAP security. The breadth of stuff I am exposed to is awesome. People often ask me if I want to go back to being a CTO, or offer me VP of Engineering positions, but I cannot imagine going back to focusing on just one platform. I don’t get my hands as dirty, but in some ways it is far more difficult to learn the nuances of half a dozen competitive product areas than just one. And what a great time to be neck deep in security… so long as I don’t drown in data.

Learning about DevOps is fascinating. Talking to people who are pushing forward with continuous integration and deployment, and watching them break apart old dev/QA/IT cycles, provides a euphoric glimpse at what’s possible with Agile code development. Then I speak with more traditional firms, still deeply embedded in 24-month waterfall development. The long tail (and neck, and back) of their process feels like a cold bucket of reality – I wonder whether a significant percentage of companies will ever be agile.

When I contrast Why Security Automation is the Way Forward with mid-sized enterprises, I get another cold slap from reality. I speak with many firms who cannot get servers patched every other quarter. Security patches for open source will come faster than before, but organizational lag holds firm. It is clear that many firms have a decade-long transition to more agile processes in store, and some will never break down the cultural barriers between teams within their companies.

Gunnar’s recent To Kill A Flaw post is excellent. Too good, in fact – his post includes several points that demand their own blog entries.
One of the key points Gunnar has been making lately, especially in light of the nude celebrity photo leaks, is that credentials are a “zero day” attack. You need to keep that in mind when designing identity and access management today. If a guessed password provides a clear way in, you need to be able to live with that kind of 0-day. That is why we see a push away from simple passwords toward identity tokens, time-limited access, and risk-based authorization on the back end. Not only is it harder to compromise credentials, the relative risk score moves from 10 to about 4 because the scope of damage is lessened.

A family member who is a bit technically challenged asked me “Is the Bash Bug Bad?” “Bad. Bad-bad-bad!” I left it at that. I think I will use that answer for press as well.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Adrian quoted on Chip and Pin.
  • Rich quoted in Denver Post on How to Protect Your Data in the Cloud.

Favorite Securosis Posts

  • Mike Rothman: Secure Agile Development: Building a Security Tool Chain. The testing process is where the security gets real. Great series from Adrian and Rich.
  • Adrian Lane: Why the bash vulnerability is such a big deal (updated). Excellent overview by Rich on the bash vulnerability hyped as ‘shellshock’.

Other Securosis Posts

  • Why Amazon is Rebooting Your Instances (Updated).
  • Hindsight FTW.
  • Summary: Run Free.
  • Secure Agile Development: Process Adjustments.
  • Incite 9/17/2014: Break the Cycle.
  • Firestarter: Apple Pay.
  • Fix Something.

Favorite Outside Posts

  • Mike Rothman: The Pirate Bay Operations: 21 Virtual Machines Are Used To Run The File-sharing Website. This cloud thing might not be a fad. This is how you take an international network of stuff and move it quickly… And the torrents are pleased.
  • Adrian Lane: Can Static Analysis replace Code Reviews? The case for security… this shows why old-fashioned manual code reviews cannot be fully replaced by static analysis.
It also shows the need to train developers on what type of flaws to look for. Good post!

Research Reports and Presentations

  • Pragmatic WAF Management: Giving Web Apps a Fighting Chance.
  • The Security Pro’s Guide to Cloud File Storage and Collaboration.
  • The 2015 Endpoint and Mobile Security Buyer’s Guide.
  • Analysis of the 2014 Open Source Development and Application Security Survey.
  • Defending Against Network-based Distributed Denial of Service Attacks.
  • Reducing Attack Surface with Application Control.
  • Leveraging Threat Intelligence in Security Monitoring.
  • The Future of Security: The Trends and Technologies Transforming Security.
  • Security Analytics with Big Data.
  • Security Management 2.5: Replacing Your SIEM Yet?

Top News and Posts

  • Three critical changes to PCI DSS 3.0
  • Trustworthy Computing
  • RSA Signature Forgery in NSS
  • Data Masking Bundled with Cloudera, Hortonworks
  • Apple releases iOS 8 with 56 security patches
  • CloudFlare Introduces SSL Without Private Key
  • SSL Issues with this Blog? by Branden Williams.
  • Bash ‘shellshock’ bug is wormable
  • Funds in Limbo After FBI Seizes Funds from Cyberheist. Not a new problem, just a new cause.
  • Jimmy John’s Confirms Breach at 216 Stores
  • Julian Sanchez on NSA reform.
  • Home Depot’s Former Security Engineer Had a Legacy of Sabotage
  • Ping Identity Scoops $35M To Authenticate Everywhere

Blog Comment of the Week

This week’s best comment goes to Andrew Hay, in response to Why the bash vulnerability is such a big deal (updated).

As per a conversation I had with HD Moore, he loves to release the Metasploit modules as quickly as possible in an effort to eliminate pay-per-exploit companies from profiting off of a particular vuln. I kind of agree with him.


Hindsight FTW

[soapbox] Within a week or two after every high-profile data breach, we get naysayers and Tuesday Morning Quarterbacks playing the “If they only did X…” game. You know – the game where they are always right in hindsight. I am a bit surprised Pescatore jumped on that bandwagon in Simple Math: It Always Costs Less to Avoid a Breach Than to Suffer One, but he did. *Of course* it’s much cheaper to avoid a data breach. And folks have been talking about whitelisting on fixed-function devices such as POS systems for years (including me). Whitelisting is one of the SANS Critical Controls, so Home Depot definitely should have implemented it, right? After all, they could have avoided over $200MM in losses if they had only spent $25MM installing whitelisting on every device across their network.

But that calculation is nonsense without the benefit of foresight. $25MM to implement whitelisting is real money. When folks make resource allocation decisions in a company like Home Depot, it’s not just a simple question of “Let’s spend $25MM to save $200MM.” The likelihood of a breach is X. The potential loss is Y. And X and Y are both unknown. Whereas that $25MM could instead be used to update a bunch of stores, resulting in assured revenue increases. It’s not like they knew about Target or any other retail breach when they made that decision. Even though John contends they should have known (again with the fortune telling) and mobilized immediately to protect their devices. However, after the Target breach became public, any rational risk assessment would have significantly raised the probability of the bad thing happening – to pretty close to 100%!

Note that I do not know for sure why Home Depot didn’t install tighter controls on their POS systems. I don’t know if they weighed one capital expenditure against another and whitelisting lost. I don’t know if they decided not to implement whitelisting after learning about Target. The only thing I know is that I don’t know enough to call them out.
It is disingenuous to make assumptions about what they did or didn’t do and why, so I will not. But I feel like the only one. An amazing number of folks seem to have perfect vision about what Home Depot should have done. Of course it’s easy to see clearly in the rearview mirror. Or as Fall Out Boy sings:

I’m looking forward to the future
But my eyesight is going bad
And this crystal ball
It’s always cloudy except for
When you look into the past

[/off soapbox]


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.