Securosis Research

Firestarter: Hulk bash

Mike, Adrian, and I start off a little rough around the edges, but eventually get to the point. Travel is taking its toll so we won’t be able to keep our usual weekly schedule, but we will stay as close as possible – until I run off to Amsterdam for a week, for Black Hat Europe. We catch up on the inane for a few minutes, before jumping into a discussion of the bash vulnerability and disclosure debacle. We agree it is often valuable to analyze an event after the initial shock waves (See what I did there? Shellshock? Shock waves?). Today we focus on the deeper implications and how the heck a disclosure could be so bungled. Plus a little advice on where to focus your patching efforts. The audio-only version is up too.


Friday Summary: October 3, 2014 cute puppy edition

I was going to write more this week on Apple Pay security and its use of tokenization because more details have come out, but I won’t bother because TUAW beat me to it. They did a good job explaining how tokenization is used by Apple, and went on to discuss one of the facets I have been trying to get details on: the CVV code. Apple is dynamically generating a new CVV for each transaction, which can be verified by the payment processor to ensure it is coming from an authorized device. In a nutshell: a fingerprint scan to verify the user is present, a token that represents the card/device combination, and a unique CVV to verify the device in use. That is not just beyond magstripes – it is better than EMV-style smart cards. No wonder the banks were happy to work with Apple. Tip of the cap to Yoni Heisler for a well-written article. It is interesting to watch events unfold when you knew exactly how they would occur beforehand. Try as you might, you cannot avoid the inevitable, even when you know it’s coming a long way off. In this case, six months ago a very dear friend – someone we had not spoken with in quite a while – called my wife and asked her to have lunch. The first thing that popped into my mind was, “Oh crap, we’re getting a new puppy!” See, this friend is a breeder of Boston Terriers, and we have owned many of her dogs over the years. She was thinking of breeding two of her stock, and would be looking for good homes to place the puppies in. I guarantee you that landing in the Lane household is the puppy equivalent of winning the lottery – our home is a bit sought after by many dog breeders and rescue shelters. And this friend and my wife are both aware that our current Boston is 12 – he is still feisty but clearly in elder statesman territory. Keep in mind that none of the above factoids are ever discussed. No need. But you don’t have to be prescient to see what’s coming.
Now that the puppies are on the ground, my wife was invited back to “help socialize” a litter of puppies with a cuteness index of 11. So I have no doubt that within several weeks we will be hearing the all-too-familiar nighttime scramble to get the puppy outside before it wets the blanket again. Who needs sleep? I need to proactively pick out some puppy names! As Mike’s weekly Incite discussed, it has been a dizzying week for all of us here, but we have come out the other side unscathed. And next week will be no different. I am presenting at the Akamai Edge conference in Miami, so if you’ll be in town let me know! Now let’s move on to the summary…

Webcasts, Podcasts, Outside Writing, and Conferences
- Adrian’s webcast on Hadoop Security.
- James Arlen’s Derbycon talk.

Favorite Securosis Posts
- Adrian Lane: Stranger in my own town. A glimpse into what it’s like to be Mike. A good post and a feel for what it has been like this year.
- David Mortman: Security and Privacy on the Encrypted Network: The Future is Encrypted.
- Mike Rothman: Friday Summary: September 26, 2014. Slim pickings this week, but I like the round-up of stuff on Adrian’s mind. Always entertaining.

Favorite Outside Posts
- David Mortman: Four Interactions That Could Have Gone Better.
- Adrian Lane: Top 10 Web Hacking Techniques of 2013 – OWASP AppSecUSA 2014. My fave this week is a video from last week’s OWASP event – I was not able to go this year but it’s always a great con.
- Mike Rothman: The Truth About Ransomware. Great post by Andrew Hay about the fact that you’re on your own if you get infected with ransomware. You might get your files back, you might not. So make sure you back up the important stuff. And don’t click things. Truth.

Research Reports and Presentations
- Pragmatic WAF Management: Giving Web Apps a Fighting Chance.
- The Security Pro’s Guide to Cloud File Storage and Collaboration.
- The 2015 Endpoint and Mobile Security Buyer’s Guide.
- Analysis of the 2014 Open Source Development and Application Security Survey.
- Defending Against Network-based Distributed Denial of Service Attacks.
- Reducing Attack Surface with Application Control.
- Leveraging Threat Intelligence in Security Monitoring.
- The Future of Security: The Trends and Technologies Transforming Security.
- Security Analytics with Big Data.
- Security Management 2.5: Replacing Your SIEM Yet?

Top News and Posts
- NoSQL SSJI Authentication Bypass. Today’s laboratory hack, tomorrow’s Hadoop data breach.
- The shockingly obsolete code of bash
- Cops Are Handing Out Spyware to Parents—With Zero Oversight. Mind. Blowingly. Stupid.
- More Evidence Found in JPMorgan Chase Breach (Updated)
- Apple Releases Patches for Shellshock Bug
- Inside the NSA’s Private Cloud
- OpenVPN vulnerable to Shellshock Bash vulnerability
- A Comprehensive Outline of the Security Behind Apple Pay
- EC2 Maintenance Update II
- Oracle, Cisco step up cloud battle
- Apache Drill is ready to use and part of MapR’s distro. SQL queries for Hadoop.
- Three critical changes to PCI DSS 3.0

Blog Comment of the Week
This week’s best comment goes to nobody because our comment system is broken, but we’re working on it. Promise!


Incite 10/1/2014: Stranger in my own town

I had a bit of a surreal experience earlier this week. Rich probably alluded to it a few times on the Twitter, but we are all as busy as we have been since we started the new Securosis 5 years ago. I’m traveling like a mad man and it’s getting hard to squeeze in important meetings with long-time clients. But you do what you need to – we built this business on relationships, and that means we pay attention to the ones that matter. So when a Monday meeting on the west coast is the only window in which you can meet with a client before an important event, you do it. I flew out Sunday and had a good meeting Monday. But there was a slight complication. I was scheduled to do the mindfulness talk with JJ at the ISC2 Congress Tuesday morning in Atlanta. I had agreed to speak months ago and it’s my favorite talk, so there was no way I was bailing on JJ. That means the red-eye. Bah! I hate the red-eye. I have friends who thrive on it. They hate the idea of spending a working day in the air. I relish a day in the air because I don’t have calls and can mute the Tweeter. I get half a day of solid thinking, writing, or relaxing time. With in-flight networking I can catch up on emails and reading if I choose. So I can be productive and compensate for my challenges sleeping on planes. If I get a crappy night’s sleep the next couple of days are hosed, and that’s not really an option right now. Thankfully I got an upgrade to first class, which is about as rare as sniffing unicorn dust. I poured my exhausted self into my first-class seat, plugged in my headphones, and slept pretty well, all things considered. It wasn’t solid sleep, but it was sleep. When we landed in ATL I felt decent. Which was a lot better than I expected. So what now? Normally I’d get in the car and drive home to get all pretty for the conference. But that wouldn’t work this week because I needed to be in another city Tuesday afternoon, ahead of another strategy day on Wednesday.
I didn’t have time to go home, clean up, and then head back downtown for my talk. I made some calls to folks who would be at the ISC2 conference and was graciously offered the use of a shower. But that would involve wading into some man soup in a flop room, so I was grateful for the offer, but kept looking for alternatives. Then I realized the ATL airport has showers in some of its Sky Clubs. So I trudged down to the International Terminal and found a very spacious, comfortable changing room and shower. It was bigger than some hotel rooms I’ve had in Europe. I became a stranger in my own town. Showering up at my home airport to do a talk in my city before heading back to the airport to grab another flight to another city. The boy told me it was cool to be in 3 cities in less than a day. I told him not so much, but it’s what I do. It’s a strange nomadic existence. But I’m grateful that I have clients who want to meet with me, and a family who is understanding of the fact that I love my job… –Mike

Photo credit: “Darth Shower” originally uploaded by _Teb

The fine folks at the RSA Conference posted the talk Jennifer Minella and I did on mindfulness at the conference this year. You can check it out on YouTube. Take an hour and watch it. Your emails, alerts and Twitter timeline will be there when you get back.

Securosis Firestarter

Have you checked out our new video podcast? Rich, Adrian, and Mike get into a Google Hangout and… hang out. We talk a bit about security as well. We try to keep these to 15 minutes or less, and usually fail.
- September 16 – Apple Pay
- August 18 – You Can’t Handle the Gartner
- July 22 – Hacker Summer Camp
- July 14 – China and Career Advancement
- June 30 – G Who Shall Not Be Named
- June 17 – Apple and Privacy
- May 19 – Wanted Posters and SleepyCon
- May 12 – Another 3 for 5: McAfee/OSVDB, XP Not Dead, CEO head rolling
- May 5 – There Is No SecDevOps
- April 28 – The Verizon DBIR

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, with our content in all its unabridged glory. And you can get all our research papers too.

Security and Privacy on the Encrypted Network
- The Future is Encrypted

Secure Agile Development
- Building a Security Tool Chain
- Process Adjustments
- Working with Development
- Agile and Agile Trends
- Introduction

Trends in Data Centric Security
- Deployment Models
- Tools
- Introduction
- Use Cases

Newly Published Papers
- The Security Pro’s Guide to Cloud File Storage and Collaboration
- The 2015 Endpoint and Mobile Security Buyer’s Guide
- Open Source Development and Application Security Analysis
- Advanced Endpoint and Server Protection
- Defending Against Network-based DDoS Attacks
- Reducing Attack Surface with Application Control
- Leveraging Threat Intelligence in Security Monitoring
- The Future of Security

Incite 4 U

Gorillas in the mist: In case you missed it, another important vulnerability was disclosed last week, aside from Shellshock. It was a flaw in the network security library used by Firefox and Google’s Chrome that allows an attacker to create forged RSA signatures to confuse browsers. In practice someone can fake a certificate for eBay or Amazon – or any other SSL connection – and act as a man-in-the-middle, collecting any private data sent down the pipe. You’d think that we would have beaten on SSL libraries enough to uncover these types of flaws, but just as with the bash shell vulnerability we will


Security and Privacy on the Encrypted Network: The Future is Encrypted

The cloud and mobility are disrupting how IT builds and delivers value to the organization. Whether you are moving computing workloads to the cloud with your data now on a network outside your corporate perimeter, or an increasingly large portion of your employees are now accessing data outside of your corporate network, you no longer have control over networks or devices. Security teams need to adapt their security models to protect data. For details see our recent Future of Security research. But this isn’t the only reason organizations are being forced to adapt their security postures. The often discussed yet infrequently confronted insider threat must be addressed as well. Given how attackers are compromising devices, performing reconnaissance to find vulnerable targets, and sniffing network traffic to steal credentials, at some point during every attack the adversary becomes an insider with credentials to access your most sensitive stuff. Regardless of whether an adversary is external or internal, at some point they will be inside your network. Finally, tighter collaboration between business partners means people outside your organization need access to your systems and vice versa. You don’t want this access to add significant risk to your environment, so those connections need to be protected to ensure data is not stolen. Given these overarching trends, organizations have no choice but to encrypt more traffic on their networks. Encrypting the network prevents adversaries from sniffing traffic to steal credentials, and ensures data moving outside the organization is protected from man-in-the-middle attacks. But no good deed goes unpunished. Encrypting network traffic impacts traffic inspection and enforcement of security policies. Encrypted networks also complicate security monitoring because traffic needs to be decrypted at wire speed for capture and forensics.
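The decryption-for-inspection tradeoff usually works by having an inline device terminate TLS with a certificate chained to an internal CA that endpoints are configured to trust. A minimal sketch of generating such a CA with OpenSSL (the file names and subject are made up for illustration):

```shell
# Generate a throwaway "inspection" CA, the kind an SSL/TLS inspection
# device would use to re-sign server certificates on the fly.
# Names and the subject are hypothetical; -nodes skips key encryption.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout inspect-ca.key -out inspect-ca.crt \
  -subj "/CN=Example Inspection CA"

# Any endpoint that trusts inspect-ca.crt will accept certificates the
# proxy mints, letting it decrypt, inspect, and re-encrypt traffic.
openssl x509 -in inspect-ca.crt -noout -subject
```

Pushing a CA like this into every employee trust store is exactly what makes the inspection possible, and also what raises the compliance and human resources questions.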
Encrypted traffic also presents compliance issues and raises human resources considerations around decryption, which must be factored into your plans as you contemplate driving encryption deeper into the network. In our new series, Security and Privacy on the Encrypted Network, we will discuss how to enforce security policies to ensure data isn’t leaking out over encrypted tunnels, and employees adhere to corporate acceptable use policies, by decrypting traffic as needed. Then we will dive into the security monitoring and forensics use case to discuss traffic decryption strategies to ensure you can properly alert on security events and investigate incidents. Finally we will wrap up with guidance on how to handle human resources and compliance issues as an increasing fraction of network traffic is encrypted. We would like to thank Blue Coat Systems for potentially licensing the paper when this project is complete. Without our clients’ willingness to license our research you wouldn’t be able to access it for the low low price of $0…


Why Amazon is Rebooting Your Instances (Updated)

Update: Amazon published some details. Less than 10% of AWS systems are affected, and the vulnerability will be disclosed October 1st. As suspected this is about Xen – not the bash vulnerability. Yesterday I received notice that Amazon Web Services is force rebooting one of my instances. Then more emails started rolling in, and it looks like many (or all) of them will be rebooted during a single maintenance window. It has been a few years since this happened, and the reason ties into how AWS updates the servers your instances run on. We actually teach this in our cloud security training class, including how to architect your own cloud so you might not have to do the same thing – with, of course, many caveats. My initial assumption was application of a quiet security patch, and that looks dead on. From @ClipperChip via Matt Green on Twitter:

Amazon rebooting all AWS instances (https://t.co/xg2XoXDdEe) + an undisclosed advisory on http://t.co/PdLqk8qXSE http://t.co/Fo1beT7xrN 🙂

And here is what looks like that vuln:

XSA-108 | 2014-10-01 12:00 | none (yet) assigned | (Prereleased, but embargoed)

How AWS updates servers

Amazon uses a modified version of the Xen hypervisor. Our understanding of their architecture indicates they do not support live migration. Live migration, available under VMware as vMotion, allows you to move a running virtual machine from one physical host to another without shutting it down. When you build a cloud, host servers consist of (at least) a hypervisor with management and connectivity components. Sometimes, as with OpenStack, you even have a usable operating system. All these components need to be updated periodically. Some updates require rebooting the host server. To update the hypervisor you typically need to shut down the virtual machines (instances) running on top of it.
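In practice that means relaunching an instance (a full stop, then start) rather than rebooting it in place is what lets the scheduler move it to a patched host. A dry-run sketch using the AWS CLI follows; the instance ID and the run wrapper are hypothetical, and it only prints the commands unless you set DRY_RUN=0 with real credentials:

```shell
#!/usr/bin/env bash
# Relaunch an instance so the scheduler can place it on a patched host.
# DRY_RUN defaults to 1, so this only prints the commands it would run;
# set DRY_RUN=0 (with AWS credentials configured) to execute them.
set -euo pipefail
INSTANCE_ID="${1:-i-0123456789abcdef0}"   # hypothetical instance ID

run() {
  if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi
}

# A plain reboot stays on the same (possibly unpatched) host:
#   run aws ec2 reboot-instances --instance-ids "$INSTANCE_ID"

# A full stop/start releases the host placement and re-schedules:
run aws ec2 stop-instances --instance-ids "$INSTANCE_ID"
run aws ec2 wait instance-stopped --instance-ids "$INSTANCE_ID"
run aws ec2 start-instances --instance-ids "$INSTANCE_ID"
```

Note the stop/start pair is not the same operation as `aws ec2 reboot-instances`, which is exactly the distinction explained below.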
There are two common ways to manage these updates to reduce downtime:

- Update a host without any virtual machines running on it, then live migrate instances from a vulnerable host to a patched one. Then update the vulnerable host once all its instances are running elsewhere.
- If you cannot live migrate, do the same thing by shutting down and restarting the instances. If you built your cloud properly you can set a rule in the controller to not launch instances on the vulnerable host while preparing to reboot. Then the simple act of shutting down and relaunching the instance will automatically migrate it to a patched host.

In case you didn’t realize, every time you shut an instance down and start it again you likely move to a new host server. That is just normal cloud automation at work. When AWS has a large security patch like this they cannot rely on all customers conveniently relaunching during the desired window, so they need to take a maintenance window and do it for all affected users. Simple reboots generally do not trigger a host migration because a reboot doesn’t actually shut down the entire instance – the virtual machine just executes the operating system shutdown and reboot procedures, but the instance is never destroyed or completely halted. Many people don’t architect resilient servers to handle reboots, which is the problem. Or the reboots require some manual testing. This is why I am a massive fan of DevOps – its techniques provide extra resiliency for situations like this – but that’s for another post. Our cloud security training covers this, and one critical security requirement when building a private (or public) cloud is to understand your patching requirements and their implications for instances. For example if you architect for live migration you can reduce required reboots, by accepting different implications and constraints.


Why the bash vulnerability is such a big deal (updated)

Updated: I made a mistake and gave Akamai credit. Stephane doesn’t work for them – I misread the post. Fixed. Critical update: Red Hat confirmed their patch is incomplete, and patched bash is still exploitable. The technical term is “cluster fuck”. Anything you patch now will need to be repatched later. For critical systems consider the workaround in their post. For everything else, wait until your vendors release complete patches. Earlier today details of a vulnerability in the UNIX/Linux/OS X tool bash, discovered by Stephane Chazelas, became public with a disclosure and patch by Red Hat. It is called Shellshock, and it might be worse than Heartbleed. Most of you reading this are likely extremely familiar with bash, but in case you aren’t it is the most popular command-line shell program in the UNIX world, installed on pretty much anything and everything. From Red Hat: Coming back to the topic, the vulnerability arises from the fact that you can create environment variables with specially-crafted values before calling the bash shell. These variables can contain code, which gets executed as soon as the shell is invoked. The name of these crafted variables does not matter, only their contents. You might be thinking that someone needs to log in before they can ever reach bash, so no big deal, right? Wrong. Access to bash is embedded in a ton of applications. From CGI scripts running on Apache web sites to all sorts of random applications. Here is the short explanation of why this is so bad, and why we will likely be dealing with it for years: bash is embedded and accessed in so many ways that we cannot fully understand its depth of use. Many systems you would never think of as having a command line use bash to run other programs. I have used it myself, a bunch, in programs I have written – and I barely code. We cannot possibly understand all the ways an attacker could interact with bash to exploit this vulnerability. 
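The crafted-environment-variable mechanism Red Hat describes can be checked with the widely circulated one-liner: define a variable whose value looks like a function definition with trailing code, then invoke bash.

```shell
# Canonical Shellshock (CVE-2014-6271) check. On a vulnerable bash the
# code after the function body executes at shell startup and prints
# "vulnerable"; a patched bash prints only the echoed test string.
env x='() { :;}; echo vulnerable' bash -c 'echo this is a test'
```

Note the payload runs before the `-c` command even executes, which is why any program that sets attacker-controlled environment variables and then spawns bash is exposed.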
As Rob Graham has discovered, this is likely wormable. That places it into Code Red/Nimda territory. A wormable bug that can exploit public web servers is scary. We don’t know for sure, Rob doesn’t know for sure, but it looks very very possible. Potential worms are like staring at the smoking volcano while the earthquakes stir your martini – they aren’t the sort of thing you can wait for definitive proof on before taking seriously. There are rumors the patch may be incomplete. There is already a Metasploit module. Gee, thanks guys… you couldn’t give us a day? I strongly suggest keeping up with Rob’s analysis. There is really only one option: patch. It isn’t a fancy patch, but fragile systems could still suffer downtime. And you may need to re-patch if the original patch turns out to be faulty, which is always terrible. I will patch my systems and keep my ears open for any updates. Don’t trust any security vendor who claims they can block this. Patching is the only way to fix the core problem, which likely includes multiple exploit vectors. I will give bonus points to anyone who finds a vendor using Shellshock in their marketing, which then turns out to have a vulnerable product. Any security product based on UNIX/Linux is potentially vulnerable, although not necessarily exploitable. I suspect the Microsoft Security Response Center is very much enjoying their quiet evening.


Friday Summary: September 26, 2014

I have a great job. The combination of broad coverage areas – from business to tech and everything in between – makes it so. In this week alone I have talked to customers about Agile development and process adjustments, the technical details of how to deploy masking for Hadoop, and how to choose between two SIEM vendors, and talked to a couple of vendors about Oracle and SAP security. The breadth of stuff I am exposed to is awesome. People often ask me if I want to go back to being a CTO or offer me VP of Engineering positions, but I cannot imagine going back to just focusing on one platform. I don’t get my hands as dirty, but in some ways it is far more difficult to learn the nuances of half a dozen competitive product areas than just one. And what a great time to be neck deep in security … so long as I don’t drown in data. Learning about DevOps is fascinating. Talking to people who are pushing forward with continuous integration and deployment, and watching them break apart old dev/QA/IT cycles, provides a euphoric glimpse at what’s possible with Agile code development. Then I speak with more traditional firms, still deeply embedded in 24-month waterfall development. The long tail (and neck, and back) of their process feels like a cold bucket of reality – I wonder if a significant percentage of companies will ever be agile. When I contrast Why Security Automation is the Way Forward with mid-sized enterprises, I get another cold slap from reality. I speak with many firms who cannot get servers patched every other quarter. Security patches for open source will come faster than before, but organizational lag holds firm. It is clear that many firms have a decade-long transition to more agile processes in store, and some will never break down the cultural barriers between different teams within their companies. Gunnar’s recent To Kill A Flaw post is excellent. Too good, in fact – his post includes several points that demand their own blog entries.
One of the key points Gunnar has been making lately, especially in light of the nude celebrity photo leaks, is that credentials are a “zero day” attack. You need to keep that in mind when designing identity and access management today. If a guessed password provides a clear way in, you need to be able to live with that kind of 0-day. That is why we see a push away from simple passwords toward identity tokens, time-limited access, and risk-based authorization on the back end. Not only is it harder to compromise credentials, the relative risk score moves from 10 to about 4 because the scope of damage is lessened. A family member who is a bit technically challenged asked me “Is the Bash Bug Bad?” “Bad. Bad-bad-bad!” I left it at that. I think I will use that answer for press as well. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
- Adrian quoted on Chip and Pin.
- Rich quoted in Denver Post on How to Protect Your Data in the Cloud.

Favorite Securosis Posts
- Mike Rothman: Secure Agile Development: Building a Security Tool Chain. The testing process is where the security gets real. Great series from Adrian and Rich.
- Adrian Lane: Why the bash vulnerability is such a big deal (updated). Excellent overview by Rich on the bash vulnerability hyped as ‘shellshock’.

Other Securosis Posts
- Why Amazon is Rebooting Your Instances (Updated).
- Hindsight FTW.
- Summary: Run Free.
- Secure Agile Development: Process Adjustments.
- Incite 9/17/2014: Break the Cycle.
- Firestarter: Apple Pay.
- Fix Something.

Favorite Outside Posts
- Mike Rothman: The Pirate Bay Operations: 21 Virtual Machines Are Used To Run The File-sharing Website. This cloud thing might not be a fad. This is how you take an international network of stuff and move it quickly… And the torrents are pleased.
- Adrian Lane: Can Static Analysis replace Code Reviews? The case for security … this shows why old-fashioned manual scans cannot be fully replaced by static analysis. It also shows the need to train developers on what type of flaws to look for. Good post!

Research Reports and Presentations
- Pragmatic WAF Management: Giving Web Apps a Fighting Chance.
- The Security Pro’s Guide to Cloud File Storage and Collaboration.
- The 2015 Endpoint and Mobile Security Buyer’s Guide.
- Analysis of the 2014 Open Source Development and Application Security Survey.
- Defending Against Network-based Distributed Denial of Service Attacks.
- Reducing Attack Surface with Application Control.
- Leveraging Threat Intelligence in Security Monitoring.
- The Future of Security: The Trends and Technologies Transforming Security.
- Security Analytics with Big Data.
- Security Management 2.5: Replacing Your SIEM Yet?

Top News and Posts
- Three critical changes to PCI DSS 3.0
- Trustworthy Computing: RSA Signature Forgery in NSS
- Data Masking Bundled with Cloudera, Hortonworks
- Apple releases iOS 8 with 56 security patches
- CloudFlare Introduces SSL Without Private Key
- SSL Issues with this Blog? by Branden Williams.
- Bash ‘shellshock’ bug is wormable
- Funds in Limbo After FBI Seizes Funds from Cyberheist. Not a new problem, just a new cause.
- Jimmy John’s Confirms Breach at 216 Stores
- Julian Sanchez on NSA reform.
- Home Depot’s Former Security Engineer Had a Legacy of Sabotage
- Ping Identity Scoops $35M To Authenticate Everywhere

Blog Comment of the Week
This week’s best comment goes to Andrew Hay, in response to Why the bash vulnerability is such a big deal (updated). As per a conversation I had with HD Moore, he loves to release the Metasploit modules as quickly as possible in an effort to eliminate pay-per-exploit companies from profiting off of a particular vuln. I kind of agree with him.


Hindsight FTW

[soapbox] Within a week or two after every high profile data breach, we get naysayers and Tuesday Morning Quarterbacks playing the “If they only did X…” game. You know – the game where they are always right in hindsight. I am a bit surprised Pescatore jumped on that bandwagon in Simple Math: It Always Costs Less to Avoid a Breach Than to Suffer One, but he did. *Of course* it’s much cheaper to avoid a data breach. And folks have been talking about whitelisting on fixed-function devices such as POS systems for years (including me). Whitelisting is one of the SANS Critical Controls, so Home Depot definitely should have implemented it, right? After all, they could have avoided over $200MM in losses if they had only spent $25MM installing whitelisting on every device across their network. But that calculation is nonsense without the benefit of foresight. $25MM to implement whitelisting is real money. When folks make resource allocation decisions in a company like Home Depot, it’s not just a simple question of “Let’s spend $25MM to save $200MM.” The likelihood of a breach is X. The potential loss is Y. And X and Y are both unknown. Whereas $25MM could be used to update a bunch of stores, resulting in assured revenue increases. It’s not like they knew about Target or any other retail breach when they made that decision. Even though John contends they should have known (again with the fortune telling) and mobilized immediately to protect their devices. However, after the Target breach became public, any rational risk assessment would have significantly raised the probability of the bad thing happening – to pretty close to 100%! Note that I do not know for sure why Home Depot didn’t install tighter controls on their POS systems. I don’t know if they weighed one capital expenditure against another and whitelisting lost. I don’t know if they decided not to implement whitelisting after learning about Target. The only thing I know is that I don’t know enough to call them out.
It is disingenuous to make assumptions about what they did or didn’t do and why, so I will not. But I feel like the only one. We see an amazing number of folks have perfect vision about what Home Depot should have done. Of course it’s easy to see clearly in the rearview mirror. Or as Fall Out Boy sings:

I’m looking forward to the future
But my eyesight is going bad
And this crystal ball
It’s always cloudy except for
When you look into the past

[/off soapbox]
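To make the resource-allocation point concrete, here is a back-of-envelope expected-loss sketch. The $MM figures come from the post; both probability values are placeholder assumptions of mine, not anyone's actual risk estimates:

```shell
# Expected annual loss = breach probability * potential loss. The control
# only looks like an obvious win once the probability estimate jumps
# (post-Target). Probabilities are invented for illustration.
loss=200      # $MM potential breach loss (from the post)
control=25    # $MM whitelisting rollout cost (from the post)

for p in 0.10 0.90; do
  awk -v p="$p" -v l="$loss" -v c="$control" 'BEGIN {
    printf "p=%.2f: expected loss %.0fMM vs control cost %.0fMM\n", p, p*l, c
  }'
done
```

At the low (pre-Target) probability guess the expected loss is below the control cost, which is exactly why "simple math" only looks simple in hindsight.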


Secure Agile Development: Building a Security Tool Chain

Now that we have laid out the Agile process it’s time to discuss where different types of security testing fit within it. Your challenge is not just to figure out which tests you need to identify code issues, but also to fit them smoothly into the framework so testing stays fast. You will incorporate multiple testing techniques into the process, with each tool or technique focused on finding slightly different issues. Developers are clever, so development teams find ways to circumvent security testing if it interferes with efficient coding. And you will need to accept that some tests simply cannot be performed in certain parts of the process, while others can be incorporated in multiple places. To help you evaluate both which tools to consider and how to incorporate them, we offer several recommendations for designing a security “tool chain”.

Pre-Sprint Tests and Analysis

Threat modeling: Threat modeling is the act of looking for design-level security problems from the perspective of an attacker, and then designing countermeasures. The process enables designers and developers to think about the big picture security of an application or function, then build in defenses, rather than merely focusing on bugs. The classic vectors include unwanted escalation of user credentials, information disclosure, repudiation (e.g., injecting false data into logs), tampering, spoofing, and denial of service. For each new feature, all these subversion techniques are evaluated against every place a user or code module communicates with another. If issues are identified, the design is augmented to address the problem. In Agile these changes are incorporated into user stories before task cards are doled out.

Security defect list: Security defect tracking covers both collecting security defect data and getting the subset of information developers need to address problems. Most organizations funnel all discovered defects into a bug tracking system.
These defects may be found in normal testing or through any of the security tests described below. Security testing tools feed defect tracking systems so issues can be tracked, but that does not mean they provide consistent levels of information. Nor do they set the bar for criticality the same. How you integrate and tailor defect feeds from test tools, and normalize those results, is important for effective Agile integration. You need to reach an agreement with the Product Owner on which defects will be addressed in each sprint, and which will be allowed to slide (and for how long). The security defect backlog should be reviewed each sprint.

Patching and configuration management: Most software uses a combination of open source and/or commercial code to supplement what the in-house development team builds. For example, Apache supports most current web services. Keeping these supplementary components patched is just as necessary as fixing issues in your own code. Agile offers a convenient way to integrate patching and configuration changes at the beginning of each sprint: catalog security patches in supporting commercial and open source platforms, and incorporate the changes into the sprint as tasks. This presupposes IT and its production systems are as Agile as development systems, which is regrettably not always the case.

Daily Tests

Unit testing: Development teams use unit tests to prove that delivered code functions as designed. These tests are typically created during the development process; teams using test-driven or behavior-driven development write them before the code. Unit tests are run after each successful build, and help catch any defects that pop up due to recent changes. Unit tests often include attacks and garbage input to verify that the application is resilient to potential issues outlined during threat modeling.
The formal requirement for this type of testing needs to be included in the Agile user stories or tasks – tests do not magically appear if you fail to specify them as a requirement.

Security regression tests: Regression tests verify that a code change actually fixes a defect. Like unit tests they run after each successful build. Unlike unit tests, regression tests each target a known defect, either in your code or in a supporting code module. It is common for development teams to break previous security fixes – usually when merging code branches – so security regression tests are a failsafe. With each security task to fix a defect, include a simple test to ensure it stays fixed.

Manual code inspection: Code reviews, also called peer reviews, are where a member of the development team examines another developer's code. Reviewers check that code complies with general standards, but also look for specific implementation flaws such as unfiltered input variables, insufficient user authentication, and unhandled errors. Despite wide adoption of automated testing, about 50% of development shops still leverage manual code review for code quality assessment. Manual efforts may appear inefficient, but manual review is as Agile as it needs to be. For example, the team chooses whether to perform these reviews during development, QA, pre-deployment, or any combination of the above. The task can be assigned to any developer, on any branch of code, and made as focused or random as the team decides. We recommend manual review of critical code modules, supplemented with automated code scanning, because reviewing everything by hand is repetitive and error-prone. Manual review serves a very valuable security function when properly integrated: focus it on critical functions (including authentication and encryption), using domain experts to keep an eye on that code and subsequent changes.
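A security regression test of the kind described above might look like the following sketch. The defect ID, the `safe_join` helper, and the exploit input are all hypothetical, invented to show the shape of the test: pin the original exploit so a branch merge cannot quietly reintroduce it.

```python
# Hypothetical regression test: defect SEC-142 (made-up ID) was a path
# traversal bug in a file-download helper. The fix stays pinned by a test.
import posixpath

def safe_join(base, user_path):
    """Join a user-supplied path under base, refusing traversal escapes."""
    candidate = posixpath.normpath(posixpath.join(base, user_path))
    if not candidate.startswith(base.rstrip("/") + "/"):
        raise ValueError("path escapes base directory")
    return candidate

def sec_142_stays_fixed():
    # The original exploit input from the (hypothetical) defect report.
    try:
        safe_join("/srv/files", "../../etc/passwd")
        return False
    except ValueError:
        return True

assert sec_142_stays_fixed()
assert safe_join("/srv/files", "docs/report.pdf") == "/srv/files/docs/report.pdf"
```

Unlike a general unit test, this test exists only to keep one known hole closed, which is why it is cheap to write as part of the same task that fixes the defect.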
Every-Sprint (Commit) Tests

Static analysis: Static analysis examines the source code of a web application for common vulnerabilities, errors, and omissions within the constructs of the language itself, providing an automated counterpart to peer review. Among other things these tools generally scan for unhandled error conditions, unfiltered input variables, object availability and/or scoping problems, and potential buffer overflows. The technique is called "static analysis" because it examines source code rather than the execution flow of a running program. Like manual reviews, static analysis is effective at discovering 'wetware' problems: issues in code directly attributable to programmer error. Better tools integrate well with various
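To make the "examines source code rather than execution flow" distinction concrete, here is a toy stand-in for a static analyzer. It uses Python's `ast` module to flag calls to `eval` – one hand-rolled rule standing in for the hundreds a real commercial or open source tool applies; the sample program is invented for the example:

```python
import ast

def find_eval_calls(source):
    """Return line numbers where the source calls eval().
    A single toy rule standing in for a real static analyzer."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            findings.append(node.lineno)
    return findings

SAMPLE = """\
def load(cfg):
    return eval(cfg)  # dangerous: executes attacker-controlled input

def safe_load(cfg):
    import json
    return json.loads(cfg)
"""

# The analyzer inspects the text of the program; nothing is executed.
assert find_eval_calls(SAMPLE) == [2]
```

Because nothing runs, a check like this can sit directly in the commit pipeline and fail the build on every sprint, regardless of whether the dangerous path would ever execute in testing.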


Summary: Run Free

Last night I spent four hours without my iPhone. Four conscious hours, to be specific. It was wonderful. I realize that may sound strange, but I bet the majority of you reading this nearly always have a phone within hearing range, if not actively grasped in your hand or stuffed in a pocket where you obsessively check it every now and then, when the slightest breeze triggers the vibration nerves in your upper thigh. Maybe the Apple Watch will fix that last one. Unlike most of you I have been living with pagers, radios, and other on-call devices since around 1991. Due to my involvement in emergency services, I was effectively on-call continuously for years at a time. No, I was not required to show up, but between paid and volunteer gigs you just get used to always being in touch. It was also an amazing way to get out of crappy dates. But somehow my public service commitment slowly transitioned to having my phone on or near me at nearly all times. Part of this is due to my inherent geekiness, some an effect of running my own business, a smidge from being a parent, and plenty from a developed habit that isn’t necessarily the most positive psychological development. Practically speaking I do need to have my phone near me quite a bit, especially during working hours. Even when I am blocking out distractions, the folks I work with need to be able to get a hold of me if something important comes up – especially since I manage all our IT. And with a family of 5 there is a lot to coordinate. I even need it on longer workouts for safety – I run in the desert, ride my bike far from home (sometimes an hour away by car) and go on excursions in new cities. Is my phone a necessity? No, I did all that before having a phone, but I also got into some dicey situations. But that doesn’t mean it needs to be all the time. I used to catch a break when I was on mountain rescues or ski patrol. 
But not only do I no longer participate in those, cell coverage is far better than you would expect unless you go really deep into the backcountry. Or need to make a call on AT&T in New York City. Last night I was in San Jose for the Cloud Security Alliance conference. After teaching a developer class I met up with a friend who is also a runner (a better one than me). We went out for a nice four miles, then decided to grab some beer and burritos without swinging back for our stuff (she had cash). Between the run, slow service, and finding food, it was nearly four hours before we re-attached our digital leashes. This wasn't some sort of existential event. But it was nice to be out of touch for a while, and not worry about it. And even better that it didn't involve some massive excursion to evade cell towers. A run, two beers, a burrito, and then back home. No Yelp to check reviews, no Siri to find the closest burrito, no email interruptions or text messages. We survived, as did our children and businesses. Go figure. On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Rich quoted in USA Today on payments.
  • Rich also quoted in The Guardian on Apple Pay.
  • Adrian quoted on Sentrix. Not that the rest of us know who that is.
  • Adrian quoted on Apple Pay at TechTarget.
  • Rich on the ThreatPost podcast with Dennis Fisher. I always love talking with him. He lets me use bad words.

Favorite Securosis Posts

  • Mike Rothman: Secure Agile Development: Process Adjustments. Adapting to the situation is always challenging. Adrian and Rich go into how to adapt Agile development when things need to be tuned a bit.
  • Adrian Lane: Firestarter: Apple Pay.
  • Rich: Fix Something. No matter how good you are at poking holes and pointing fingers, I respect those who try to fix things more.

Other Securosis Posts

  • Incite 9/17/2014: Break the Cycle.
  • New Paper! The Security Pro’s Guide to Cloud File Storage and Collaboration.

Favorite Outside Posts

  • Mike Rothman: And so there must come an end. Really inspiring post on handling the end of life with grace. Charley documented her battle against cancer and wrapped up the story in a way that reminds us of the impermanence of everything.
  • Adrian Lane: OWASP Top 10 is Overrated. The author is clear that this is flame bait, but correct that the focus has shifted to the Top 10 without understanding, or reaching beyond, that simple list. The point of OWASP was community awareness, but they stumbled across what everyone in the press knows: people want distilled information.
  • Rich: I’m picking my own post on Apple Privacy at Macworld from back in June. Why? Well, Tim Cook’s statement on privacy might be one reason.

Research Reports and Presentations

  • The Security Pro’s Guide to Cloud File Storage and Collaboration.
  • The 2015 Endpoint and Mobile Security Buyer’s Guide.
  • Analysis of the 2014 Open Source Development and Application Security Survey.
  • Defending Against Network-based Distributed Denial of Service Attacks.
  • Reducing Attack Surface with Application Control.
  • Leveraging Threat Intelligence in Security Monitoring.
  • The Future of Security: The Trends and Technologies Transforming Security.
  • Security Analytics with Big Data.
  • Security Management 2.5: Replacing Your SIEM Yet?
  • Defending Data on iOS 7.

Top News and Posts

  • Home Depot hack may have exposed 56 million credit card numbers. I think we have our inflection point now.
  • Ping Identity Scoops $35M To Authenticate Everywhere.
  • The NSA Spied on German Telecoms.
  • Chinese Penetrate TRANSCOM Amid Lack of Data Sharing. Long-term penetration of the US military logistics chain. Nice.
  • Critical updates for Adobe Reader and Acrobat.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.