Incite 9/18/2013: Got No Game

On Monday night I did a guest lecture for some students in Kennesaw State’s information security program. It is always a lot of fun to get in front of the “next generation” of practitioners (see what I did there?). I focused on innovation in endpoint protection and network security, discussing the research I have been doing into threat intelligence. The kids (a few looked as old as me) seemed to enjoy hearing about the latest and greatest in the security space. It also gave me a forum to talk about what it’s really like in the security trenches, which many students don’t learn until they are knee-deep in the muck. I didn’t shy away from the lack of obvious and demonstrable success, or how difficult it is to get general business folks to understand what’s involved in protecting information. The professor had a term that makes a lot of sense: security folks are basically digital janitors, cleaning up the mess the general population makes.

When I started talking about the coming perimeter re-architecture (driven by NGFW, et al), I mentioned how much time they will be able to save by dealing with a single policy, rather than having to manage the firewall, IPS, web filter, and malware detection boxes separately. I told them that would leave plenty of time to play Tetris. Yup, that garnered an awkward silence. I started spinning and asked if any of them knew what Tetris was. Of course they did, but a kind student gently informed me that no one has played that game in approximately 10 years. Okay, how about Gears of War? Not so much – evidently that trilogy is over. I was going to mention Angry Birds, but evidently Angry Birds was so 12 months ago. I quit before I lost all credibility. There it was, stark as day – I have no game. Well, no video game anyway.

Once I got over my initial embarrassment, I realized my lack of prowess is kind of intentional. I have a fairly addictive personality, so anything that can be borderline addictive (such as video games) is a problem for me. It’s hard to pay my bills if I’m playing Strategic Conquest for 40 hours straight, which I did back in the early 90s. I have found through the years that if I just don’t start, I don’t have to worry about when (or if) I will stop.

I see the same tendencies in the Boy. He’s all into “Clash of Clans” right now. Part of me is happy to see him get his Braveheart on, attacking other villages Game of Thrones style. He seems pretty good at analyzing an adversary’s defenses and finding a way around them, leading his clan to victory. But it’s frustrating when I have to grab the Touch just to have a conversation with him. Although at least I know where he gets it from.

Some folks can practice moderation. You know, those annoying people who can take a little break for 15 minutes and play a few games, and then be disciplined enough to stop and get back on task. I’m not one of those people. When I start something, I start something. And that means the safest thing for me is often to not start. It’s all about learning my own idiosyncrasies and not putting myself in situations where I will get into trouble. So no video games for me!

–Mike

Photo credit: “when it’s no longer a game” originally uploaded by istolethetv

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.
Firewall Management Essentials
  • Optimizing Rules
  • Change Management
  • Introduction

Continuous Security Monitoring
  • Migrating to CSM
  • The Compliance Use Case
  • The Change Control Use Case
  • The Attack Use Case
  • Classification
  • Defining CSM
  • Why. Continuous. Security. Monitoring?

API Gateways
  • Implementation
  • Key Management
  • Developer Tools

Newly Published Papers
  • Threat Intelligence for Ecosystem Risk Management
  • Dealing with Database Denial of Service
  • Identity and Access Management for Cloud Services
  • The 2014 Endpoint Security Buyer’s Guide
  • The CISO’s Guide to Advanced Attackers
  • Defending Cloud Data with Infrastructure Encryption

Incite 4 U

Good guys always get DoS’ed: Django learned the hard way that if you give hackers an inch they’ll take a mile – and your site too. Last week they suffered a denial of service attack when users submitted multi-megabyte passwords – the “computational complexity” of generating strong hashes for a few requests was enough to DoS the site. Awesome. The mitigation to this kind of attack is input validation (a sketch of that kind of check appears below). Sure, as a security expert I still get pissed when sites limit me to 8 character passwords, but it’s unreasonable to accept the Encyclopedia Britannica as valid field input. I am sorry to be smiling as I write this – I feel bad for the Django folks – but it’s funny how no good security intentions go unpunished. Thanks for patching, guys! – AL

DHS gets monitoring religion (to the tune of $6B): Not sure how I missed the award of a $6 billion DHS contract to implement continuous detection and mitigation technology. Evidently this is the new term for continuous monitoring, and it seems every beltway bandit, and scanning and SIEM vendor, is involved. So basically nothing will get done – I guess that’s the American way. But this move, which started with NIST’s push to continuous monitoring and continues with DHS’s rebranded CDM initiative, is going in the right direction. Will they ever get to “real time” monitoring? Does it matter? They can’t actually respond in real time, so I don’t think so. If any of these gold-plated toilet seats provides the ability to see a vulnerability within a few days (rather than showing up on a quarterly report, and being ignored), it’s an improvement. As they said in Contact, “baby steps…” – MR

FUD filled vacuum: When working with clients I am often still surprised at how often even mature organizations underestimate the eventual misinterpretations of their
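To make the input validation point in the Django item above concrete, here is a minimal sketch of the kind of length check that blunts this class of DoS: reject absurdly long passwords before doing any expensive hashing. This is an illustration, not Django’s actual patch – the 128-character cap and the PBKDF2 parameters are assumptions for the example.

```python
# Minimal sketch: validate password length before expensive key stretching.
# The cap and PBKDF2 parameters are illustrative assumptions, not Django's fix.
import hashlib
import os

MAX_PASSWORD_LENGTH = 128  # generous for humans, tiny compared to a multi-megabyte payload

def hash_password(password, salt=None):
    """Hash a password, refusing oversized input before doing any key stretching."""
    if len(password) > MAX_PASSWORD_LENGTH:
        raise ValueError("password exceeds maximum allowed length")
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 100_000)
    return salt, digest

# A multi-megabyte "password" is rejected cheaply instead of burning CPU
# on 100,000 PBKDF2 iterations over megabytes of attacker-supplied input.
try:
    hash_password("A" * 5_000_000)
except ValueError as err:
    print("rejected:", err)
```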


Firewall Management Essentials: Managing Access Risk

We have discussed two of the three legs of comprehensive firewall management: a change management process and optimizing the rules. Now let’s work through managing risk using the firewall. Obviously we need to define risk, because depending on your industry and philosophy, risk can mean many different things. For firewall management we are talking about the risk of unauthorized parties accessing sensitive resources. Obviously if a device with critical data is inaccessible to internal and/or external attackers, the risk it presents is lower. This “access risk management” function involves understanding first and foremost the network’s topology and security controls. The ability to view attack paths provides a visual depiction of how an attacker could gain access to a device. With this information you can see which devices need remediation and/or network workarounds, and prioritize fixes. Another benefit of visualizing attack paths is in understanding when changes on the network or security devices unintentionally expose additional attack surface.

So what does this have to do with your firewall? That’s a logical question, but a key firewall function is access control. You configure the firewall and its rule set to ensure that only authorized ports, protocols, applications, users, etc. have access to critical devices, applications, etc. within your network. A misconfigured firewall can have significant and severe consequences, as discussed in the last post. For example, years ago when supporting a set of email security devices, we got a call about an avalanche of spam hitting the mailboxes of key employees. The customer was not pleased, but the deployed email security gateway appeared to be working perfectly. Initially perplexed, one of our engineers checked the backup email server, and discovered it was open to Internet traffic due to a faulty firewall rule. So attackers were able to use the backup server as a mail relay, and blast all the mailboxes in the enterprise. With some knowledge of network topology and the paths between external networks and internal devices, this issue could have been identified and remediated before any employees were troubled.

Key Access Risk Management Features

When examining the network and keeping track of attack paths, look for a few key features (a simple sketch of the underlying path analysis appears at the end of this post):

  • Topology monitoring: Topology can be determined actively, passively, or both. For active mapping you will want your firewall management tool to pull configurations from firewalls and other access control devices. You also need to account for routing tables, network interfaces, and address translation rules. Interoperating with passive monitoring tools (network behavioral analysis, etc.) can provide more continuous monitoring. You need the ability to determine whether and how any specific device can be accessed, and from where – both internal and external.
  • Analysis horsepower: Accounting for all the possible paths through a network requires an n*(n-1) analysis, and n gets rather large for an enterprise network. The ability to re-analyze millions of paths on every topology change is critical for providing an accurate view.
  • What if?: You will want to assess each possible change before it is made, to understand its impact on the network and attack surface. This enables the organization to detect additional risks posed by a change before committing it. In the example above, if that customer had a tool to help them understand that a firewall rule change would make their backup email server a sitting duck for attackers, they would have reconsidered.
  • Alternative rules: It is not always possible to remediate a specific device due to operational issues. So to control risk you would like a firewall management tool to suggest appropriate rule changes or alternate network routes to isolate the vulnerable device and protect the network.

At this point it should be clear that all these firewall management functions depend on each other. Optimizing rules is part of the change management process, and access risk management comes into play for every change. And vice-versa. So although we discussed these functions as distinct requirements of firewall management, in reality you need all of them to work together for operational excellence.

In this series’ last post we will focus on getting a quick win with firewall management technology. We will discuss deployment architectures and integration with enterprise systems, and work through a deployment scenario to make many of these concepts a bit more tangible.
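As an illustration of the attack path analysis described above, here is a minimal sketch that models permitted traffic flows as a directed graph and checks which critical assets are reachable from the Internet. The zone names and rules are assumptions for the example – real tools derive this model from actual device configurations, routing tables, and NAT rules, and handle far larger rule sets.

```python
# Minimal sketch: treat "traffic allowed from A to B" as edges in a directed
# graph, then check reachability from the Internet to critical assets.
# Zone names and flows are illustrative assumptions.
from collections import deque

allowed_flows = [
    ("internet", "dmz-web"),
    ("dmz-web", "app-tier"),
    ("internet", "backup-mail"),   # the faulty rule from the story above
    ("app-tier", "db-tier"),
]
critical_assets = ["backup-mail", "db-tier"]

graph = {}
for src, dst in allowed_flows:
    graph.setdefault(src, []).append(dst)

def find_path(start, target):
    """Breadth-first search returning one allowed path, or None if unreachable."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None

for asset in critical_assets:
    path = find_path("internet", asset)
    if path:
        print(f"EXPOSED: {asset} reachable via {' -> '.join(path)}")
    else:
        print(f"ok: {asset} not reachable from the internet")
```

The “what if?” capability amounts to re-running the same reachability check against a candidate rule set – for example, confirming that removing the internet-to-backup-mail rule eliminates that path – before the change is committed.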


Firewall Management Essentials: Optimizing Rules

Now that you have a solid, repeatable, and automated firewall change management process, it’s time to delve into the next major aspect of managing your firewalls: optimizing rules. Back in our introduction we talked about how firewall rule sets tend to resemble a closet over time. You have a ton of crap in there, most of which you don’t use, and whatever you do use is typically hard to get to. So you need to occasionally clean up and reorganize – getting rid of stuff you don’t need, making sure the stuff that’s still in there should be, and arranging things so you can easily access the stuff you use the most.

But let’s drop the closet analogy to talk firewall specifics. You need to optimize rules for a variety of reasons:

  • Eliminate duplicate rules: When you have a lot of hands in the rule base, rules can get duplicated – especially when the management process doesn’t require a search to make sure an overlapping rule doesn’t already exist.
  • Address conflicting rules: At times you may add a rule (such as ALLOW PORT 22) to address a short-term issue, even though you might have other rules to lock down the port or application. Depending on where the rule resides in the tree, the rules may conflict, either adding attack surface or breaking functionality.
  • Get rid of old and unused rules: If you don’t go back into the rule set every so often to ensure your rules are relevant, you are bound to have rules that are no longer necessary, such as access to that old legacy mainframe application that was decommissioned 4 years ago. It is also useful to go back and confirm with each rule’s business owner that their application still needs that access, and that they accept responsibility for it.
  • Simplify the rule base: The more rules, the more complicated the rule base, and the more likely something will go wrong. By analyzing and optimizing rules on a periodic basis, you can find and remove unneeded complexity.
  • Improve performance: If you have frequently used rules at the bottom of the tree, the firewall needs to go through every preceding rule to reach them. That can bog down performance, so you want the most frequently hit rules as early as possible. Without conflicting with other rules, of course.
  • Control network risk: Networks are very dynamic, so you need to ensure that every network or device configuration change doesn’t add attack surface, requiring a firewall rule change.

For all these reasons, going through the rule base on a regular basis is key to keeping firewalls running optimally. Every rule should be required to support the business, and optimally configured.

Key Firewall Management Rule Optimization Features

The specific features you should get in your firewall management product or service apply directly to the requirements above:

  • Centralized management: A huge benefit of more actively managing firewalls is the ability to enforce a set of consistent policies across all firewalls, regardless of vendor. So you need a scalable tool that supports all your devices. You should have a single authoritative source for firewall policies.
  • Rule change recommendations: If a firewall rule set gets complicated enough, it’s hard for any human – even your best security admin – to keep everything straight. So a tool should be able to mine the existing rule set (thousands of rules) to find and get rid of duplicate, hidden, unused, and expired rules. Tools should assess the risk of the rules, and flag rules which allow too much access (you know: ANY ANY).
  • Optimize rule order: A key aspect of improving firewall performance is making sure the most-hit rules are closer to the top of the tree. The tool should track which rules are hit most often through firewall log analysis, and suggest an ordering to optimize performance without increasing exposure (see the sketch at the end of this post).
  • Simulate rule changes: Clever ideas can turn out badly if a change conflicts with other rules or opens up (or closes) the wrong ports/protocols/applications/users/groups, etc. The tool should simulate rule changes and predict whether the change is likely to present problems.
  • Monitor network topology and device configuration: Every network and device configuration change can expose additional attack surface, so the tool needs to analyze every proposed change in the context of the existing rule set. This involves polling managed devices for their configurations on a periodic basis, as well as monitoring routing tables.
  • Compliance checking: Related to monitoring topology and configurations, changes can also cause compliance violations. So you need the firewall management tool to flag rule changes that might violate any relevant compliance mandates.
  • Recertify rules: The firewall management tool should offer a mechanism to go back to business owners to ensure rules are still relevant and that they accept responsibility for their rules. You should be able to set an expiration date on a rule, and then require an owner to confirm each rule is still necessary. Getting rid of old rules is one of the most effective ways to optimize a rule set.

Asking for Forgiveness

Speaking of firewall rule recertification, you certainly can go through the process of chasing down all the business owners of rules, if you know who they are, and getting them to confirm each rule is still needed. That’s a lot of work. You could choose a less participatory approach as well: make changes and then ask forgiveness if you break something. There are a couple options with this approach:

  • Turn off unused rules: Use the firewall management tool’s ability to flag unused rules and just turn them off. If someone complains, you know the rule is still required and you can assume they would be willing to recertify it. If not, you can get rid of it.
  • Blow out the rule base: You can also burn the rule base to the ground and wait for complaints to start about applications that broke as a result. This is only sane in dire circumstances, where no one will take responsibility for rules or people are totally unresponsive to your attempts to clean things up. But it’s certainly an option.

NGFW Support

With the move
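To make the duplicate detection and rule ordering ideas a bit more concrete, here is a minimal sketch of both, using a toy rule representation. The rule fields and hit counts are assumptions for illustration – commercial tools work from vendor-specific configurations and firewall logs, and must also preserve ordering dependencies between overlapping rules, which this sketch ignores.

```python
# Minimal sketch: flag exact-duplicate and unused rules, and propose a
# hit-count ordering. Fields and hit counts are illustrative assumptions;
# real tools must also respect dependencies between overlapping rules.
from collections import namedtuple

Rule = namedtuple("Rule", "name src dst port action hits")

rules = [
    Rule("allow-web",        "any",       "dmz-web",   443, "allow", 98231),
    Rule("allow-ssh-tmp",    "admin-net", "app-tier",  22,  "allow", 12),
    Rule("allow-web-dup",    "any",       "dmz-web",   443, "allow", 0),   # duplicate
    Rule("legacy-mainframe", "hr-net",    "mainframe", 23,  "allow", 0),   # unused
    Rule("allow-dns",        "any",       "dns",       53,  "allow", 45102),
]

# Duplicates: same match criteria and action, regardless of name or hit count
seen, duplicates = {}, []
for rule in rules:
    key = (rule.src, rule.dst, rule.port, rule.action)
    if key in seen:
        duplicates.append((rule.name, seen[key]))
    else:
        seen[key] = rule.name

# Unused rules are candidates for recertification or removal
unused = [r.name for r in rules if r.hits == 0]

# Proposed order: most-hit rules first, so common traffic matches early
proposed = sorted(rules, key=lambda r: r.hits, reverse=True)

print("duplicates:", duplicates)
print("unused (recertify or disable):", unused)
print("proposed order:", [r.name for r in proposed])
```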


Black Hat West Cloud Security Training

I am psyched to announce that our Black Hat Vegas class went well, and we have been invited to teach in Seattle December 9-10 and 11-12. As before, we will be bringing some advanced material, but you shouldn’t be scared off – advanced skillz are not required to make it through the class. You can sign up for the class here. The short description is:

CLOUD SECURITY PLUS (CCSK-Plus)

Provide students with the practical knowledge they need to understand the real cloud security issues and solutions. The Cloud Security Plus class provides students with a comprehensive two-day review of cloud security fundamentals and prepares them to take the Cloud Security Alliance Certificate of Cloud Computing Security Knowledge (CCSK) exam (this course is also known as the CCSK-Plus). Starting with a detailed description of cloud computing, the course covers all major domains in the latest Guidance document from the Cloud Security Alliance, and includes a full day of hands-on cloud security training covering both public and private cloud.


Threat Intelligence for Ecosystem Risk Management [New Paper]

Most folks think the move towards the extended enterprise is very cool. You know, get other organizations to do the stuff your organization isn’t great at. It’s a win/win, right? From a business standpoint, there are clear advantages to building a robust ecosystem that leverages the capabilities of all organizations. But from a security standpoint, the extended enterprise adds a tremendous amount of attack surface. In order to make the extended enterprise work, your business partners need access to your critical information. And that’s where security folks tend to break out in hives. It’s hard enough to protect your networks, servers, and applications while making sure your own employees don’t do anything stupid to leave you exposed. Imagine your risk – based not just on how you protect your information, but also on how well all your business partners protect their information and devices as well. Actually, you don’t need to imagine that – it’s reality.

We are pleased to announce the availability of our Threat Intelligence for Ecosystem Risk Management white paper. This paper delves into the risks of the extended enterprise and then presents a process to gather information about trading partners, to make decisions regarding connectivity and access more fact-based. Many of you are not in a position to build your own capabilities to assess partner networks, but this paper offers perspective on how you would, so when considering external threat intelligence services you will be an educated buyer.

You can see the Threat Intelligence for Ecosystem Risk Management page in our Research Library or download the paper directly (PDF).

We want to thank BitSight Technologies for licensing the content in this paper. The largesse of our licensees enables us to provide our research without cost to you.


Firewall Management Essentials: Change Management

As we dive back into Firewall Management Essentials, let’s revisit some of the high points from our Introduction: firewalls run on a set of rules that basically define what ports, protocols, networks, users, and increasingly applications, can do on your network. And just like a closet in your house, if you don’t spend time sorting through old stuff it can become a disorganized mess, with a bunch of things you haven’t used in years and don’t need any more. The problem is that, like your closet, the rule base just gets worse the longer you put off dealing with it. And it’s not like rule bases are static. You have new requests coming in to open this port or allow that group of users to do something new or different pretty much every day. The situation can get out of control quickly, especially as you increase the number of devices in use.

So first we will dig into building a consistent workflow to manage the change process. This process is important for numerous reasons (a minimal sketch of a change record capturing these elements appears at the end of this post):

  • Accuracy: If you make an incorrect change or have rules which conflict with other rules, you can add significant attack surface to your environment. So it is essential to ensure you make the proper changes, correctly.
  • Authorization: It is difficult for many security admins to say no, especially to persuasive business and technology leaders who ‘need’ their stuff done now. So a consistent and fair authorization process eliminates bullying and other shenanigans folks use to get what they want.
  • Verification: Was the change made correctly? Are you sure? The ability to verify the change was correct and successful is important, especially for auditing.
  • Audit trail: Speaking of audit, making sure every change is documented, with details of the requestor and approver, is helpful both when preparing for an audit and for ensuring the audit’s outcome is positive.

Network Security Operations

A few years ago we tackled building a huge and granular process map for network security operations as part of our Network Security Operations Quant research. One of the functions we explicitly described was managing firewalls. Check out the detailed process map: this can be a bit ponderous for many organizations, and isn’t necessarily intended to be implemented in its entirety. But it illustrates what is involved in managing these devices. To ensure you understand how we define some of these terms, here is a brief description of each step from that report.

Policy, Rule, and Signature Management

In this phase we manage the content that underlies the network security devices. This includes attack signatures and the policies & rules that control response to an attack.

  • Policy Review: Given the number of monitoring and blocking policies available on network devices, it is important to keep rules (policies) current. Keep in mind the severe performance hit (and false positive issues) of deploying too many policies on a device. It is a best practice to review network security device policies and prune rules that are obsolete, duplicative, overly exposed, prone to false positives, or otherwise unneeded. Policy review triggers include signature updates, service requests (new application support, etc.), external advisories (to block a certain attack vector or work around a missing patch, etc.), and policy updates resulting from the operational management of the device (change management process described below).
  • Define/Update Policies & Rules: This involves defining the depth and breadth of the network security device policies, including the actions (block, alert, log, etc.) taken by the device if an attack is detected – whether via rule violation, signature trigger, or another method. Note that as the capabilities of network security devices continue to expand, a variety of additional detection mechanisms come into play. They include increasing visibility into application traffic and identity stores. Time-limited policies may also be deployed to activate or deactivate short-term policies. Logging, alerting, and reporting policies are defined in this step. Here it is important to consider the hierarchy of policies that will be implemented on devices. You will have organizational policies at the highest level, applying to all devices, which may be supplemented or supplanted by business unit or geographic policies. Those highest-level policies feed into the policies and/or rules implemented at a location, which then filter down to the rules and signatures implemented on a specific device. The hierarchy of policy inheritance can dramatically increase or decrease the complexity of rules and behaviors. Initial policy deployment should include a Q/A process to ensure none of the rules impacts the ability of critical applications to communicate either internally or externally.
  • Document Policies and Rules: As the planning stage is an ongoing process, documentation is important for operational and compliance reasons. This step lists and details the policies and rules in use on the device according to the associated operational standards, guidelines, and requirements.

Change Management

In this phase rule & signature additions, changes, updates, and deletions are handled.

  • Process Change Request and Authorize: Based on either a signature or policy change within the Content Management process, a change to the network security device(s) is requested. Authorization requires both ensuring the requestor is allowed to request the change, and assessing the change’s relative priority to select an appropriate change window. The change’s priority is based on the nature of the signature/policy update and the risk of the relevant attack. Then build out a deployment schedule based on priority, scheduled maintenance windows, and other factors. This usually involves the participation of multiple stakeholders – ranging from application, network, and system owners to business unit representatives if downtime or changes to application use models are anticipated.
  • Test and Approve: This step requires you to develop test criteria, perform any required testing, analyze the results, and approve the signature/rule change for release once it meets your requirements. Testing should include signature installation, operation, and performance impact on the device as a result of the change. Changes may be implemented in ‘log-only’ mode to observe their impact before committing to blocking mode in production. With an understanding of the impact of the change(s), the request is either approved or denied. Obviously approvals may be required from
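To tie the accuracy, authorization, verification, and audit trail requirements together, here is a minimal sketch of what a firewall change record and its state transitions might look like. The states, field names, and in-memory audit log are assumptions for illustration – a real firewall management tool persists this in a database and integrates with ticketing systems and device APIs.

```python
# Minimal sketch: a firewall change request that enforces a simple
# requested -> authorized -> implemented -> verified workflow and keeps an
# audit trail. States and fields are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone

WORKFLOW = ["requested", "authorized", "implemented", "verified"]

@dataclass
class ChangeRequest:
    rule: str                      # e.g. "allow tcp/443 from partner-net to dmz-web"
    requestor: str
    state: str = "requested"
    audit_trail: list = field(default_factory=list)

    def _log(self, actor, action):
        self.audit_trail.append((datetime.now(timezone.utc).isoformat(), actor, action))

    def advance(self, actor, new_state):
        # Only allow the next step in the workflow -- no skipping authorization
        # (calling advance() past the final state would raise an IndexError)
        expected = WORKFLOW[WORKFLOW.index(self.state) + 1]
        if new_state != expected:
            raise ValueError(f"cannot move from {self.state} to {new_state}")
        self.state = new_state
        self._log(actor, f"moved to {new_state}")

change = ChangeRequest(rule="allow tcp/22 from admin-net to app-tier", requestor="app-team")
change.advance("firewall-admin", "authorized")
change.advance("network-ops", "implemented")
change.advance("auditor", "verified")
print(change.state)
for entry in change.audit_trail:
    print(entry)
```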


Friday Summary: No Sleep, Mishmash Edition

I had a really great Friday Summary planned. I was going to go all in-depth and metaphysical on something really important, with a full-on “and knowing is half the battle” conclusion at the end, tying it back to security and making you reevaluate your life. That was before my 6-month-old decided to go to bed after 11pm, then wake up at 3am, and not go back to sleep until 5:15am. Followed by my 4.5-year-old waking me up at 6am because, although she knew it was too early, I forgot to put the iPad – the one she is allowed to watch until it’s time to wake us up – in her room. Then there was the cat. That f***ing cat. (It was my turn to take the baby… he had already wrecked my wife the nights before). So someone is reevaluating their life, but it isn’t you. Instead, I’m going to emulate Adrian: here is my stream of consciousness…

Residential alarm companies don’t really like hackers/tinkerers. I have some extensive home automation and I want to pull alerts out of my alarm panel (without enabling control) to trigger certain things and use the sensors. The phone calls tend not to go well. They all have home automation packages they will gladly sell me, and usually after the third time I tell them I have thousands of dollars and tons of custom programming in my own system, they finally get it. None of them want to let you access the panel you pay for because they are legitimately worried about false alarms. Can’t really blame them – I wouldn’t trust me either.

I finally added some security cameras, mostly to watch the kids outside in the play pool when I have to run inside for my morning… constitutional. I’d like to put some in the play areas but I don’t like how intrusive they look. Need to figure that out.

There is a bobcat in our neighborhood. It’s living in the yard of a house that has been effectively abandoned for 3 years because no one seems to know who actually owns or is responsible for it. The bank would sure like the cash, but doesn’t want to deal with maintenance. I smell one of those improperly handled mortgage paperwork situations. The bobcat has cubs and seems quite content to bounce around our backyards. Many neighbors are scared of it, despite, you know, scientific evidence. I mentioned on our community forum that their kids should be safe unless they leash the babies to a stake out in a backyard – that may not work out well.

A bunch of neighbors would also like to gate our community due to a mild uptick in break-ins (the other reason for the alarm and camera updates). That would involve about 50 unmanned gates for 900 homes and 6,752 landscapers with keys, judging from the 24/7 blower noises around here. Seriously, we would have to give gate codes to easily over 10K people over the course of the first year. Then there is the maintenance, and if you gate a community you need to take over street maintenance. And there is no evidence that unmanned gates reduce crime. I live with a lot of very scared upper-middle-class people.

Other people want to slather cameras all over our community. They don’t understand that no one watches them. Someone thought we would have a control center like a casino or something, with security calling in drone strikes for suspicious vehicles. (I consider cameras a mild deterrent at best, and mostly useful for me to keep an eye on the kids when I need to take my morning constitutional.) I mean cameras are mild deterrents – a few drone strikes would probably be pretty effective.

Me? I think for a fraction of the long-term cost of either option we could hire additional security and off-duty police patrols. Incident response and active defense, baby!

My 4.5-year-old and her best friend have decided which boys they are going to marry. In related news, I will be shopping for a gun safe this weekend.

The new Lego Mindstorms EV3 is amazing. I’m a long-time fan of Lego robots, and this one is far more accessible to my young kids due to the ball shooter and iPhone/iPad control. I still need to do all the building and programming, but I’m working on getting them to tell me what they want it to do and break that down into discrete steps. They want me to build an “evil robot” so they can put on their super hero clothes and battle it. The 4.5-year-old has a nice Captain America shield (she was pissed the first time she threw it, because it didn’t come back), and the 3-year-old has a cool Fisher Price Spider-Man web shooter thing. Both girls, both started on super hero kicks without my influence, and both are totally awesome.

That’s all I got. Go buy Legos, watch out for bobcats, and don’t get involved in your community security program unless you want to realize how nice our infosec world is in comparison. Seriously.

One last note – good luck to everyone in Boulder. It’s very hard to watch the floods from the outside, but still a hell of a lot easier than what you all are going through. Stay safe!

On to the Summary. To be honest, due to the lack of sleep and my family walking in the door, it’s a bit light this week…

Webcasts, Podcasts, Outside Writing, and Conferences

  • Rich presenting on cloud encryption next week.
  • Rich wrote two articles on Apple’s Touch ID fingerprint sensor. You can read them at Macworld and TidBITS. They were both referenced by a ton of sites.
  • Rich was also quoted on Touch ID at the Wall Street Journal.
  • Cloud IAM webcast next week: Check it out!
  • Adrian’s DR post on PII and Entitlement Management.
  • Another DR piece from Mike on “Talking Threats with Senior Management”.
  • Mike’s latest DR column on the million bot network.
  • Mike quoted


Oracle Quietly Adds (Possibly Major) Java Security Update

We received an email tip today that Oracle added a new security feature to Java that might be pretty important (awaiting confirmation that I can publicly credit the person who sent it in):

Deployment Rule Set is a new security feature in JDK 7u40 that allows a system administrator to control which applets or Java Web Start applications an end user is permitted to execute and which version of the Java Runtime Environment (JRE) is associated with them. Deployment Rule Set provides a common environment to manage employee access in a controlled and secure manner.

Clearly it depends on how easy it is to circumvent, and I don’t hold out hope that it will stop advanced attacks, but it does seem like it might help if you put the right policy set in place. More details are available.


Incite 9/11/2013: Brave New World

On a trip to the Bay Area recently, I drove past the first electronic billboard I ever saw. It’s right on the 101 around Palo Alto, and has been there at least 7 or 8 years. This specific billboard brings up a specific and painful memory – it was also the first billboard I saw advertising Barracuda’s spam firewall many moons ago. But clearly it wasn’t the last. Working for CipherTrust (a competitor) at the time, I got calls, and then started getting pictures of all the billboards from our field reps, who were sporting new phones with cameras. They wanted to know why we couldn’t have billboards. I told them we could have billboards or sales people, but not both. Amazingly enough they chose to stop calling me after that. That’s how I knew camera phones were going to be a big deal.

At that point a camera built into your phone was novel. There was a time when having music and video on the phone was novel too. Not any more. Now almost every phone has these core features, and lots of other stuff we couldn’t imagine living without today. For example, when was the last time you asked a rental car company for a paper map? Or didn’t price check something you were buying in a store to see whether you could get it cheaper online?

And fancy new capabilities are showing up every day. Yesterday the Apple fanboys were all excited about thumbprint authentication and a fancy flash. Unless you are a pretty good photographer, there really isn’t any reason to carry a separate camera around any more. I’m sure Samsung will come out with something else before long, and the feature war will continue. But keep in mind that just 7 years ago all these capabilities were just dreams of visionaries designing the next generation of mobile devices. And then it took the hard work of the engineers and designers to make those dreams a reality. And we are only getting started. It’s a brave new mobile-enabled world. And it is really exciting to see where we will end up next.

–Mike

Photo credit: “Brave New World #1” originally uploaded by Rodrigo Kore

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.

Firewall Management Essentials
  • Introduction

Ecosystem Threat Intelligence
  • Use Cases and Selection Criteria
  • Assessing Ecosystem Risk
  • The Risk of the Extended Enterprise

Continuous Security Monitoring
  • Migrating to CSM
  • The Compliance Use Case
  • The Change Control Use Case
  • The Attack Use Case
  • Classification
  • Defining CSM
  • Why. Continuous. Security. Monitoring?

Database Denial of Service
  • Countermeasures
  • Attacks
  • Introduction

API Gateways
  • Implementation
  • Key Management
  • Developer Tools

Newly Published Papers
  • Identity and Access Management for Cloud Services
  • The 2014 Endpoint Security Buyer’s Guide
  • The CISO’s Guide to Advanced Attackers
  • Defending Cloud Data with Infrastructure Encryption
  • Network-based Malware Detection 2.0: Assessing Scale, Accuracy, and Deployment
  • Quick Wins with Website Protection Services

Incite 4 U

Touch me baby: I have long been skeptical of the possibility of widespread use of biometrics among consumers. What are the odds that someone could get a large percentage of consumers to carry around a fingerprint reader all the time? Phones were always the potential sweet spot, but most of the smaller optical readers we have seen integrated into smaller devices had serious usability issues. That’s why Apple’s Touch ID is so interesting (I wrote it up at TidBITS and Macworld). It uses a snappy capacitive sensor in a device with a crypto chip, ubiquitous network access, and even short-range wireless (Bluetooth LE). Plus, it is a single phone model which will see widespread adoption. Expect others to copy the idea (potentially a good thing, but good luck finding decent sensors) and to see some very interesting applications over the next few years. 2FA for the mass market, here we go! – RM

Pull my finger: Schneier has it right that biometric systems can “almost certainly” be hacked, but shoving a fake finger in front of a fingerprint scanner isn’t it. Biometric analysis is more than just the scanner. Once you have scanned a retina or fingerprint, you send the scanned data to some other location, compare it with a known representation of the print (probably a hash) in a database, and then send back a yea/nay to the service the user is trying to access – mobile phone, building, or whatever. That service may also perform some risk assessment before granting access. That entire ecosystem has to be secure as well. And the kicker is that the better the biometric detection piece, the more complex the system needs to be, leading to more potential methods to subvert the overall system! Biometrics should be a second factor of authentication, making fakery much more difficult. And the idea is popular because of the convenience factor for the user – biometrics can be more convenient than a password. But no one should consider them intrinsically more secure than passwords. Some people think this is a bad idea. – AL

Walenda CISO: Simon Wardley posted an interesting article about when it’s time to fire the CISO. You’d figure after a breach, right? Or maybe if a big compliance fine is heading your way. Those are both decent times to think about making a change. But Simon’s point is that when the CISO (or CIO, for that matter) can no longer balance the needs of business with the needs of security and make appropriate adjustments, then it is time for a change. Basically you need a tightrope walker, a Flying Walenda, to balance all the competing interests in today’s IT environments. If the business is constantly going around IT (to become Shadow IT), then there is clearly a failure to communicate or a resourcing problem. Either way, IT and/or security isn’t getting it done and some changes are probably in order. – MR

Protection racket: I chuckled when completing the application for a corporate


Unprecedented and Shortsighted

I am still putting my personal thoughts together on the recent NSA revelations. The short version is that when you look at it in the context of developments in vulnerability disclosure and markets, we are deep into a period where our benign government has actively undermined the security of citizens, businesses, and even other arms of government, at scale, in order to develop and maintain offensive capabilities. (Yes, I’m a patriotic type who considers our government benign.) They traded one risk for another, with the assumption that the scale and scope of their activities would remain secret. Now that they aren’t, we will see a free-for-all.

That’s why I am even writing about this on Securosis. Those of us in security need to prepare for both system/design vulnerabilities and specific implementation flaws. We may have to replace hardware, as foreign governments and criminals find these flaws (they will).

I don’t believe this was done maliciously. It appears to be mission creep, as individual units worked towards their mission without considering the overall implications. Someone at the top decided it was better to leave us exposed to widespread exploitation than to lose monitoring capabilities and miss another terrorist attack (these programs existed to some degree before 9/11, but clearly have exploded since then). It was a calculated risk decision. One I may not agree with, but can sympathize with.

But the end result is that we may be in the first days of cleaning up some very fundamental messes. Now that we have direct evidence, the risks of external attack have increased for organizations and consumers. The issue has gone beyond monitoring and data collection to affect every security professional, and our ability to do our jobs.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.