Securosis Research

Investigating Touch ID and the Secure Enclave

As much as it pained me, Friday morning I slipped out of my house at 3:30am, drove to the nearest Apple Store, set up my folding chair, and waited patiently to acquire an iPhone 5s. I was about number 150 in line, and it was a good thing I didn’t want a gold or silver model. This wasn’t my first time in a release line, but it was most definitely the first time I have stood in line since having children and truly appreciated the value of sleep. It wasn’t that I felt I must have the new shiny object; rather, as someone who writes extensively on Apple security, I felt it was important to get my hands on a Touch ID-equipped phone as quickly as possible, to really understand how it works. I learned even more than I expected.

The training process is straightforward and rapid. Once you enable Touch ID you press and lift your finger, and if you don’t move it around at all the iPhone prompts you to slightly change positioning for a better profile. Then there is a second round of sensing the fringes of your finger. You can register up to five fingers, and they don’t all have to be your own.

What does this tell me from a security perspective? Touch ID is clearly storing an encrypted fingerprint template, not a hashed one. The template is modified over time as you use it (according to Apple’s statements). Apple also mentions, in their Touch ID support note, that there is a 1 in 50,000 chance of a match for a given section of fingerprint. So I believe they aren’t matching the entire template, but a certain number of registered data points (a toy sketch below illustrates the idea). There are some assumptions here, and some of my earlier assumptions about Touch ID were wrong. Apple has stated from the start that the fingerprint data is encrypted and stored in the Secure Enclave of the A7 chip. In my earlier Macworld and TidBITS articles I explained that I thought they really meant hashed, like a passcode, but I now believe not only that I was wrong, but that there is even more to it.

Touch ID itself is insanely responsive. As someone who has used many fingerprint scanners before, I was stunned by how quickly it works, from so many different angles. The only failures I have are when my finger is really wet (it still worked fine during a sweaty workout). My wife had more misreads after a long bath, when her skin was saturated and swollen. This is the future of unlocking your phone – if you want it. I already love it.

I mentioned that the fingerprint template (Apple prefers to call it a “mathematical representation”, but I am sticking with standard terms) is encrypted and stored. I believe Touch ID also stores your device passcode in the Secure Enclave. When you perform a normal swipe to unlock, then use Touch ID, it clearly fills in your passcode (or Apple is visually faking it). Also, during the registration process you must enter your passcode (and Apple ID password, if you intend to use Touch ID for Apple purchases). Again, we won’t know until Apple confirms or denies, but it seems your iPhone works just like normal, using standard passcode hashing to unlock and encrypt the device. Touch ID stores the passcode in the Secure Enclave, which Apple states is walled off from everything else. When you successfully match an enrolled finger, your passcode is loaded and filled in for you.

Assumptions abound here, but they are educated ones. The key implication is that you should still use a long and complicated passcode. Touch ID does not prevent brute-force passcode cracking!
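To make the partial-match idea concrete, here is a toy sketch in Python. Apple has not published Touch ID’s matching algorithm, so the template format, the RidgePoint structure, and the threshold below are all invented for illustration – the point is only that a scan of one section of a finger can clear a threshold against a richer enrolled template.

```python
# Conceptual model only -- Apple has not published Touch ID's matching
# algorithm. Template format, threshold, and scoring are invented.
from dataclasses import dataclass

@dataclass(frozen=True)
class RidgePoint:
    x: int
    y: int
    angle: int  # ridge orientation bucket

def match_score(scan: set[RidgePoint], template: set[RidgePoint]) -> float:
    """Fraction of scanned data points that align with the enrolled template."""
    if not scan:
        return 0.0
    return len(scan & template) / len(scan)

def touch_id_match(scan: set[RidgePoint],
                   templates: list[set[RidgePoint]],
                   threshold: float = 0.6) -> bool:
    # A partial scan only needs enough overlapping points to clear the
    # threshold -- consistent with matching a section of the finger,
    # not the whole print.
    return any(match_score(scan, t) >= threshold for t in templates)
```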
The big question now is how the Secure Enclave works, and how secure it really is. Based on a pointer provided by James Arlen in our Securosis chat room, and information released from various sources, I believe Apple is using ARM TrustZone technology. That page offers a white paper in case you want to dig deeper than the overview provides, and I read all 108 pages. From ARM’s description:

“The security of the system is achieved by partitioning all of the SoC hardware and software resources so that they exist in one of two worlds – the Secure world for the security subsystem, and the Normal world for everything else. Hardware logic present in the TrustZone-enabled AMBA3 AXI™ bus fabric ensures that Normal world components do not access Secure world resources, enabling construction of a strong perimeter boundary between the two. A design that places the sensitive resources in the Secure world, and implements robust software running on the secure processor cores, can protect assets against many possible attacks, including those which are normally difficult to secure, such as passwords entered using a keyboard or touch-screen. By separating security sensitive peripherals through hardware, a designer can limit the number of sub-systems that need to go through security evaluation and therefore save costs when submitting a device for security certification.”

Seems pretty clear, but we still don’t know exactly what Apple is up to. TrustZone is very flexible and can be implemented in a number of different ways. At the hardware level, this might or might not include ‘extra’ RAM and resources integrated into the System on a Chip. Apple may have some dedicated resources embedded in the A7 for handling Touch ID and passcodes, which would be consistent with their statements and diagrams. Secure operations probably still run on the main A7 processor, in restricted Secure mode, so regular user processes (apps) cannot access the Secure Enclave. That is how TrustZone handles secure and non-secure functions sharing the same hardware.

So, for the less technical: part of the A7 chip is apparently dedicated to the Secure Enclave, and only accessible when running in secure mode. It is also possible that Apple has processing resources dedicated only to the Secure Enclave, but either option looks pretty darn secure. The next piece is the hardware. The Touch ID sensor itself may be
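Here is a minimal sketch of the two-world idea, in Python for readability. Real TrustZone partitioning is enforced in hardware (the NS bit on the bus, with world switches through the secure monitor via the SMC instruction); the class names and interface below are invented purely to illustrate the access pattern.

```python
# Toy model of TrustZone's two-world partitioning. Real enforcement is in
# hardware; this only illustrates the access-control concept: normal-world
# code never touches secure-world secrets directly, only a narrow interface.
class SecureEnclave:
    """Secure-world resource holding the template and unlock secret."""
    def __init__(self, template: bytes, passcode_secret: bytes):
        self._template = template            # enrolled fingerprint template
        self._passcode_secret = passcode_secret

    def verify(self, scan: bytes) -> bytes | None:
        # Matching happens entirely inside the secure world; only the
        # unlock secret (never the template) crosses the boundary.
        # Equality here stands in for the fuzzy match sketched earlier.
        return self._passcode_secret if scan == self._template else None

class NormalWorld:
    def __init__(self, enclave: SecureEnclave):
        self._enclave = enclave  # reachable only via its narrow interface

    def unlock(self, scan: bytes) -> bool:
        secret = self._enclave.verify(scan)  # analogous to an SMC call
        return secret is not None
```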


Keep Calm and Bust out the Tinfoil Hat

Dennis Fisher writes what many of us have been feeling for a while in The Sky is Not Falling – It’s Fallen. He argues that the fundamental underpinnings of security are being whittled away – slowly but surely. And the fact that it’s a cynical view doesn’t make it wrong.

“…the steady accumulation of evidence over the last three months makes it difficult to come to any conclusion other than this: nothing can be trusted.”

Security folks have talked about trusting no one – basically since the beginning of time. But really trusting nothing appears to present a mental barrier that many people are either unable or unwilling to jump. So we have come to the point where the most paranoid and conspiracy-minded among us are the reasonable ones. Now the crazy ones are the people saying it’s not as bad as you think, calm down, the sky isn’t falling. In one sense they’re right. The sky isn’t falling. It’s already fallen.

I am no government apologist, and I think some activities definitely cross the line – including using the specter of terrorism to do whatever they want. We have evidence that the “powers that be” have manipulated the truth, painted dissenters as traitors, and continue to hide behind layers and layers of national security rhetoric and fear of terrorism to obfuscate the truth. But I wonder whether all this is really new. If I remember correctly, McCarthy used many of the same tactics to squelch dissent and wage a witch hunt, in clear violation of the rights of good, upstanding citizens. Now they have automated tools to search for witches, and we’re surprised they are using them? We have worried for a long time about foreign governments (regardless of which particular governments you are most concerned about) putting back doors in imported products. Why would anyone assume our own government wouldn’t be doing the same? I guess the outrage comes from the realization that the emperor hasn’t changed his clothes since the 1950s.

I suppose it’s much more comfortable to go through life blissfully unaware of what’s really happening. I can’t really say my life is better now that I know for a fact what I always suspected. Actually, now that I think about it, my life is the same. Am I going to do things differently because someone is watching? Nope. That doesn’t mean we should accept a surveillance society. But at the end of the day I am a realist, and perhaps a crazy one, because even if it’s “as bad as you think,” I am pretty sure life will go on. It will be different, but change is inevitable – the increasing pace of communications and automation continues to disrupt how we do things, in security and everywhere else.

The question we each need to ask is: how much will we let this stuff impact our daily lives? Will you start wearing a tinfoil hat and embrace your own personal paranoia to the point of distraction? Or will you move forward, knowing the world is different, and that society has overcome lots of bad behavior in the past and will do so again? That is a decision each of us needs to make, and we all need to live with the consequences of our decisions. For better and worse.

And somewhere along the line I have become a borderline optimist. I guess it’s time to leave security.


A Quick Response on the Great Touch ID Spoof

Hackers at the Chaos Computer Club were the first to spoof Apple’s Touch ID sensor. They used existing techniques, but at higher resolution. A quick response:

  • The technique can be completed with generally available materials and technology. It isn’t the sort of thing everyone will do, but there is no inherent barrier to entry such as high cost, special materials, or super-special skills. The CCC did great work here – I just think the hype is a bit off-base.
  • On the other hand, Touch ID primarily targets people with no passcode, or a 4-digit PIN. It is a large improvement for that population. We need some perspective here.
  • Touch ID disables itself if the phone is rebooted, if you don’t use Touch ID for 48 hours, or if you wipe your iPhone remotely (a sketch of this fallback policy appears after this list). This is why I’m comfortable using Touch ID even though I know I am more targeted – there is little chance of someone getting my phone without me knowing it (I’m addicted to the darn thing).
  • I will disable Touch ID when crossing international borders and at certain conferences and hacker events. Yes, I believe enabling Touch ID could allow law enforcement easier access to your phone (because they can get your fingerprint, or touch your finger to your phone). If this concerns you, turn it off.
  • As Rob Graham noted, you can set up Touch ID to use your fingertip rather than the main body of your finger. I can confirm this works, but you do need to game the setup a little. Your fingertip print is harder to get, but still not impossible.
  • Not all risk is equal. For the vast majority of consumers, this provides as much security as a strong passcode, with the convenience of no passcode. If you are worried you might be targeted by someone who can get your fingerprint, get your phone, and fake out the sensor… don’t use Touch ID. Apple didn’t make it for you.

PS: I see the biggest risk for Touch ID in relationships with trust issues. It wouldn’t shock me at all to read about someone using the CCC technique to invade the privacy of a significant other. There are no rules in domestics…
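A minimal sketch of that fallback logic, with the triggers as Apple describes them (reboot, 48 hours of disuse, remote wipe). The function and parameter names are mine, not Apple’s:

```python
# Sketch of the conditions under which Touch ID refuses biometric unlock
# and forces the passcode path. Names and structure are illustrative.
from datetime import datetime, timedelta

BIOMETRIC_TIMEOUT = timedelta(hours=48)

def touch_id_allowed(last_touch_id_use: datetime | None,
                     booted_since_last_unlock: bool,
                     remote_wipe_issued: bool,
                     now: datetime) -> bool:
    """Return True only if biometric unlock is still permitted."""
    if remote_wipe_issued or booted_since_last_unlock:
        return False
    if last_touch_id_use is None:
        return False  # never used since boot/enrollment -> passcode required
    return now - last_touch_id_use <= BIOMETRIC_TIMEOUT
```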


Friday Summary: September 20, 2013

I have been so totally overwhelmed with projects that I have had very little time to read, research, or blog. So I was excited this morning to take a few minutes to download the new SDL research paper from Microsoft’s blog. It examines vendors using Microsoft’s SDL in both Microsoft and non-Microsoft environments. And what did I learn? Nothing. Apparently their research team has the same problem as the rest of us: no good metrics, and the best user stories get sanitized into oblivion. I am seriously disappointed – this type of research is sorely needed.

If you are new to secure software development programs and want to learn, I still encourage you to download the paper, which raises important topics with snippets of high-level information. As a bonus it includes an overview of Microsoft’s SDL. If you aren’t new to secure development, you would be better off learning about useful strategies from the BSIMM project. If you are a developer and want more detailed information on how to implement Microsoft’s SDL, use the blog and the web site. They offer a ton of useful information – you just have to dig a bit to find what you want.

Back to the subject at hand: there are two basic reasons to examine previous SDL implementations – tell me why I should do it, and show me how to do it. Actually three, if you count failure analysis, but that is an unpopular pastime. Let’s stick with the two core reasons.

Those who have built software with secure coding techniques and processes have seen the positive benefits. And in many cases they have seen that security can be effective without costing an arm and a leg. But objectively proving that is freaking hard. Plenty of people talk about business benefits, but few offer compelling proof. Upper management wants numbers or it’s not real. I have made the mistake of telling management peers, “We will be more secure if we do this, and we will save money in the long run as we avoid fixing broken stuff in the future, or buying bolt-on security products.” Invariably they ask “How much more secure?” “How much money?” or “How far into the future?” – all questions I am unable to answer. “Trust me” doesn’t work when asking for budget, or when trying to get a larger salary allocation for a person who has been trained in secure coding. It is very hard to quantify the advantages until you are coding, or trying to fix broken code.

One advantage of larger financial firms is that they have been building or integrating software for a long time, have been under attack from various types of fraudsters for a long time, and can apply lessons from failed – and poorly executed – projects to subsequent projects. They have bugs, they understand fraud rates, and they can use internal metrics to see which fixes work. Over the long term they can objectively measure whether specific process changes are making a difference. Microsoft has. This report should have. Developers and managers need research like this to justify secure software development.

So where do you start? How do you do it? You ask your friends, usually. The CISOs, developers, and DevOps teams I speak with use tools and techniques their peers tried and had good experiences with. You have the same problem as your buddy at BigCo, he tried an SDLC, and it worked. Ideal? No. Scientific? Hell, no. It’s the right course of action, for the wrong reasons. Still, peer encouragement is often how these efforts start. Word of mouth is how Agile development propagated. Will a company see the same successes as a peer? Almost assuredly not.
Your people, your process, your code – totally different. But overall, from a decade of experience doing this, I know it works. It’s not plug and play, there are growing pains, and it takes effort, but it works.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences

  • Mortman’s speaking at BruCon Back in Black.
  • Dave’s doing a BruCon panel as well, just in case you couldn’t get enough during the keynote.
  • Mike’s Dark Reading post on fear mongers vs. innovation.
  • Cloud IAM webcast next week: check it out!

Favorite Securosis Posts

  • Adrian Lane: Defending Against Application Denial of Service Attacks. Mike is delving into application layer DoS, which is much more interesting than network DoS – there are tons of creative ways to kick over a server. This will be a fun series!
  • David Mortman: Firewall Management Essentials: Change Management.
  • Rich: Mike’s Incite this week. Mike is old. Then again, he’s the only one who wrote anything this week. Me? Baby no sleep, Rich no write.
  • Mike Rothman: No Sleep Mismash. I have been where Rich is now. No sleep. Trying to be productive. Pile on a job change and relocation to ATL. I don’t miss that period in my life.

Other Securosis Posts

  • Firewall Management Essentials: Optimizing Rules.
  • Black Hat West Cloud Security Training.
  • Threat Intelligence for Ecosystem Risk Management [New Paper].
  • Firewall Management Essentials: Change Management.

Favorite Outside Posts

  • Adrian Lane: Crooks Hijack Retirement Funds Via SSA Portal. Great post, and very informative regarding a growing problem with fraud. The onus is not on every person with a social security number to fix the SSA’s operational problem – the SSA needs to a) do a better job vetting users, and b) stop payouts through pre-paid cards. That entire arrangement is an uncontrollable clusterfsck. They put the infrastructure on the Internet, so they are responsible for operational security. Not that it’s easy, but intractability is why many IT projects don’t get started in the first place.
  • USA Today interview with Jony Ive. Some tidbits on design, and the one I really like is the focus on making functions invisible. His example of Touch ID is perfect – it just works, with no “scanning… AUTHENTICATED” animations.
  • Mike Rothman: Is the Perimeter Really Dead? Of course not. But it’s definitely changing. Decent take on the issue in Dark Reading.
  • David Mortman: Managing Secrets With Chef Vault.

Research Reports and Presentations

  • Identity and Access Management for


Defending Against Application Denial of Service Attacks [New Series]

As we discussed last year in Defending Against Denial of Service Attacks, attackers increasingly leverage availability-impacting attacks both to cause downtime (which costs site owners money) and to mask other kinds of attacks. These availability-impacting attacks are better known as Denial of Service (DoS) attacks. Our research identified a number of adversaries who increasingly use DoS attacks, including:

  • Protection Racketeers: These criminals use DoS threats to demand ransom money. Attackers hold a site hostage by threatening to knock it down, and sometimes follow through. They get paid. They move on to the next target. The only thing missing is the GoodFellas theme music.
  • Hacktivists: DoS has become a favored approach of hacktivists seeking to make a statement and shine a spotlight on their cause, whatever it may be. Hacktivists care less about the target than about their cause. The target is usually collateral damage, though they are happy to hit two birds with one stone by attacking an organization that opposes their cause when they can. You cannot negotiate with these folks, and starting public discourse is one of their goals.
  • ‘CyberWar’: We don’t like the term – no one has been killed by browsing online (yet) – but we can expect to see online attacks as a precursor to warplanes, ships, bombing, and ground forces. By knocking out power grids, defense radar, major media, and other critical technology infrastructure, the impact of an attack can be magnified.
  • Exfiltrators: These folks use DoS to divert attention from the real attack: stealing data they can monetize. This could be intellectual property theft or a financial attack such as stealing credit cards. Either way, they figure that if they blow in your front door you will be too distracted to notice your TV scooting out through the garage. They are generally right.
  • Competitors: They say all’s fair in love and business. Some folks take that a bit too far, and actively knock down competitor sites for an advantage. Maybe it’s during the holiday season. Maybe it happens after a competitor resists an acquisition or merger offer. It could be locking folks out from bidding on an auction. Your competition might screen scrape your online store to make sure they beat your pricing, causing a flood of traffic on a very regular and predictable basis. A competitor might try to ruin your hard-earned (and expensive) search rankings. Regardless of the reason, don’t assume an attacker is a nameless, faceless crime syndicate in a remote nation. It could be the dude down the street trying to get any advantage he can – legalities be damned.

Given the varied adversaries, it is essential to understand that two totally different types of attacks are commonly lumped under the generic ‘DoS’ label. The first involves the network: blasting a site with enough traffic (sometimes over 300 Gbps) to flood the pipes and overwhelm security and network devices, as well as application infrastructure. This volumetric attack is basically the ‘cyber’ version of hitting something a billion times with a rock. Defending against this brute force attack typically demands a scrubbing service and/or CDN (Content Delivery Network) to deal with the onslaught of traffic and keep sites available. The second type of DoS attack targets weaknesses in applications.
In Defending Against DoS we described an application attack as follows:

“Application-based attacks are different – they target weaknesses in web application components to consume all the resources of a web, application, or database server to effectively disable it. These attacks can target either vulnerabilities or ‘features’ of an application stack to overwhelm servers and prevent legitimate traffic from accessing web pages or completing transactions.”

These attacks require knowledge of the application and how to break or game it. They can be far more efficient than just blasting traffic at a network, and in many cases take advantage of legitimate application features, making defense all the harder. We are pleased to launch the next chapter in our Denial of Service research, entitled “Defending Against Application Denial of Service Attacks” (yep, we are thinking way out of the box for titles). In this series we will dig far more deeply into application DoS attacks, providing both an overview of each tactic and possible mitigations. Here is a preliminary list of what we intend to cover:

  • Application Server Attacks: The first group of AppDoS attacks targets the server and infrastructure stack. We will profile attacks such as Slowloris, Slow HTTP Post, RUDY, Slow Read, and XerXes, discussing mitigations for each. We will also talk about brute force attacks on SSL (overwhelming servers with SSL handshake requests) and loading common pages – such as login, password reset, and store locators – millions of times.
  • Attacking the Stack: Targeting Databases and Programming Languages: In this post we will talk about the next layers in the application stack, including the database and the languages used to build the application. Regarding database DoS, we will highlight some of our recent research in Dealing with Database Denial of Service.
  • Abusing Application Logic: As we continue to climb the application stack, we will talk about how applications are targeted directly with GET floods and variants. By profiling applications and learning which pages are most resource intensive, attackers can focus their efforts on the most demanding pages. To mitigate these attacks, we will discuss the roles of rate controls and input validation (a rate-limiting sketch appears after this list), as well as WAF and CDN based approaches to filter out attack requests before the application needs to deal with them.
  • Billions of Results Served: We will profile the common attacks which overwhelm applications by overflowing memory with billions of results from search results or shopping carts. We will touch on unfriendly scrapers, including search engines and other catalog aggregators that perform ‘legitimate’ searching but can be gamed by attackers. These attacks can only be remediated within the application, so we will discuss mechanisms for doing that (without alienating the developers).
  • Building DoS Protections In: We will wrap up the series by talking about how to implement a productive process for working with developers to build in AppDoS protections.
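Since rate controls come up repeatedly as a mitigation, here is a minimal token-bucket rate limiter in Python. It is a sketch only – the per-IP keying, rate, and burst size are illustrative assumptions, not recommendations from the series:

```python
import time
from collections import defaultdict

class TokenBucket:
    """Minimal per-client token bucket: refuse requests once a client
    exceeds its sustained rate -- a first line of defense against floods
    of expensive requests (login, search, password reset)."""
    def __init__(self, rate_per_sec: float = 5.0, burst: float = 20.0):
        self.rate = rate_per_sec
        self.burst = burst
        self.tokens = defaultdict(lambda: burst)   # tokens per client
        self.stamp = defaultdict(time.monotonic)   # last refill time

    def allow(self, client_ip: str) -> bool:
        now = time.monotonic()
        elapsed = now - self.stamp[client_ip]
        self.stamp[client_ip] = now
        # Refill tokens for time elapsed, capped at the burst size.
        self.tokens[client_ip] = min(self.burst,
                                     self.tokens[client_ip] + elapsed * self.rate)
        if self.tokens[client_ip] >= 1.0:
            self.tokens[client_ip] -= 1.0
            return True
        return False

# Usage: call allow() before doing the expensive work; 429 on False.
limiter = TokenBucket()
print(limiter.allow("203.0.113.7"))  # True until the bucket drains
```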


Incite 9/18/2013: Got No Game

On Monday night I did a guest lecture for some students in Kennesaw State’s information security program. It is always a lot of fun to get in front of the “next generation” of practitioners (see what I did there?). I focused on innovation in endpoint protection and network security, discussing the research I have been doing into threat intelligence. The kids (a few looked as old as me) seemed to enjoy hearing about the latest and greatest in the security space. It also gave me a forum to talk about what it’s really like in the security trenches, which many students don’t learn until they are knee-deep in the muck. I didn’t shy away from the lack of obvious and demonstrable success, or how difficult it is to get general business folks to understand what’s involved in protecting information. The professor had a term that makes a lot of sense: security folks are basically digital janitors, cleaning up the mess the general population makes.

When I started talking about the coming perimeter re-architecture (driven by NGFW, et al), I mentioned how much time they will be able to save by dealing with a single policy, rather than having to manage the firewall, IPS, web filter, and malware detection boxes separately. I told them that would leave plenty of time to play Tetris. Yup, that garnered an awkward silence. I started spinning and asked if any of them knew what Tetris was. Of course they did, but a kind student gently informed me that no one has played that game in approximately 10 years. Okay, how about Gears of War? Not so much – evidently that trilogy is over. I was going to mention Angry Birds, but evidently Angry Birds was so 12 months ago. I quit before I lost all credibility. There it was, stark as day – I have no game. Well, no video game anyway.

Once I got over my initial embarrassment, I realized my lack of prowess is kind of intentional. I have a fairly addictive personality, so anything that can be borderline addictive (such as video games) is a problem for me. It’s hard to pay my bills if I’m playing Strategic Conquest for 40 hours straight, which I did back in the early ’90s. I have found through the years that if I just don’t start, I don’t have to worry about when (or whether) I will stop.

I see the same tendencies in the Boy. He’s all into Clash of Clans right now. Part of me is happy to see him get his Braveheart on, attacking other villages Game of Thrones style. He seems pretty good at analyzing an adversary’s defenses and finding a way around them, leading his clan to victory. But it’s frustrating when I have to grab the Touch just to have a conversation with him. Although at least I know where he gets it from.

Some folks can practice moderation. You know, those annoying people who can take a little break for 15 minutes, play a few games, and then be disciplined enough to stop and get back on task. I’m not one of those people. When I start something, I start something. And that means the safest thing for me is often not to start. It’s all about learning my own idiosyncrasies and not putting myself in situations where I will get into trouble. So no video games for me!

–Mike

Photo credit: “when it’s no longer a game” originally uploaded by istolethetv

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.
Firewall Management Essentials

  • Optimizing Rules
  • Change Management
  • Introduction

Continuous Security Monitoring

  • Migrating to CSM
  • The Compliance Use Case
  • The Change Control Use Case
  • The Attack Use Case
  • Classification
  • Defining CSM
  • Why. Continuous. Security. Monitoring?

API Gateways

  • Implementation
  • Key Management
  • Developer Tools

Newly Published Papers

  • Threat Intelligence for Ecosystem Risk Management
  • Dealing with Database Denial of Service
  • Identity and Access Management for Cloud Services
  • The 2014 Endpoint Security Buyer’s Guide
  • The CISO’s Guide to Advanced Attackers
  • Defending Cloud Data with Infrastructure Encryption

Incite 4 U

  • Good guys always get DoS’ed: Django learned the hard way that if you give hackers an inch they’ll take a mile – and your site too. Last week they suffered a denial of service attack when users submitted multi-megabyte passwords – the “computational complexity” of generating strong hashes for a few requests was enough to DoS the site. Awesome. The mitigation for this kind of attack is input validation (a sketch of that style of fix appears after this list). Sure, as a security expert I still get pissed when sites limit me to 8 character passwords, but it’s unreasonable to accept the Encyclopedia Britannica as valid field input. I am sorry to be smiling as I write this – I feel bad for the Django folks – but it’s funny how no good security intentions go unpunished. Thanks for patching, guys! – AL
  • DHS gets monitoring religion (to the tune of $6B): Not sure how I missed the award of a $6 billion DHS contract to implement continuous detection and mitigation technology. Evidently this is the new term for continuous monitoring, and it seems every beltway bandit, and every scanning and SIEM vendor, is involved. So basically nothing will get done – I guess that’s the American way. But this move, which started with NIST’s push to continuous monitoring and continues with DHS’s rebranded CDM initiative, is going in the right direction. Will they ever get to “real time” monitoring? Does it matter? They can’t actually respond in real time, so I don’t think so. If any of these gold-plated toilet seats provides the ability to see a vulnerability within a few days (rather than having it show up on a quarterly report and be ignored), it’s an improvement. As they said in Contact, “baby steps…” – MR
  • FUD filled vacuum: When working with clients I am often still surprised at how often even mature organizations underestimate the eventual misinterpretations of their
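Returning to the Django item above: the input validation fix is to cap input size before running the expensive key-derivation step. This sketch illustrates the principle – it is not Django’s actual patch (which capped passwords at 4096 bytes in the security release); the function name and limit here are mine.

```python
import hashlib
import os

MAX_PASSWORD_BYTES = 4096  # reject oversized input before key stretching

def hash_password(password: str, salt: bytes | None = None,
                  iterations: int = 100_000) -> bytes:
    """Validate input size *before* the computationally expensive hash --
    the mitigation for the multi-megabyte-password DoS described above."""
    encoded = password.encode("utf-8")
    if len(encoded) > MAX_PASSWORD_BYTES:
        raise ValueError("password exceeds maximum length")
    salt = salt or os.urandom(16)
    return salt + hashlib.pbkdf2_hmac("sha256", encoded, salt, iterations)
```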


Firewall Management Essentials: Managing Access Risk

We have discussed two of the three legs of comprehensive firewall management: a change management process and optimizing the rules. Now let’s work through managing risk using the firewall. Obviously we need to define risk, because depending on your industry and philosophy, risk can mean many different things. For firewall management we are talking about the risk of unauthorized parties accessing sensitive resources. Obviously, if a device with critical data is inaccessible to internal and/or external attackers, the risk it presents is lower.

This “access risk management” function involves understanding, first and foremost, the network’s topology and security controls. The ability to view attack paths provides a visual depiction of how an attacker could gain access to a device. With this information you can see which devices need remediation and/or network workarounds, and prioritize fixes. Another benefit of visualizing attack paths is understanding when changes to the network or security devices unintentionally expose additional attack surface.

So what does this have to do with your firewall? That’s a logical question, but a key firewall function is access control. You configure the firewall and its rule set to ensure that only authorized ports, protocols, applications, users, etc. have access to critical devices, applications, etc. within your network. A misconfigured firewall can have significant and severe consequences, as discussed in the last post. For example, years ago when supporting a set of email security devices, we got a call about an avalanche of spam hitting the mailboxes of key employees. The customer was not pleased, but the deployed email security gateway appeared to be working perfectly. Initially perplexed, one of our engineers checked the backup email server, and discovered it was open to Internet traffic due to a faulty firewall rule. Attackers were able to use the backup server as a mail relay, and blast all the mailboxes in the enterprise. With some knowledge of network topology and the paths between external networks and internal devices, this issue could have been identified and remediated before any employees were troubled.

Key Access Risk Management Features

When examining the network and keeping track of attack paths, look for a few key features (a toy sketch of the path analysis appears at the end of this post):

  • Topology monitoring: Topology can be determined actively, passively, or both. For active mapping you will want your firewall management tool to pull configurations from firewalls and other access control devices. You also need to account for routing tables, network interfaces, and address translation rules. Interoperating with passive monitoring tools (network behavioral analysis, etc.) can provide more continuous monitoring. You need the ability to determine whether and how any specific device can be accessed, and from where – both internal and external.
  • Analysis horsepower: Accounting for all the possible paths through a network requires an n×(n−1) analysis, and n gets rather large for an enterprise network. The ability to re-analyze millions of paths on every topology change is critical for providing an accurate view.
  • What if?: You will want to assess each possible change before it is made, to understand its impact on the network and attack surface. This enables the organization to detect additional risks posed by a change before committing it. In the example above, if that customer had a tool to help them understand that a firewall rule change would make their backup email server a sitting duck for attackers, they would have reconsidered.
  • Alternative rules: It is not always possible to remediate a specific device, due to operational issues. So to control risk you want a firewall management tool that suggests appropriate rule changes or alternate network routes to isolate the vulnerable device and protect the network.

At this point it should be clear that all these firewall management functions depend on each other. Optimizing rules is part of the change management process, and access risk management comes into play for every change – and vice versa. So although we have discussed these functions as distinct requirements of firewall management, in reality they all need to work together for operational excellence.

In this series’ last post we will focus on getting a quick win with firewall management technology. We will discuss deployment architectures and integration with enterprise systems, and work through a deployment scenario to make many of these concepts a bit more tangible.
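To make the “what if?” path analysis concrete, here is a toy reachability check in Python. Real products model routing tables, NAT, and full rule semantics; the graph, rule representation, and names below are invented for illustration:

```python
from collections import deque

# Toy attack-path check: can traffic from an untrusted zone reach a
# sensitive host, given the network graph and the allowed edges? Real
# tools model routing, NAT, and full rule semantics; this is the skeleton.
def reachable(graph: dict[str, set[str]],
              allowed: set[tuple[str, str]],
              source: str, target: str) -> bool:
    seen, queue = {source}, deque([source])
    while queue:
        node = queue.popleft()
        if node == target:
            return True
        for nxt in graph.get(node, set()):
            if (node, nxt) in allowed and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Usage: flag exposure before committing a rule change ("what if?").
graph = {"internet": {"fw"}, "fw": {"mail", "backup_mail"}}
allowed = {("internet", "fw"), ("fw", "backup_mail")}  # the faulty rule
print(reachable(graph, allowed, "internet", "backup_mail"))  # True -> exposed
```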


Firewall Management Essentials: Optimizing Rules

Now that you have a solid, repeatable, and automated firewall change management process, it’s time to delve into the next major aspect of managing your firewalls: optimizing rules. Back in our introduction we talked about how firewall rule sets tend to resemble a closet over time. You have a ton of crap in there, most of which you don’t use, and whatever you do use is typically hard to get to. So you need to occasionally clean up and reorganize – getting rid of stuff you don’t need, making sure the stuff that’s still in there should be, and arranging things so you can easily access the stuff you use the most. But let’s drop the closet analogy and talk firewall specifics. You need to optimize rules for a variety of reasons:

  • Eliminate duplicate rules: When you have a lot of hands in the rule base, rules get duplicated – especially when the management process doesn’t require a search to make sure an overlapping rule doesn’t already exist.
  • Address conflicting rules: At times you may add a rule (such as ALLOW PORT 22) to address a short-term issue, even though you have other rules to lock down the port or application. Depending on where the rule resides in the tree, the rules may conflict, either adding attack surface or breaking functionality.
  • Get rid of old and unused rules: If you don’t go back into the rule set every so often to ensure your rules are relevant, you are bound to have rules that are no longer necessary, such as access to that old legacy mainframe application that was decommissioned 4 years ago. It is also useful to go back and confirm with each rule’s business owner that their application still needs that access, and that they accept responsibility for it.
  • Simplify the rule base: The more rules, the more complicated the rule base, and the more likely something will go wrong. By analyzing and optimizing rules periodically, you can find and remove unneeded complexity.
  • Improve performance: If you have frequently used rules at the bottom of the tree, the firewall needs to go through every preceding rule to reach them. That can bog down performance, so you want the most frequently hit rules as early as possible – without conflicting with other rules, of course.
  • Control network risk: Networks are very dynamic, so you need to ensure that every network or device configuration change doesn’t add attack surface, requiring a firewall rule change.

For all these reasons, going through the rule base regularly is key to keeping firewalls running optimally. Every rule should be required to support the business, and optimally configured.

Key Firewall Management Rule Optimization Features

The specific features you should look for in a firewall management product or service apply directly to the requirements above (a toy sketch of the shadowed-rule check appears at the end of this post):

  • Centralized management: A huge benefit of more actively managing firewalls is the ability to enforce a set of consistent policies across all firewalls, regardless of vendor. So you need a scalable tool that supports all your devices, giving you a single authoritative source for firewall policies.
  • Rule change recommendations: If a firewall rule set gets complicated enough, it’s hard for any human – even your best security admin – to keep everything straight. So a tool should be able to mine the existing rule set (thousands of rules) to find and get rid of duplicate, hidden, unused, and expired rules. Tools should also assess the risk of the rules, flagging rules which allow too much access (you know: ANY ANY).
  • Optimize rule order: A key aspect of improving firewall performance is making sure the most-hit rules are closer to the top of the tree. The tool should track which rules are hit most often through firewall log analysis, and suggest an ordering that optimizes performance without increasing exposure.
  • Simulate rule changes: Clever ideas can turn out badly if a change conflicts with other rules or opens up (or closes) the wrong ports/protocols/applications/users/groups, etc. The tool should simulate rule changes and predict whether each change is likely to present problems.
  • Monitor network topology and device configuration: Every network and device configuration change can expose additional attack surface, so the tool needs to analyze every proposed change in the context of the existing rule set. This involves polling managed devices for their configurations periodically, as well as monitoring routing tables.
  • Compliance checking: Related to monitoring topology and configurations, changes can also cause compliance violations. So you need the firewall management tool to flag rule changes that might violate any relevant compliance mandates.
  • Recertify rules: The firewall management tool should offer a mechanism to go back to business owners to ensure rules are still relevant, and that they accept responsibility for their rules. You should be able to set an expiration date on a rule, and then require the owner to confirm the rule is still necessary. Getting rid of old rules is one of the most effective ways to optimize a rule set.

Asking for Forgiveness

Speaking of firewall rule recertification, you certainly can go through the process of chasing down all the business owners of rules (if you know who they are) and getting them to confirm each rule is still needed. That’s a lot of work. You could choose a less participatory approach: make changes, and then ask forgiveness if you break something. There are a couple options with this approach:

  • Turn off unused rules: Use the firewall management tool’s ability to flag unused rules, and just turn them off. If someone complains you know the rule is still required, and you can assume they would be willing to recertify it. If not, you can get rid of it.
  • Blow out the rule base: You can also burn the rule base to the ground and wait for complaints about applications that broke as a result. This is only sane in dire circumstances, where no one will take responsibility for rules or people are totally unresponsive to your attempts to clean things up. But it’s certainly an option.

NGFW Support

With the move
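Returning to the rule-mining idea above: here is a toy shadowed-rule check in Python, illustrating one of the duplicate/conflict analyses a management tool automates. The rule model is deliberately simplified (zones and ports as strings, “*” for any); real tools reason over full address ranges and rule semantics.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    action: str  # "allow" or "deny"
    src: str     # source zone/net ("*" = any)
    dst: str     # destination zone/net
    port: str    # service/port ("*" = any)

def covers(general: Rule, specific: Rule) -> bool:
    """True if `general` matches every packet `specific` matches."""
    return all(g in ("*", s) for g, s in
               ((general.src, specific.src),
                (general.dst, specific.dst),
                (general.port, specific.port)))

def shadowed(rules: list[Rule]) -> list[Rule]:
    """Rules that can never fire because an earlier rule covers them --
    prime candidates for removal during rule-base cleanup."""
    dead = []
    for i, rule in enumerate(rules):
        if any(covers(earlier, rule) for earlier in rules[:i]):
            dead.append(rule)
    return dead

# Usage: the second rule is dead weight -- the first already matches it.
print(shadowed([Rule("allow", "*", "dmz", "*"),
                Rule("allow", "internal", "dmz", "22")]))
```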


Black Hat West Cloud Security Training

I am psyched to announce that our Black Hat Vegas class went well, and we have been invited to teach in Seattle December 9-10 and 11-12. As before, we will be bringing some advanced material, but you shouldn’t be scared off – advanced skillz are not required to make it through the class. You can sign up for the class here. The short description is:

CLOUD SECURITY PLUS (CCSK-Plus)

Provide students with the practical knowledge they need to understand the real cloud security issues and solutions. The Cloud Security Plus class provides students a comprehensive two-day review of cloud security fundamentals and prepares them to take the Cloud Security Alliance Certificate of Cloud Computing Security Knowledge (CCSK) exam (this course is also known as the CCSK-Plus). Starting with a detailed description of cloud computing, the course covers all major domains in the latest Guidance document from the Cloud Security Alliance, and includes a full day of hands-on cloud security training covering both public and private cloud.


Threat Intelligence for Ecosystem Risk Management [New Paper]

Most folks think the move toward the extended enterprise is very cool. You know, get other organizations to do the stuff your organization isn’t great at. It’s a win/win, right? From a business standpoint, there are clear advantages to building a robust ecosystem that leverages the capabilities of all organizations. But from a security standpoint, the extended enterprise adds a tremendous amount of attack surface. To make the extended enterprise work, your business partners need access to your critical information. And that’s where security folks tend to break out in hives.

It’s hard enough to protect your networks, servers, and applications while making sure your own employees don’t do anything stupid to leave you exposed. Imagine your risk based not just on how you protect your information, but also on how well all your business partners protect their information and devices. Actually, you don’t need to imagine it – it’s reality.

We are pleased to announce the availability of our Threat Intelligence for Ecosystem Risk Management white paper. This paper delves into the risks of the extended enterprise, and then presents a process for gathering information about trading partners, to make decisions about connectivity and access more fact-based. Many of you are not in a position to build your own capability to assess partner networks, but this paper offers perspective on how you would, so when considering external threat intelligence services you will be an educated buyer.

You can see the Threat Intelligence for Ecosystem Risk Management page in our Research Library, or download the paper directly (PDF).

We want to thank BitSight Technologies for licensing the content in this paper. The largesse of our licensees enables us to provide our research without cost to you.


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factors into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.