
Incite 9/25/2013: Road Trip

Every so often my mind wanders and I flash back to scenes from classic movies. When I remember Animal House, I can’t help but spend perhaps 15 minutes thinking about all the great scenes in that movie. I don’t even know where to begin, but one scene that still cracks me up after all these years is:

Boon: Jesus. What’s going on?
Hoover: They confiscated everything, even the stuff we didn’t steal.
Bluto: They took the bar! The whole f****** bar!
[Otter grabs a bottle of whiskey and throws it to Bluto, who chugs it all.]
Bluto: Thanks. I needed that.
Hoover: Christ. This is ridiculous. What are we going to do?
Otter & Boon: Road trip. ROAD TRIP!

Just the mere mention of those words makes me smile. Like most folks, I have great memories of the road trips I took in high school, college, as a recent graduate, and even now when my ATL buddies and I make a pilgrimage to see an SEC football game every year. There isn’t much better than hopping in the car with a few buddies and heading to a different location, equipped with a credit card to buy decent drinks.

This past weekend, though, I had a different kind of road trip. I took The Boy to see the NY Giants play in Charlotte. After a crazy Saturday, we drove the 3.5 hours and even had dinner at Taco Bell on the way. He loves the Doritos shell tacos, and since it was Boys weekend, we could suspend the rules of good eating for a day. We stayed at a nice Westin in downtown Charlotte and could see the stadium from our room. He was blown away by the hotel and the view of the stadium at night. It was great to see the experience through his eyes – to me a hotel is a hotel is a hotel.

We slept in Sunday morning, and when I asked him to shower before breakfast, he sent a zinger my way: “But Dad, I thought on Boys weekend we don’t have to shower.” Normally I would agree to suspend hygiene, but I had to sit next to him all day, so into the shower he went. We hit the breakfast buffet and saw a bunch of like-minded transplanted New Yorkers in full gear to see the Giants play. He got a new Giants hat on the walk to the stadium, and we got there nice and early to see the team warm-ups and enjoy club level.

Of course, the game totally sucked. The G-men got taken to the woodshed. Normally I’d be fit to be tied – that was a significant investment in the hotel and tickets. But then I looked over and saw The Boy was still smiling and seemed happy to be there. He didn’t get pissed until the 4th quarter, after another inept Giants offensive series. He threw down the game program, but within a second he was happy again. I kept asking if he wanted to stay, and he didn’t want to go. We were there until the bitter end.

After the long trip home, as he was getting ready for bed, we got to do a little post-mortem on the trip. He told me he had a great time. Even better, he suggested we take road trips more often – like every weekend. Even though I didn’t have one drink and the Giants totally sucked, it was the best road trip I’ve ever taken. By far.

–Mike

Photo credit: “Smoke Hole Rd, WV” originally uploaded by David Clow

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.
Firewall Management Essentials
- Quick Wins
- Managing Access Risk
- Optimizing Rules
- Change Management
- Introduction

Continuous Security Monitoring
- Migrating to CSM
- The Compliance Use Case
- The Change Control Use Case
- The Attack Use Case
- Classification
- Defining CSM
- Why. Continuous. Security. Monitoring?

Newly Published Papers
- Threat Intelligence for Ecosystem Risk Management
- Dealing with Database Denial of Service
- Identity and Access Management for Cloud Services
- The 2014 Endpoint Security Buyer’s Guide
- The CISO’s Guide to Advanced Attackers
- Defending Cloud Data with Infrastructure Encryption

Incite 4 U

Security According to Security Moses: Evidently Security Moses has descended from Mt. Sinai with the tablets of CISO success: the 10 Golden Rules of the Outstanding CISO by Michael Boelen. Most of this stuff is obvious, but it’s a good reminder that your integrity is important and that the fundamentals deserve your focus. I had a chat with a large enterprise yesterday about that very topic. Don’t forget to be the “master of communication” and not to panic – although it is easy to panic when the house seems to be burning down. Don’t oversell what you can do, and remember that process beats technology. Again, not brain surgery, but under duress it’s always good to go back and consult the stone tablets. – MR

Emphatic Maybe: A simple statement like “We don’t have backdoors in our products” would address the issue. The problem is that every vendor who has released a statement regarding the NSA compromising their platforms has issued a qualified answer. This time it’s RSA, with “We don’t enable backdoors in our crypto products.” Which means exactly what? You have someone else do it? The NSA dropped the code into your product, so you didn’t have to? Was the RNG subsystem weakened to achieve the same result? Those are all accusations being thrown about, and the released statement does not definitively address them. The recommendation to stop using BSafe’s Dual Elliptic Curve Deterministic Random Bit Generator was a step in the right direction. Still, the ambiguity, which looks intentional, is fueling the fire of what has now become the biggest security story of the year. And it is reducing trust in data security vendors. In fact, it’s generating renewed interest in security


Firewall Management Essentials: Quick Wins

As we put a little bow on our Firewall Management Essentials series, it’s time to focus on getting quick value from your investment. We are big fans of a Quick Wins approach, because far too many technologies sputter as deployment lags and value commensurate with the investment is never seen. The Quick Wins approach focuses on building momentum early in the deployment by balancing what can be done right now against longer-term goals for a technology investment. If a project team doesn’t prove value early and often, that typically dooms the implementation to failure. For firewall management, the lowest hanging fruit is optimization of existing rule sets before implementing a strong change management process. But let’s not put the cart before the horse – first you need to deploy the tool and integrate it with other enterprise systems.

Deployment and Integration

The good news for firewall management is that one central server can handle quite a few firewalls – especially because the optimization and change management processes happen on a periodic, rather than continuous or real-time, basis. Management devices don’t need to be inline and monitoring continuously, so the deployment architecture won’t make or break the implementation. Typically you deploy the firewall management server in a central location and have it discover all the firewalls in your environment. You might kickstart the effort by feeding the list of existing firewalls into the management system.

Do you want one central system, or a distributed environment? That depends on the scale of your environment and how quickly you need to be notified of changes. The longer the interval before rechecking each device’s configuration, the longer the window before you detect an unauthorized change. So you need to balance resource consumption against frequent checks to narrow the window between exploitation and detection. The deployment architecture depends more on the frequency of monitoring for configuration changes than on anything else. The change process (workflow) can run off the central server, and the math to optimize a rule set doesn’t consume resources on a firewall. We have seen large firewall environments (think service providers) managed by a handful of firewall management devices – multiple devices installed for availability and redundancy, rather than for performance reasons.

For integration, as described earlier in this series, you will want to pull or push information from tools like a vulnerability management system, a SIEM/log management tool, and/or a reporting/GRC system. Most of these tools have well-established APIs, and it is reasonable to expect your vendor to have already integrated with the leading tools in these categories. Pulling information into the firewall management tool provides more context to understand which changes pose what risk.

The area where you will gain the most value from enterprise integration is the help desk/task management system. Given the operational leverage of automating an effective firewall change management process, you will want to make sure changes are tracked in whatever tool(s) the operations team uses, so you don’t have two sources of information and everything stays in sync. The good news is that these operational tools are mature, with mature SDKs for integration. Again, it is reasonable to expect your firewall management vendor to have already integrated with your work management environment.
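To make the help desk integration a bit more concrete, here is a minimal sketch of what the two-way sync might look like. Everything in it is hypothetical – the endpoints, field names, and token are illustrative stand-ins rather than any particular vendor’s API – but the shape is the point: every rule change gets a ticket, and the ticket ID gets written back, so there is a single source of truth.

```python
# Hypothetical sketch of firewall-management-to-help-desk integration.
# Endpoint paths, field names, and auth are invented for illustration.
import requests

FIREWALL_MGMT = "https://fwmgmt.example.com/api"
HELPDESK = "https://helpdesk.example.com/api"
HEADERS = {"Authorization": "Bearer <redacted>"}

def open_change_ticket(rule_change: dict) -> str:
    """File the proposed firewall rule change in the help desk system."""
    resp = requests.post(
        f"{HELPDESK}/tickets",
        headers=HEADERS,
        json={
            "summary": f"Firewall change: {rule_change['rule_id']}",
            "body": rule_change["justification"],
            "queue": "network-ops",
        },
    )
    resp.raise_for_status()
    return resp.json()["ticket_id"]

def record_ticket(rule_change: dict, ticket_id: str) -> None:
    """Write the ticket reference back into the firewall management tool,
    so both systems point at the same change record."""
    resp = requests.patch(
        f"{FIREWALL_MGMT}/changes/{rule_change['rule_id']}",
        headers=HEADERS,
        json={"ticket": ticket_id},
    )
    resp.raise_for_status()
```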
Getting the Quick Win and Showing Value

We covered the change management process first in this series because, over time, it is where we typically see the most sustainable value accrue. But in a quick wins scenario we need to get something done now. So going through existing firewalls and pinpointing areas of improvement, in terms of both security and performance, can yield the quick win we want. This is the optimization process. The first job is to get value, but that is no good unless you can communicate it. So look to reports to highlight the results of the early optimization efforts. You will want to show things like how many unused rules were eliminated (reducing attack surface), as well as whether any of your old rules conflicted, and how the cleanup improved security. This quick effort (it should take a day or two) can build momentum for the next area of focus: change management.

Once the change management process is accepted in the environment and enumerated in the firewall management tool, you can start tracking service levels and response times on changes happening daily. You can also track the number of times changes that would have increased attack surface were flagged (and stopped) before going operational, to show how the tool reduces risk and increases the accuracy of firewall changes. This highlights the benefit of a firewall management tool in reducing the risk of a faulty rule change adding attack surface. A what-if analysis of potential changes can ensure that nothing will break (or crush performance) before actually making a change.

You can also demonstrate value by migrating rules from one firewall to another. If you need to support a heterogeneous environment, or are currently moving to an NGFW-based architecture, these tools can provide value by suggesting rule sets based on existing policies and optimizing them for the new platform. If you are a glutton for punishment you can migrate one device without using the tool (busting out your old spreadsheets), and then use the firewall management tool for the next migration for a real comparison. Or you can use an anecdote (we saved XX days by using the tool) to communicate the value of the firewall management tool. Either way, substantiate the value of the tool to your operational process.

Finally, at some point after deploying the tool, you will have an assessment or audit. You can then both leverage and quantify the value of the firewall management tool, in terms of saving time and increasing the accuracy of audit documentation. Depending on the regulation, the tool is likely to include a pre-built report which requires minimal customization the first time you go through the audit, to generate documentation and substantiate your firewall controls. You have now learned a bit about how to manage your firewalls in a


API Gateways [New Research]

If you are thinking about skipping this post because you are not a developer, or think APIs are irrelevant to you, stop! You are missing the point of an important trend in both security and development. Today we launch our research paper on API gateways. It includes a ton of information about what these gateways are, how they work, and how best to take advantage of them. Additionally, we describe this industry trend and how it bakes security into the services themselves. Even non-developers will be seeing these gateways, and working with one, in the near future.

On a more personal note, this was one of the more fun projects I have worked on recently. The best research projects are the ones where you learn a lot. A full third of the content in this paper was either previously unknown to me, or I had not connected the dots to fully realize the picture it creates, before Gunnar Peterson and I started the project. And for you jaded security and IT practitioners who have seen it all, I am willing to bet there is a lot going on here you were not aware of either. Going into the project I did not understand a few key things, such as:

- That lumbering health care company exposed back-office services to the public. Via the Internet? They can’t get out of their own way on simple IT projects, so how did they do that?
- I understand what OAuth is, but why is it so popular? It doesn’t make sense!
- How did that old-school brick and mortar shop deliver Android and iOS apps? They don’t develop software!
- Someone is making money with apps? Bull$!^&! That’s ‘labor of love’ stuff. Show me how, or I don’t buy it!

The word ‘enablement’ is one of those optimistic, feel-good words product vendors love. I stopped using it when I started working at Securosis, because we hear a poop-storm of bloated, inappropriate, and self-congratulatory terms without any relevance to reality. When I am feeling generous I call it ‘market-leading’ optimism. So when Gunnar wanted the word ‘enablement’ in the title of the paper I let out a stream of curse words. “Are you crazy? That has got to be the dumbest idea I’ve ever heard. Security tech does not enable. Worse, we’ll lose credibility because it will sound like a vendor paper!” But by the end of the project I had caved. Sure enough, Gunnar was right – not purely from a technical perspective, but also operationally. Security, application development, and infrastructure have evolved with a certain degree of isolation, and this technology enables companies to provide external services while satisfying compliance requirements, often despite lacking in-house development skills.

Anyway, this has been one of the more interesting research projects I have worked on. Gunnar and I worked hard to capture the essence of this trend, so I hope you find it as educational as I did. We would like to heartily thank Intel for licensing this content – they have an API Management solution, and you can download the report from Intel’s API Gateway resource center, which has tutorials and other related technical papers. We will have an upcoming webcast with Intel, so I encourage you to register with them if you want more details. You can also download a free copy from our library: API Gateway research.
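For the non-developers, a toy example may help show what “baking security into the services” looks like: the gateway sits in front of the back-office service and rejects requests that lack a valid token, so the service itself never sees them. This is a minimal sketch, not how any particular gateway product works – real gateways validate OAuth tokens against an authorization server rather than a hard-coded set.

```python
# Minimal gateway sketch: authentication enforced in front of the service.
# The token set is a hypothetical stand-in for real OAuth token validation.
from wsgiref.simple_server import make_server

VALID_TOKENS = {"demo-token"}

def backend_app(environ, start_response):
    # The protected back-office service; it never sees unauthenticated traffic.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"back-office service response\n"]

def gateway(app):
    # Wraps any WSGI app with a bearer-token check at the front door.
    def wrapped(environ, start_response):
        auth = environ.get("HTTP_AUTHORIZATION", "")
        token = auth[7:] if auth.startswith("Bearer ") else ""
        if token not in VALID_TOKENS:
            start_response("401 Unauthorized", [("Content-Type", "text/plain")])
            return [b"invalid or missing token\n"]
        return app(environ, start_response)
    return wrapped

if __name__ == "__main__":
    make_server("", 8080, gateway(backend_app)).serve_forever()
```

A request carrying Authorization: Bearer demo-token reaches the service; anything else is turned away at the gateway. That is the core idea: the security policy lives in one enforced place instead of being reimplemented inside every service.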


Investigating Touch ID and the Secure Enclave

As much as it pained me, Friday morning I slipped out of my house at 3:30am, drove to the nearest Apple Store, set up my folding chair, and waited patiently to acquire an iPhone 5s. I was about number 150 in line, and it was a good thing I didn’t want a gold or silver model. This wasn’t my first time in a release line, but it was most definitely the first time I have stood in line since having children, and truly appreciated the value of sleep. It wasn’t that I felt I must have the new shiny object; as someone who writes extensively on Apple security, I felt it was important to get my hands on a Touch ID equipped phone as quickly as possible, to really understand how it works. I learned even more than I expected.

The training process is straightforward and rapid. Once you enable Touch ID you press and lift your finger, and if you don’t move it around at all the iPhone prompts you to slightly change positioning for a better profile. Then there is a second round of sensing the fringes of your finger. You can register up to five fingers, and they don’t all have to be your own.

What does this tell me from a security perspective? Touch ID is clearly storing an encrypted fingerprint template, not a hashed one. The template is modified over time as you use it (according to Apple statements). Apple also, in their Touch ID support note, mentions that there is a 1 in 50,000 chance of a match on the section of fingerprint the sensor reads. So I believe they aren’t doing a full match of the entire template, but of a certain number of registered data points. There are some assumptions here, and some of my earlier assumptions about Touch ID were wrong. Apple has stated from the start that the fingerprint data is encrypted and stored in the Secure Enclave of the A7 chip. In my earlier Macworld and TidBITS articles I explained that I thought they really meant hashed, like a passcode, but I now believe not only that I was wrong, but that there is even more to it.

Touch ID itself is insanely responsive. As someone who has used many fingerprint scanners before, I was stunned by how quickly it works, from so many different angles. The only failures I have are when my finger is really wet (it still worked fine during a sweaty workout). My wife had more misreads after a long bath, when her skin was saturated and swollen. This is the future of unlocking your phone – if you want it. I already love it.

I mentioned that the fingerprint template (Apple prefers to call it a “mathematical representation”, but I am sticking with standard terms) is encrypted and stored. I believe that Touch ID also stores your device passcode in the Secure Enclave. When you perform a normal swipe to unlock, then use Touch ID, it clearly fills in your passcode (or Apple is visually faking it). Also, during the registration process you must enter your passcode (and your Apple ID password, if you intend to use Touch ID for Apple purchases). Again, we won’t know until Apple confirms or denies, but it seems that your iPhone works just like normal, using standard passcode hashing to unlock and encrypt the device. Touch ID stores the passcode in the Secure Enclave, which Apple states is walled off from everything else. When you successfully match an enrolled finger, your passcode is loaded and filled in for you. Again, assumptions abound here, but they are educated.

The key implication is that you should still use a long and complicated passcode. Touch ID does not prevent brute-force passcode cracking!
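To make those assumptions explicit, here is a toy model of the unlock flow as described above. Emphasis on toy: Apple has not confirmed these details, and the matching logic and threshold below are invented purely for illustration. The one idea it captures is that a fingerprint match releases a stored passcode, rather than replacing the passcode.

```python
# Toy model of the Touch ID flow described above - educated assumptions,
# not Apple's actual design. The matching logic is an invented stand-in.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SecureEnclaveModel:
    enrolled_template: bytes   # fingerprint template (stored encrypted in reality)
    stored_passcode: str       # device passcode held inside the enclave

    def match(self, candidate: bytes, threshold: int = 40) -> bool:
        # Stand-in for matching a subset of registered data points rather
        # than the full template; the real algorithm is proprietary.
        score = sum(a == b for a, b in zip(candidate, self.enrolled_template))
        return score >= threshold

    def unlock(self, candidate: bytes) -> Optional[str]:
        # A successful match releases the passcode to the normal unlock
        # path; a failed match releases nothing.
        return self.stored_passcode if self.match(candidate) else None
```

The model also shows why the long passcode still matters: the secret that actually unlocks and encrypts the device is the passcode, and Touch ID is merely a convenient way to supply it.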
The big question now is how the Secure Enclave works, and how secure it really is. Based on a pointer provided by James Arlen in our Securosis chat room, and information released from various sources, I believe Apple is using ARM TrustZone technology. That page offers a white paper in case you want to dig deeper than the overview provides, and I read all 108 pages. As ARM describes it:

“The security of the system is achieved by partitioning all of the SoC hardware and software resources so that they exist in one of two worlds – the Secure world for the security subsystem, and the Normal world for everything else. Hardware logic present in the TrustZone-enabled AMBA3 AXI(TM) bus fabric ensures that Normal world components do not access Secure world resources, enabling construction of a strong perimeter boundary between the two. A design that places the sensitive resources in the Secure world, and implements robust software running on the secure processor cores, can protect assets against many possible attacks, including those which are normally difficult to secure, such as passwords entered using a keyboard or touch-screen. By separating security sensitive peripherals through hardware, a designer can limit the number of sub-systems that need to go through security evaluation and therefore save costs when submitting a device for security certification.”

Seems pretty clear. We still don’t know exactly what Apple is up to – TrustZone is very flexible and can be implemented in a number of different ways. At the hardware level, this might or might not include ‘extra’ RAM and resources integrated into the System on a Chip. Apple may have some dedicated resources embedded in the A7 for handling Touch ID and passcodes, which would be consistent with their statements and diagrams. Secure operations probably still run on the main A7 processor, in restricted Secure mode, so regular user processes (apps) cannot access the Secure Enclave. That is how TrustZone handles secure and non-secure functions sharing the same hardware.

So, for the less technical: part of the A7 chip is apparently dedicated to the Secure Enclave and only accessible when running in secure mode. It is also possible that Apple has processing resources dedicated only to the Secure Enclave, but either option still looks pretty darn secure.

The next piece is the hardware. The Touch ID sensor itself may be


Keep Calm and Bust out the Tinfoil Hat

Dennis Fisher writes what many of us have been feeling for a while in The Sky is Not Falling – It’s Fallen. He argues that the fundamental underpinnings of security are being whittled away, slowly but surely. And the fact that it’s a cynical view doesn’t make it wrong.

“…the steady accumulation of evidence over the last three months makes it difficult to come to any conclusion other than this: nothing can be trusted.”

Security folks have talked about trusting no one – basically since the beginning of time. But really trusting nothing appears to present a mental barrier that many people are either unable or unwilling to jump. As Fisher puts it:

“So we’ve come to the point now where the most paranoid and conspiracy minded among us are the reasonable ones. Now the crazy ones are the people saying that it’s not as bad as you think, calm down, the sky isn’t falling. In one sense, they’re right. The sky isn’t falling. It’s already fallen.”

I am no government apologist, and I think some activities definitely cross the line – including using the specter of terrorism to do whatever they want. We have evidence that the “powers that be” have manipulated the truth, painted dissenters as traitors, and continue to hide behind layers and layers of national security rhetoric and fear of terrorism to obfuscate the truth. But I wonder whether all this is really new. If I remember correctly, McCarthy used many of the same tactics to squelch dissent about clear violations of the rights of good, upstanding citizens, and to wage a witch hunt. Now they have automated tools to search for witches, and we’re surprised they are using them? We have worried about foreign governments (whichever governments you are most concerned about) putting back doors in imported products for a long time. Why would anyone assume our own government wouldn’t be doing the same? I guess the outrage comes from the realization that the emperor hasn’t changed his clothes since the 1950s.

I suppose it’s much more comfortable to go through life blissfully unaware of what’s really happening. I can’t really say that my life is better now that I know for a fact what I always suspected. Actually, now that I think about it, my life is the same. Am I going to do things differently because someone is watching? Nope. That doesn’t mean we should accept a surveillance society. But at the end of the day I am a realist, and perhaps a crazy one, because even if it’s “as bad as you think,” I am pretty sure life will go on. It will be different, but change is inevitable – the increasing pace of communications and automation continues to disrupt how we do things, in security and everywhere else.

The question we each need to ask is: how much will we let this stuff impact our daily lives? Will you start wearing a tinfoil hat and embrace your own personal paranoia to the point of distraction? Or will you move forward, knowing the world is different, that society has overcome lots of bad behavior in the past, and that it will do so again? That is a decision each of us needs to make, and we all need to live with the consequences of our decisions. For better and worse.

And somewhere along the line I have become a borderline optimist. I guess it’s time to leave security.


A Quick Response on the Great Touch ID Spoof

Hackers at the Chaos Computer Club were the first to spoof Apple’s Touch ID sensor. They used existing techniques, but at higher resolution. A quick response:

- The technique can be completed with generally available materials and technology. It isn’t the sort of thing everyone will do, but there is no inherent barrier to entry such as high cost, special materials, or super-special skills. The CCC did great work here – I just think the hype is a bit off base.
- On the other hand, Touch ID primarily targets people with no passcode, or a 4-digit PIN. It is a large improvement for that population. We need some perspective here.
- Touch ID disables itself if the phone is rebooted, if you don’t use Touch ID for 48 hours, or if you wipe your iPhone remotely. This is why I’m comfortable using Touch ID even though I know I am more targeted. There is little chance of someone getting my phone without me knowing it (I’m addicted to the darn thing).
- I will disable Touch ID when crossing international borders, in certain countries, and at certain conferences and hacker events. Yes, I believe that if you enable Touch ID it could allow law enforcement easier access to your phone (because they can get your fingerprint, or touch your finger to your phone). If this concerns you, turn it off.
- As Rob Graham noted, you can set up Touch ID to use your fingertip rather than the main body of your finger. I can confirm that this works, but you do need to game the setup a little. Your fingertip print is harder to get, but still not impossible.
- Not all risk is equal. For the vast majority of consumers, this provides as much security as a strong passcode, with the convenience of no passcode. If you are worried you might be targeted by someone who can get your fingerprint, get your phone, and fake out the sensor… don’t use Touch ID. Apple didn’t make it for you.

PS: I see the biggest risk for Touch ID in relationships with trust issues. It wouldn’t shock me at all to read about someone using the CCC technique to invade the privacy of a significant other. There are no rules in domestics…
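Those lockout conditions are the heart of the threat model, so they are worth restating as logic. A minimal sketch encoding only the three conditions listed above (the exact trigger details are Apple’s, not mine):

```python
# Sketch of the Touch ID fallback policy described above: if any lockout
# condition has tripped, the passcode is required instead.
from datetime import datetime, timedelta

def touch_id_available(last_touch_id_use: datetime,
                       rebooted_since_unlock: bool,
                       remote_wipe_issued: bool,
                       now: datetime) -> bool:
    if rebooted_since_unlock or remote_wipe_issued:
        return False
    return now - last_touch_id_use < timedelta(hours=48)
```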


Friday Summary: September 20, 2013

I have been so totally overwhelmed with projects that I have had very little time to read, research, or blog. So I was excited this morning to take a few minutes to download the new SDL research paper from Microsoft’s blog. It examines vendors using Microsoft’s SDL in both Microsoft and non-Microsoft environments. And what did I learn? Nothing. Apparently their research team has the same problem as the rest of us: no good metrics, and the best user stories get sanitized into oblivion. I am seriously disappointed – this type of research is sorely needed.

If you are new to secure software development programs and want to learn, I still encourage you to download the paper, which raises important topics with snippets of high-level information. As a bonus it includes an overview of Microsoft’s SDL. If you aren’t new to secure development, you would be better off learning about useful strategies from the BSIMM project. If you are a developer and want more detailed information on how to implement Microsoft’s SDL, use the blog and the web site. They offer a ton of useful information – you just have to dig a bit to find what you want.

Back to the subject at hand: there are two basic reasons to examine previous SDL implementations: tell me why I should do it, and show me how to do it. Actually three, if you count failure analysis, but that is an unpopular pastime. Let’s stick with the two core reasons.

Those who have built software with secure coding techniques and processes have seen the positive benefits. And in many cases they have seen that security can be effective without costing an arm and a leg. But objectively proving that is freaking hard. Plenty of people talk about business benefits, but few offer compelling proof. Upper management wants numbers or it’s not real. I have made the mistake of telling management peers, “We will be more secure if we do this, and we will save money in the long run as we avoid fixing broken stuff in the future, or buying bolt-on security products.” Invariably they ask “How secure?” “How much money?” or “How far into the future?” – all questions I am unable to answer. “Trust me” doesn’t work when asking for budget, or when trying to get a larger salary allocation for a person who has been trained in secure coding. It is very hard to quantify the advantages until you are coding, or trying to fix broken code.

One of the advantages at larger financial firms is that they have been building or integrating software for a long time, have been under attack from various types of fraudsters for a long time, and can apply lessons from failed – and poorly executed – projects to subsequent projects. They have bugs, they understand fraud rates, and they can use internal metrics to see which fixes work. Over the long term they can objectively measure whether specific process changes are making a difference. Microsoft has. This report should have. Developers and managers need research like this to justify secure software development.

So where do you start? How do you do it? You ask your friends, usually. The CISOs, developers, and DevOps teams I speak with use tools and techniques their peers tried and had good experiences with. You have the same problem as your buddy at BigCo, and he tried SDLC, and it worked. Ideal? No. Scientific? Hell, no. It’s the right course of action, for the wrong reasons. Still, peer encouragement is often how these efforts start. Word of mouth is how Agile development propagated. Will a company see the same successes as a peer? Almost assuredly not.
Your people, your process, your code. Totally different. But overall, from a decade of experience doing this, I know it works. It’s not plug and play, there are growing pains, and it takes effort, but it works.

On to the Summary:

Webcasts, Podcasts, Outside Writing, and Conferences
- Mortman’s speaking at BruCon Back in Black.
- Dave’s doing a BruCon panel as well, just in case you couldn’t get enough during the keynote.
- Mike’s Dark Reading post on fear mongers vs. innovation.
- Cloud IAM webcast next week: check it out!

Favorite Securosis Posts
- Adrian Lane: Defending Against Application Denial of Service Attacks. Mike is delving into application layer DoS, which is much more interesting than network DoS – there are tons of creative ways to kick over a server. This will be a fun series!
- David Mortman: Firewall Management Essentials: Change Management.
- Rich: Mike’s Incite this week. Mike is old. Then again, he’s the only one who wrote anything this week. Me? Baby no sleep, Rich no write.
- Mike Rothman: No Sleep Mismash. I have been where Rich is now. No sleep. Trying to be productive. Pile on a job change and relocation to ATL. I don’t miss that period in my life.

Other Securosis Posts
- Firewall Management Essentials: Optimizing Rules.
- Black Hat West Cloud Security Training.
- Threat Intelligence for Ecosystem Risk Management [New Paper].
- Firewall Management Essentials: Change Management.

Favorite Outside Posts
- Adrian Lane: Crooks Hijack Retirement Funds Via SSA Portal. Great post and very informative regarding a growing problem with fraud. And the onus is not on every person with a social security number to fix the SSA’s operational problem – the SSA needs to a) do a better job vetting users, and b) stop payouts through pre-paid cards. That entire arrangement is an uncontrollable clusterfsck. They put the infrastructure on the Internet, so they are responsible for operational security. Not that it’s easy, but intractability is why many IT projects don’t get started in the first place.
- USA Today interview with Jony Ive. Some tidbits on design, and the one I really like is the focus on making functions invisible. His example of Touch ID is perfect – it just works, no “scanning… AUTHENTICATED” animations.
- Mike Rothman: Is the Perimeter Really Dead? Of course not. But it’s definitely changing. Decent take on the issue in Dark Reading.
- David Mortman: Managing Secrets With Chef Vault.

Research Reports and Presentations
- Identity and Access Management for


Defending Against Application Denial of Service Attacks [New Series]

As we discussed last year in Defending Against Denial of Service Attacks, attackers increasingly leverage availability-impacting attacks both to cause downtime (which costs site owners money) and to mask other kinds of attacks. These availability-impacting attacks are better known as Denial of Service (DoS) attacks. Our research identified a number of adversaries who increasingly use DoS attacks, including:

- Protection Racketeers: These criminals use DoS threats to demand ransom money. Attackers hold a site hostage by threatening to knock it down, and sometimes follow through. They get paid. They move on to the next target. The only thing missing is the GoodFellas theme music.
- Hacktivists: DoS has become a favored approach of hacktivists seeking to make a statement and shine a spotlight on their cause, whatever it may be. Hacktivists care less about the target than about their cause. The target is usually collateral damage, though they are happy to hit two birds with one stone by attacking an organization that opposes their cause when they can. You cannot negotiate with these folks, and starting public discourse is one of their goals.
- ‘CyberWar’: We don’t like the term – no one has been killed by browsing online (yet) – but we can expect to see online attacks as a precursor to warplanes, ships, bombing, and ground forces. Knocking out power grids, defense radar, major media, and other critical technology infrastructure magnifies the impact of an attack.
- Exfiltrators: These folks use DoS to divert attention from the real attack: stealing data they can monetize. This could be intellectual property theft or a financial attack such as stealing credit cards. Either way, they figure that if they blow in your front door you will be too distracted to notice your TV scooting out through the garage. They are generally right.
- Competitors: They say all’s fair in love and business. Some folks take that a bit too far, and actively knock down competitor sites for an advantage. Maybe it’s during the holiday season. Maybe it happens after a competitor resists an acquisition or merger offer. It could be locking folks out from bidding on an auction. Your competition might screen scrape your online store to make sure they beat your pricing, causing a flood of traffic on a very regular and predictable basis. A competitor might try to ruin your hard-earned (and expensive) search rankings. Regardless of the reason, don’t assume an attacker is a nameless, faceless crime syndicate in a remote nation. It could be the dude down the street trying to get any advantage he can – legalities be damned.

Given the varied adversaries, it is essential to understand that two totally different types of attacks are commonly lumped under the generic ‘DoS’ label. The first involves the network: blasting a site with enough traffic (sometimes over 300 Gbps) to flood the pipes and overwhelm security and network devices, as well as application infrastructure. This volumetric attack is basically the ‘cyber’ version of hitting something a billion times with a rock. Defending against this brute force attack typically demands a scrubbing service and/or CDN (Content Delivery Network) to deal with the onslaught of traffic and keep sites available. The second type of DoS attack targets weaknesses in applications.
In Defending Against DoS we described an application attack as follows:

“Application-based attacks are different – they target weaknesses in web application components to consume all the resources of a web, application, or database server to effectively disable it. These attacks can target either vulnerabilities or ‘features’ of an application stack to overwhelm servers and prevent legitimate traffic from accessing web pages or completing transactions.”

These attacks require knowledge of the application and how to break or game it. They can be far more efficient than just blasting traffic at a network, and in many cases take advantage of legitimate features of the application, making defense all the harder.

We are pleased to launch the next chapter in our Denial of Service research, entitled “Defending Against Application Denial of Service Attacks” (yep, we are thinking way out of the box for titles). In this series we will dig far more deeply into application DoS attacks and provide both an overview of each tactic and possible mitigations for defense. Here is a preliminary list of what we intend to cover:

- Application Server Attacks: The first group of AppDoS attacks targets the server and infrastructure stack. We will profile attacks such as Slowloris, Slow HTTP POST, RUDY, Slow Read, and XerXes, discussing mitigations for each attack. We will also talk about brute force attacks on SSL (overwhelming servers with SSL handshake requests) and loading common pages – such as login, password reset, and store locators – millions of times.
- Attacking the Stack: Targeting Databases and Programming Languages: In this post we will talk about the next layers in the application stack, including the database and the languages used to build the application. Regarding database DoS, we will highlight some of our recent research in Dealing with Database Denial of Service.
- Abusing Application Logic: As we continue to climb the application stack, we will talk about how applications are targeted directly with GET floods and variants. By profiling applications and learning which pages are most resource intensive, attackers can focus their efforts on the most demanding pages. To mitigate these attacks, we will discuss the roles of rate controls and input validation, as well as WAF and CDN based approaches to filter out attack requests before the application needs to deal with them (a minimal sketch of these controls follows this list).
- Billions of Results Served: We will profile the common attacks which overwhelm applications by overflowing memory with billions of results from either search results or shopping carts. We will touch on unfriendly scrapers, including search engines and other catalog aggregators that perform ‘legitimate’ searching but can be gamed by attackers. These attacks can only be remediated within the application, so we will discuss mechanisms for doing that (without alienating the developers).
- Building DoS Protections in: We will wrap up the series by talking about how to implement a productive process for working with developers to build in AppDoS protections.
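As a preview of the mitigation discussion, here is a minimal sketch of two of the controls named above: a per-client token bucket for rate control, and a crude input size check that rejects oversized requests before the server does any expensive work on them. The parameters are illustrative, not recommendations.

```python
# Sketch of two AppDoS mitigations: per-client rate limiting (token bucket)
# and input size validation. Thresholds here are arbitrary examples.
import time
from collections import defaultdict

RATE = 2.0                   # tokens replenished per second
BURST = 10.0                 # bucket capacity (burst allowance)
MAX_BODY_BYTES = 64 * 1024   # reject huge inputs before parsing/hashing them

_buckets = defaultdict(lambda: (BURST, time.monotonic()))  # ip -> (tokens, last seen)

def allow_request(client_ip: str, body_len: int) -> bool:
    # Input validation first: drop oversized bodies outright, before any
    # expensive hashing or parsing happens on the server side.
    if body_len > MAX_BODY_BYTES:
        return False
    tokens, last = _buckets[client_ip]
    now = time.monotonic()
    tokens = min(BURST, tokens + (now - last) * RATE)  # replenish since last request
    if tokens < 1.0:
        _buckets[client_ip] = (tokens, now)
        return False
    _buckets[client_ip] = (tokens - 1.0, now)          # spend one token
    return True
```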


Incite 9/18/2013: Got No Game

On Monday night I did a guest lecture for some students in Kennesaw State’s information security program. It is always a lot of fun to get in front of the “next generation” of practitioners (see what I did there?). I focused on innovation in endpoint protection and network security, discussing the research I have been doing into threat intelligence. The kids (a few looked as old as me) seemed to enjoy hearing about the latest and greatest in the security space. It also gave me a forum to talk about what it’s really like in the security trenches, which many students don’t learn until they are knee-deep in the muck. I didn’t shy away from the lack of obvious and demonstrable success, or how difficult it is to get general business folks to understand what’s involved in protecting information. The professor had a term that makes a lot of sense: security folks are basically digital janitors, cleaning up the mess the general population makes.

When I started talking about the coming perimeter re-architecture (driven by NGFW, et al), I mentioned how much time they will be able to save by dealing with a single policy, rather than having to manage the firewall, IPS, web filter, and malware detection boxes separately. I told them that would leave plenty of time to play Tetris. Yup, that garnered an awkward silence. I started spinning and asked if any of them knew what Tetris was. Of course they did, but a kind student gently informed me that no one has played that game in approximately 10 years. Okay, how about Gears of War? Not so much – evidently that trilogy is over. I was going to mention Angry Birds, but evidently Angry Birds was so 12 months ago. I quit before I lost all credibility.

There it was, stark as day – I have no game. Well, no video game anyway. Once I got over my initial embarrassment, I realized my lack of prowess is kind of intentional. I have a fairly addictive personality, so anything that can be borderline addictive (such as video games) is a problem for me. It’s hard to pay my bills if I’m playing Strategic Conquest for 40 hours straight, which I did back in the early ’90s. I have found through the years that if I just don’t start, I don’t have to worry about when (or if) I will stop.

I see the same tendencies in the Boy. He’s all into “Clash of Clans” right now. Part of me is happy to see him get his Braveheart on, attacking other villages Game of Thrones style. He seems pretty good at analyzing an adversary’s defenses and finding a way around them, leading his clan to victory. But it’s frustrating when I have to grab the Touch just to have a conversation with him. Although at least I know where he gets it from.

Some folks can practice moderation. You know, those annoying people who can take a little break for 15 minutes and play a few games, and then be disciplined enough to stop and get back on task. I’m not one of those people. When I start something, I start something. And that means the safest thing for me is often not to start. It’s all about learning my own idiosyncrasies, and not putting myself in situations where I will get into trouble. So no video games for me!

–Mike

Photo credit: “when it’s no longer a game” originally uploaded by istolethetv

Heavy Research

We are back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can get all our content in its unabridged glory. And you can get all our research papers too.
Firewall Management Essentials
- Optimizing Rules
- Change Management
- Introduction

Continuous Security Monitoring
- Migrating to CSM
- The Compliance Use Case
- The Change Control Use Case
- The Attack Use Case
- Classification
- Defining CSM
- Why. Continuous. Security. Monitoring?

API Gateways
- Implementation
- Key Management
- Developer Tools

Newly Published Papers
- Threat Intelligence for Ecosystem Risk Management
- Dealing with Database Denial of Service
- Identity and Access Management for Cloud Services
- The 2014 Endpoint Security Buyer’s Guide
- The CISO’s Guide to Advanced Attackers
- Defending Cloud Data with Infrastructure Encryption

Incite 4 U

Good guys always get DoS’ed: Django learned the hard way that if you give hackers an inch they’ll take a mile – and your site too. Last week they suffered a denial of service attack when users submitted multi-megabyte passwords – the “computational complexity” of generating strong hashes for just a few such requests was enough to DoS the site. Awesome. The mitigation for this kind of attack is input validation. Sure, as a security expert I still get pissed when sites limit me to 8-character passwords, but it’s unreasonable to accept the Encyclopedia Britannica as valid field input. I am sorry to be smiling as I write this – I feel bad for the Django folks – but it’s funny how no good security intentions go unpunished. Thanks for patching, guys! – AL

DHS gets monitoring religion (to the tune of $6B): Not sure how I missed the award of a $6 billion DHS contract to implement continuous detection and mitigation technology. Evidently this is the new term for continuous monitoring, and it seems every beltway bandit, and every scanning and SIEM vendor, is involved. So basically nothing will get done – I guess that’s the American way. But this move, which started with NIST’s push to continuous monitoring and continues with DHS’s rebranded CDM initiative, is going in the right direction. Will they ever get to “real time” monitoring? Does it matter? They can’t actually respond in real time, so I don’t think so. If any of these gold-plated toilet seats provides the ability to see a vulnerability within a few days (rather than having it show up on a quarterly report, and be ignored), it’s an improvement. As they said in Contact, “baby steps…” – MR

FUD filled vacuum: When working with clients I am often still surprised at how often even mature organizations underestimate the eventual misinterpretations of their


Firewall Management Essentials: Managing Access Risk

We have discussed two of the three legs of comprehensive firewall management: a change management process and optimizing the rules. Now let’s work through managing risk using the firewall. First we need to define risk, because depending on your industry and philosophy, risk can mean many different things. For firewall management we are talking about the risk of unauthorized parties accessing sensitive resources. Obviously, if a device with critical data is inaccessible to internal and/or external attackers, the risk it presents is lower.

This “access risk management” function involves understanding, first and foremost, the network’s topology and security controls. The ability to view attack paths provides a visual depiction of how an attacker could gain access to a device. With this information you can see which devices need remediation and/or network workarounds, and prioritize fixes. Another benefit of visualizing attack paths is understanding when changes to network or security devices unintentionally expose additional attack surface.

So what does this have to do with your firewall? That’s a logical question, but a key firewall function is access control. You configure the firewall and its rule set to ensure that only authorized ports, protocols, applications, users, etc. have access to critical devices and applications within your network. A misconfigured firewall can have significant and severe consequences, as discussed in the last post. For example, years ago when supporting a set of email security devices, we got a call about an avalanche of spam hitting the mailboxes of key employees. The customer was not pleased, but the deployed email security gateway appeared to be working perfectly. Initially perplexed, one of our engineers checked the backup email server, and discovered it was open to Internet traffic due to a faulty firewall rule. Attackers were able to use the backup server as a mail relay and blast all the mailboxes in the enterprise. With some knowledge of network topology and the paths between external networks and internal devices, this issue could have been identified and remediated before any employees were troubled.

Key Access Risk Management Features

When examining the network and keeping track of attack paths, look for a few key features:

- Topology monitoring: Topology can be determined actively, passively, or both. For active mapping you will want your firewall management tool to pull configurations from firewalls and other access control devices. You also need to account for routing tables, network interfaces, and address translation rules. Interoperating with passive monitoring tools (network behavioral analysis, etc.) can provide more continuous monitoring. You need the ability to determine whether and how any specific device can be accessed, and from where – both internal and external.
- Analysis horsepower: Accounting for all the possible paths through a network requires an n*(n-1) analysis, and n gets rather large for an enterprise network (a toy illustration appears at the end of this post). The ability to re-analyze millions of paths on every topology change is critical for providing an accurate view.
- What if?: You will want to assess each possible change before it is made, to understand its impact on the network and attack surface. This enables the organization to detect additional risks posed by a change before committing it.
In the example above, if that customer had a tool to help them understand that a firewall rule change would make their backup email server a sitting duck for attackers, they would have reconsidered.

- Alternative rules: It is not always possible to remediate a specific device, due to operational issues. So to control risk you would like a firewall management tool to suggest appropriate rule changes or alternate network routes to isolate the vulnerable device and protect the network.

At this point it should be clear that all these firewall management functions depend on each other. Optimizing rules is part of the change management process, and access risk management comes into play for every change – and vice versa. So although we discussed these functions as distinct requirements of firewall management, in reality you need all of them working together for operational excellence.

In this series’ last post we will focus on getting a quick win with firewall management technology. We will discuss deployment architectures and integration with enterprise systems, and work through a deployment scenario to make many of these concepts a bit more tangible.
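To give a feel for the analysis-horsepower point above, here is a toy version of the n*(n-1) reachability check. Real tools model interfaces, NAT, and routing tables; this sketch reduces everything to a plain allow-graph, which is still enough to catch something like the open mail relay from the earlier example.

```python
# Toy n*(n-1) attack path analysis: for every ordered pair of nodes, check
# whether the allow-graph leaves a path open. The topology is invented.
from itertools import permutations

def reachable(allow: dict, src: str, dst: str) -> bool:
    # Depth-first search over the allow-graph.
    seen, stack = set(), [src]
    while stack:
        node = stack.pop()
        if node == dst:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(allow.get(node, ()))
    return False

def all_paths(allow: dict) -> dict:
    nodes = set(allow) | {n for targets in allow.values() for n in targets}
    # n*(n-1) ordered pairs, re-checked after every topology change - this
    # is why the analysis takes real compute at enterprise scale.
    return {(s, d): reachable(allow, s, d) for s, d in permutations(nodes, 2)}

# A faulty rule leaves the backup mail server reachable from the Internet.
topology = {"internet": {"dmz"}, "dmz": {"backup-mail"}, "backup-mail": set()}
print(all_paths(topology)[("internet", "backup-mail")])  # True -> exposure
```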


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments/input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
    For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
    Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.