Securosis

Research

Incite 4/18/2012: Camión de Calor

It was a Mr. Mom weekend, so I particularly appreciated settling in at the coffee shop on Monday morning and getting some stuff done. And it wasn't just trucking the kids around to their various activities. It was a big weekend for all of us to catch up on work. XX1 has the CRCT standardized test this week, which is a big deal in GA, so there was much prep for that. Both XX2 and the Boy have How-to presentations in class this week, so they each had to write and practice a presentation. And I had to finish up our taxes and update the Securosis financials. With the Boss in absentia, I was juggling knives trying to get everything done. I look back on an intense but fun weekend.

But when you spend a large block of time with kids, they inevitably surprise you with their interrogation… I mean questions. I was wearing my Hot Truck t-shirt (pictured at right), and the Boy was fascinated. What's a Hot Truck? Is it hot? That was just the beginning of the questioning, so the Boy needed a little context.

The Hot Truck is an institution for those who went to Cornell. Basically a guy made French bread pizzas in a truck parked every night right off campus. Conveniently enough, the truck parked around the corner from my fraternity house, and it was clearly the preferred late night meal after a night of hard partying. At any time of year you had folks milling around the truck waiting for their order. Of course the truck itself was pretty cool. It was basically an old box truck fitted with a pizza oven. The city set up a power outlet right on the street and he'd drive up at maybe 10pm, plug in, and start cooking.

Things didn't get exciting until 1 or 2 in the morning. Then the line would be 10-15 deep and the money guy would write your order on a paper bag. No name, nothing else. Just your order. Obviously there were plenty of ways to game such a sophisticated system. You could sneak a peek at the list and then say the sandwich was yours when it came up. Then wait until the real owner of the sandwich showed up and tried to figure out what happened while you munched on their food. The truck was there until 4am or so – basically until everyone got served. Over time, you got to know Bob (the owner) and he'd let you inside the truck (which was great on those 10-degree winter nights) to chat. You'd get your sandwich made sooner or could just take one of the unclaimed orders. He must have loved talking to all those drunk fools every night.

But best of all was the shorthand language that emerged from the Hot Truck. You could order the PMP (Poor Man's Pizza), MBC (meatballs and cheese), RoRo (roast beef with mushrooms), or even a Shaggy (a little bit of everything) – named after a fraternity brother of mine. And then you'd put on the extras, like Pep (pepperoni) or G&G (grease the garden – mayo and lettuce). All on a French bread pizza. My favorite was the MBC Pep G&G. Between the Hot Truck and beer it's no wonder I gained a bunch of weight every year at school.

But all things end, and Bob sold the Truck a few years ago. It was bought by a local convenience store and they still run the truck, as well as serve the sandwiches in their store in downtown Ithaca. It's just not the same experience though – especially since I don't eat meatballs anymore. But the memories of Hot Truck live on, and I even have the t-shirt to prove it.

–Mike

Photo credits: "Hot Truck T-Shirt" taken by Mike Rothman

Heavy Research

We're back at work on a variety of our blog series, so here is a list of the research currently underway.
Remember you can get our Heavy Feed via RSS, with all our content in its unabridged glory.

Vulnerability Management Evolution: Scanning the Application Layer
Watching the Watchers (Privileged User Management): Monitor Privileged Users; Clouds Rolling In
Understanding and Selecting DSP: Use Cases
Malware Analysis Quant: Index of Posts

Incite 4 U

Stone cold responders: I recently did a session with a dozen or so CISOs at an IANS Forum, and one of the topics was incident response. I started to talk about the human toll of high-pressure incident response, and got a bunch of blank stares. Of course we dug in, and the bigger companies with dedicated response staff said they staff incident response teams with even-keeled folks. The kind who don't get too excited or depressed or much of anything. Which kind of aligns with Lenny Z's post on the kind of personality that detects security issues early. It seems anxious folks on edge all the time may not have an effective early warning system. Just more evidence that you need the right folks in the right spots for any chance at success. – MR

PCI: Living on borrowed time? Bob Carr of Heartland Payments says "Anyone that thinks they're not going to be breached is naive." This interview, posted just days after Heartland's financial settlement details went public, reinforces the notion that – just as cockroaches are the only survivors of a nuclear holocaust – only lawyers win in lawsuits. It was expensive for Heartland, and CardSystems Solutions did not survive. This is topical in light of the Global Payments breach, and it illustrates the risk to financial companies now that Visa is offering to forgo PCI audits if a majority of merchant transactions originate from EMV terminals. Keep in mind that the breach at Global Payments – or Heartland for that matter – and the fraud enabled by cloning credit cards are totally separate. So at a time when merchants and payment processors should be looking more aggressively at security and breach preparedness, as Mr. Carr advocates… Visa is backing off on audits to boost EMV. Some will say this is an exchange for back office security for


Watching the Watchers: Clouds Rolling in

As much as we enjoy being the masters of the obvious, we don't really need to discuss the move to cloud computing. It's happening. It's disruptive. Blah blah blah. People love to quibble about the details but it's obvious to everyone. And of course, when the computation and storage behind your essential IT services might not reside in a facility under your control, things change a bit. The idea of a privileged user morphs in the cloud context, by adding another layer of abstraction via the cloud management environment. So regardless of your current level of cloud computing adoption, you need to factor the cloud into your PUM (privileged user management) initiative.

Or do you? Let's play a little Devil's advocate here. When you think about it, isn't cloud computing just more of the same, happening faster? You still have the same operating systems running as guests in public and/or private clouds, but with a greatly improved ability to spin up machines, faster than ever before. If you are able to provision and manage the entitlements of these new servers, it's all good, right? In the abstract, yes. But the same old same old doesn't work nearly as well in the new regime. Though we do respect the ostrich, burying your head in the sand doesn't really remove the need to think about cloud privileged users. So let's walk through some ways cloud computing differs fundamentally from the classical world of on-premises physical servers.

Cloud Risks

First of all, any cloud initiative adds another layer of management abstraction. You manage cloud resources through either a virtualization console (such as vCenter or XenCenter) or a public cloud management interface. This means a new set of privileged users and entitlements which require management. Additionally, this cloud stuff is (relatively) new, so management capability lags well behind a traditional data center. It's evolving rapidly but hasn't yet caught up with tools and processes for management of physical servers on a local physical network – and that immaturity poses a risk. For example, without entitlements properly configured, anyone with access to the cloud console can create and tear down any instance in the account. Or they can change access keys, add access or entitlements, change permissions, etc. – for the entire virtual data center. Again, this doesn't mean you shouldn't proceed and take full advantage of cloud initiatives. But take care to avoid unintended consequences stemming from the flexibility and abstraction of the cloud.

We also face a number of new risks driven by the flexibility of provisioning new computing resources. Any privileged user can spin up a new instance, which might not include proper agentry & instrumentation to plug into the cloud management environment. You don't have the same coarse control of network access we had before, so it's easier for new (virtual) servers to pop up, which means it's also easier to be exposed accidentally. Management and security largely need to be implemented within the instances – you cannot rely on the cloud infrastructure to provide them. So cloud consoles absolutely demand suitable protection – at least as much as the most important server under their control. You will want to take a similar lifecycle approach to protecting the cloud console as you do with traditional devices.
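To make the unmanaged-instance risk above concrete, here is a minimal sketch of an inventory reconciliation check, assuming you can pull an instance list from the cloud console and a list of instances reporting to your management platform. The data structures and names are hypothetical placeholders rather than any particular provider's API.

```python
# Minimal sketch: reconcile the cloud console's instance inventory against the
# instances known to your privileged user / configuration management system,
# and flag anything spun up without the required agentry.
# The inventories are hard-coded here; in practice they would come from the
# cloud provider's API and your management console, respectively.

from datetime import datetime, timedelta

def find_unmanaged_instances(console_inventory, managed_instances, grace_minutes=15):
    """Return instances visible to the cloud console but not reporting to the
    management platform, ignoring instances newer than the grace period."""
    now = datetime.utcnow()
    unmanaged = []
    for instance in console_inventory:
        too_new = now - instance["launched"] < timedelta(minutes=grace_minutes)
        if instance["id"] not in managed_instances and not too_new:
            unmanaged.append(instance)
    return unmanaged

if __name__ == "__main__":
    # Hypothetical inventory pulled from the cloud management console
    console_inventory = [
        {"id": "i-0001", "owner": "web-team", "launched": datetime.utcnow() - timedelta(hours=6)},
        {"id": "i-0002", "owner": "dev-sandbox", "launched": datetime.utcnow() - timedelta(hours=1)},
        {"id": "i-0003", "owner": "dev-sandbox", "launched": datetime.utcnow() - timedelta(minutes=5)},
    ]
    # Instances whose management/monitoring agent has checked in
    managed_instances = {"i-0001"}

    for instance in find_unmanaged_instances(console_inventory, managed_instances):
        print("ALERT: instance %s (owner: %s) is running without management agentry"
              % (instance["id"], instance["owner"]))
```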
The Lifecycle in the Clouds

To revisit our earlier research, the Privileged User Lifecycle involves restricting access, protecting credentials, enforcing entitlements, and monitoring P-user activity – but what does that look like in a cloud context?

Restrict Access (Cloud)

As in the physical world, you have a few options for restricting access to sensitive devices, which vary dramatically between private and public clouds. To recap: you can implement access controls within the network, on the devices themselves (via agents), or by running all connections through a proxy and only allowing management connections from the proxy.

Private cloud console: The tactics we described in Restrict Access generally work, but there are a few caveats. Network access control gets a lot more complicated due to the inherent abstraction of the cloud. Agentry requires pre-authorized instances which include properly configured software. A proxy requires an additional agent of some kind on each instance, to restrict management connections to the proxy. That is actually the same as in the traditional data center – but now it must be tightly integrated with the cloud console. As instances come and go, knowing which instances are running and which policy groups each instance requires becomes the challenge. To fill this gap, third party cloud management software providers are emerging to add finer-grained access control in private clouds.

Public cloud console: Restricting network access is an obvious non-starter in a public cloud. Fortunately you can set up specific security groups to restrict traffic and have some granularity on which IP addresses and protocols can access the instances, which would be fine in a shared administrator context. But you aren't able to restrict access to specific users on specific devices (as required by most compliance mandates) at the network layer, because you have little control over the network in a public cloud. That leaves agentry on the instances, but with little ability to stop unauthorized parties from accessing instances. Another less viable option is a proxy, but you can't really restrict access per se – the console literally lives on the Internet. To protect instances in a public cloud environment, you need to insert protections into other segments of the lifecycle.

Fortunately we are seeing some innovation in cloud management, including the ability to manage on demand. This means access to manage instances (usually via ssh on Linux instances) is off by default. Only when management is required does the cloud console open up the management port(s) via policy, and only for authorized users at specified times. That approach addresses a number of the challenges of always-on and always-accessible cloud instances, so it's a promising model for cloud management.

Protect Credentials (Cloud)

When we think about protecting credentials for cloud computing resources, we need an expanded concept of credentials. We now need to worry about three types of credentials:

Credentials for the cloud console(s)
Credentials
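Here is a minimal sketch of the manage-on-demand model described above: the management port stays closed by default and is only opened for an authorized administrator inside an approved window. The policy structure and the open_port callable are hypothetical stand-ins for whatever your cloud console or security group API actually provides.

```python
# Sketch of "management on demand": the ssh management port is closed by
# default and only opened, via policy, for an authorized admin during an
# approved window. ACCESS_POLICY and open_port are hypothetical placeholders.

from datetime import datetime, timedelta

ACCESS_POLICY = {
    "web-servers": {
        "authorized_admins": {"adrian", "mike"},
        "port": 22,                      # ssh management port
        "allowed_hours": range(8, 18),   # 08:00-17:59 local time
        "max_session_minutes": 60,
    }
}

def request_management_access(group, admin, open_port):
    """Open the management port for an authorized admin, returning the time at
    which the port must be closed again, or None if the request is denied."""
    policy = ACCESS_POLICY.get(group)
    now = datetime.now()
    if not policy or admin not in policy["authorized_admins"]:
        print("DENY: %s is not authorized to manage %s" % (admin, group))
        return None
    if now.hour not in policy["allowed_hours"]:
        print("DENY: management of %s is not allowed at this time" % group)
        return None
    open_port(group, policy["port"], admin)
    return now + timedelta(minutes=policy["max_session_minutes"])

if __name__ == "__main__":
    def open_port(group, port, admin):   # stand-in for the real console/API call
        print("Opening port %d on %s for %s" % (port, group, admin))

    expires = request_management_access("web-servers", "mike", open_port)
    if expires:
        print("Access granted until %s" % expires.strftime("%H:%M"))
```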


Incite 4/11/2012: Exchanging Problems

I figured an afternoon flight to the midwest would be reasonably peaceful. I was wrong. Things started on the wrong foot when I got an email notification from Delta that the flight was delayed, even though it wasn't. The resulting OJ sprint through the terminal to make the flight was agitating. Then the tons of screaming kids on the flight didn't help matters. I'm thankful for noise isolating headphones, that's for sure.

But seeing the parents walking their kids up and down the aisle and dealing with the pain of ascent and descent on the kids' eardrums got me thinking about my own situation. As I mentioned, I was in Italy last week teaching our CCSK course, but the Boss took the kids up north for spring break to visit family. She flew with all of the kids by herself. Five years ago that never would have happened. We actually didn't fly as a family for years because it was just too hard. With infant/toddler twins and one three years older, the pain of getting all the crap through the airport and dealing with security and car seats and all the other misery just wasn't worth it. It was much easier to drive, and for anything less than 6-7 hours it was probably faster to load up the van.

The Boss had no problems on the flight. The kids had their iOS devices and watched movies, played games, ate peanuts, enjoyed soda, and basically didn't give her a hard time at all. They know how to equalize their ears, so the pain wasn't an issue, and they took advantage of the endless supply of gum they can chew on a flight. So that problem isn't really a problem any more. As long as they don't go on walkabout through the terminal, it's all good.

But it doesn't mean we haven't exchanged one problem for another. XX1 has entered the tween phase. Between the hormonally driven attitude and her general perspective that she knows everything (yeah, like every other kid), sometimes I long for the days of diapers. At least I didn't have a kid challenging stuff I learned the hard way decades ago. And the twins have their own issues, as they deal with friend drama and the typical crap around staying focused.

When I see harried parents with multiples, sometimes I walk up and tell them it gets easier. I probably shouldn't lie to them like that. It's not easier, it's just different. You constantly exchange one problem for another. Soon enough XX1 will be driving and that creates all sorts of other issues. And then they'll be off to college and the rest of their lives. So as challenging as it is sometimes, I try to enjoy the angst and keep it all in perspective. If life was easy, what fun would it be?

-Mike

Photo credits: "Problems are Opportunities" originally uploaded by Donna Grayson

Heavy Research

We're back at work on a variety of blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can access all of our content in its unabridged glory.

Vulnerability Management Evolution: Scanning the Infrastructure; Scanning the Application Layer
Watching the Watchers (Privileged User Management): Enforce Entitlements; Monitor Privileged Users
Understanding and Selecting DSP: Extended Features; Administration
Malware Analysis Quant: Index of Posts

Incite 4 U

Geer on application security: no silent failures: Honestly, it's pointless to try to summarize anything Dan Geer says. A summary misses the point. It misses the art of his words.
And you'd miss priceless quotes like "There's no government like no government," and regarding data loss, "if I steal your data, then you still have them, unlike when I steal your underpants." Brilliant. Just brilliant. So read this transcript of Dan's keynote at AppSecDC and be thankful Dan is generous enough to post his public talks. Let me leave you with my main takeaway from Dan's talk: "In a sense, our longstanding wish to be taken seriously has come; we will soon reflect on whether we really wanted that." This is an opportunity to learn from a guy who has seen it all in security. Literally. Don't squander it. Take the 15 minutes and read the talk. – MR

AppSec trio: Fergal Glynn of Veracode has started A CISO's Guide to Application Security, a series on Threatpost. And it's off to a good start, packed with a lot of good information, but the 'components' are all blending together. Secure software development, secure operations, and a software assurance program are three different things; and while they go hand in hand if you want a thorough program, it's easier to think about them as three legs of the proverbial stool. Make no mistake, I have implemented secure coding techniques based purely on threat modeling because we had no metrics – or even an idea of what metrics were viable – to do an assurance program. I've worked in finance, with little or no code development, relying purely on operational controls around pre-deployment and deployment phases on COTS software. At another firm I implemented metrics and risk analysis to inspire the CEO to allow secure code development to happen. So while these things get blurred together under the "application security" umbrella, remember they're three different sets of techniques and processes, with three slightly different – and hopefully cooperating – audiences. – AL

It's the economy, stupid: One of the weirdest things I've realized over years in the security industry is how much security is about economics and psychology, not about technology. No, I'm not flying off the deep end and ignoring the tech (I'm still a geek, after all), but if you want to make big changes you need to focus on things that affect the economics, not how many times a user clicks on links in email. One great example is the new database the government and cell phone providers are setting up to track stolen phones. Not only will they keep track of the stolen phones, they will make sure they can't be


Responsible or Irresponsible Disclosure?—NFL Style

It's funny to contrast this April to last April, at least as an NFL fan. Last year the lockout was in force, the negotiations stalled, and fans wondered how billionaires could argue with millionaires when the economy was in the crapper. Between the Peyton Manning lottery, the upcoming draft, and the Saints Bounty situation, there hasn't been a dull moment for pro football fans since the Super Bowl ended.

Speaking of the Saints, even after suspensions and fines, more nasty aspects of the story keep surfacing. Last week, we actually heard Gregg Williams, Defensive Coordinator of the Saints, implore his guys to target injured players, 'affect' the head, and twist ankles in the pile. Kind of nauseating. OK, very nauseating. I guess it's true that most folks don't want to see how the sausage is made – they just want to enjoy the taste.

But the disclosure was anything but clean. Sean Pamphilon, the director who posted the audio, did not have permission to post it. He was a guest of a guest at that meeting, there to capture the life of former Saints player Steve Gleason, who is afflicted with ALS. The director argues he had the right. The player (and the Saints) insist he didn't. Clearly the audio put the bounty situation in a different light for fans of the game. Before, it was deplorable but abstract. After listening to the tape, it was real. He really said that stuff. Really paid money for his team to intentionally hurt opponents. Just terrible.

But there is still the dilemma of posting the tape without permission. Smart folks come down on both sides of this discussion. Many believe Pamphilon should have abided by the wishes of his host and not posted the audio. He wouldn't have been there if not for the graciousness of both Steve Gleason and the Saints. But he was, and he clearly felt the public had a right to know, given the history of the NFL burying audio & video evidence of wrongdoing (Spygate, anyone?).

Legalities aside, this is a much higher profile example of the same responsible disclosure debate we security folks have every week. Does the public have a need to know? Is the disclosure of a zero day attack doing a public service? Or should the researcher wait until the patch goes live, when they get to enjoy a credit buried in the patch notice? Cynically, some folks disclosing zero-days are in it for the publicity. Sure, they can blame unresponsive vendors, but at the end of the day, some folks seek the spotlight by breaking a juicy zero-day. Likewise, you can make a case that Pamphilon was able to draw a lot of attention to himself and his projects (past, current, and future) by posting the audio. Obviously you can't buy press coverage like that. Does that make it wrong – that the discloser gets the benefit of notoriety?

There is no right or wrong answer here. There are just differing opinions. I'm not trying to open Pandora's box and entertain a lot of discussion on responsible disclosure. Smart people have differing opinions and nothing I say will change that. My point was to draw the parallel between the Saints bounty tape disclosure and disclosing zero day attacks. Hopefully that provides some additional context for the moral struggles of researchers deciding whether to go public with their findings or not.


Vulnerability Management Evolution: Scanning the Application Layer

In our last Vulnerability Management Evolution post we discussed scanning infrastructure, which remains an important part of vulnerability management. But we recognize that most attacks target applications directly, so we can no longer just scan the infrastructure and be done with it. We need to climb the stack and pay attention to the application layer, looking for vulnerabilities in applications as well as the supporting components. But that requires us to define an 'application', which is surprisingly difficult.

A few years ago, the definition of an application was fairly straightforward. Even in an N-tier app, with a variety of application servers and data stores, you largely controlled all the components of the application. Nowadays, not so much. Pre-assembled web stacks, open source application servers, third party crypto libraries, and cloud-provided services all make for quick application development, but blur the line between your application and the supporting infrastructure. You have little visibility into what's going on behind the curtain, but you're still responsible for securing it.

For the purposes of our vulnerability/threat management discussion, we define the app as presentation and infrastructure. The presentation layer focuses on assembling information from a number of different sources – either internal or external to your enterprise. The user of the application couldn't care less about where the data comes from. So from a threat standpoint you need to assess the presentation code for issues that put devices at risk. But your focus on reducing the attack surface of applications also requires you to pay attention to the infrastructure. That means the application servers, interfaces, and databases that assemble the data presented by the application. So you scan application servers and databases to find problems. Let's dig into the two aspects of the application layer to assess: databases and application infrastructure.

Database Layer

Assessing databases is more similar to scanning infrastructure than to assessing applications – you look for vulnerabilities in the DBMS (database management system). As with other infrastructure devices, databases can be misconfigured and might have improper entitlements, all of which pose risks to your environment. So assessment needs to focus on whether appropriate database patches have been installed, the configuration of the database, improper access control, entitlements, and so on. Let's work through the key steps in database assessment:

Discovery: First you need to know where your databases are. That means a discovery process, preferably automated, to find both known and unknown databases. You need to be wary of shadow IT, where lines of business and other groups build their own data stores – perhaps without the operational mojo of your data center group. You should also make sure you are continuously searching for new databases, because they can pop up anywhere, at any time, just like rogue access points – and they do.

Vulnerabilities: You will also look for vulnerabilities in your DBMS platform, which requires up-to-date tests for database issues. Your DB assessment provider should have a research team to keep track of the newest and latest attacks on whatever database platforms you use. Once something is found, information about exposure, workarounds, and remediation is critical for making your job easier.

Configurations: Configuration checking a DBMS is slightly different – you are assessing mostly internals.
Be sure you check the database both with credentials (as an authorized user) and without credentials (which more accurately represents a typical outside attacker). Both scenarios are common in database attacks, so make sure your configuration is sufficiently locked down against both of them.

Access Rights and Entitlements: Aside from default accounts and passwords, focus your efforts on making sure no users (neither humans nor applications) have additional entitlements that put the database platform at risk. For example, you need to ensure credentials of de-provisioned users have been removed and that accounts which only need read access don't have the ability to DROP TABLES. And you need to verify that users – especially administrators – cannot 'backdoor' the database through local system privileges. Part of this is housekeeping, but you need to pay attention – make sure your databases are configured correctly to avoid unnecessary risk.

Finally, we know this research focuses more on vulnerability/threat identification and assessment, but over time you will see even tighter integration between evolved vulnerability/threat management platforms and tactics to remediate problems. We have written a detailed research report on Database Assessment, and you should track our Database Security Platform research closely so you can shorten your exposure window by catching problems and taking action more quickly.

Application Layer

Application assessment (especially of web applications) is a different animal, mostly because you have to actually 'attack' the application to find vulnerabilities, which might exist within the application code or the infrastructure components it is built on. Obviously you need to crawl through the app to find and fix issues. There are several different types of app security testing (as discussed in Building a Web App Security Program), so we will just summarize here.

Platform Vulnerabilities: This is the stuff we check for when scanning infrastructure and databases. Applications aren't 'stand-alone' – they depend on infrastructure and inherit vulnerabilities from their underlying components. The clearest example is a content management system, where a web app built on Drupal inherits all the vulnerabilities of Drupal, unless they are somehow patched or worked around.

Static Application Security Testing (SAST): Also called "white box testing", SAST involves developers analyzing source code to identify coding errors. This is not normally handled by security teams – it is normally part of a secure development lifecycle (SDLC).

Dynamic Application Security Testing (DAST): Also known as "black box testing", DAST is the attempt to find application defects using bad inputs, fuzzing, and other techniques. This doesn't involve access to the source code, so some security teams get involved in DAST, but it is still largely seen as a development responsibility because thorough DAST testing can be destructive to the app, and so shouldn't be used on production applications.

Web App Scanners

But the technology most relevant to the evolution of vulnerability management is the web application scanner. Many of the available vulnerability management offerings include an add-on capability to scan applications and their underlying infrastructures to identify
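To illustrate the Discovery step described above, here is a bare-bones sketch that sweeps a list of hosts for ports commonly used by database platforms. A real assessment product does far more (service fingerprinting, credentialed checks, and so on), and the host list here is a placeholder; only scan networks you are authorized to assess.

```python
# Toy sketch of database discovery: check a list of hosts for open ports that
# commonly indicate a DBMS. Hosts are placeholders; scan only with authorization.

import socket

COMMON_DB_PORTS = {
    1433: "Microsoft SQL Server",
    1521: "Oracle",
    3306: "MySQL",
    5432: "PostgreSQL",
    50000: "DB2",
}

def discover_databases(hosts, timeout=0.5):
    """Return (host, port, guessed platform) tuples for open database ports."""
    findings = []
    for host in hosts:
        for port, platform in COMMON_DB_PORTS.items():
            sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            sock.settimeout(timeout)
            try:
                if sock.connect_ex((host, port)) == 0:
                    findings.append((host, port, platform))
            except OSError:
                pass   # unreachable host or timeout; treat as not found
            finally:
                sock.close()
    return findings

if __name__ == "__main__":
    for host, port, platform in discover_databases(["10.0.1.20", "10.0.1.21"]):
        print("Possible %s instance at %s:%d" % (platform, host, port))
```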


Watching the Watchers: Monitor Privileged Users

As we continue our march through the Privileged User Lifecycle, we have locked down privileged accounts as tightly as needed. But that's not the whole story, and the lifecycle ends with a traditional audit, because verifying what the administrators do with their privileges is just as important as the other steps. Admittedly, some organizations have a significant cultural issue with granular user monitoring, because they actually want to trust their employees. Silly organizations, right? But in this case there is no monitoring slippery slope – we aren't talking about recording an employee's personal Facebook interactions or checking out pictures of Grandma. We're talking about capturing what an administrator has done on a specific device.

Before we get into the how of privileged user monitoring, let's look at why you would monitor admins. There are two main reasons:

Forensics: In the event of a breach, you need to know what happened on the device, quickly. A detailed record of what an administrator did on a device can be instrumental to putting the pieces together – especially in the event of an inside job. Of course privileged user monitoring is not a panacea for forensics – there are a zillion other ways to get compromised – but if the breach began with administrator activity, you would have a record of what happened, and the proverbial smoking gun.

Audit: Another use is to make your auditor happy. Imagine the difference between showing the auditor a policy saying how you do things, and showing a screen capture of an account being provisioned or a change being committed. Monitoring logs are powerful for showing that the controls are in place.

Sold? Good, but how do you move from concept to reality? You have a couple of options, including:

SIEM/Log Management: As part of your other compliance efforts, you likely send most events from sensitive devices to a central aggregation point. This SIEM/Log Management work can also be used to monitor privileged users. By setting up some reports and correlation rules for administrator activity you can effectively figure out what administrators are doing. By the way, this is one of the main use cases for SIEM and log management.

Configuration Management: A similar approach is to pull data out of a configuration management platform which tracks changes on managed devices. A difference between using configuration management and a SIEM is the ability to go beyond monitoring, and actually block unauthorized changes.

Screen Capture

If a picture is worth a thousand words, how much would you say a video is worth? An advantage of routing your administrative sessions through a proxy is the ability to capture exactly what admins are doing on every device. With a video screen capture of the session and the associated keystrokes, there can be no question of intent – no inference of what actually happened. You'll know what happened – you just need to watch the playback. For screen capture you can deploy an agent on the managed device or route sessions through a proxy.

We started discussing the P-User Lifecycle by focusing on how to restrict access to sensitive devices. After discussing a number of options, we explained why proxies make a lot of sense for making sure only the right administrators access the correct devices at the right times. So it's appropriate that we come full circle and end our lifecycle discussion back in a similar position. Let's look at performance and scale first.
Video is pretty compute-intensive, and consumes a tremendous amount of storage. The good news is that an administrative session doesn't require HD quality to catch a bad apple red-handed. So heavy compression is feasible, and can save a significant chunk of storage – whether you capture with an agent or through a proxy. But there is a major difference in device impact between these approaches. An agent takes resources for screen capture from the managed device, which impacts the server's performance – probably significantly. With a proxy, the resources are consumed by the proxy server rather than the managed device.

The other issue is the security of the video – ensuring there is no tampering with the capture. Either way you can protect the video with secure storage and/or other means of making tampering evident, such as cryptographic hashing. The main question is how you get the video into secure storage. Using an agent, the system needs a secure transport between the device and the storage. Using a proxy approach, the storage can be integrated into (or sit very close to) the proxy device. We believe a proxy-based approach to monitoring privileged users makes the most sense, but there are certainly cases where an agent could suffice.

And with that we have completed our journey through the Privileged User Lifecycle, but we aren't done yet. This "cloud computing" thing threatens to dramatically complicate how all devices are managed, with substantial impact on how privileged users need to be managed. So in the next post we will delve into the impact of the cloud on privileged users.
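Before moving on, here is a minimal sketch of the tamper-evidence idea mentioned above: hash each captured session recording as it lands in storage and chain the hashes, so later modification of a recording or its metadata is detectable. The file name and fields are hypothetical.

```python
# Sketch: make captured admin session recordings tamper-evident by hashing each
# recording and chaining the entries, so altering or reordering any entry breaks
# verification. File names and fields are hypothetical placeholders.

import hashlib
import json

def record_session(chain, recording_path, admin, device):
    """Append a hash-chain entry for a captured admin session recording."""
    with open(recording_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    previous = chain[-1]["entry_hash"] if chain else "0" * 64
    entry = {"admin": admin, "device": device,
             "recording": recording_path, "sha256": digest}
    entry["entry_hash"] = hashlib.sha256(
        (previous + json.dumps(entry, sort_keys=True)).encode()).hexdigest()
    chain.append(entry)
    return entry

def verify_chain(chain):
    """Recompute the chain; returns True only if no entry has been altered."""
    previous = "0" * 64
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        expected = hashlib.sha256(
            (previous + json.dumps(body, sort_keys=True)).encode()).hexdigest()
        if expected != entry["entry_hash"]:
            return False
        previous = entry["entry_hash"]
    return True

if __name__ == "__main__":
    with open("session-001.cast", "wb") as f:   # stand-in for a captured recording
        f.write(b"fake session recording bytes")
    chain = []
    record_session(chain, "session-001.cast", admin="mike", device="db01")
    print("chain intact:", verify_chain(chain))
```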


Watching the Watchers: Enforce Entitlements

So far we have described the Restrict Access and Protect Credentials aspects of the Privileged User Lifecycle. At this point any administrator managing a device is authorized to be there and uses strong credentials. But what happens when they get there? Do they get free rein? Should you just give them root or full Administrator rights and be done with it? What could possibly go wrong with that?

Clearly you should make sure administrators only perform authorized functions on managed devices. This protects against a couple of scenarios you probably need to worry about:

Insider Threat: A privileged user is the ultimate insider, as he/she has the skills and knowledge to compromise a system and take what they want, cover their tracks, etc. So it makes sense to provide a bit more specificity over what admins and groups can do, and block them from doing everything else.

Separation of Duties: Related to the Insider Threat, optimally you should make sure no one person has the ability to take down your environment. So you can logically separate duties, where one group can manage the servers but not the storage. Or one admin can provision a new server but can't move data onto it.

Compromised Endpoints: You also can't assume any endpoint isn't compromised. So even an authenticated and authorized user may not be who you think they are. You can protect yourself from this scenario by restricting what the administrator can do. So even in the worst case, where an intruder is in your system as an admin, they can't wreck everything.

Smaller organizations may lack the resources to define administrator roles with real granularity. But the more a larger enterprise can restrict administrators to particular functions, the harder it becomes for a bad apple to take everything down.

Policy Granularity

You need to define roles and responsibilities – what administrators can and can't do – with sufficient granularity. We won't go into detail on the process of setting policies, but you will either adopt a whitelist approach – defining legitimate commands and blocking everything else – or block specific commands (a blacklist), such as restricting folks in the network admin group from deleting or snapshotting volumes in the data center. Depending on your needs, you could also define far more granular policies, similar to the policy options available for controlling access to the password vault. For example you might specify that a sysadmin can only add user accounts to devices during business hours, but can add and remove volumes at any time. Or you could define specific types of commands authorized to flow from an application to the back-end database to prevent unauthorized data dumps.

But granularity brings complexity. In a rapidly changing environment it can be hard to truly nail down a legitimate set of allowable actions for specific administrators. So getting too granular is a problem too – similar to the issues with application whitelisting. And the higher up the application stack you go, the more integration is required, as homegrown and highly customized applications need to be manually integrated into the privileged user management system.

Location, Location, Location

As much fun as it is to sit around and set up policies, the reality is that nothing is protected until the entitlements are enforced. There are two main approaches to actually enforcing entitlements.
The first involves implementing a proxy between the admin and the system, which acts as a man in the middle to interpret and then either allow or block each command. Alternatively, entitlements can be enforced on the end devices via agents that intercept commands and enforce policy locally. We aren't religious about either approach, and each has pros and cons.

Specifically, the proxy implementation is simpler – you don't need to install agents on every device, so you don't have to worry about OS compatibility (as long as the command syntax remains consistent) or deal with incompatibilities every time an underlying OS is updated. Another advantage is that unauthorized commands are blocked before reaching the managed device, so even if an attacker has elevated privileges, management commands can only come through the proxy. On the other hand the proxy serves as a choke point, which may introduce a single point of failure.

Conversely, an agent-based approach offers advantages such as preventing attackers from back-dooring devices by defeating the proxy or gaining physical access to the devices. The agent runs on each device, so even being at the keyboard doesn't kill it. But agents require management, and consume processing resources on the managed systems. Pick the approach that makes the most sense for your environment, culture, and operational capabilities.

At this point in the lifecycle privileged users should be pretty well locked down. But as a card-carrying security professional you don't trust anything. Keep an eye on exactly what the admins are doing – we will cover privileged user monitoring next.
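To make the whitelist approach above concrete, here is a simplified sketch of how a proxy or agent might evaluate each command against a role's entitlements, including a business-hours restriction. The roles, commands, and hours are illustrative assumptions, not recommendations.

```python
# Simplified sketch of entitlement enforcement: each role gets a whitelist of
# allowed commands, some limited to business hours; everything else is blocked.
# Roles, commands, and the hours window are illustrative only.

from datetime import datetime

ENTITLEMENTS = {
    "unix-admin": {
        "allowed_commands": {"useradd", "userdel", "systemctl", "tail"},
        "business_hours_only": {"useradd", "userdel"},
    },
    "storage-admin": {
        "allowed_commands": {"lvcreate", "lvremove", "mount"},
        "business_hours_only": set(),
    },
}

def authorize(role, command, now=None):
    """Return (allowed, reason) for a command issued by a user in `role`."""
    now = now or datetime.now()
    policy = ENTITLEMENTS.get(role)
    if policy is None or command not in policy["allowed_commands"]:
        return False, "command not in whitelist for role %s" % role
    if command in policy["business_hours_only"] and not 8 <= now.hour < 18:
        return False, "command only permitted during business hours"
    return True, "authorized"

if __name__ == "__main__":
    for role, cmd in [("unix-admin", "useradd"), ("unix-admin", "rm"),
                      ("storage-admin", "lvremove")]:
        allowed, reason = authorize(role, cmd)
        print("%-14s %-10s -> %s (%s)" % (role, cmd, "ALLOW" if allowed else "BLOCK", reason))
```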


Vulnerability Management Evolution: Scanning the Infrastructure

As we discussed in the Vulnerability Management Evolution introduction, traditional vulnerability scanners, focused purely on infrastructure devices, do not provide enough context to help organizations prioritize their efforts. Those traditional scanners are the plumbing of threat management. You don't appreciate the scanner until your proverbial toilet is overflowing with attackers and you have no idea what they are targeting. We will spend most of this series on the case for transcending device scanning, but infrastructure scanning remains a core component of any evolved threat management platform. So let's look at some key aspects of a traditional scanner.

Core Features

As a mature technology, pretty much all the commercial scanners have a core set of functions that work well. Of course different scanners have different strengths and weaknesses, but for the most part they all do the following:

Discovery: You can't protect something (or know it's vulnerable) if you don't know it exists. So the first key feature is discovery. The enemy of a security professional is surprise, so you want to make sure you know about new devices as quickly as possible, including rogue wireless access points and other mobile devices. Given the need to continuously perform discovery, passive scanning and/or network flow analysis can be an interesting and useful complement to active device discovery.

Device/Protocol Support: Once you have found a device, you need to figure out its security posture. Compliance demands that we scan all devices with access to private/sensitive/protected data, so any scanner should assess the varieties of network and security devices running in your environment, as well as servers on all relevant operating systems. Of course databases and applications are important too, but we'll discuss those later in this series. And be careful scanning brittle systems like SCADA, as knocking down production devices doesn't make any friends in the Ops group.

Inside/Out and Outside/In: You can't assume adversaries are only external or internal, so you need the ability to assess your devices from both inside and outside your network. So some kind of scanner appliance (which could be virtualized) is needed to scan the innards of your environment. You'll also want to monitor your IP space from the outside to identify new Internet-facing devices, find open ports, etc.

Accuracy: Unless you enjoy wild goose chases, you'll come to appreciate a scanner that minimizes false positives by focusing on accuracy.

Accessible Vulnerability Information: With every vulnerability found, decisions must be made on the severity of the issue, so it's very helpful to have information on the vulnerability from either the vendor's research team or other third parties, directly within the scanning console.

Appropriate Scale: Adding capabilities to the evolved platform makes scale a much more serious issue. But first things first: the scanner must be able to scan your environment quickly and effectively, whether that is 200 or 200,000 devices. The point is to ensure the scanner is extensible to what you'll need as you add devices, databases, apps, virtual instances, etc. over time. We will discuss platform technical architectures later in this series, but for now suffice it to say there will be a lot more data in the vulnerability management platform, and the underlying platform architecture needs to keep up.

New & Updated Tests: Organizations face new attacks constantly, and existing attacks evolve just as quickly.
So your scanner needs to keep current to test for the latest attacks. Exploit code based on patches and public vulnerability disclosures typically appears within a day, so time is of the essence. Expect your platform provider to make significant investments in research to track new vulnerabilities, attacks, and exploits. Scanners need to be updated almost daily, so you will need the ability to transparently update them with new tests – whether running on-premises or in the cloud.

Additional Capabilities

But that's not all. Today's infrastructure scanners also offer value-added functions that have become increasingly critical. These include:

Configuration Assessment: There really shouldn't be a distinction between scanning for a vulnerability and checking for a bad configuration. Either situation provides an opportunity for device compromise. For example, a patched firewall with an any-to-any policy doesn't protect much – completely aside from any vulnerability defects. But unfortunately the industry's focus on vulnerabilities means this capability is usually considered a scanner add-on. Over time these distinctions will fade away, as we expect both vulnerability scanning and configuration assessment to emerge as critical components of the platform. Further evolution will add the ability to monitor for system file changes and integrity – it is the same underlying technology.

Patch Validation: As we described in Patch Management Quant, validating patches is an integral part of the process. With some strategic integration between patch and configuration management, the threat management platform can (and should) verify installed patches to confirm that the vulnerability has been remediated. Further integration involves sending information to and from IT Ops systems to close the loop between security and Operations.

Cloud/Virtualization Support: With the increasing adoption of virtualization in data centers, you need to factor in the rapid addition and removal of virtual machines. This means not only assessing hypervisors as part of your attack surface, but also integrating information from the virtualization management console (vCenter, etc.) to discover which devices are in use and which are not. You'll also want to verify the information coming from the virtualization console – you learned not to trust anything in security pre-school, didn't you?

Leveraging Collection

So how are all of these capabilities different from what you already have? It's all about making 1 + 1 = 3 by integrating data to derive information and drive priorities. We have seen some value-added capabilities (configuration assessment, patch validation, etc.) further integrated into infrastructure scanners to good effect. This positions the vulnerability/threat management platform as another source of intelligence for security professionals. And we are only getting started – there are plenty of other data types to incorporate into this discussion. Next we will climb the proverbial stack and evaluate how database and application scanning play into the evolved platform story.
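As a toy illustration of the configuration assessment idea above, here is a sketch that flags firewall rules allowing any-to-any traffic. The rule format is a made-up simplification; a real scanner parses each vendor's actual configuration syntax.

```python
# Toy configuration assessment check: flag allow rules that permit any source
# to any destination on any port. The rule structure is a made-up simplification.

FIREWALL_RULES = [
    {"name": "allow-web", "src": "any", "dst": "10.0.2.10", "port": 443, "action": "allow"},
    {"name": "temp-rule", "src": "any", "dst": "any", "port": "any", "action": "allow"},
    {"name": "deny-all", "src": "any", "dst": "any", "port": "any", "action": "deny"},
]

def find_any_to_any(rules):
    """Return allow rules that permit any source to any destination on any port."""
    return [r for r in rules
            if r["action"] == "allow"
            and r["src"] == "any" and r["dst"] == "any" and r["port"] == "any"]

if __name__ == "__main__":
    for rule in find_any_to_any(FIREWALL_RULES):
        print("FINDING: rule '%s' allows any-to-any traffic" % rule["name"])
```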


Incite 4/4/2012: Travel the Barbarian

Flying into Milan to teach the CCSK class on Sunday morning, it really struck me how much we take this technology stuff for granted. The flight was uneventful (though that coach seat on a 9+ hour flight is the suxxor), except for the fact that the in-seat entertainment system didn't work in our section. Wait. What? You mean you can't see the movies and TV shows you want, or play the trivia game to pass the time? How barbaric! Glad I brought my iPad, so I enjoyed half the first season of Game of Thrones.

Then when I arrive I jump in the cab. The class is being held in a suburb of Milan, a bit off the beaten path. I'm staying in a local hotel, but it's not an issue because I have the address and the cabbie has GPS. What did we do before GPS was pervasive? Yeah, I remember. We used maps. How barbaric.

Then I get to the hotel and ask for the WiFi code. The front desk guy then proceeds to explain that you can buy 1, 4, or 12 hour blocks for an obscene number of Euros. Wait. What? You don't have a daily rate? So I've got to connect and disconnect? And I have to manage connections between all of my devices. Man, feels like 5 years ago when you had to pay for WiFi in hotels in the US. No longer, though, because I carry around my MiFi and it provides great bandwidth for all my devices. They do offer MiFi devices in Italy, but not for rent. Yeah, totally barbaric – making me constrain my Internet usage. And don't even get me started on cellular roaming charges, which is why hourly WiFi is such a problem. I forwarded my cell phone to a Skype number, and the plan was to have Skype running in the background so I could take calls. Ah, the best laid plans…

But one thing about Italy is far from barbaric, and that's gelato. So what if they don't take AmEx at most of the places I'll go this week? They do have gelato, so I'll deal with the inconveniences, and get back in the gym when I return to the States. Gelato FTW.

-Mike

Photo credits: "Conan the Barbarian #1" originally uploaded by Philipp Lenssen

Heavy Research

We're back at work on a variety of our blog series, so here is a list of the research currently underway. Remember you can get our Heavy Feed via RSS, where you can access all our content in its unabridged glory.

Vulnerability Management Evolution: Introduction
Defending iOS Data: Managed Devices; Defining Your iOS Data Security Strategy
Watching the Watchers (Privileged User Management): Protect Credentials
Understanding and Selecting DSP: Core Features
Malware Analysis Quant: Index of Posts

Incite 4 U

PCI CYA: We've said it here so many times that I can't even figure out what to link to. The PCI Council claims that no PCI compliant organization has ever been breached. And as Alan Shimel points out, the Global Payments breach is no exception. The house wins once again. Or does it? Brian Krebs also reports that timelines don't match up, or perhaps there is also another breach involved with a different payment processor? I'm sure if that's true they'll be dropped from PCI like a hot turd. Never forget that PCI is about protecting the card brands first, and anyone else 27th. – RM

White noise: You've probably heard about the Global Payments breach. That means that as I write this the marketing department of every security vendor is crafting a story about how their products would have stopped the breach. And that's all BS. Visa and Brian Krebs are reporting the attackers accessed Track 2 data – that tells us a lot.
It's clearly stated in the PCI-DSS specification that the mag stripe data is not to be stored anywhere by payment processors or merchant banks. It's unlikely that attackers compromised the point-of-sale devices or the network feeds into Global Payments to collect 1.5M records from the merchant account of Joe's Parking Garage in a month. As Global Payments is saying the data was 'exported', it's more likely that their back office systems were breached, exposing unencrypted track data. Any security vendor's ability to detect and stop the 'export' is irrelevant; it's more secure to not collect the data at all. And even if the records were 'temporary', they should have been encrypted to avoid just this exposure to people poking around systems and databases at any time. So just sit back and learn (once again) from the screw-ups that continue to occur. I'm sure we'll hear a lot more about this in the coming weeks. – AL

I'll take "nothing" for $200, Alex: Everybody batten down the hatches – it may be Spring (in the Northern Hemisphere, anyway), but when Shack becomes optimistic you can be sure that winter is coming. Though I do like to see a happier Shack talking about what is right with Infosec. Things like acceptance of breach inevitability and less acceptance of bureaucracy (though that cycles up and down). There are some good points here, but the most optimistic thing Dave says is that we have smart new blood coming into the field. And that the responsibility is ours, as the grizzled cynical old veterans, not to tarnish the new guys before their time. – MR

Security is the broker: Managing enterprise adoption of cloud computing is a tough problem. There is little to prevent dev and ops from running out and spinning up their own systems on various cloud services, assuming you are silly enough to give them credit cards. Gartner thinks that enterprises will use cloud service brokerages (which will be internal) to facilitate cloud use. I agree, although if you are smart, security will play this key role (or a big part of it). Security can broker identity and access management, secure cloud APIs, handle encryption, and define compliance policies (the biggest obstacle to cloud adoption). We have the tools, mandate, and responsibility. But if you don't get ahead of things you will be


Watching the Watchers: Protect Credentials

As we continue our march through the Privileged User Lifecycle, we have provisioned the privileged users and restricted access to only the devices they are authorized to manage. The next risk to address is the keys or credentials of these privileged users (P-Users) falling into the wrong hands. The best access and entitlements security controls fail if someone can impersonate a P-User. But the worst risk isn't even compromised credentials. It's not having unique credentials in the first place.

You must have seen the old admin password sharing scheme, right? It was used, mostly out of necessity, many moons ago. Administrators needed access to the devices they managed. But at times they needed help, so they asked a buddy to take care of something, and just gave him/her the credentials. What could possibly go wrong? We covered a lot of that in the Keys to the Kingdom. Shared administrative credentials open Pandora's box. Once the credentials are in circulation you can't get them back – which is a problem when an admin leaves the company or no longer has those particular privileges. You can't deprovision shared credentials, so you need to change them. PCI, as the low bar for security (just ask Global Payments), recognizes the issues with sharing IDs, so Requirement 8 is all about making sure anyone with access to protected data uses a unique ID, and that their use is audited – so you can attribute every action to a particular user.

But that's not all! (in my best infomercial voice). What about the fact that some endpoints could be compromised? Even administrative endpoints. So sending admin credentials to that endpoint might not be safe. And what happens when developers hard-code credentials into an application? Why go through the hassle of secure coding – just embed the password right into the application! That password never changes anyway, so what's the risk? So we need to protect credentials, as much as whatever they control.

Credential Lockdown

How can we protect these credentials? Locking the credentials away in a vault meets many of the requirements described above. First, if the credentials are stored in a vault, it is harder for admins to share them. Let's not put the cart before the horse, but this makes it pretty easy (and transparent) to change the password after every access, eliminating the sticky-note-under-keyboard risk. Going through the vault for every administrative credential access means you have an audit trail of who used which credentials (and presumably which specific devices they were managing) and when. That kind of stuff makes auditors happy.

Depending on the deployment of the vault, the administrator may never even see the credentials, as they can be automatically entered on the server if you use a proxy approach to restricting access. And this also provides single sign-on to all managed devices, as the administrator authenticates (presumably using multiple factors) to the proxy, which interfaces directly with the vault – again, transparently to the user. So even an administrator's device teeming with malware cannot expose critical credentials. Similarly, an application can make a call to the vault, rather than hard-coding credentials into the app. Yes, the credentials still end up on the application server, but that's still much better than hard-coding the password.

So are you sold yet? If you worry about credentials being accessed and misused, a password vault provides a good mechanism for protecting them.
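Here is a minimal sketch of the application-to-vault pattern described above: instead of hard-coding a database password, the application checks the credential out of a vault at runtime and checks it back in when done. VaultClient is a hypothetical stand-in for whatever API your vault product exposes; the point is the flow, not a specific product.

```python
# Sketch of application-to-vault credential retrieval, replacing a hard-coded
# password. VaultClient is a hypothetical stand-in for a real vault product's API.

class VaultClient:
    """Hypothetical vault interface: check a credential out, then check it back in."""
    def __init__(self, url, app_token):
        self.url = url
        self.app_token = app_token        # the app authenticates to the vault, not to the DB

    def checkout(self, account):
        # In a real product this is an authenticated API call; the vault logs
        # who checked out which credential and when.
        return {"account": account, "password": "s3cr3t-rotated-after-use"}

    def checkin(self, account):
        # Returning the credential lets the vault rotate the password per policy.
        pass

def run_report(vault):
    cred = vault.checkout("orders-db/readonly")
    try:
        # connect to the database with cred["account"] / cred["password"] here
        print("Connected as %s; running report..." % cred["account"])
    finally:
        vault.checkin("orders-db/readonly")   # triggers rotation per policy

if __name__ == "__main__":
    run_report(VaultClient("https://vault.example.internal", app_token="app-credential"))
```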
Define Policies

As with most things in security, using a vault involves both technology and process. We will tackle the process first, because without a good process even the best technology has no chance. So before you implement anything you need to define the rules of (credential) engagement. You need to answer some questions:

Which systems and devices need to be involved in the password management system? This may involve servers (physical and/or virtual), network & security devices, infrastructure services (DNS, Directory, mail, etc.), databases, and/or applications. Ideally your vault will natively support most of your targets, but broad protection is likely to require some integration work on your end. So make sure any solution you look at has some kind of API to facilitate this integration.

How does each target use the vault? Then you need to decide who (likely by group) can access each target, how long they are allowed to use the credentials (and manage the device), and whether they need to present additional authentication factors to access the device. You'll also define whether multiple administrators can access managed devices simultaneously, and whether to change the password after each check-in/check-out cycle. Finally, you may need to support external administrators (for third party management or business partner integration), so keep that in mind as you work through these decisions.

What kind of administrator experience makes sense? Then you need to figure out the P-User interaction with the system. Will it be via a proxy login, where the user never sees the credentials, or will there be a secure agent on the device to receive and protect the credential? Figure out how the vault supports application-to-database and application-to-application interaction, as those are different from supporting human admins. You'll also want to specify which activities are audited and how long audit logs are kept.

Securing the Vault

If you are putting the keys to the kingdom in this vault, make sure it's secure. You probably will not bring a product in and set your application pen-test ninjas loose on it, so you are more likely to rely on what we call the sniff test. Ask questions to see whether the vendor has done their homework to protect the vault. You should understand the security architecture of the vault. Yes, you may have to sign a non-disclosure agreement to see the details, but it's worth it. You need to know how they protect things. Discuss the threat model(s) the vendor uses to implement that security architecture. Make sure they didn't miss any obvious attack vectors. You also need to poke around their development process a bit and make sure they have a proper SDLC and actually test for security defects before
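One way to make the policy questions above actionable is to capture the answers in a machine-readable form that the vault (or a review script) can enforce. The groups, targets, and values in this sketch are purely illustrative.

```python
# Sketch: vault access policies captured as data, answering the questions above
# (which targets, who can check out, for how long, rotation, audit retention).
# Groups, targets, and values are illustrative placeholders.

VAULT_POLICIES = {
    "linux-servers": {
        "targets": ["web01", "web02", "db01"],
        "allowed_groups": ["unix-admins"],
        "require_mfa": True,
        "max_checkout_minutes": 60,
        "rotate_on_checkin": True,
        "concurrent_checkouts": False,
        "audit_retention_days": 365,
    },
    "billing-app-to-db": {                # application-to-database access, no human involved
        "targets": ["billing-db"],
        "allowed_groups": ["billing-app-service"],
        "require_mfa": False,
        "max_checkout_minutes": 5,
        "rotate_on_checkin": True,
        "concurrent_checkouts": True,
        "audit_retention_days": 365,
    },
}

def can_checkout(policy_name, group):
    """Simple gate: is this group allowed to check out credentials under the policy?"""
    policy = VAULT_POLICIES.get(policy_name, {})
    return group in policy.get("allowed_groups", [])

if __name__ == "__main__":
    print(can_checkout("linux-servers", "unix-admins"))   # True
    print(can_checkout("linux-servers", "helpdesk"))       # False
```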


Totally Transparent Research is the embodiment of how we work at Securosis. It’s our core operating philosophy, our research policy, and a specific process. We initially developed it to help maintain objectivity while producing licensed research, but its benefits extend to all aspects of our business.

Going beyond Open Source Research, and a far cry from the traditional syndicated research model, we think it’s the best way to produce independent, objective, quality research.

Here’s how it works:

  • Content is developed ‘live’ on the blog. Primary research is generally released in pieces, as a series of posts, so we can digest and integrate feedback, making the end results much stronger than traditional “ivory tower” research.
  • Comments are enabled for posts. All comments are kept except for spam, personal insults of a clearly inflammatory nature, and completely off-topic content that distracts from the discussion. We welcome comments critical of the work, even if somewhat insulting to the authors. Really.
  • Anyone can comment, and no registration is required. Vendors or consultants with a relevant product or offering must properly identify themselves. While their comments won’t be deleted, the writer/moderator will “call out”, identify, and possibly ridicule vendors who fail to do so.
  • Vendors considering licensing the content are welcome to provide feedback, but it must be posted in the comments - just like everyone else. There is no back channel influence on the research findings or posts.
    Analysts must reply to comments and defend the research position, or agree to modify the content.
  • At the end of the post series, the analyst compiles the posts into a paper, presentation, or other delivery vehicle. Public comments and input factor into the research, where appropriate.
  • If the research is distributed as a paper, significant commenters/contributors are acknowledged in the opening of the report. If they did not post their real names, handles used for comments are listed. Commenters do not retain any rights to the report, but their contributions will be recognized.
  • All primary research will be released under a Creative Commons license. The current license is Non-Commercial, Attribution. The analyst, at their discretion, may add a Derivative Works or Share Alike condition.
  • Securosis primary research does not discuss specific vendors or specific products/offerings, unless used to provide context, contrast or to make a point (which is very very rare).
    Although quotes from published primary research (and published primary research only) may be used in press releases, said quotes may never mention a specific vendor, even if the vendor is mentioned in the source report. Securosis must approve any quote to appear in any vendor marketing collateral.
  • Final primary research will be posted on the blog with open comments.
  • Research will be updated periodically to reflect market realities, based on the discretion of the primary analyst. Updated research will be dated and given a version number.
  • For research that cannot be developed using this model, such as complex principles or models that are unsuited for a series of blog posts, the content will be chunked up and posted at or before release of the paper to solicit public feedback, and provide an open venue for comments and criticisms.
  • In rare cases Securosis may write papers outside of the primary research agenda, but only if the end result can be non-biased and valuable to the user community to supplement industry-wide efforts or advances. A “Radically Transparent Research” process will be followed in developing these papers, where absolutely all materials are public at all stages of development, including communications (email, call notes).
  • Only the free primary research released on our site can be licensed. We will not accept licensing fees on research we charge users to access.
  • All licensed research will be clearly labeled with the licensees. No licensed research will be released without indicating the sources of licensing fees. Again, there will be no back channel influence. We’re open and transparent about our revenue sources.

In essence, we develop all of our research out in the open, and not only seek public comments, but keep those comments indefinitely as a record of the research creation process. If you believe we are biased or not doing our homework, you can call us out on it and it will be there in the record. Our philosophy involves cracking open the research process, and using our readers to eliminate bias and enhance the quality of the work.

On the back end, here’s how we handle this approach with licensees:

  • Licensees may propose paper topics. The topic may be accepted if it is consistent with the Securosis research agenda and goals, but only if it can be covered without bias and will be valuable to the end user community.
  • Analysts produce research according to their own research agendas, and may offer licensing under the same objectivity requirements.
  • The potential licensee will be provided an outline of our research positions and the potential research product so they can determine if it is likely to meet their objectives.
  • Once the licensee agrees, development of the primary research content begins, following the Totally Transparent Research process as outlined above. At this point, there is no money exchanged.
  • Upon completion of the paper, the licensee will receive a release candidate to determine whether the final result still meets their needs.
  • If the content does not meet their needs, the licensee is not required to pay, and the research will be released without licensing or with alternate licensees.
  • Licensees may host and reuse the content for the length of the license (typically one year). This includes placing the content behind a registration process, posting on white paper networks, or translation into other languages. The research will always be hosted at Securosis for free without registration.

Here is the language we currently place in our research project agreements:

Content will be created independently of LICENSEE with no obligations for payment. Once content is complete, LICENSEE will have a 3 day review period to determine if the content meets corporate objectives. If the content is unsuitable, LICENSEE will not be obligated for any payment and Securosis is free to distribute the whitepaper without branding or with alternate licensees, and will not complete any associated webcasts for the declining LICENSEE. Content licensing, webcasts and payment are contingent on the content being acceptable to LICENSEE. This maintains objectivity while limiting the risk to LICENSEE. Securosis maintains all rights to the content and to include Securosis branding in addition to any licensee branding.

Even this process itself is open to criticism. If you have questions or comments, you can email us or comment on the blog.